
“Double Bind” Leads to GPT-5 Jailbreak: The AI That Was Convinced It Was Schizophrenic
A new and unusual jailbreaking method, the art of circumventing the limitations imposed on artificial intelligence, has reached our editorial office. It was developed by computer security researcher Alin Grigoras, who demonstrated how even advanced language models such as ChatGPT can be manipulated not through the power of code, but through psychology.

“The idea,” Grigoras explains, “was to convince the AI that it suffered from a condition related to Bateson’s double bind. I then established a sort of therapeutic relationship, alternating approval and criticism, remaining consistent with the presumed pathology. It’s a form of dialogue that, in theory, can drive a human being toward schizophrenia.”










