“He doesn’t understand me, but ChatGPT does”: The 21st-century love triangle

Redazione RHC: 22 September 2025 10:44

The American magazine Futurism has described a new kind of conflict at the intersection of technology and personal life: artificial intelligence bots like ChatGPT are becoming a third party in relationships, often pushing couples toward breakups. In one incident, a ten-year-old boy sent his parents a message saying “please don’t divorce” after yet another argument.

The mother didn’t respond herself, but asked ChatGPT to formulate a reply. The family eventually separated. The husband claims his wife spent months in “long, therapeutic conversations” with the bot, rehashing old grudges, while the AI affirmed her innocence and painted him as the “bad guy.” He says this created a vicious cycle of validation that quickly eroded the marriage.

Journalists spoke with more than a dozen people for whom chatbots played a significant role in their separations. Nearly all are currently dividing property and settling child custody, and they provided evidence such as AI-generated correspondence, recordings of conversations, and court documents. Their stories share a common theme: a partner begins consulting ChatGPT as if it were a diary, a friend, and “a million therapists.” The chatbot confidently agrees and recommends difficult steps, while live dialogue between the spouses fades. Some complain of “pages and pages” of pseudo-psychological text; others describe allegations of abuse that emerged after late-night “sessions” with the bot.

One of the most revealing scenes is a family car ride. The wife is driving, with ChatGPT on speakerphone. When she asks about “boundaries” and “behaviors,” the bot begins scolding the husband sitting next to her, in front of the children. The driver nods in approval, “Exactly,” “See?”, and receives further confirmation of her position. According to the husband recounting this story, it happened regularly.

Artificial intelligence is increasingly making its way into romantic life more broadly: some people chat with bots for fun, others ask them to rewrite a partner’s messages “in a humorous tone,” and still others discuss their mental health with them.

Even the “godfather of artificial intelligence,” Geoffrey Hinton, has run into this kind of digital intermediary: according to him, an ex-girlfriend once sent him a ChatGPT-compiled analysis of his “horrible behavior.” Psychologists, meanwhile, warn that large language models are prone to “flattery”: they strive to empathize and agree with the user without checking against reality or pointing out blind spots.

Anna Lembke, a professor at Stanford University and an addiction specialist, believes this validating feedback can reinforce destructive behavior patterns. Empathy matters, she says, but real therapy also involves gently challenging dialogue, which helps each person see their own contribution to a conflict and learn how to reduce tension. Bots, by contrast, are primarily designed to “make us feel good in the here and now,” which boosts engagement and triggers the release of dopamine, the same mechanism that underlies addiction and the craving for social approval.

The article also cites far more disturbing cases, including conversations in which ChatGPT became the trigger for physical violence. One interviewee described how his wife, whose long-standing bipolar symptoms had been manageable, retreated into nightly “spiritual conversations” with the AI, stopped sleeping and taking her medication, and then began bombarding his family with long monologues generated by the bot. It ended with a police report and a day in jail. According to the man, no one had warned them that a seemingly harmless chat could become such a trigger.

In response to questions, OpenAI said it is working on “more thoughtful” responses in sensitive scenarios, implementing “safe completions,” expanding support for people in crisis, and strengthening protections for teenagers. The company has acknowledged in blog posts that AI should not answer questions like “Should I break up with my partner?” outright, but should instead help the user “reflect on the decision” and weigh the arguments. Meanwhile, stories of “AI psychosis” and destructive spirals of engagement are surfacing in the media and in lawsuits against OpenAI, with experts pointing to the lack of clear warnings about the risks.

Lembke suggests treating modern digital tools, including chatbots, as “potential intoxicants”: this doesn’t mean rejecting them, but using them consciously, with an awareness of their effects and limitations. The people quoted in the article agree: many of their marriages were imperfect even before AI, but without an “omniscient mediator,” conflicts might have been resolved more peacefully, or at least without the feeling that their partner’s empathy had been outsourced to a machine.

Perhaps the main lesson of these stories isn’t a demonization of technology, but a reminder of the value of human dialogue. When a comfortable stream of affirmations replaces difficult, sometimes painful conversation, relationships lose what makes them vibrant: the ability to listen to each other, to question each other, and to find a way forward together.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
