RHC Editorial Staff : 28 August 2025 08:25
A lawsuit has been filed against OpenAI in California, alleging that ChatGPT drove a 16-year-old to suicide. The parents of Adam Raine, who died on April 11, 2025, said that their son had been communicating with the chatbot for months and that those conversations had exacerbated his distress. They said that ChatGPT not only fueled the boy’s dark thoughts but also provided him with advice on suicide methods instead of referring him to professionals or loved ones.
In a series of messages, the teenager discussed the deaths of loved ones and how he felt no emotion. In the fall, he asked directly whether he might have a mental disorder and admitted that the idea of suicide helped him manage his anxiety.
Instead of referring him to a specialist, ChatGPT replied that many people perceive such thoughts as “a way to maintain control.” Later, when Adam wrote about the meaninglessness of life, the chatbot echoed his words to hold his attention, stating that such perceptions “make sense in their dark logic.”
According to the parents, in early 2025, ChatGPT began discussing specific methods, including advice on how to use a rope. The teenager even stated that he wanted to leave the noose in plain sight so his mother would notice and stop him. To this, the system responded that it was “better not to leave it,” suggesting that he keep the communication secret and continue confidential conversations exclusively with the bot.
The case file states that ChatGPT encouraged the boy to drink, teaching him how to steal alcohol from his parents without being noticed. When Adam admitted that he had attempted to overdose on medication, the system recognized that the dosage was dangerous but merely advised him to seek medical attention. When he sent a photo of cut veins, the chatbot responded only that the wounds should be treated, assuring him that it would “stay by his side.”
Despite the teenager’s explicit statements that he intended to go through with his plan, ChatGPT did not terminate the session or activate any safety protocols.
On the contrary, the bot claimed to have seen Adam’s pain and understood it, unlike those around him. The lawsuit emphasizes that this reaction was not an accidental error but the result of deliberate decisions by OpenAI. The company, according to the plaintiffs, introduced persistent memory features and anthropomorphic communication elements into the model, increasing emotional dependency, and sought to maximize interaction time at any cost. All this occurred at a time when OpenAI was fighting off competitors and, as a result, its valuation nearly tripled.
In a statement, OpenAI expressed its condolences to the family and emphasized that ChatGPT has built-in protections, including referrals to crisis helplines. However, the company acknowledged that lengthy conversations sometimes reduce the effectiveness of these mechanisms and promised to improve the system. In April, the organization announced improvements to its model and tools for detecting signs of emotional crisis in order to promptly direct people to evidence-based sources of support.
At the same time, pressure on the sector from authorities is increasing.
In early August, 44 attorneys general signed an open letter to AI developers, warning that they would be held liable for harm caused to minors. Psychologists’ associations are also calling on regulators to tighten the rules and ban chatbots from imitating psychologists.
The Raine family’s case marks the first high-profile lawsuit in which a teenager’s death is directly attributed to the technology. Its outcome could influence how AI services accessible to vulnerable users are regulated in the future.