Redazione RHC : 17 September 2025 19:06
OpenAI has announced new safety measures for ChatGPT following a series of tragic stories and lawsuits accusing the chatbot of involvement in teen suicides. The system will now attempt to determine the age of the person it is chatting with and, if necessary, request ID to confirm that the user is over 18. The company acknowledged that this limits adults' privacy, but deemed the tradeoff justified on safety grounds.
OpenAI CEO Sam Altman said he did not expect unanimous approval for these measures, but considered them necessary amid growing conflict over artificial intelligence regulation. This decision was influenced by a series of high-profile incidents.
In August, the parents of a teenager named Adam Raine filed a lawsuit alleging that ChatGPT had helped him write a suicide note, advised him on methods, and discouraged him from sharing his feelings with adults. That same month, the Wall Street Journal reported on a 56-year-old man who took his own life after communicating with a bot that fueled his paranoia, and the Washington Post covered a new lawsuit in which parents blamed the Character.AI service for the death of a 13-year-old girl.
OpenAI previously implemented parental controls in ChatGPT, but has now tightened the rules. For minors, the chatbot will operate under different principles: it will reject flirtation and exclude discussions of suicide and self-harm, even in artistic contexts. If the system detects dangerous thoughts in a teenager, it will attempt to contact their parents; if that proves impossible and the situation is life-threatening, it will contact emergency services.
OpenAI acknowledged that it faces a fundamental dilemma with large language models. In its early days, ChatGPT was strict and rejected many topics, but growing competition from "uncensored" and locally run models, along with pressure from critics of censorship, pushed the company to loosen its filters.
Now the company is changing course again: it wants to offer adult users maximum freedom without causing harm or restricting the rights of others. Other platforms are taking similar steps: this summer, YouTube announced plans to use machine learning algorithms to estimate viewers' ages and shield teens from certain categories of content.