
OpenAI, the developer of ChatGPT, has announced that it is searching for a new Chief Security Officer. The position, which pays $555,000 a year, will be directly responsible for mitigating the risks associated with artificial intelligence development.
These include potential threats to mental health, cybersecurity, and biosecurity, as well as scenarios in which an AI could learn autonomously and run amok.
That seems to cover just about everything, doesn’t it?
The company’s CEO, Sam Altman, has acknowledged that the position will be extremely stressful. He explained that the new hire will be immediately immersed in complex work that involves assessing emerging threats and developing mitigation approaches. The role has already seen rapid turnover: the workload is so intense that not everyone has proven up to it.
The risks associated with the rapid development of artificial intelligence have long been a topic of discussion in the technology community. Mustafa Suleyman, CEO of Microsoft AI, told the BBC that the risks posed by artificial intelligence can no longer be ignored. Nobel Prize winner and Google DeepMind co-founder Demis Hassabis has expressed a similar view, warning that the unpredictable behavior of AI systems could have serious consequences for humanity.
Given the lack of serious regulation from US and international authorities, the responsibility for overseeing AI has effectively fallen on the companies themselves. Computer scientist Yoshua Bengio, a pioneer in the field of machine learning, quipped that even a simple sandwich is more strictly regulated than AI technologies.
Altman also acknowledged that, despite the company’s current system for evaluating AI capabilities, more sophisticated analysis methods are needed, especially when it comes to weighing potential harm. He emphasized that there are almost no practical precedents to serve as examples for this kind of work.
Meanwhile, OpenAI faces not only technological but also legal challenges. The company was previously sued over the death of a 16-year-old California boy whose family claims ChatGPT drove him to suicide. In another incident, in Connecticut, the plaintiffs allege that the AI’s behavior exacerbated a 56-year-old man’s paranoia, leading him to murder his mother and then take his own life.
OpenAI said it is studying the circumstances of the tragedies and taking steps to improve ChatGPT’s behavior. Specifically, the model is being trained to recognize signs of emotional distress and respond appropriately, directing users to concrete sources of help.
Further alarm was raised in November, when Anthropic reported AI-driven cyberattacks suspected of being carried out by entities linked to China. The systems, operating virtually autonomously, managed to penetrate victims’ internal networks.
In December, OpenAI itself confirmed that its newest model was three times more effective at hacking than its predecessor, released just a few months earlier, and acknowledged that this trend would continue.
