
Redazione RHC: 8 November 2025 17:40
The latest statements by Sam Altman, CEO of OpenAI, about the progress of artificial intelligence (AI) are not encouraging: he recently said he is concerned about "the impact of AI on jobs" and made it clear that we will not be safe even in a bunker "if AI gets out of control".
But that’s not all, because in a recent interview, OpenAI’s CEO bluntly stated that we should be concerned about the future that artificial intelligence will bring: “I think something bad will happen with artificial intelligence.”
As reported in an Investopedia article, a month ago Sam Altman appeared on a16z, the video podcast of venture capital firm Andreessen Horowitz, and took the opportunity to say that he expects trouble to come from artificial intelligence: "I hope that really bad things don't happen because of technology."
As you can see in the interview below, Altman was referring to Sora, a video-generation tool launched by OpenAI in late September that quickly became the most downloaded app on the US App Store. Its launch triggered a wave of deepfakes created with the model, flooding social media with videos featuring figures such as Martin Luther King Jr. and other public figures, including Altman himself.
Indeed, Altman has appeared in such videos carrying out various criminal activities, as seen in this Instagram Story. But that's not all: Altman also stated that tools like Sora need controls to prevent the technology from being used for malicious purposes: "Very soon the world will have to deal with incredible video models that can impersonate anyone or show anything they want."
Similarly, the creator of ChatGPT argued that, rather than perfecting this type of technology behind closed doors, society and artificial intelligence should "co-evolve", because "you can't just leave everything for the end."
According to Altman, what we should do is give people early exposure to this type of technology, so that communities can establish norms and barriers before the tools become even more powerful. Doing so, he argues, will leave us better prepared when AI video-generation models become even more advanced than today's.
Sam Altman's warning wasn't just about fake videos; it was also about the fact that many of us tend to "outsource" our decisions to algorithms that few people understand: "I still think there will be weird or scary moments."
Furthermore, Altman explained that the fact that artificial intelligence has not yet caused a catastrophic event "doesn't mean it never will", and that "billions of people talking to the same brain" could end up creating "strange things on a societal scale".
"I think as a society, we're going to develop barriers around this phenomenon," he said. Finally, although this is something that affects us all, Altman opposes strict regulation of the technology: in his view "most regulations probably have a lot of drawbacks", and the ideal would instead be to conduct "very thorough safety testing" on these new models, which he called "extremely superhuman".