Red Hot Cyber, The cybersecurity news

Between AI and fear, Skynet teaches: “We’ll build bunkers before launching AGI.”


Redazione RHC : 12 October 2025 09:06

The quote that inspired this article, “We’ll definitely build a bunker before we launch AGI,” was attributed to a Silicon Valley leader, though it is unclear who exactly he meant by “we.”

The phrase perfectly captures the paradox of our times, and the irony is evident: the people advancing the world’s most sophisticated artificial intelligence are the same ones who are terribly worried about its repercussions.

While they continue their research, they are simultaneously devising escape strategies. The situation is similar to that of someone who builds a dam knowing it will eventually fail, but instead of reinforcing it, prefers to buy a boat.

The bunkers of the super-rich and the fear of AGI

At a summer meeting in 2023, Ilya Sutskever, co-founder of OpenAI and the brilliant mind behind ChatGPT, made an intriguing statement to his researchers: “We will definitely build a bunker before releasing AGI,” adding, “Of course, it will be optional whether or not to enter the bunker.”

His enigmatic statement prompted a researcher to ask what he meant. Sutskever’s reply left everyone stunned: he simply repeated, “Before we launch AGI, we will definitely build a bunker.”

According to LinkedIn founder Reid Hoffman, a significant portion, at least 50%, of the extremely wealthy individuals in Silicon Valley have already purchased what is called “doomsday insurance.”

Amazon founder Jeff Bezos purchased two mansions, worth a combined $147 million, on Indian Creek Island in Florida. Oracle billionaire Larry Ellison purchased a property on the Hawaiian island of Lanai. PayPal co-founder Peter Thiel chose New Zealand. Alibaba founder Jack Ma, filmmaker James Cameron, and financial guru William Foley have all built post-apocalyptic bunkers in remote locations.

Dame Wendy Hall, professor of computer science at the University of Southampton, disagrees with the gloomier predictions. She argues that, in the view of the scientific community, artificial intelligence technology is significantly advanced but still far from human intelligence; achieving true AGI would require substantial further progress. The doomsday rhetoric is therefore overdramatic, and the proposed timescales in particular are perplexing.

But let’s calmly analyze the issue.

Statements on Artificial General Intelligence

When will AGI, artificial general intelligence, comparable to humans in its breadth of capabilities, emerge? Optimists say it will be very soon. OpenAI CEO Sam Altman said in December 2024 that it would happen “sooner than most people think.” DeepMind co-founder Sir Demis Hassabis estimates the timeframe between five and ten years. Anthropic founder Dario Amodei prefers to talk about “powerful AI” and predicts it could happen as early as 2026.

Skeptics counter that “the milestones are constantly being moved”: according to Dame Wendy Hall, professor at the University of Southampton, it all depends on who you ask. The technology is impressive, but it is still far from human intelligence. Cognizant CTO Babak Hodjat agrees: several fundamental innovations are needed first. And don’t expect AGI to emerge “instantaneously”: it is not a matter of a single day but a long road, with dozens of companies pursuing different approaches.

Part of this excitement is fueled by the idea of the stage beyond AGI: superintelligence, which would surpass humans. In 1958, Hungarian-American mathematician John von Neumann was posthumously credited with first formulating the “singularity,” the point beyond which the pace and nature of computer development eludes human comprehension.

In their 2024 book Genesis, Eric Schmidt, Craig Mundie, and the late Henry Kissinger discuss a superpowerful technology that makes decisions and manages so effectively that humans gradually relinquish control. In their logic, the question is not “if” but “when.”

What AGI will bring: benefits and fears

Proponents paint a dazzling picture. Artificial general intelligence (AGI) will supposedly help find cures for deadly diseases, overcome the climate crisis, and unlock virtually limitless sources of clean energy. Elon Musk has spoken of a possible era of “universal high income,” in which AI will become so accessible that everyone will have their own “R2-D2 and C-3PO.”

In his vision, everyone will have healthcare, housing, better transportation, and sustainable abundance. But there’s a downside to this dream. Can we prevent such a system from being abused by terrorists or from automatically concluding that we ourselves are the planet’s biggest problem?

Tim Berners-Lee, the creator of the World Wide Web, warns that if a machine is more intelligent than a human, it must be contained and, if necessary, “shut down.” Governments are trying to build protective barriers. In the United States, a presidential executive order was issued in 2023 requiring certain companies to share security test results with authorities, although some provisions were later weakened as “hindering innovation.”

Two years ago, the UK launched the AI Safety Institute, a government organization that studies the risks of advanced models. Against this backdrop, the super-rich discuss “apocalypse insurance”, from homes at the edge of the world to private shelters, though even here the human factor gets in the way.

We are still far from this

Some even consider the entire discussion misguided. Cambridge professor Neil Lawrence considers the very concept of AGI as absurd as an “artificial general vehicle.” The right means of transportation always depends on the context: people fly to Kenya, drive to university, and walk to the cafeteria. There is not and never will be a one-size-fits-all vehicle, so why expect the opposite from AI?

Lawrence believes that talking about AGI distracts attention from the real changes already underway: for the first time, ordinary people can talk to a machine and be understood. This is changing everyday life, which means we need to work hard to ensure the technology works for the benefit of its users.

Current systems are trained on huge datasets and are excellent at recognizing patterns, from tumor markers in images to the likely next word in a sentence. But they do not understand those patterns, no matter how convincing their answers seem.
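To make the “likely next word” idea concrete, here is a deliberately toy Python sketch: a bigram counter that predicts the next word purely from patterns observed in text. This is an illustration only, vastly simpler than the production models discussed here, and the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny invented corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure pattern statistics, no understanding.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat": it follows "the" twice, more than any other word
```

The point of the toy is the same one the article makes: the predictor can be convincing on familiar input while having no notion of what a cat or a mat is.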

According to Babak Hodjat, there are “clever” ways to make large language models appear to have memory and learning capabilities, but these tricks are far from human-level. IV.AI CEO Vince Lynch warns that high-sounding claims about artificial intelligence are often simply a publicity stunt: if you build “the smartest thing in the world,” the money will come. In practice, the journey cannot be measured in a couple of years: it requires enormous computing power, immense human creativity, and endless trial and error.
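One common trick of the kind alluded to here can be sketched in a few lines of Python: the model itself is stateless, and the appearance of memory comes from replaying the entire transcript with every request. Everything below (the class names, the stand-in model, its behavior) is invented purely for illustration, not a real API.

```python
class StatelessModel:
    """Stand-in for an LLM call: it sees only the prompt it is handed."""
    def complete(self, prompt: str) -> str:
        # A real model would generate text; this toy just checks whether
        # an earlier "fact" survives in the replayed context.
        return "yes" if "Paris" in prompt else "I don't know"

class ChatSession:
    def __init__(self, model):
        self.model = model
        self.history = []  # the entire illusion of memory lives here

    def say(self, user_msg: str) -> str:
        self.history.append(f"User: {user_msg}")
        # Replay the whole conversation on every call.
        reply = self.model.complete("\n".join(self.history))
        self.history.append(f"Assistant: {reply}")
        return reply

session = ChatSession(StatelessModel())
session.say("I live in Paris.")
print(session.say("Do you remember where I live?"))  # "yes", only because the transcript was replayed
```

Nothing was learned or stored inside the model; delete the transcript and the “memory” vanishes, which is exactly why such tricks fall short of human-level learning.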

The human brain is even more powerful

Yet, in some respects, machines already surpass us in the breadth of their applications. Generative AI can go from medieval history to complex equations in a minute. Even developers do not always understand why a model responds a certain way, and some companies report that their systems keep improving. But biology remains ahead: the human brain contains approximately 86 billion neurons and some 600 trillion synapses, incomparably more than artificial architectures. The brain needs no pauses between interactions; it continually restructures its view of the world.

If you tell someone that life has been discovered on an exoplanet, they will immediately integrate it into their view of reality. A language model “knows” this only to the extent that you keep repeating it. LLMs lack metacognition, the ability to be aware of their own knowledge. Humans have it, and it is often described as consciousness. It is a fundamental element of intelligence that has not yet been replicated in the laboratory.

Behind the grandiose predictions and warnings, it seems, lies a simple truth: artificial intelligence is already transforming daily life and business processes, and talking about “real” artificial intelligence is convenient for fundraisers and agenda-setters.

Whether and when a singularity will occur remains an open question. But the quality of the tools we create now, their security, transparency, and usefulness to people, matters far more than debates about bunkers and dates.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
