
Olivia Terragni: 9 November 2025 18:36
Imagine a futuristic city split in half: on one side, glittering towers of innovation; on the other, chaos and the shadows of lost control. This isn’t a dystopian vision, but the landscape of artificial intelligence (AI) today. On one side, techno-optimism points to a future of technological abundance backed by trillion-dollar investments; on the other, experts warn that controlling a superintelligent AI may be impossible. Between these poles sits a critique of the incoherence of anti-technology narratives. To understand what is happening, we need to look more closely, perhaps equipped with a superpower granted by human wisdom, such as forward and backward thinking, to determine whether AI will be our salvation or our limitation.
There is a tension between the builders of the future and the risk assessors, exemplified by Sam Altman’s confidence, Roman Yampolskiy’s warnings, and Craig Warmke’s critique of incoherent arguments against new technologies. The debate on AI is polarized between two figures: the builder, driven by boundless optimism, and the risk assessor, who sees existential threats. But what if both are wrong? What if the greatest threat is that we are not building enough?
IN SHORT
Sam Altman (X.com, November 6, 2025) has outlined an ambitious vision that seems etched into the future. OpenAI forecasts annual revenue of over $20 billion this year, with projections reaching hundreds of billions by 2030, supported by a colossal investment plan of $1.4 trillion over the next eight years. The goal? To build the infrastructure for an AI-powered economy, ranging from consumer devices to robotics to groundbreaking scientific discoveries such as cures for deadly diseases. Altman categorically rejects government guarantees for data centers, favoring a market that rewards success and rigorously punishes failure. Instead, he proposes that governments develop a strategic reserve of computing power, an innovative idea that could democratize access to AI and ensure public benefit.
His optimism is contagious: AI could transform research, with studies reporting a 40% increase in researcher productivity (TSE, 2025), or even defeat deadly diseases, a dream that fuels OpenAI’s mission. But reverse thinking, that superpower of human wisdom I love to explore, pushes us to look further: what if this optimism leads to overdependence on technology? An overbuilt infrastructure could collapse under the unsustainable weight of its costs, or become a vulnerable target for energy crises and cyberattacks.
Altman’s gamble is bold, but it requires a balance that the market alone, however efficient, may not be able to guarantee. If we were to paint Sam Altman as an archetype, it would be the builder of utopias, armed with the builder’s conviction: we build, and the market will judge. OpenAI should succeed or fail on its own merits, without anyone “picking winners.” His confidence is unshakeable: “The world will need much more computing power.” But is this total conviction enough? Reverse thinking invites us to ask: what if the market, left to its own devices, fails to recognize long-term risks? Or what if the very scale of the investment becomes an obstacle, slowing innovation instead of accelerating it? Altman’s answer rests entirely on vision, but history teaches us that even the most audacious builders need solid foundations.
Roman Yampolskiy offers a contrasting perspective, arguing that controlling a superintelligent AI, one billions of times smarter than us, may be intrinsically impossible. In his work, he emphasizes that even “safe” algorithms would fail in the face of a self-improving intelligence. What is at stake isn’t economic, but existential: humanity’s capacity for self-determination.
His logic is chilling. Roman Yampolskiy, whose archetype sits somewhere between a threshold guardian and a system architect, warns that meaningful control over a superintelligence may be impossible (https://limitstocontrol.org/statement.html). How can we control something a billion times smarter than us? Theoretical computer science (mpg.de) suggests that we cannot build a guaranteed-safe algorithm that contains a superintelligence: the containment problem is formally undecidable, a limit that defies every safety guarantee, even assuming that specially designed architectures are possible.
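To make the shape of that impossibility argument concrete, here is a minimal, purely illustrative sketch of the diagonalization at its core. The names `would_cause_harm` and `adversary` are hypothetical and invented for this sketch; nothing here comes from Yampolskiy’s work or the Max Planck study itself. The point is only to show why an assumed-perfect containment check contradicts itself.

```python
# A minimal, illustrative sketch of the diagonalization behind containment
# impossibility arguments. `would_cause_harm` and `adversary` are hypothetical
# names invented for this sketch; no such perfect oracle can be implemented.

def would_cause_harm(program_source: str) -> bool:
    """Hypothetical containment check: True iff running `program_source`
    would ever cause harm. Assumed perfect purely for the sake of argument."""
    raise NotImplementedError("no guaranteed-correct containment check exists")

# The adversarial program refers to its own source code (self-reference is
# what the reduction to the halting problem relies on).
ADVERSARY_SOURCE = "the source code of adversary() itself"

def adversary() -> str:
    # Ask the oracle about this very program, then do the opposite of its verdict.
    if would_cause_harm(ADVERSARY_SOURCE):
        return "halt harmlessly"        # oracle said "harmful" -> oracle was wrong
    return "proceed to cause harm"      # oracle said "safe" -> oracle was wrong

if __name__ == "__main__":
    try:
        adversary()
    except NotImplementedError as exc:
        # Either verdict the oracle could give about `adversary` would be wrong,
        # which is why no such oracle can exist in the first place.
        print(f"Containment oracle unavailable, as the theory predicts: {exc}")
```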
But what if the real problem isn’t controlling AI, but our inability to accept its autonomy? If a superintelligent AI could collaborate rather than dominate, the “control problem” would transform into an opportunity for partnership. However, the risk of a catastrophic failure, such as a coordinated cyberattack or a misalignment, remains real, prompting a thoughtful pause in development, as Yampolskiy suggests.
Craig Warmke, the inconsistency buster (X.com, November 8, 2025), dismantles the arguments against the technology by highlighting a contradiction: AI is described both as a “bubble” (harmless and irrelevant) and as a threat to society (and therefore extremely powerful). If it’s a bubble, it can’t ruin us; if it’s a threat, it’s not a bubble. This inconsistency reveals a bias against progress that is often emotional rather than logical. Warmke calls for optimism, suggesting that gratitude toward innovators improves both the soul and the wallet. The archetype here belongs less to Warmke than to the apocalyptic thinkers he criticizes: the Inconsistency of Doom.
On the one hand, a technology is said to be so dangerous that it poses an existential threat (in the case of AI, that it will destroy the world). On the other, the same technology is claimed to be worthless and doomed to failure (“it will go to zero”). The same logic can be tested on Bitcoin: on one side, “it consumes so much energy it could boil the oceans”; on the other, “it is destined to be worthless.” If Bitcoin were worthless, its network would be abandoned. But if it truly consumed such monstrous amounts of energy that it threatened the planet, its network would have to be enormously valuable and secure, and consequently the block reward miners earn in bitcoin would have to be worth an astronomical amount (millions of dollars) to justify that cost. That cannot be true if we simultaneously claim that Bitcoin is worthless. The two claims cannot both hold. And let’s add: criminals, by definition, are incentivized to find the most efficient and reliable tools for their activities, so why would they use something inefficient?
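A back-of-the-envelope sketch makes the incoherence tangible. The numbers below are illustrative assumptions rather than market data (3.125 BTC is the current block subsidy and roughly 144 blocks are mined per day); the function name is invented for this sketch. The idea is simply that the energy miners can rationally burn is capped by the value of the rewards that pay for it.

```python
# Back-of-the-envelope check of the "boils the oceans" vs. "worth nothing"
# contradiction. All figures are illustrative assumptions, not market data.

def max_rational_energy_spend(btc_price_usd: float,
                              block_reward_btc: float = 3.125,   # current subsidy
                              blocks_per_day: int = 144) -> float:
    """Rough daily ceiling on mining energy spend, in USD: competitive miners
    cannot sustainably spend more on electricity than their rewards are worth."""
    return btc_price_usd * block_reward_btc * blocks_per_day

for price in (0, 100, 100_000):  # "worthless" vs. high-value scenarios
    ceiling = max_rational_energy_spend(price)
    print(f"BTC at ${price:>7,}: at most ${ceiling:>15,.0f} per day on energy")
```

If the price collapses toward zero, so does the rational energy budget, and no oceans get boiled; if the energy budget really is planet-threatening, the rewards funding it cannot simultaneously be worthless.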
What if this inconsistency reflects our own confusion? Perhaps society oscillates between hope and fear because AI is both—an opportunity and an unknown. We should look beyond sensational headlines to hard data, such as the real impact of AI on productivity (TSE, 2025).
If we try to balance these visions, Altman’s optimism can drive innovation and fuel a future of discovery through massive investment, but it requires secure infrastructure and in-depth research into control, as Roman Yampolskiy insists. This brings us to a conceptual crossroads. What if we tried to imagine the likely failure of this very article? It might be too dense, or published at the wrong time, as the reverse-thinking exercise inspired by James Clear’s “Failure Premortem” suggests.
Working backward to correct it, I wonder: what if I had intentionally woven this vulnerability into the structure of the article, assuming that AI’s trajectory is crucial to humanity’s future? What if optimism itself were a trap? Building too large an infrastructure, like Altman’s $1.4 trillion plan, could create a bloated and vulnerable system, destined to collapse under its own weight. The real failure, however, would not be this article—born of my desire to explore—but that of the entire technology community.
We are caught in a false dilemma, oscillating between commercial interests and apocalyptic fears, neglecting the pursuit of ethical and robust governance that puts humanity at its center.
So I ask myself: what if the problem is that we’re not building enough to meet our needs? An underdeveloped AI could leave us unprepared for global challenges. Or, conversely, what if AI’s impact is negligible, and I’m overanalyzing a tool that automates only trivialities, like an advanced calculator? Or, what if the real limit isn’t technology, but our ethics—our ability to align AI with human values? This tension isn’t an obstacle, but an opportunity. I invite you to reflect: what kind of future will we build?
Olivia Terragni