The Dark Side of AI: How Technology is Being Used to Control Humans
Red Hot Cyber

Sandro Sana: 10 December 2025, 08:12

Cory Doctorow says it with the clarity of someone who has studied the consequences of digital capitalism for years: AI, as it’s sold today, isn’t about enhancing humans. It’s about using them.
And that’s a huge difference.

Doctorow talks about centaurs and reverse-centaurs.

The centaur is the romantic image of technology that amplifies humans: the half-human, half-machine being who, through hybridization, becomes more competent, faster, and more effective.

The reverse-centaur, on the other hand, is the modern nightmare:
the machine is in command and the human is relegated to the role of corrective appendage, the organic element needed only to:

  • sign off,
  • double-check,
  • take the blame when the system fails.

And this, unfortunately, is exactly the model the market is moving towards.

The AI Bubble: Speculation Disguised as Innovation

Doctorow makes it clear: platform capitalism survives only if it manages to inflate new narrative bubbles.

  • The web.
  • Crypto.
  • The metaverse.
  • Now, AI.

There is no industry that is not being overwhelmed by this messianic rhetoric, in which every human limitation is treated as an “inefficiency” to be eliminated.
The paradox?
AI does not replace human work: it displaces it, degrades it, and leaves the human with more responsibility and less control.

In 2025, many companies are implementing AI not to improve process quality, but to cut costs by leaving humans the burden of checking, correcting, and justifying the machine’s hallucinations.

  • A reverse-centaur produces no value.
  • It produces fragility.
  • It produces risk.

And it produces a blind dependence on systems we don’t understand, don’t control, and often don’t even know how to verify.

The technical side that vendors don’t like

Today, AI is being integrated everywhere with the same enthusiasm with which an “Internet button” was slapped on toasters in the 1990s.
The problem is that this integration is not neutral, and you can see it immediately:

  • opaque, unverifiable models;
  • training pipelines that form a new, non-auditable supply chain;
  • sensitive data used as fuel;
  • automations that amplify human error instead of reducing it;
  • human supervision transformed into an act of legal rather than technical responsibility.

AI “set up like this” doesn’t reduce risk: it increases it.
And often in a non-linear, unintuitive way that is impossible to estimate precisely.

The truth is that most LLMs and generative automation systems are probabilistic tools, yet many are treating them as deterministic decision-support systems.
Confusing these two levels is an open invitation to disaster.
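The distinction can be made concrete with a toy sketch (purely illustrative; `toy_model` is a hypothetical stand-in for an LLM, not any real API): a probabilistic system queried once *looks* deterministic, but querying it repeatedly exposes the variance that a genuinely deterministic decision-support system would never have.

```python
import random

def toy_model(prompt: str, rng: random.Random) -> str:
    """Hypothetical stand-in for an LLM: identical input, sampled output."""
    # 3 out of 4 sampled completions say "approve", 1 says "reject".
    return rng.choice(["approve", "approve", "approve", "reject"])

rng = random.Random(42)

# Treated deterministically: one call, one verdict, variance hidden.
single_verdict = toy_model("loan application #42", rng)

# Treated probabilistically: many calls expose the distribution.
samples = [toy_model("loan application #42", rng) for _ in range(1000)]
approve_rate = samples.count("approve") / len(samples)

print(single_verdict)          # one draw: "approve" or "reject"
print(f"{approve_rate:.2f}")   # roughly 0.75, never a certainty
```

A deterministic pipeline would return the same verdict on every call; here the same prompt yields different answers, which is why a single generated output should be read as a sample from a distribution, not a decision.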

The socio-economic impact: when the machine decides and the human signs

The narrative of “AI-enhanced work” closely resembles that of industrial offshoring in the 2000s:
in its promises it was a win-win; in practice, it was a disguised wage squeeze.

The same thing happens today:
the true economic function of AI is not to replace human labor, but to deskill it.

Before, a radiologist analyzed 100 images; now he still analyzes 100… but with an algorithm in between that makes mistakes he has to correct.
And when in doubt, the legal responsibility remains his.

The same goes for lawyers, IT technicians, journalists, consultants, doctors, designers… and even SOC analysts who find themselves inundated with alerts generated by systems that don’t understand the operational context.

The human is not enhanced:
he is put on a leash by a machine that decides, makes mistakes, and leaves him to clean up.

This is the reverse-centaur in all its crudeness.

European legislation has understood this very well: the AI Act does not prohibit AI, it prohibits abuse.

The interesting thing is that Europe is trying to stop this trend.
Not by opposing the technology itself, but the toxic business models that surround it.

The AI Act introduces:

  • transparency obligations,
  • risk impact assessments,
  • supply-chain controls,
  • clear responsibility for errors and damages,
  • mandatory registers for high-risk AI systems,
  • traceability and technical auditability.
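The traceability requirement, for instance, implies logging enough context per automated decision to reconstruct it after the fact. A minimal sketch of what such an audit record might look like (a hypothetical schema for illustration, not one mandated by the AI Act):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Hypothetical audit record for one automated decision.

    The field set is illustrative: the point is that each output
    must remain reconstructable and attributable after the fact.
    """
    model_id: str        # which model/version produced the output
    input_hash: str      # fingerprint of the input (avoid storing raw PII)
    output: str          # what the system decided or generated
    human_reviewer: str  # who signed off (and therefore carries liability)
    timestamp: str       # when the decision was made, in UTC

record = DecisionRecord(
    model_id="triage-model-v3.1",          # placeholder identifier
    input_hash="sha256:placeholder",       # placeholder fingerprint
    output="escalate",
    human_reviewer="analyst-042",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Serialized to an append-only log, the record supports later audits.
log_line = json.dumps(asdict(record))
```

Note that the record names a human reviewer: if such logs exist only to pin liability on that person rather than to improve the system, they reproduce exactly the reverse-centaur pattern the article describes.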

And alongside the AI Act come other regulations that close the circle:

  • NIS2, which requires governance, processes, and real oversight of tools.
  • The Cyber Resilience Act, which holds technology manufacturers accountable.
  • The Data Act, which regulates access, portability, and use of data.

Europe sends a simple message:
the machine cannot take the human’s place in responsibility, nor use the human as a legal shield.

And it’s a message that big tech doesn’t like at all.

The problem isn’t AI. It’s the toxic expectations we build around it.

At RHC we say it often:
Technology is neither good nor bad. It is neutral.
It becomes dangerous when we use it without understanding what it really does.

AI can be a very powerful tool.
But it must remain a means, not an end.
An extension of human intelligence, not a political commissar of efficiency.

Because the day we stop being centaurs and start being reverse-centaurs, it will be too late to reverse course.

AI should enhance humans, not replace them. And above all, not use them.

The real challenge is not to build bigger, faster, or more “active” models.
The challenge is to build systems that respect human work, its dignity, its intelligence, its limits and its responsibilities .

The future belongs to companies that use AI to empower people, not crush them.
To those who will be able to distinguish between innovation and speculation.
To those who will understand that automation is not a dogma, but a risk that must be managed wisely.

If we don’t want to become reverse-centaurs, we have to go back to the starting point:
AI must amplify us.
Not replace us.
And much less use us as crutches to cover up its limitations.

Because a machine that needs humans only to sign off on its errors…
is not progress.
It’s a well-crafted deception.

  • AI Ethics
  • AI impact on jobs
  • AI regulation
  • artificial intelligence
  • digital control
  • future of work
  • human rights
  • surveillance state
  • tech responsibility
  • technology and society
Sandro Sana
Member of the Red Hot Cyber Dark Lab team and director of the Red Hot Cyber Podcast. He has worked in Information Technology since 1990 and has specialized in Cybersecurity since 2014 (CEH - CIH - CISSP - CSIRT Manager - CTI Expert). Speaker at SMAU 2017 and SMAU 2018, lecturer for SMAU Academy & ITS, and member of ISACA. He is also a member of the Scientific Committee of the national Competence Center Cyber 4.0, where he contributes to the strategic direction of research, training, and innovation activities in cybersecurity.
