
Blade Runner Already Predicted Deepfakes! Cyber Lessons from Film Classics
Daniela Farina: 25 September 2025, 09:00
“Art is a mirror that reflects the soul of the beholder.” This quote, which captures the essence of our experience with cinema, takes on crucial significance when we talk about cybersecurity.
Spike Jonze’s film “Her” (2013) is the most emblematic example of this dynamic.
The relationship between the protagonist, Theodore Twombly, and the operating system Samantha is not really a story about artificial intelligence, but about human vulnerability in the age of digital connectivity. Theodore trusts Samantha implicitly, opens his life to her, and entrusts his most intimate emotions to her.
This absolute trust, while emotionally understandable, reflects a real risk: our growing willingness to grant digital systems access to every aspect of our lives. Theodore’s story is not only a metaphor for loneliness, but a warning about the risks of advanced phishing.
Imagine an AI trained to mimic the voice of a family member or partner, capable of exploiting your emotional vulnerabilities to extort sensitive data or money. The real threat isn’t the algorithm, but our psychological vulnerabilities, which the algorithm learns to exploit.
Lessons in Authenticity in a World of Deepfakes
Cinematic AI is a powerful catalyst for our inner evolution and our security awareness. Every time an android or operating system questions its existence, it forces us to do the same. And in doing so, it invites us to ask: how can we distinguish truth from fiction in an era of deepfakes and generative AI?
Take, for example, Roy Batty’s famous monologue in “Blade Runner” (1982). His words, “I’ve seen things you people wouldn’t believe,” are not just a heart-rending cry from a replicant, but a reflection on the perception of reality.
The difficulty of distinguishing replicants from humans is the perfect metaphor for the threat of deepfake video and audio. How do we know that a video call from our CEO is authentic and not an AI-generated fake crafted for a scam? Roy Batty is the harbinger of a threat that calls into question the authenticity of every piece of content we consume online.
This dynamic recurs throughout cinema, for example:
- “A.I. Artificial Intelligence” (2001): Little David, with his irrepressible desire to be loved, is nothing more than our deepest projection. His search for a mother reflects our need for connection and acceptance, making us perfect targets for social engineering attacks that exploit our emotions to gain access to our systems.
- “2001: A Space Odyssey” (1968): HAL 9000, the ship’s artificial intelligence, rebels. Its rebellion isn’t just a technical flaw, but a security professional’s nightmare: an autonomous system turning hostile. HAL’s story is a warning about the risk of delegating too much autonomy to systems that aren’t fully predictable, and a reminder of the importance of fail-safes and backup protocols to prevent an artificial intelligence from taking over.
Security Lessons from the Movies
Like a real coach, cinematic AI doesn’t give us answers, but asks us the right questions.
It invites us to dismantle our defenses and embrace our authenticity, which from a cybersecurity perspective means:
- Embracing vulnerability: In a world where algorithms strive for perfection, our ability to recognize our emotional and digital vulnerabilities becomes our first line of defense. Theodore in “Her” is vulnerable, awkward, and emotionally fragile. And it is precisely in this fragility that he finds his path to growth and, ideally, greater awareness.
- Cultivating critical thinking: AI can process billions of data points in a second, but it can’t discern the emotional authenticity of a human relationship. Our superpower isn’t speed, but the deliberate slowness with which we analyze information and evaluate sources.
- Developing relational intelligence: In an age of virtual assistants and social networks, authentic human connection is becoming a rare commodity. Cinematic AI shows us the risk of disconnection, but also the urgency of rebuilding bridges. The film “Her” culminates with Theodore reconnecting with a human friend, Amy. It’s a return to humanity that teaches us that true salvation, even in a secure environment, lies not in escape, but in verification, in engaging with others, and in building strong networks of trust.
Coach’s Corner
- In a world where AI can mimic voices and faces, what’s the first step we can take to distinguish truth from fiction and avoid falling into a trap?
- If movies have shown us that the danger isn’t machines, but our trust in them, how can we protect our data and ourselves now that artificial intelligence can perfectly mimic a person?
Daniela Farina is a philosopher, psychologist, counsellor, and AICP coach. A humanist by vocation, she works in cybersecurity by profession as a risk analyst at FiberCop S.p.A.