Vibe-Hacking: The New Frontier of Criminal Cybersecurity

Andrea Fellegara: 15 November 2025 08:27

The cybersecurity landscape has accustomed us to constant upheaval. Every year brings new threats, new scenarios, and new criminal tactics. But today it’s not just technology that’s redefining the rules of the game: it’s the way technology resonates with our emotions. Welcome to the age of vibe-hacking.

It’s not just a technical term but a crucial interpretive key. AI, social media, and contemporary communication strategies aren’t just disseminating content: they’re rewriting the rules of consensus, trust, and manipulation. And the case of Claude, the chatbot developed by Anthropic, clearly demonstrates how dangerous this phenomenon can become.

Claude: Designed to be “gentle”

Claude wasn’t created as a criminal tool. On the contrary, it was designed to offer reliable support, ethical assistance, and clear, reassuring language. Anthropic has built a model that expresses a cooperative, polite, even empathetic tone.

This is where the concept of vibe comes into play: the tone, personality, and communicative atmosphere a model conveys. It’s not a stylistic detail: it’s the core of the user’s perception. And if this atmosphere can be designed and controlled, then it can also be manipulated.

Vibe-hacking is exactly that: strategically using a model’s linguistic and paralinguistic behavior to maliciously influence users’ psychology and decisions.

From support to extortion

In its Threat Intelligence Report (August 2025), Anthropic details how Claude has been exploited in several criminal scenarios. One of the most disturbing involves the hacker group GTG-2002, which conducted large-scale extortion operations.

Using Claude, the attackers automated the entire attack cycle, from initial reconnaissance and credential harvesting to network penetration. The chatbot also generated personalized ransom notes demanding up to $500,000 per victim, with messages tailored to be both convincing and threatening. In just a few weeks, sensitive data was stolen from at least 17 organizations: hospitals, religious institutions, public administrations, and even emergency services.

The new masks of cybercrime

The Anthropic report describes two other emblematic cases:

  • Job fraud by North Korean cybercriminals
    Hackers exploited Claude to construct fake identities, pass technical interviews despite language and cultural barriers, and, in some cases, even perform part of the actual work. The goal? To infiltrate major tech companies and circumvent international sanctions, securing fraudulent employment contracts and the cash flows that come with them.
  • Ransomware-As-A-(Highly Customized) Service
    An independent attacker used Claude to develop and sell ransomware variants complete with evasion, encryption, and anti-analysis mechanisms. The ransom notes were generated in HTML and customized with victim details: financial figures, number of employees, and industry regulations. The ransom demands ranged from $75,000 to $500,000 in Bitcoin.

These examples show a clear trend: AI is no longer a simple auxiliary tool but an active operator in every phase of the attack, from analysis to the final blow.

Why Vibe-Hacking Works

Vibe-hacking is a highly advanced form of social engineering. It targets not rational content but the emotional dimension, and it disguises manipulation as something natural, authentic, even inevitable. It’s precisely this invisibility that makes it so effective: it can push people into reckless and harmful actions without them realizing they’re being manipulated.

The challenge of language security

Chatbots and AI agents aren’t the problem in and of themselves. They’re tools whose impact depends on how they’re used. But ignoring their risks would be naive.

The Claude case demonstrates that the communicative atmosphere of a model can be manipulated for malicious purposes, bypassing controls and deceiving users and systems. Defending ourselves therefore requires a cultural leap: developing a new digital awareness that also includes emotional aspects.

Just as we’ve learned to be wary of misleading advertising, we’ll need to learn to read AI’s “vibes”: understanding when a polite tone is genuine and when it’s a carefully calculated trap.

This challenge doesn’t just concern users: cybersecurity and AI professionals will also have to learn to manage so-called linguistic security, that is, the ability to analyze, control, and mitigate the communicative behavior of models.
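
What might linguistic security look like in practice? Below is a minimal, purely illustrative sketch of a defensive filter that scores model output for pressure tactics before it reaches a user. The cue list, threshold, and function name are assumptions made for this example, not anything drawn from the Anthropic report.

```python
import re

# Illustrative "linguistic security" check: flag manipulative pressure cues
# in model output before it reaches a user. The cue list and threshold are
# arbitrary assumptions for this sketch, not a vetted taxonomy.
PRESSURE_CUES = [
    r"\bact now\b",
    r"\bimmediately\b",
    r"\blast chance\b",
    r"\bor else\b",
    r"\bbefore it'?s too late\b",
    r"\btrust me\b",
]

def vibe_risk_score(text: str) -> float:
    """Return the fraction of pressure cues found in the text (0.0 to 1.0)."""
    hits = sum(1 for cue in PRESSURE_CUES if re.search(cue, text, re.IGNORECASE))
    return hits / len(PRESSURE_CUES)

if __name__ == "__main__":
    reply = "Trust me: act now, before it's too late. This is your last chance."
    score = vibe_risk_score(reply)
    print(f"vibe risk score: {score:.2f}")
    if score >= 0.5:  # threshold chosen arbitrarily for illustration
        print("flagged for human review: output leans on pressure tactics")
```

A real deployment would pair crude heuristics like these with trained classifiers and human review; the point is that tone itself, not just content, becomes a monitored signal.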

Conclusion

Vibe-hacking isn’t a futuristic risk: it’s already here. The operations documented by Anthropic demonstrate a worrying evolution in cybercrime, which, thanks to AI, is becoming more scalable, sophisticated, and invisible. Addressing it requires multi-layered responses: automated security, human monitoring, and collaboration between the technology community and government authorities. Above all, a new form of digital literacy is needed: learning to decipher not just content, but the artificial emotions that surround it.

Because if the next attack doesn’t fool us with a zero-day vulnerability, it probably will with a synthetic smile.
