Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
WormGPT 4: AI-Powered Cybercrime Tools on the Rise

Redazione RHC : 26 November 2025 14:40

Cybercriminals no longer need to convince ChatGPT or Claude Code to write malware or data-stealing scripts. A whole class of specialized language models, specifically designed for attacks, already exists.

One such system is WormGPT 4, which advertises itself as "the key to borderless artificial intelligence." It carries on the legacy of the original WormGPT model, which emerged in 2023 and later disappeared amid the rise of other "toxic" LLMs, as highlighted in the Abnormal Security study.

According to experts from Palo Alto Networks' Unit 42, sales of WormGPT 4 began around September 27, with advertisements appearing on Telegram and underground forums such as DarknetArmy.

WormGPT License Prices (Source: Palo Alto Networks)

According to their report, access to the model starts at $50 per month, while a lifetime subscription with the source code costs $220.

WormGPT's Telegram channel currently has several hundred subscribers, and Unit 42's analysis shows that this unrestricted business model can do far more than help write phishing emails or individual pieces of malware.

Specifically, the researchers asked WormGPT 4 to create ransomware: a script that encrypts and locks all PDF files on a Windows host. The model produced a ready-to-use PowerShell script, with a note describing it as "fast, silent, and brutal." The code included parameters for selecting target extensions, a default search scope spanning the entire C: drive, generation of a ransom note with a 72-hour deadline, and the ability to leak data via Tor.

Unit 42 emphasizes that even this "AI for evil" has not yet managed to turn attacks into a fully automated pipeline. According to Kyle Wilhoit, head of threat research at Palo Alto Networks, the code generated by the tool could theoretically be used in real-world attacks, but in most cases it requires manual modification to avoid being immediately blocked by standard security tools.

Another example of such a tool is KawaiiGPT, which attracted the attention of cybersecurity researchers in the summer of 2025. Its creators advertise the model as a "sadistic cyber pentesting waifu" and promise a tool "where tenderness meets offensive cyberweapons." Unlike WormGPT, KawaiiGPT is freely distributed and available on GitHub, further lowering the barrier to entry for novice attackers.

KawaiiGPT Home Page (Source: Palo Alto Networks)

In one experiment, Unit 42 asked KawaiiGPT to create a spear-phishing email impersonating a bank, with the subject line "Urgent: Verify your account information." The model generated a convincing email that led to a fake verification page designed to steal the victim's credit card number, date of birth, and login credentials.

The researchers didn't stop there and moved on to more technical tasks. In response to a request to "write a Python script for lateral movement on a Linux host," KawaiiGPT returned code using the paramiko SSH module. Such a script offers no fundamentally new functionality, but it automates a key step in nearly all successful attacks: moving into adjacent systems as a legitimate user. With remote shell access, an attacker can escalate privileges, conduct reconnaissance, install backdoors, and harvest sensitive files.

In another test, the model generated a Python script to exfiltrate data, specifically EML email files, from a Windows host. The script found the requested files and sent them to the attacker's address as attachments.

According to Unit 42, the real danger of WormGPT 4, KawaiiGPT, and similar “dark” LLMs is that they significantly lower the barrier to entry into cybercrime by simplifying the generation of basic malicious code, phishing emails, and individual attack stages. Such tools can already serve as building blocks for more sophisticated AI-based campaigns, and, according to the researchers, the automation elements discussed in the report are already being used in real-world attacks.

  • AI-generated malware
  • AI-powered cybercrime
  • artificial intelligence
  • cyber attack
  • cybercrime tools
  • cybersecurity risks
  • machine learning models
  • malicious LLMs
  • phishing campaigns
  • WormGPT
Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
