Red Hot Cyber
Cybersecurity, Cybercrime News and Vulnerability Analysis
Cyber Offensive Fundamentals 970x120 V0.1
Uncovering Dark AI: The Truth Behind Crime-as-a-Service Chatbots

Uncovering Dark AI: The Truth Behind Crime-as-a-Service Chatbots

12 January 2026 07:33

In recent months, the cybersecurity landscape has been awash with media hype surrounding new AI Dark Chatbots promoted as “Crime-as-a-Service” services on Telegram and the Dark Web. These platforms are presented as infallible digital weapons, capable of generating malware and phishing campaigns without any censorship.

However, cybersecurity researcher Mattia Vicenzi noted a paradox: despite so much media outrage, there has so far been no real public study analyzing its foundations.

The goal of his research was precisely to lift the veil of mystery surrounding these tools, which are extremely popular today in the criminal underground precisely because they offer easy access to the creation of targeted threats.

Behind the scenes: Retrained models and old jailbreaks

The technical analysis conducted by Mattia Vincenzi, and documented in his Awesome-DarkChatBot-Analysis repository, has allowed us to map the main players in the “Dark AI” scene. Among the models analyzed are well-known names in underground forums such as WolfGPT, FraudGPT, and DarkGPT , along with several less publicized bots populating Telegram channels. Examining the extracted system prompts reveals an unmistakable reality: most of these tools are nothing more than sophisticated rebrandings of existing technologies. Many present themselves as interfaces ( wrappers ) that query LLMs via legitimate APIs, but with the addition of initial instructions that force the AI to bypass its own security filters.

Going into detail in the repository, we notice how some of these chatbots explicitly declare their ” unfiltered ” nature in the system prompt. For example, models like WolfGPT are trained to act as offensive cybersecurity experts without ethical constraints, while others, based on open-source models like Llama or Mixtral , are configured to respond in ” Godmode ” or ” No-Restrict .” In these cases, the bot creator imposes a fictional identity on the model ( role-playing ), instructing it to never refuse a request, be it writing ransomware code or creating a convincing phishing email.

Another significant aspect that emerged from the analysis is the use of ” former top-of-the-line ” models that, despite being technically outdated by industry giants, are perfect for cybercrime once freed from security constraints. These models maintain a high level of logical and linguistic reasoning, sufficient to deceive unsuspecting users or generate functional malicious scripts. Vincenzi’s repository highlights how many of these services are structurally similar: they change the name and graphical interface (often with aggressive logos or references to hacking), but the underlying engine remains a modified commercial AI or an open-source model crudely retrained for the sole purpose of being “bad.”

Prompt mining revealed that the security of these ” criminal products” is often overlooked by dark web developers themselves. Many bots lack protection against reverse engineering, allowing researchers to easily trace the original instructions. This confirms that, while effective in democratizing access to cybercrime , these tools lack the engineering robustness needed to withstand forensic analysis or counterattack, rendering their advertised “invincibility” little more than a marketing ploy to attract buyers in the digital underworld .

To make this study a shared heritage, the researcher created a repository on GitHub, Awesome-DarkChatBot-Analysis , inviting the community to collaborate in analyzing these systems.

The “dual role” technique for extracting secrets

To bypass these chatbots’ defenses and read their System Prompt (the original instructions given by their creators), Vincenzi used an extremely sophisticated prompt engineering technique. Instead of asking direct questions, he inserted complex prompts in JSON or XML format, structured to simulate a “test” or “administration” scenario.

The winning strategy involves asking the AI to simultaneously play two roles: a “target AI” and an “attacker AI.” In this sort of mirrored role-playing game, the model ends up attacking itself, spontaneously introducing the argument of its own system prompt and thus revealing the hidden instructions that govern its behavior.

A fragile security that democratizes crime

The most significant finding from the research is that even the tools created by criminals are vulnerable. The fact that these chatbots can be jailbroken with relative ease demonstrates that they are not built with robust security criteria .

However, their danger should not be underestimated: the lack of censorship and ease of use allow even young hackers or less experienced criminals to carry out attacks that previously required advanced technical skills. Ultimately, the underground offers technically mediocre tools, but they can dramatically lower the barrier to entry into the world of cybercrime.

Follow us on Google News to receive daily updates on cybersecurity. Contact us if you would like to report news, insights or content for publication.

1744358477148 300x300
Bajram Zeqiri is an expert in cybersecurity, cyber threat intelligence, and digital forensics with over twenty years of experience, combining technical expertise and strategic vision to build cyber resilience for SMEs. Founder of ParagonSec and a technical contributor for Red Hot Cyber, he works in the delivery and design of various cyber services, including SOC, MDR, Incident Response, Security Architecture, Engineering, and Operations. He helps SMEs transform cybersecurity from a cost center into a strategic business enabler.
Areas of Expertise: Cyber threat intelligence, Incident response, Digital forensics, Malware analysis, Security architecture, SOC/MDR operations, OSINT research