Redazione RHC: 28 August 2025 13:55
Cybercriminals are rapidly mastering generative AI, and we’re no longer talking about “scary” ransom notes, but about full-fledged malware development. The Anthropic research team reported that attackers are increasingly relying on large language models throughout the entire lifecycle of creating and selling data encryption tools.
In parallel, ESET described an attack concept in which locally run models on the attacker's side take over key steps of the extortion chain. Taken together, the findings show how artificial intelligence removes technical barriers and accelerates the evolution of ransomware schemes.
According to Anthropic, extortionists use Claude not only to prepare texts and negotiation scenarios, but also to generate code, test and package programs, and launch services under the "crime as a service" model. The activity was traced to a UK-based operator, which Anthropic tracks under the identifier GTG-5004.
Since the beginning of the year, the operator has been offering attack kits on underground forums at prices ranging from $400 to $1,200, depending on the configuration level. The listings described several encryption options, tools to improve operational reliability, and techniques to evade detection. At the same time, according to Anthropic, the creator has no in-depth knowledge of cryptography, counter-analysis techniques, or Windows internals, and filled these gaps with Claude's suggestions and automatic code generation.
The company has blocked the affected accounts and added filters to its platform, including rules that recognize distinctive code patterns and signature-based checks on uploaded samples, to preempt attempts to turn the AI into a malware factory. This does not mean that AI is already mass-producing all modern crypto Trojans, but the trend is alarming: even inexperienced operators are gaining an edge that was previously available only to technically skilled groups.
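Anthropic does not detail how these filters work internally. Purely as an illustration of what signature-based checks on uploaded samples can look like in general, here is a minimal Python sketch that combines hash matching with simple byte-pattern rules; every hash, pattern, and function name is a hypothetical placeholder, not taken from Anthropic's systems or the report.

```python
import hashlib
import re

# Hypothetical signature set; a real platform would maintain a large,
# curated list of hashes of known-malicious samples.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder entry, not a real malware hash
}

# Illustrative byte patterns of the kind often associated with encryptor
# builds (shadow-copy deletion, ransom-note strings). Purely examples.
SUSPICIOUS_PATTERNS = [
    rb"vssadmin\s+delete\s+shadows",
    rb"YOUR FILES HAVE BEEN ENCRYPTED",
]


def check_uploaded_sample(data: bytes) -> list[str]:
    """Return the reasons (if any) why an uploaded sample should be flagged."""
    findings = []

    # Signature check: exact hash match against the known-bad set.
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        findings.append(f"hash match: {digest}")

    # Pattern check: simple rules for distinctive byte sequences.
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, data, re.IGNORECASE):
            findings.append(f"pattern match: {pattern.decode()}")

    return findings


# Example: flag a sample before it is processed further.
sample = b"... uploaded file contents ..."
for reason in check_uploaded_sample(sample):
    print("flagged:", reason)
```

Real-world implementations typically go much further (YARA-style rule engines, fuzzy hashing, behavioral analysis), but the basic idea is the same: match uploads against known signatures and distinctive code patterns before they are processed.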
The industry environment only adds fuel to the fire. In recent years, extortionists have become more aggressive and inventive, and metrics from early 2025 pointed to record incident volumes and multi-million-dollar profits for criminals. At industry conferences, speakers acknowledged that systemic progress in the fight against extortion is not yet visible. Against this backdrop, artificial intelligence promises not just a cosmetic adaptation of extortion but an expansion of the arsenal, from the penetration phase to the automated analysis of stolen data and the formulation of demands.
A separate chapter is ESET's demonstration called PromptLock (which we discussed yesterday): a prototype in which a locally deployed model generates Lua scripts on the fly to inventory target files, steal content, and initiate encryption. The authors emphasize that this is a concept, not a tool seen in real attacks, but it illustrates a shift: large models are no longer just a cloud service reached through prompts; they are becoming a standalone component of an attacker's infrastructure.
Local AI does, of course, require computing resources and storage, but techniques that optimize and shrink inference remove some of these limitations, and cybercriminals are already exploring the possibilities.
The Anthropic report also describes another cluster, identified as GTG-2002. In this case, Claude Code was used to automatically select targets, prepare access tools, develop and modify malware, and then exfiltrate and flag the stolen data. Ultimately, the AI itself helped generate ransom demands based on the value of what was found in the archives. Over the past month, the company estimates that at least seventeen organizations in the public sector, healthcare, emergency services, and religious institutions were affected, without disclosing their names. This architecture shows how the model becomes both "consultant" and operator, shrinking the time between reconnaissance and monetization.
Some analysts note that full "AI reliance" among ransomware operators has not yet become the norm, and that models are more commonly used as a first step in development, for social engineering and initial access. Even so, the emerging picture is already shifting the balance of power: affordable subscriptions, open-source developments, and local deployment tools are making the development and maintenance of ransomware operations more accessible than ever.
If this dynamic continues, defenders will need to consider not only new binaries, but also the decision-making chains of the machines that produce, test, and distribute these binaries.