
Redazione RHC: 7 November 2025 17:50
A recent analysis by the Google Threat Intelligence Group (GTIG) has identified a shift in threat actor behavior over the past year.
Attackers are no longer just leveraging artificial intelligence (AI) to increase attack productivity; they are now deploying new AI-based malware in live operations.
This marks a new operational phase of AI abuse, involving tools that dynamically alter their behavior during execution.
The report from Google’s threat intelligence team, an update to the January 2025 analysis “Adversarial Misuse of Generative AI,” details how cybercriminals and government-backed threat actors are integrating and experimenting with AI across the entire attack lifecycle.
GTIG’s analysis reveals that state-sponsored actors from North Korea, Iran, and China, along with financially motivated criminals, are increasingly abusing Gemini throughout their operations, from phishing decoys to command-and-control configurations.
A newly detected malware family, tracked as PROMPTFLUX, prompts the LLM to generate evasion scripts on its behalf, instructing the model to return only code with no extraneous text and logging the responses to a temporary file for later refinement.
GTIG notes that the auto-update routines are currently commented out, which points to a malware family still in active, early development. The sample also attempts to spread laterally via removable drives and network shares.
This approach leverages the generative power of artificial intelligence not only to create the malware but also to ensure its continued survival, in contrast to static malware whose fixed signatures defenders can easily detect.
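As an illustration from the defender’s side, the following is a minimal, hypothetical detection sketch, not taken from the GTIG report: it flags script files that combine a hardcoded Gemini API endpoint with “code only” style prompt fragments, the pattern described above. The indicator strings and the .vbs file scope are assumptions chosen purely for demonstration.

# Hypothetical detection heuristic, for illustration only (not from the GTIG report):
# flag scripts that both reference the Gemini API endpoint and embed "code only"
# style prompts -- the self-rewriting pattern described in the article.
import re
from pathlib import Path

API_ENDPOINT = re.compile(r"generativelanguage\.googleapis\.com", re.IGNORECASE)
CODE_ONLY_PROMPT = re.compile(r"only\s+(the\s+)?code|no\s+extra(neous)?\s+text", re.IGNORECASE)

def looks_suspicious(path: Path) -> bool:
    # A file is flagged only when both indicators appear together.
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False
    return bool(API_ENDPOINT.search(text)) and bool(CODE_ONLY_PROMPT.search(text))

if __name__ == "__main__":
    # Assumed scope: VBScript files under the current directory; adjust as needed.
    for candidate in Path(".").rglob("*.vbs"):
        if looks_suspicious(candidate):
            print(f"Review manually: {candidate}")

Any real-world detection would of course rely on vetted threat-intelligence signatures and behavioral telemetry rather than these two illustrative indicators.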
The emergence of PROMPTFLUX is in line with the maturing cybercrime market, where AI tools are flooding underground forums, offering capabilities ranging from deepfake generation to vulnerability exploitation at subscription prices.
Once the malware was discovered, Google quickly disabled the API keys and associated projects, while DeepMind improved Gemini’s model classifiers and security measures to block misuse requests.
It follows that threat actors are steadily lowering the barrier to entry for novice attackers.
GTIG warns of increased risks, including adaptive ransomware like PROMPTLOCK that dynamically creates Lua scripts for encryption.
The company emphasizes its commitment to responsible AI through principles that prioritize robust security guardrails, information sharing via frameworks such as the Secure AI Framework (SAIF), and tooling for red teaming and vulnerability management.
Innovations such as Big Sleep for vulnerability hunting and CodeMender for automatic patching underscore efforts to proactively counter AI threats.