
Redazione RHC: 9 December 2025 07:08
Researchers at Netskope Threat Labs have published a new analysis of the feasibility of autonomous malware whose logic is generated entirely by Large Language Models (LLMs), eliminating the need to hard-code detectable instructions.
LLMs have rapidly revolutionized the industry, becoming valuable tools for automation, coding assistance, and research. However, their widespread adoption raises a number of critical cybersecurity challenges.
It is now possible for malware to interact with models such as GPT-3.5-Turbo and GPT-4, which raises the prospect of an autonomous threat powered by LLMs.
Netskope Threat Labs set out to test the feasibility and reliability of fully autonomous malware generated by LLMs.
Their tests confirmed that this type of LLM-based software can generate code dynamically, demonstrating that attackers could avoid embedding detectable instructions altogether.
However, their reliability analysis revealed that relying on an LLM to generate evasion code is operationally inefficient.
The low success rate of these scripts shows that LLM-based malware is currently limited by its own unreliability, which poses a significant obstacle to fully automating the malware lifecycle.
Netskope Threat Labs intends to continue this line of research and move to the next phase of creating and validating the requirements needed to build robust, fully autonomous malware using only LLMs.