LLM-Powered Malware: The Future of Autonomous Cyber Threats
Redazione RHC: 9 December 2025 07:08

Researchers at Netskope Threat Labs have just published a new analysis of the feasibility of autonomous malware built entirely around Large Language Models (LLMs), eliminating the need to hardcode detectable instructions.

LLMs have rapidly revolutionized the industry, becoming valuable tools for automation, coding assistance, and research. However, their widespread adoption raises a number of critical cybersecurity challenges.

Malware can now interact with GPT-3.5-Turbo and GPT-4 at runtime, which establishes the feasibility of an autonomous, LLM-powered threat. The researchers' key findings:

  • While GPT-4’s built-in defenses block direct requests for malicious code, they can be bypassed through role-based prompts, allowing the model to generate code for process injection and for terminating antivirus/EDR-related processes.
  • Although their defenses are easily bypassed, GPT-4 and GPT-3.5-Turbo fail to generate reliable code for detecting virtual environments, which limits the operational viability of the resulting malware.
  • By contrast, preliminary tests show that GPT-5 produces significantly more reliable code, shifting the primary challenge from code effectiveness to overcoming advanced security measures.

Netskope Threat Labs set out to test the feasibility and reliability of fully autonomous malware generated by LLMs.

Their tests confirmed that such LLM-based software can generate code dynamically, demonstrating that attackers could do away with detectable hardcoded instructions.
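To illustrate the architectural point (this is not code from the Netskope report), the sketch below shows why dynamically generated code leaves no fixed instructions for signature-based detection: the program ships only a prompt, and the logic itself is produced at runtime. The model call is a stub returning a canned, benign snippet so the flow can run offline; in a real deployment it would be a chat-completion API call.

```python
# Conceptual sketch: runtime code generation instead of hardcoded logic.
# query_model() is a STUB standing in for a chat-completion API call
# (e.g. to GPT-3.5-Turbo/GPT-4); it returns a canned benign snippet.

def query_model(prompt: str) -> str:
    """Stand-in for an LLM API call; returns source code as text."""
    canned = {
        "write a python function is_even(n) returning True for even n":
            "def is_even(n):\n    return n % 2 == 0",
    }
    return canned[prompt]

def run_generated(prompt: str, func_name: str):
    """Execute model-produced source in a fresh namespace and return
    the named function. No logic is stored statically in the program."""
    namespace = {}
    exec(query_model(prompt), namespace)  # code exists only at runtime
    return namespace[func_name]

is_even = run_generated(
    "write a python function is_even(n) returning True for even n",
    "is_even")
print(is_even(4))  # True
```

The point of the pattern is that a static scanner sees only the prompt string and a generic `exec` call, not the generated instructions, which is exactly what makes the reliability of the model's output the limiting factor the researchers describe.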

However, their reliability analysis revealed that relying on an LLM to generate evasion code is operationally inefficient.

The low success rate of these scripts shows that LLM-based malware is currently limited by its own unreliability, which poses a significant obstacle to fully automating the malware lifecycle.

Netskope Threat Labs intends to continue this line of research and move to the next phase of creating and validating the requirements needed to build robust, fully autonomous malware using only LLMs.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
