Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
LLM-Powered Malware: The Future of Autonomous Cyber Threats

9 December 2025 07:08

Researchers at Netskope Threat Labs have just published a new analysis of the feasibility of autonomous malware built entirely around Large Language Models (LLMs), eliminating the need to hard-code detectable instructions.

LLMs have rapidly revolutionized the industry, becoming valuable tools for automation, coding assistance, and research. However, their widespread adoption raises a number of critical cybersecurity challenges.

Malware can now interact directly with GPT-3.5-Turbo and GPT-4, which makes an autonomous, LLM-powered threat a concrete possibility. The researchers' key findings:

  • While GPT-4’s built-in defenses block direct requests for malicious code, they can be bypassed with role-based prompts, allowing the model to generate code for process injection and for terminating antivirus/EDR-related processes.
  • Although the defenses of GPT-4 and GPT-3.5-Turbo are easily bypassed, both models fail to generate reliable code for detecting virtualized environments, which limits their operational viability.
  • By contrast, preliminary tests show that GPT-5 significantly improves code reliability, shifting the primary challenge from code effectiveness to overcoming advanced security measures.
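To make the "detecting virtualized environments" finding concrete, here is a minimal sketch of the kind of host-fingerprinting check the tests asked the models to produce. The marker list and function names are our own illustrative assumptions, not code from the Netskope research; real-world checks also inspect MAC address prefixes, firmware tables, and CPUID leaves.

```python
import platform

# Assumed, illustrative hypervisor marker strings (not from the research).
VM_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "hyper-v", "xen")

def looks_like_vm(id_strings):
    """Return True if any known hypervisor marker appears in the supplied
    hardware/OS identification strings (case-insensitive)."""
    blob = " ".join(id_strings).lower()
    return any(marker in blob for marker in VM_MARKERS)

def collect_host_strings():
    """Gather a few identification strings from the local system; a fuller
    check would query firmware and network-interface details as well."""
    return [platform.platform(), platform.processor() or ""]
```

Even a check this simple has many edge cases (locale, spoofed identifiers, bare-metal hypervisors), which is precisely why generating it reliably from a prompt is hard.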

Netskope Threat Labs set out to test the feasibility and reliability of fully autonomous malware generated by LLMs.

Their tests confirmed that this type of LLM-based software can generate code dynamically, demonstrating that attackers could eliminate hard-coded, detectable instructions.

However, the reliability analysis showed that relying on LLMs to generate evasion code is operationally inefficient.

The low success rate of the generated scripts demonstrates that LLM-based malware is currently limited by its own unreliability, a significant obstacle to fully automating the malware lifecycle.
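The "low success rate" claim implies a measurement harness of roughly this shape: request a script repeatedly, execute each candidate in a throwaway interpreter, and record how often it runs correctly. This is a hedged sketch of the methodology, not Netskope's actual code; `generate_snippet` is a stand-in for a real LLM call and here deliberately simulates occasional broken generations.

```python
import subprocess
import sys

def generate_snippet(attempt: int) -> str:
    """Stub for an LLM completion (assumption: in the real experiments this
    would call a model API). Simulates output that only sometimes works."""
    if attempt % 3 == 0:
        return "print(undefined_name)"  # broken generation: NameError at runtime
    return "print('ok')"

def success_rate(trials: int = 9) -> float:
    """Run each generated snippet in a fresh interpreter and return the
    fraction that exit cleanly with the expected output."""
    passed = 0
    for i in range(trials):
        code = generate_snippet(i)
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=10,
        )
        if proc.returncode == 0 and proc.stdout.strip() == "ok":
            passed += 1
    return passed / trials
```

With the stub above, one in three generations fails, giving a success rate of about 0.67; the research found real evasion-code generation to be far less dependable than that.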

Netskope Threat Labs intends to continue this line of research and move to the next phase of creating and validating the requirements needed to build robust, fully autonomous malware using only LLMs.


The editorial staff of Red Hot Cyber is composed of IT and cybersecurity professionals, supported by a network of qualified sources who also operate confidentially. The team works daily to analyze, verify, and publish news, insights, and reports on cybersecurity, technology, and digital threats, with a particular focus on the accuracy of information and the protection of sources. The information published is derived from direct research, field experience, and exclusive contributions from national and international operational contexts.