Security alert for AI servers: thousands are at risk

Redazione RHC : 16 September 2025 11:24

Artificial intelligence systems are increasingly under attack, according to the “Trend Micro State of AI Security Report, 1H 2025”. The company urges IT professionals and community leaders to follow best practices for implementing secure AI application stacks in order to prevent data theft, model poisoning, extortion attempts, and other attacks.

“Artificial intelligence may be the opportunity of the century for businesses worldwide, but organizations that don’t take adequate precautions could end up suffering more harm than good. As our latest research reveals, too many AI infrastructures are being built with unprotected or unpatched components, giving cybercriminals free rein,” says Salvatore Marcis, Country Manager of Trend Micro Italy.

Below are the key AI security challenges identified by Trend Micro research:

  1. Vulnerabilities/exploits in critical components: Organizations developing, deploying, and using AI applications rely on a range of specialized software components and frameworks, which, like any other software, may contain vulnerabilities. The study documents vulnerabilities and zero-day exploits in key components, including ChromaDB, Redis, NVIDIA Triton, and NVIDIA Container Toolkit.
  2. Accidental internet exposure: Vulnerabilities are often the result of rushed development and deployment timelines. The same applies to AI systems, which can be accidentally exposed to the internet, where they are probed by cybercriminals. Trend Micro found over 200 ChromaDB servers, 2,000 Redis servers, and more than 10,000 Ollama servers exposed to the internet without authentication (a quick self-check is sketched after this list).
  3. Vulnerabilities in open-source components: Many AI frameworks and platforms use open-source software libraries to provide common functionality. However, open-source components often contain vulnerabilities that end up in production systems, where they are difficult to detect. At the recent Pwn2Own in Berlin, which included the new AI category, researchers discovered an exploit for the Redis vector database that stemmed from an outdated Lua component.
  4. Container-level weaknesses: Much of the AI infrastructure runs in containers, which means it is exposed to the same vulnerabilities and security threats that affect cloud and container environments. As noted in the study, Pwn2Own researchers were able to discover an exploit for the NVIDIA Container Toolkit. To mitigate risks, organizations should sanitize input data and monitor runtime behavior.
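
To make the exposure problem described in point 2 concrete, here is a minimal Python sketch for checking whether your own Redis or Ollama instance answers unauthenticated requests. It assumes the default ports (6379 for Redis, 11434 for Ollama) and uses a placeholder target address that must be replaced with a host you are authorized to test; it is a self-check, not a scanning tool.

```python
import socket
import json
from urllib.request import urlopen
from urllib.error import URLError

TARGET = "203.0.113.10"  # placeholder documentation address; replace with a host you own


def redis_is_open(host: str, port: int = 6379, timeout: float = 3.0) -> bool:
    """Send a RESP PING; an unauthenticated Redis answers +PONG,
    while one protected by `requirepass` rejects the command."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"*1\r\n$4\r\nPING\r\n")
            return sock.recv(64).startswith(b"+PONG")
    except OSError:
        return False  # closed, filtered, or unreachable


def ollama_is_open(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Ollama's HTTP API has no built-in authentication; if /api/tags
    returns a model list, anyone who can reach the port can use the server."""
    try:
        with urlopen(f"http://{host}:{port}/api/tags", timeout=timeout) as resp:
            return "models" in json.load(resp)
    except (URLError, ValueError, OSError):
        return False


if __name__ == "__main__":
    print(f"Redis reachable without auth:  {redis_is_open(TARGET)}")
    print(f"Ollama reachable without auth: {ollama_is_open(TARGET)}")
```

If either check returns True from outside your network perimeter, the service should be placed behind authentication, a VPN, or a firewall rule before anything else.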

The developer community and businesses must strike the right balance between security and time to market. Concrete measures could include:

  • Improved patch management and vulnerability scanning
  • Maintaining an inventory of all software components, including third-party libraries and subsystems (a starting point is sketched after this list)
  • Adopting container management security best practices, including using minimal base images and runtime security tools
  • Configuration controls to ensure that AI infrastructure components, such as servers, are not exposed to the internet
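
As a concrete starting point for the inventory item above, the sketch below uses Python's standard importlib.metadata module to enumerate the distributions installed in the current environment and print them as a name/version list. It covers only Python packages, so treat it as an illustration rather than a full software bill of materials: system packages, container layers, and model artifacts would also need to be tracked.

```python
import json
from importlib.metadata import distributions


def python_component_inventory() -> list[dict[str, str]]:
    """List every Python distribution visible in this environment,
    as a crude starting point for a software component inventory."""
    inventory = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    ]
    return sorted(inventory, key=lambda item: item["name"].lower())


if __name__ == "__main__":
    components = python_component_inventory()
    print(json.dumps(components, indent=2))
    print(f"{len(components)} components found")
```

Feeding such a list into a vulnerability scanner or software composition analysis tool then ties the inventory back to the patch-management measure in the first bullet.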

Redazione RHC
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
