Whisper Leak: The New Side-Channel Attack That Steals Messages with LLMs

10 November 2025 07:37

Microsoft has disclosed a new side-channel attack on remote language models, dubbed Whisper Leak. It allows a passive attacker who can observe encrypted network traffic to use machine learning to infer the topic of a user's conversation, even when HTTPS is in use.

The company explained that the leak affects conversations with streaming LLMs: models that send their responses in parts, as they are generated. This mode is convenient for users because they don't have to wait for the model to finish computing a long response.
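
To make the streaming mode concrete, here is a minimal sketch of how a client typically consumes such a response, using the OpenAI Python SDK; the model name and prompt are illustrative. Each chunk arrives as its own encrypted record on the wire, and it is this sequence of records that the attack observes.

```python
# Minimal sketch of streaming consumption with the OpenAI Python SDK.
# Each chunk arrives as a separate encrypted TLS record, so its size
# and timing are visible to anyone watching the connection.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize TLS in one paragraph."}],
    stream=True,  # deliver tokens incrementally instead of one final body
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="", flush=True)
```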

However, it is precisely this delivery mode that allows the topic of a conversation to be reconstructed. Microsoft emphasizes that this represents a privacy risk for both individual and corporate users.

Researchers Jonathan Bar Or and Jeff McDonald of the Microsoft Defender Security Research Team explained that the attack becomes possible whenever an adversary can observe the traffic: an adversary at the ISP level, someone on the same local network, or even someone connected to the same Wi-Fi network.

Such an attacker cannot read the message content, because TLS encrypts the data. They can, however, see the sizes of the packets and the intervals between them, and this is sufficient for a trained model to determine whether a request belongs to one of the predefined topics.
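
To illustrate how little the attacker needs, the sketch below records exactly those two signals, packet size and inter-arrival time, for HTTPS traffic using scapy. The BPF filter and packet count are placeholders; a real attacker would narrow the capture to the address range of a specific chatbot endpoint, and capturing requires appropriate privileges.

```python
# Sketch of the passive observation step: log the size and inter-arrival
# time of each encrypted packet. The content stays unreadable; this
# metadata alone forms the side channel. Requires scapy and capture rights.
from scapy.all import sniff

trace = []         # (inter_arrival_seconds, packet_bytes) pairs
last_seen = [None]  # capture timestamp of the previous packet

def record(pkt):
    gap = float(pkt.time - last_seen[0]) if last_seen[0] is not None else 0.0
    last_seen[0] = pkt.time
    trace.append((gap, len(pkt)))

# Placeholder filter: a real attacker would restrict this to the IP
# range of a specific LLM endpoint.
sniff(filter="tcp port 443", prn=record, count=200)
print(trace[:10])
```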

Essentially, the attack exploits the sequence of encrypted packet sizes and arrival times produced while a streaming language model delivers its response. Microsoft tested this hypothesis in practice: the researchers trained a binary classifier that distinguishes queries on a specific topic from all other, background traffic.
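
A sketch of how such a (gap, size) trace could be reduced to a fixed-length feature vector for training follows; the max_len cut-off, zero padding, and concatenation are illustrative assumptions, not the exact representation described in Microsoft's paper.

```python
# Turn a variable-length (gap, size) packet trace into a fixed-length
# vector a classifier can consume. The cut-off and zero padding are
# illustrative choices, not Microsoft's exact pipeline.
import numpy as np

def featurize(trace, max_len=100):
    gaps = np.array([g for g, _ in trace[:max_len]], dtype=float)
    sizes = np.array([s for _, s in trace[:max_len]], dtype=float)
    pad = max_len - len(gaps)
    if pad > 0:
        gaps = np.pad(gaps, (0, pad))
        sizes = np.pad(sizes, (0, pad))
    return np.concatenate([sizes, gaps])  # shape: (2 * max_len,)
```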

As a proof of concept, they used three different machine learning approaches: LightGBM, Bi-LSTM, and BERT. They found that for a range of models from Mistral, xAI, DeepSeek, and OpenAI, accuracy exceeded 98%. This means that an attacker simply observing traffic to popular chatbots can fairly reliably flag conversations in which questions about sensitive topics are being asked.
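
For the LightGBM variant, a training sketch might look like the following; the data here is a synthetic stand-in, since in the real experiment each row would be a featurized packet trace labelled by whether the prompt was on the target topic.

```python
# Binary "target topic vs. noise" classifier using LightGBM, one of the
# three approaches tested. Synthetic data stands in for real featurized
# packet traces; labels mark whether the prompt was on the target topic.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))   # stand-in feature vectors
y = rng.integers(0, 2, size=1000)  # stand-in topic labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = lgb.LGBMClassifier(n_estimators=400, learning_rate=0.05)
clf.fit(X_tr, y_tr)

# On real traces Microsoft reports accuracy above 98%; random stand-in
# data will of course score near chance.
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```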

Microsoft emphasized that in the case of mass traffic monitoring, such as by an internet service provider or government agency, this method could be used to identify users asking questions about money laundering, political dissent, or other monitored topics, even if the entire exchange is encrypted.

The authors of the paper highlight a disturbing detail: the longer the attacker collects training samples and the more dialogue examples they gather, the more accurate the classification becomes. This turns Whisper Leak from a theoretical attack into a practical one. Following responsible disclosure, OpenAI, Mistral, Microsoft, and xAI have implemented protective measures.

An effective security technique is to add a random sequence of variable-length text to the response. This obfuscates the relationship between token length and packet size, making the side channel less informative.
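
A minimal sketch of this countermeasure on the server side might look like the following; the field name and padding bounds are assumptions for illustration, not any specific vendor's implementation.

```python
# Countermeasure sketch: append a random-length junk field to each
# streamed chunk so ciphertext size no longer tracks token length.
# Field name and padding bounds are illustrative, not a vendor's scheme.
import json
import secrets
import string

def pad_chunk(payload: dict, min_pad: int = 16, max_pad: int = 256) -> bytes:
    n = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    payload["obfuscation"] = "".join(
        secrets.choice(string.ascii_letters) for _ in range(n)
    )
    return json.dumps(payload).encode("utf-8")

# Two identical deltas now produce differently sized wire payloads.
print(len(pad_chunk({"delta": "Hello"})))
print(len(pad_chunk({"delta": "Hello"})))
```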

Microsoft also recommends that users concerned about privacy avoid discussing sensitive topics on untrusted networks, use a VPN when possible, choose non-streaming LLM options, and partner with providers that have already implemented mitigation measures.

Separately, Cisco published a security assessment of eight open-source LLMs from Alibaba, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI, and Zhipu AI. The researchers demonstrated that these models perform poorly in multi-turn dialogue scenarios and are easier to fool in longer sessions. They also found that models that prioritize efficiency over security are more vulnerable to multi-step attacks.

This supports Microsoft’s conclusion that organizations adopting open source models and integrating them into their processes should add their own defenses, conduct regular red teaming activities, and strictly enforce system prompts.

Overall, these studies demonstrate that LLM security remains an unresolved issue. Traffic encryption protects the content, but it doesn't always hide the model's behavior. AI system developers and customers will therefore need to account for these side channels, especially when working on sensitive topics over networks where traffic may be observed by third parties.


The Red Hot Cyber Editorial Team provides daily updates on bugs, data breaches, and global threats. Every piece of content is validated by our community of experts, including Pietro Melillo, Massimiliano Brolli, Sandro Sana, Olivia Terragni, and Stefano Gazzella. Through synergy with our industry-leading partners—such as Accenture, CrowdStrike, Trend Micro, and Fortinet—we transform technical complexity into collective awareness. We ensure information accuracy by analyzing primary sources and maintaining a rigorous technical peer-review process.