Microsoft has disclosed details of a novel side-channel attack that can identify the topic of a conversation between a user and an AI chatbot, even when the exchange is encrypted with HTTPS.
Codenamed “Whisper Leak,” the attack allows an eavesdropper monitoring encrypted Transport Layer Security (TLS) traffic to infer the subject matter of an AI chat session by analyzing packet sizes and timing sequences — effectively “listening” without ever decrypting the content.
“This leakage could pose serious risks to the privacy of user and enterprise communications,” warned Microsoft Defender Security Research Team members Jonathan Bar Or and Geoff McDonald, who detailed the finding.
How Whisper Leak Works
Unlike typical data breaches, Whisper Leak doesn’t steal messages directly. Instead, it exploits how large language models (LLMs) stream their responses in real time, typically token by token. The sizes and timing of the encrypted packets produced during streaming can be analyzed with machine learning classifiers to guess what users are asking — for example, whether a chat involves sensitive political, financial, or security topics.
The researchers trained models like LightGBM, Bi-LSTM, and BERT to classify traffic patterns between users and LLMs from major providers including OpenAI, Microsoft, Mistral, DeepSeek, xAI, and Alibaba.
Strikingly, most of these classifiers achieved accuracy scores above 98%, meaning an attacker on a shared Wi-Fi network, at the ISP level, or behind a compromised router could reliably flag chat topics even though the traffic remained encrypted.
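The classification idea can be illustrated with a minimal, self-contained sketch (toy data and a nearest-centroid classifier standing in for the gradient-boosted and neural models the researchers actually used): extract size and timing features from each packet trace, average them per topic during training, then label new traffic by its closest centroid.

```python
import math

def features(trace):
    """Turn a packet trace [(size_bytes, inter_arrival_s), ...] into a feature vector."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    return (
        sum(sizes) / len(sizes),   # mean packet size
        max(sizes),                # largest packet
        sum(gaps) / len(gaps),     # mean inter-arrival time
        len(trace),                # number of packets
    )

def train_centroids(labeled_traces):
    """Average the feature vectors per topic label (a stand-in for a real ML model)."""
    sums, counts = {}, {}
    for label, trace in labeled_traces:
        f = features(trace)
        acc = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in acc) for lab, acc in sums.items()}

def classify(centroids, trace):
    """Pick the topic whose centroid is closest in feature space."""
    f = features(trace)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))
```

The point of the sketch is that the classifier never sees plaintext: packet sizes and inter-arrival gaps alone are enough signal to separate topics.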
Models from Google (Gemma) and Amazon showed higher resistance, likely due to token batching that obscures timing patterns — but were not fully immune.
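Why token batching helps can be shown with a short sketch (a simplified illustration, not any provider's actual serving code): instead of emitting each token as its own network write, tokens are buffered into fixed-size groups, so individual token lengths and arrival times are no longer visible on the wire.

```python
def batch_tokens(tokens, group_size=4):
    """Buffer streamed tokens into fixed-size groups so per-token
    packet sizes and timings cannot be observed on the wire."""
    batches, buffer = [], []
    for tok in tokens:
        buffer.append(tok)
        if len(buffer) == group_size:
            batches.append("".join(buffer))
            buffer = []
    if buffer:  # flush any trailing partial group
        batches.append("".join(buffer))
    return batches
```

An eavesdropper now sees one packet per group rather than one per token, which coarsens exactly the signal Whisper Leak's classifiers rely on.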
Real-World Implications
The findings raise new concerns about AI privacy and surveillance in both consumer and enterprise contexts. If weaponized, the technique could let state-level actors or cybercriminals monitor encrypted AI usage and track users discussing specific or politically sensitive themes.
“If a government agency or ISP were monitoring traffic to popular AI chatbots, they could reliably identify users asking questions about certain monitored subjects — even though all the traffic is encrypted,” Microsoft cautioned.
This revelation underscores how even encrypted systems can inadvertently leak metadata, offering adversaries valuable behavioral insights.
How AI Companies Are Responding
Following responsible disclosure, OpenAI, Microsoft, Mistral, and xAI have implemented mitigations to counter Whisper Leak’s threat. One effective countermeasure involves adding randomized text sequences of variable length to responses, which helps mask token lengths and scramble timing patterns.
Microsoft has also urged users to take additional steps:
Avoid discussing highly sensitive topics on public Wi-Fi.
Use VPNs to add extra encryption layers.
Opt for non-streaming models when possible.
Rely on providers that have adopted security mitigations.
A Growing Class of AI Side-Channel Attacks
Whisper Leak builds on a growing body of research revealing vulnerabilities in AI models. Previous studies documented how adversaries could infer plaintext token lengths or steal inputs by measuring timing differences during cached inference — attacks like InputSnatch.
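The cache-timing idea behind attacks like InputSnatch can be modeled with a toy sketch (simulated costs stand in for real latencies; the class and method names are invented for illustration): a prefix that another user has already submitted is served from cache and returns noticeably faster, so an attacker can probe candidate prompts and infer which ones were previously asked.

```python
class ToyInferenceServer:
    """Simulates prefix caching: a cached prompt is 'cheap', a miss is 'expensive'."""
    HIT_COST, MISS_COST = 1, 10

    def __init__(self):
        self.cache = set()

    def query(self, prompt):
        cost = self.HIT_COST if prompt in self.cache else self.MISS_COST
        self.cache.add(prompt)
        return cost  # stands in for the latency an attacker would measure

def probe(server, candidate):
    """Attacker infers whether someone already asked `candidate`
    by checking if the first observed 'latency' is a cache hit."""
    return server.query(candidate) == server.HIT_COST
```

The defense, as with Whisper Leak, is to decouple what the attacker can measure from what the victim did, for example by partitioning caches per user or adding timing noise.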
In tandem with Microsoft’s findings, a separate Cisco AI Defense study revealed that open-weight models such as Llama 3.3 and Qwen 3 remain highly vulnerable to multi-turn adversarial manipulation, while safety-oriented models like Google Gemma 3 show more balanced performance.
“These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions,” Cisco’s researchers concluded.
The Bigger Picture: AI Privacy at Risk
Since ChatGPT’s public debut in 2022, researchers have increasingly warned that AI systems leak more than just words — they reveal behavioral, contextual, and even emotional cues through metadata.
As enterprises rush to integrate LLMs into workflows, experts now stress that cybersecurity must evolve to address not just hacking or data theft, but traffic inference and privacy erosion.
“Encryption isn’t a magic shield anymore,” one security analyst said. “As Whisper Leak shows, even the silence between packets can speak volumes.”
