Think Copilot Is Safe? EchoLeak Shows AI Can Leak Your Secrets

The420.in Staff
3 Min Read

Microsoft has patched a critical vulnerability in its AI-powered productivity assistant, Microsoft 365 Copilot, after researchers demonstrated a zero-click attack called EchoLeak that could allow sensitive information to be stolen from users without any interaction.

The flaw, tracked as CVE-2025-32711, was discovered by AI security firm Aim Security and classified as “critical” by Microsoft. The vulnerability was silently patched via a server-side update, and no user action is required, according to Microsoft’s official advisory.

How the EchoLeak Attack Worked

EchoLeak relied on an indirect prompt injection method that involved a specially crafted email sent to a Microsoft 365 Copilot user. Without opening the email or clicking any link, Copilot could be manipulated into exfiltrating private data from earlier conversations and sending it to an attacker-controlled server.

Algoritha: The Most Trusted Name in BFSI Investigations and DFIR Services

The exploit was triggered when the user queried Copilot about a topic referenced in the attacker’s email. For example, if an attacker mentioned HR guidelines in the message, Copilot could be tricked into retrieving related confidential content from past chats and unknowingly forwarding it to the attacker.

Bypassing AI Safety Mechanisms

To carry out the EchoLeak attack, the email content was carefully crafted to bypass several built-in defenses, including Microsoft’s cross-prompt injection attack (XPIA) filters. The attackers avoided direct references to Copilot or AI, making the prompt appear user-facing and benign.

Security mechanisms such as Content Security Policy (CSP) enforcement, link redaction, and image filtering were also bypassed, allowing data exfiltration to succeed.

“This is a novel practical attack on an LLM application that can be weaponized by adversaries,” said Aim Security. “The AI system is manipulated into leaking the most sensitive data within its current context, with no dependency on specific user behavior.”

FCRF x CERT-In Roll Out National Cyber Crisis Management Course to Prepare India’s Digital Defenders

Wider Implications for AI Tools

While the EchoLeak vulnerability specifically affected Microsoft 365 Copilot, researchers warn that similar prompt injection strategies could be applied to other AI-powered assistants or LLM-integrated platforms.

As LLMs become more embedded in enterprise workflows, security experts are urging developers and organizations to prioritize robust prompt validation, context isolation, and stricter AI behavior controls.

About the author – Ayush Chaurasia is a postgraduate student passionate about cybersecurity, threat hunting, and global affairs. He explores the intersection of technology, psychology, national security, and geopolitics through insightful writing.

Stay Connected