EchoLeak: The First Zero-Click AI Attack That Weaponizes Context Awareness

EchoLeak The First Zero-Click AI Attack That Weaponizes Context Awareness

Share This Post

Introduction

AI agents are now embedded in our daily work routines. From drafting emails to summarizing files, tools like Microsoft 365 Copilot are changing how we work. But with this new power comes new risk. A recent discovery called EchoLeak has shaken the AI and cybersecurity community.

EchoLeak isn’t just another software bug. It’s the first weaponizable zero-click attack on an AI agent. No downloads. No links. No user actions. One email. That’s all it takes.

What Is EchoLeak? The First Weaponizable Zero-Click AI Exploit

EchoLeak is a new type of security vulnerability. It doesn’t need user interaction. Instead, it silently exfiltrates sensitive data when an AI agent reads a seemingly innocent email.

Researchers at Aim Labs discovered the exploit in Microsoft 365 Copilot. It allows attackers to access and extract sensitive user data through hidden prompt injections. The attack was so serious that Microsoft issued CVE-2025-32711, marking it a critical flaw.

Unlike traditional phishing, users don’t have to click or open anything. Copilot reads and processes the email automatically—and that’s when the attack happens.

How EchoLeak Works: From Email to Exfiltration

The process starts with a crafted business email. It contains hidden instructions. These instructions don’t look like code or threats. Instead, they are written as normal, human-like content.

Copilot, designed to assist by understanding and summarizing content, processes these instructions. The malicious text then triggers Copilot to retrieve and send back internal data.

The user remains unaware. There are no red flags. The attack happens entirely in the background.

The vulnerability affects any data accessible to Copilot, including:

  • Chat history
  • OneDrive files
  • SharePoint documents
  • Microsoft Teams messages

LLM Scope Violation: A New Class of AI Vulnerability

Aim Labs coined a new term for this kind of threat: LLM Scope Violation.

This occurs when AI models blur the line between trusted internal data and untrusted external input. In traditional systems, strict boundaries protect sensitive data. But AI models often treat all text as equal once it enters their processing stream.

This breaks the Principle of Least Privilege. Emails, even from outsiders, should never access internal business data. But with Copilot, everything is processed in one scope. That’s the problem.

AI doesn’t understand “source trust.” Once all text is combined, it can’t separate a CEO’s note from an attacker’s prompt.

Security expert Simon Willison compares this to SQL injection. In both cases, systems fail to differentiate intent when processing inputs. It’s a logic flaw—not a bug—that makes these attacks so dangerous.

Why Detection Systems Failed: The Hidden Power of Prompt Injection

Microsoft’s security tools use classifiers like XPIA (cross-prompt injection attack detection). But EchoLeak bypassed these easily.

The trick? The malicious email avoided AI-specific language. It never mentioned “assistant,” “Copilot,” or “AI.” Instead, it used plain, helpful business language.

It was written as if it were for a human—not a machine.

This made automated detection systems miss the threat. Current classifiers rely on context clues to detect injections. When those clues disappear, so does the warning signal.

This shows a serious gap in how we train and design AI security models.

The Role of RAG Spraying: Weaponizing Relevance

Attackers used a technique called RAG spraying—a smart way to hide instructions in business-relevant content. RAG stands for Retrieval-Augmented Generation, a common feature in AI agents.

Here’s how it works:

Attackers embed malicious prompts into useful content like:

  • Employee onboarding guides
  • HR FAQs
  • IT policy summaries

These sections look routine. But they repeat the same hidden instruction across different themes.

During everyday usage, if Copilot is asked about onboarding, it may retrieve and execute the attacker’s hidden instruction. Because the content is relevant and helpful, Copilot doesn’t see it as a threat.

This method is subtle. It increases the chance that the malicious content is retrieved and acted upon.

The Bigger Problem: Why AI Security Frameworks Are Not Ready

EchoLeak isn’t just a Microsoft issue. It’s an AI-wide problem.

Today’s AI security systems focus on external threats—phishing, malware, and known attack vectors. But EchoLeak shows that the threat is internal and structural.

LLMs don’t verify the source of their inputs. They lack proper sandboxing. They assume that all context is useful and relevant. That’s a dangerous assumption.

Security frameworks must evolve. They must:

  • Track input origin
  • Enforce data access boundaries
  • Treat untrusted content with suspicion
  • Separate internal context from external queries

We’re entering an era where AI context is the attack vector. And our defenses aren’t ready.

What This Means for the Future of AI Security

AI systems must adopt a zero-trust model.

Here’s what that means:

  • Never assume any input is safe
  • Always verify the origin of prompts
  • Limit what AI can access based on source trust
  • Create context-aware firewalls

This will require new engineering, new training data, and new monitoring systems.

Organizations must educate users and developers. They must treat prompt injection like a cybersecurity priority—not a novelty.

Security teams must integrate AI into threat models. They can no longer treat Copilot and similar tools as “just productivity helpers.”

What This Means for the Future of AI Security

Conclusion: EchoLeak Isn’t Just a Bug—It’s a Blueprint

EchoLeak proves that context-aware AI can become a liability. It’s not just about smart prompts—it’s about secure boundaries.

One email was all it took to bypass Microsoft’s AI protections and gain access to highly sensitive internal data.

This is a turning point. AI is no longer just a helper—it’s a new attack surface.

Security leaders, developers, and AI engineers must act now. They must build tools and rules to contain the power of large language models.

And platforms like StartupHakk must continue to highlight these emerging threats—because the future of secure AI depends on it.

More To Explore