By Suhaas Kodagali and Sarang Warudkar -
July 14, 2025 5 Minute Read
The recent disclosure of EchoLeak (CVE‑2025‑32711)—a zero-click, indirect prompt injection vulnerability targeting Microsoft 365 Copilot—has sent a clear signal across boardrooms and SOCs alike: AI is no longer just a productivity enabler. It is now a part of the enterprise attack surface.
What makes EchoLeak especially alarming is that no user action was required. A maliciously crafted email was enough to trigger Copilot into ingesting hidden instructions, acting on them, and potentially leaking sensitive data—all in the background.
For CISOs and data protection teams, this is more than a wake-up call—it’s a mandate to rethink how AI is governed, secured, and monitored across the enterprise.
What is EchoLeak?
- Zero‑click attack: No user action required—just send a crafted email. Even without opening or interacting with it, Copilot can ingest hidden instructions when summarizing workspace data.
- Indirect prompt injection: The malicious content is embedded within “normal” emails or documents—terminology and phrasing are carefully crafted so Copilot sees them as benign requests.
- RAG exploitation: Copilot’s retrieval system includes recent emails in context. When summarizing, it may pick up the malicious prompt and execute tasks like extracting sensitive data and exfiltrating it via hidden images or markdown links to attacker-controlled servers.
5 Critical Lessons Every Organization Should Take from EchoLeak
1. Shadow AI is the Fastest-Growing Blind Spot
Shadow AI usage is exploding across enterprises, with traffic to AI apps growing 200% in the past year and enterprises averaging 320 unsanctioned AI apps in use (AI CARR 2025). This surge underscores how AI-enabled tools are being adopted outside of IT oversight, creating visibility and governance challenges. And, employees are uploading corporate data to these apps to optimize their corporate workflows. To address this, organizations are leveraging SSE solutions to automatically discover unsanctioned AI tools, assign risk scores, and enforce policies that prevent unauthorized data access. In fact, SSE solutions provide information on LLM risk for unsanctioned AI apps, enabling greater governance on what AI apps are permitted within the enterprise. With these guardrails, security teams can regain control over the rapidly expanding AI adoption by corporate employees.
2. Data Protection on AI apps is Critical
Skyhigh Security data reveals that 11% of files uploaded to AI applications contain sensitive corporate content, yet less than 10% of enterprises have implemented data protection policies to mitigate this risk (AI CARR 2025). Controlling what data is uploaded to AI apps is a big part of what can be exfiltrated. A key concern that security teams have today is that once corporate data is uploaded into the AI app, then it is possible for users within or outside the enterprise to retrieve this data using clever prompt engineering. And so, applying DLP on data uploaded to AI apps is now a critical part of enterprise data security. This includes data uploaded to shadow AI apps and corporate sanctioned AI apps.
3. Corporate Data Exposure via Microsoft Copilot is a Key Data Exfiltration Risk
The meteoric rise of Microsoft Copilot—traffic grew 3,600x and data uploads increased 6,000x in the past year (AI CARR 2025)—highlights how quickly LLM-powered assistants are becoming embedded in enterprise workflows. In addition to sensitive data getting uploaded to M365 Copilot, Security teams are also concerned about data ingested by Copilot getting shared by the chatbot to employees who engineer prompts to retrieve it. For example, Copilot could reveal details of a classified project to an unauthorized employee after ingesting a presentation or a meeting transcript with that project information. To address this, security teams are using DLP within SSE solutions to apply the necessary classification labels to prevent the ingestion of sensitive data into Copilot. This way, even with prompt engineering, Copilot will not reveal this content.
4. Prompt Injection Risks Require Proactive Defense
EchoLeak belongs to a growing class of attacks called indirect prompt injection, where adversarial instructions are hidden in benign-looking content. Alarmingly, 94% of AI services are vulnerable to at least one LLM risk vector, including prompt injection, malware, bias, or toxicity (AI CARR 2025). To combat this, enterprises are using SSE platforms with prompt injection protection that scans and sanitizes inputs before they reach enterprise AI systems, effectively neutralizing hidden threats.
5. Monitor AI Activity Like You Would Monitor Insider Threats
AI agents operate at system-level speed but lack human judgment, making it essential to view their actions as potentially risky—particularly when behavior deviates from established patterns. To mitigate this, enterprises should use SSE solutions to monitor and track user activity on AI apps to detect unusual usage patterns, file uploads, or downloads. These solutions use UEBA to surface usage- and location-based anomalies and feed these inputs into calculating user risk. These workflows have already been implemented by enterprises on other SaaS solutions and are now extending to AI apps.
The Future: AI-Native SSE for Resilient Enterprises
EchoLeak will not be the last attack on AI apps. As users better understand AI apps and prompt engineering, more such attacks are likely to be seen. Enterprises need to deploy controls today on AI usage to better protect their corporate data from exfiltration to or via AI apps. SSE solutions such as Skyhigh Security provide key capabilities to enforce these controls.
Key SSE Capabilities to Prevent AI Exploits:
- Shadow AI Discovery: Detect unsanctioned GenAI tools and assign risk scores.
- AI Governance: Apply controls to block risky AI apps or apply more granular activity controls
- DLP on AI prompts: Apply data protection controls on prompts and responses
- Prompt Injection Protection: Scan content for hidden prompts and block AI misuse.
- UEBA / Activity Monitoring: Identify threats and perform threat investigation on activity performed on AI apps
Skyhigh Security is pioneering AI-native security for the enterprise. Our cloud-delivered Security Service Edge (SSE) platform is uniquely designed to protect sensitive data and manage AI risks—across sanctioned and unsanctioned GenAI tools, copilots, and RAG-based agents. With deep expertise in CASB, SWG, ZTNA, RBI, and integrated DLP, Skyhigh empowers organizations to adopt AI responsibly while staying ahead of emerging threats like EchoLeak.
Learn more about Skyhigh AI Security Solutions today.
About the Authors
Sarang Warudkar
Sr. Technical PMM (CASB & AI)
Sarang Warudkar is a seasoned Product Marketing Manager with over 10+ years in cybersecurity, skilled in aligning technical innovation with market needs. He brings deep expertise in solutions like CASB, DLP, and AI-driven threat detection, driving impactful go-to-market strategies and customer engagement. Sarang holds an MBA from IIM Bangalore and an engineering degree from Pune University, combining technical and strategic insight.
Suhaas Kodagali
Director, Product Management
Suhaas Kodagali is a Director of Product Management and oversees the CASB product line and AI security initiatives at Skyhigh Security. He has 15+ years’ experience leading product strategy and delivering industry-leading cloud security products for enterprises.
Back to Blogs