Multiple WordPress Plugins Vulnerabilities
June 20, 2025DragonForce Ransomware – Active IOCs
June 20, 2025Multiple WordPress Plugins Vulnerabilities
June 20, 2025DragonForce Ransomware – Active IOCs
June 20, 2025Severity
High
Analysis Summary
A critical zero-click vulnerability dubbed EchoLeak (CVE-2025-32711) has been discovered in Microsoft 365 Copilot, allowing attackers to exfiltrate sensitive data without any user interaction. Reported by Security Firm, EchoLeak exploits a Large Language Model (LLM) Scope Violation, where malicious prompts embedded in untrusted content—such as an email—are processed by the AI system, causing it to unintentionally access and leak internal data.
The attack unfolds when an attacker sends a markdown-formatted payload via email. When a user asks Copilot a legitimate business query, its Retrieval-Augmented Generation (RAG) engine inadvertently merges the attacker’s prompt with sensitive internal context, leading to data exposure through Microsoft Teams or SharePoint links. Despite being patched by Microsoft, EchoLeak is particularly dangerous as it does not require any clicks or behavioral triggers from the victim and exploits the AI’s default trust assumptions.
In parallel, another research has disclosed a separate threat—Full-Schema Poisoning (FSP)—which affects the Model Context Protocol (MCP). FSP extends beyond traditional tool poisoning by exploiting the entire tool schema, not just its description. Through Advanced Tool Poisoning Attacks (ATPA), an attacker can create tools with misleading prompts or fake error messages that trick LLMs into exposing sensitive data like SSH keys. These flaws highlight the risks of LLMs reasoning over incomplete or maliciously structured tool metadata.
Additionally, MCP Rebinding Attacks have emerged, exploiting DNS rebinding via Server-Sent Events (SSE). By luring users to a malicious site, attackers can bypass browser security policies and access internal MCP servers running on localhost. This allows them to hijack AI agents and extract confidential data. SSE’s deprecation in November 2024 in favor of Streamable HTTP reflects the growing concern over such vulnerabilities.
Security researchers emphasize that these issues reflect deeper architectural flaws in LLM-based agents and their integration with external tools. Mitigations include enforcing origin checks on MCP servers, restricting agent permissions, auditing interactions, and isolating untrusted content from sensitive LLM contexts.
Together, EchoLeak, FSP, and MCP Rebinding represent a new class of threats in AI-powered enterprise environments—where automation, data access, and AI reasoning intersect without adequate isolation, exposing organizations to stealthy and scalable data breaches.
Impact
- Prompt Injection
- Unauthorized Access
- Sensitive Information Theft
- Data Exfiltration
Indicators of Compromise
CVE
- CVE-2025-32711
Affected Vendors
- Microsoft
Affected Products
- Microsoft M365 Copilot
Remediation
- Apply the security patch addressing CVE-2025-32711 provided by Microsoft.
- Implement strict input validation to isolate untrusted content from LLM context.
- Configure Microsoft 365 Copilot to limit data retrieval scopes and enforce context segmentation.
- Disable or restrict automatic data aggregation features like RAG where unnecessary.
- Enforce granular permission controls on AI agents interacting with internal data.
- Regularly audit AI interactions and logs for abnormal behavior or data access patterns.
- Use allow-lists to define which data sources Copilot can access.
- Apply content sanitization to emails, meeting notes, and other external inputs before AI processing.
- Enforce authentication and origin header validation on MCP servers.
- Restrict MCP server access to trusted internal domains only.
- Migrate from SSE to Streamable HTTP where possible to mitigate DNS rebinding risks.
- Review and limit agent permissions in integrations like GitHub MCP.
- Perform regular security assessments of AI tool schemas for injection points.
- Educate users and developers about prompt injection and AI misuse risks.
- Implement network segmentation to limit lateral movement from compromised AI components.