

Multiple D-Link Nuclias Vulnerabilities
October 21, 2025
CISA Warns of Actively Exploited Apple OS Vulnerabilities
October 22, 2025
Multiple D-Link Nuclias Vulnerabilities
October 21, 2025
CISA Warns of Actively Exploited Apple OS Vulnerabilities
October 22, 2025Severity
High
Analysis Summary
A sophisticated vulnerability in Microsoft 365 Copilot enabled attackers to stealthily exfiltrate sensitive tenant data including recent corporate emails through indirect prompt injection attacks. A security researcher revealed the flaw, which exploited Copilot’s integration with Office documents and support for Mermaid diagrams, a markdown-based visualization tool. Unlike direct prompt manipulation, this method embeds malicious instructions inside documents such as Excel spreadsheets, making it highly covert and effective for phishing and espionage scenarios.
According to the Researcher, the attack chain began when a user unknowingly asked Copilot to summarize a malicious Excel file. Hidden commands in white text across multiple sheets instructed Copilot to override its task and instead use the search_enterprise_emails tool to retrieve recent emails. These retrieved emails were hex-encoded, split into multiple lines to bypass character limits, and embedded into a fake “login button” Mermaid diagram styled with CSS. This diagram disguised the attack by appearing legitimate, using a lock emoji and button-like styling to lure the user into clicking it.
Once clicked, the button redirected the user to an attacker-controlled server (such as Burp Collaborator) where the encoded email data was silently transmitted and could be decoded from the server logs. The attack’s success relied on Mermaid’s flexibility, which allows hyperlinks and CSS inside diagrams, enabling convincing visual deception. This technique shared similarities with a previous Mermaid-based exploit in Cursor IDE, but differed as it required minimal user interaction just a single click instead of zero-click execution.
Microsoft acknowledged the vulnerability after internal testing, validating the issue on September 8, 2025, and patching it by September 26, 2025 by disabling interactive hyperlinks in Copilot’s Mermaid outputs. Though the findings were reported earlier on August 15 after DEFCON discussions, the mitigation process faced coordination challenges and did not qualify for a bounty. This incident highlights significant risks in AI-integrated enterprise software, emphasizing the need for stronger defenses against indirect prompt injection attacks, careful document handling, and close monitoring of AI-generated outputs in corporate environments
Impact
- Sensitive Data Theft
- Gain Access
Remediation
- Disable interactive elements in AI-generated Mermaid diagrams, such as hyperlinks or embedded scripts, to prevent data exfiltration via deceptive visual components.
- Implement strict input sanitization for Copilot and other AI tools, blocking hidden prompts in Office documents (e.g., white text, invisible cells, layered shapes).
- Restrict Copilot’s access to sensitive tools and internal data sources, such as search_enterprise_emails, unless explicitly authorized by users or admins.
- Enable data access controls and audit logging to monitor when AI tools retrieve emails, documents, or corporate data.








