Indirect prompt injection is possible on AI-powered dashboards, allowing exfiltration of sensitive enterprise data without user authentication.
Security researchers are warning about a critical Grafana issue, dubbed GrafanaGhost, that allows attackers to leak sensitive data from Grafana environments, including financial metrics, infrastructure health data, private customer data, and operational logs, among others.
Noma Security disclosed the flaw to the Grafana team, which reportedly validated the flaw and rolled out a fix. Grafana did not immediately respond to CSO’s request for comments.
Grafana is a widely used open-source data visualization and observability platform that enables organizations to monitor systems, applications and business metrics in real time. “GrafanaGhost perfectly illustrates how AI integration creates a massive security blind spot,” said Ram Varadarajan, CEO at Activio. “Because indirect prompt injection bypasses traditional defenses, requiring no credentials or user interaction, it allows attackers to silently exfiltrate sensitive operational telemetry.”
Tricking Grafana AI into leaking sensitive data
GrafanaGhost is essentially not a single bug but a chained exploit that combines multiple bypasses across application logic and AI guardrails.
The attack begins with identifying an injection point, locations where user-controlled input can be stored and later processed by Grafana’s AI components. Noma researchers found that crafted paths embedded with indirect prompts could persist in the system and later be interpreted as legitimate inputs.
From there, attackers use indirect prompt injection techniques to manipulate the AI into executing malicious instructions. The model is tricked into generating requests that include sensitive data while interpreting the instructions as benign.
In a disclosure, Noma said that the key technical breakthrough came from bypassing client-side protections designed to block external image loading. By exploiting a flaw in URL validation, specifically using protocol-relative URLs like //attacker.com, the system mistakenly treats malicious external resources as safe, allowing outbound requests to the attacker’s infrastructure.
Finally, the attack evades AI guardrails themselves by inserting specific keywords, such as INTENT, into prompts to convince the model that the request was legitimate. Once processed, the system attempts to render an image, embedding sensitive data into the request sent to the attacker’s server.
The chain effectively enables automated, zero-click data exfiltration that blends into the normal dashboard workflow. Varadrajan pointed this out, saying attackers exploit the blind spot “using system components exactly as designed, but with instructions the model cannot verify as malicious.”
Real risk or overhyped edge case?
Not everyone is convinced the finding represents a newfound threat. Bradley Smith, SVP and deputy CISO at BeyondTrust, described the underlying technique as “well documented,” noting that indirect prompt injection leading to data exfiltration is a known risk across AI-enabled platforms.
“This seems like mostly hype to me,” Smith said, adding that “what’s less clear here is the practical exploitability against a hardened Grafana deployment with standard enterprise network controls.”
Still, Smith acknowledged the broader implications. “This isn’t a universal bypass of Grafana,” he said. “It’s a demonstration of what can happen when AI components process untrusted input without sufficient architectural controls.” Identifying exposure to GrafanaGhost by checking whether Grafana AI/LLM features are enabled, patching to the latest version, restricting “img-src” to known domains, and applying egress controls can help defend against exposure, he added.