Security researchers are warning that applications using AI frameworks without proper safeguards can expose sensitive information in basic, yet critical, non-AI ways.
According to a recent Cyera analysis, the widely used AI orchestration tools LangChain and LangGraph are vulnerable to critical input validation flaws that could allow attackers to access sensitive enterprise data.
In a recent blog post, the cybersecurity company outlined how a newly discovered flaw in LangChain, along with two previously reported flaws of a similar nature, can be exploited to retrieve different categories of data, including local files, API keys, and stored application state. “The biggest threat to your enterprise AI data might not be as complex as you think,” Cyera researchers said in the post.
The issues often hide in the “invisible, foundational plumbing” that connects AI to business workflows, the researchers argued. All three flaws have now been fixed by the tools’ maintainers, they added, but the patches need to be applied immediately across integrations to avoid impact.
Path traversal becomes the latest in a series of input validation bugs
Cyera disclosed a new path traversal vulnerability and analyzed it alongside two previously reported flaws, showing how each maps to specific components in LangChain and LangGraph and enables access to a different class of data.
The path traversal issue, tracked as CVE-2026-34070, arises from how a LangChain feature resolves file paths when loading prompt templates or external resources. By supplying crafted input, an attacker can traverse directories and read arbitrary files from the host system, potentially exposing configuration files and credentials. The flaw received a severity rating of CVSS 7.5 out of 10.
One of the older flaws, an unsafe deserialization bug identified as CVE-2025-68664, stemmed from the handling of serialized objects within the LangChain framework. The issue let an application process untrusted serialized data, allowing an attacker to inject malicious payloads that were interpreted as trusted objects, enabling access to sensitive runtime data such as API keys and environment variables. The flaw received a critical severity rating of CVSS 9.3 out of 10 when it was disclosed in December 2025.
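The underlying principle, treating deserialized input as untrusted, can be illustrated with a short Python sketch. This is not LangChain’s actual patch, just the standard restricted-unpickler pattern: plain data structures deserialize normally, but any payload that tries to resolve a class or function reference (the usual route to code execution or data exfiltration) is rejected.

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    # Plain data (dicts, lists, strings, numbers) never triggers
    # find_class, so it loads fine. Any payload that references a
    # module-level object is refused outright.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"forbidden reference: {module}.{name}")


def safe_loads(data: bytes):
    """Deserialize untrusted bytes without resolving object references."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

In practice, many projects go further and avoid pickle entirely for untrusted input, preferring a schema-validated format such as JSON.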
The other older flaw, an SQL injection vulnerability in LangGraph’s checkpointing mechanism, was found to allow manipulation of backend queries. Exploiting it could grant access to stored application data, including conversation history and workflow state tied to AI agents. Tracked as CVE-2025-67644, the flaw was assigned a high-severity rating of CVSS 7.3 out of 10.
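The defense against this class of bug is well established: parameterized queries, where the driver binds values separately from the SQL text. A minimal Python sketch using sqlite3 and a hypothetical checkpoint table (not LangGraph’s actual schema):

```python
import sqlite3

# Hypothetical checkpoint store, standing in for a real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'state-a')")


def get_state(thread_id: str) -> list:
    # The "?" placeholder means the driver binds thread_id as a literal
    # value; input like "t1' OR '1'='1" matches nothing instead of
    # rewriting the query.
    cur = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    )
    return [row[0] for row in cur]
```

Building the query with string concatenation instead (`f"... WHERE thread_id = '{thread_id}'"`) is exactly what turns attacker-controlled input into executable SQL.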
Together, Cyera researchers pointed out, the three flaws (along with others like them) highlight how widely used AI frameworks can expose different layers of enterprise data, effectively turning LangChain and LangGraph into a new attack surface.
Back to the basics
The exploit technique described in the report relies on insufficient input validation and unsafe handling of data across key integration points in AI pipelines. In each case, attacker-controlled input, whether through prompts, serialized payloads, or query parameters, can influence how the framework interacts with the filesystem or database.
For the most recent path traversal bug, the risk is driven by a lack of strict path validation and sandboxing. Mitigations include enforcing allowlists for file access and restricting directory boundaries. In the case of deserialization, the issue lies in treating external data as trusted.
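The allowlist-and-boundary approach can be sketched in a few lines of Python. The directory name and helper here are hypothetical illustrations, not LangChain’s API: the key step is resolving the requested path (including any `..` segments and symlinks) before comparing it against the permitted root.

```python
from pathlib import Path

# Hypothetical root directory that templates are allowed to live under.
ALLOWED_BASE = Path("/app/prompts").resolve()


def resolve_template_path(user_path: str) -> Path:
    # Resolve ".." segments and symlinks first, then check the boundary;
    # checking the raw string would miss traversal sequences.
    candidate = (ALLOWED_BASE / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_BASE):
        raise ValueError(f"path escapes template root: {user_path}")
    return candidate


def load_template(user_path: str) -> str:
    return resolve_template_path(user_path).read_text()
```

A request for `greeting.txt` resolves inside the root and passes, while `../../etc/passwd` resolves outside it and is rejected before any file is opened.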
Cyera recommends avoiding unsafe deserialization methods and ensuring that only validated, expected data structures are processed. For SQL injection, the company recommended using parameterized queries and strengthening input sanitization. Across all three cases, the guidance aligned with established secure coding practices.