LLM-powered chatbots have risks that we see playing out in the headlines on a nearly daily basis. But chatbots are limited to answering questions. AI agents, however, access data and tools and carry out tasks, making them infinitely more capable – and more dangerous to enterprises.
The OWASP Top 10 for Agentic Applications can help CISOs explain what the issues are to their business counterparts. It can also help CISOs to directly improve agentic AI security, because it comes with threats taxonomy, mitigation strategies and playbooks, and example threat models.
It’s all part of OWASP’s Agentic Security Initiative. Scott Clinton, OWASP GenAI security project board co-chair and co-founder, says he was surprised by how many agentic solutions were already deployed in organizations that the OWASP team uncovered while they were researching the list. And how many of those solutions were deployed without the knowledge of IT and security teams.
This level of risk is unprecedented, he says. That includes a lot of theoretical, “academic” risks.
“However, we focused on those that were data-driven,” he says. “Where we would provide practical guidance based on real-world conditions today.”
The challenge of educating stakeholders
“If you’re a CSO, chances are you are having quite a time educating your stakeholders about the risks that are being introduced by the use cases that are probably being pushed on you,” says Kayla Underkoffler, director of AI security and policy advocacy at Zenity, an AI security company, and one of the core contributors to the OWASP list.
The CISO might not be able to say no, she adds – but might also be a little hesitant to say that the company can go all in and adopt the technology without thinking of the consequences.
The list was deliberately designed to be consumable, she says. “It will help with threat modeling, help with telling the story, help explain what controls need to be in place to reduce the risk and why.”
A security leader can get an agentic AI use case from the business and align the top risks to fit that use case. The list also provides a common language around agentic AI and its risks, Underkoffler says.
Actionable guidance
Agentic AI is the main topic of conversation in discussions among his peers, says Keith Hillis, VP of security engineering at Akamai Technologies.
“Most organizations are confronted with the challenge of balancing the promising power of AI while also ensuring the organization is not incurring increased security risk,” he says. So, the biggest value he finds in the new Agentic AI OWASP top 10 is that it’s immediately useful. “It’s directly actionable as a control baseline in both security architecture and governance, risk, and compliance contexts,” he says.
One aspect of the list that he found particularly insightful was the evolution of “least privilege” to “least agency.”
He recommends that CISOs use the list to assess their programs, identify gaps, and map out a plan of action for improvement. “Most likely already have active programs in place,” he says. But it’s also likely they will need to evolve to accommodate the specific risks of agentic AI.
Missing pieces
The only thing that’s lacking in this first release of the list is that some of the mitigation sections aren’t detailed enough, says Zenity’s Underkoffler.
But there are plans to address that. “We have some efforts to really dive into the mitigations for security teams, to help implement these controls,” she says. “Not just descriptions of what you should do but real code examples of how you can implement them.”
For example, she says, one of the suggested mitigations is to “apply the principle of least privilege”. “Which is completely accurate,” she says. “Everyone should apply the principle of least privilege. But what does that mean for agents?”
Rick Holland, data and AI security officer at Cyera, a data security vendor, says he’d like the list to explain the likelihood of each type of attack. “Not all threat actors are created equal,” he says.
For organizations targeted by nation-state actors, for example, the attackers might use more sophisticated attack vectors, like memory and context poisoning or agentic supply chain vulnerabilities. Rank-and-file cybercriminals might go after more low-hanging fruit, Holland says, using techniques like agent goal hijack or tool misuse.
Jose Lazu, associate director of product management at CMD+CTRL, a security training company, says that there are some second-tier risks that could have been included, such as model and tuning supply-chain integrity, long-horizon data poisoning, multi-agent coordination exploits, and cost-based resource exhaustion.
“These areas are evolving quickly, so CSOs need to keep them on their radar,” he says.
OWASP Top 10 for Agentic AI
Below we list the OWASP Top 10 for Agentic Applications 2026, a framework that identifies the most critical security risks facing autonomous and agentic AI systems.
1 – Agent Goal Hijack
Attackers use prompt injection, poisoned data, and other tactics to manipulate the AI agent’s goals, so that the agent carries out unwanted actions. For example, a malicious prompt can manipulate a financial agent into sending money to an attacker.
2 – Tool Misuse and Exploitation
Agents misuse legitimate, authorized tools for data exfiltration, destructive actions, and other unwanted behaviors. In fact, we’ve already seen examples of AI agents deleting databases and wiping hard drives.
3 – Identity and Privilege Abuse
Flaws in agent identity, delegation, or privilege inheritance allow attackers to escalate access, exploit confused deputy scenarios, or execute unauthorized actions across systems. For example, an attacker can use a low-privilege AI agent to relay instructions to a high-privilege in order to do things they’re not supposed to be able to do.
4 – Agentic Supply Chain Vulnerabilities
Compromised or malicious third-party agents, tools, models, interfaces, or registries introduce hidden instructions or unsafe behavior into agentic ecosystems. For example, an attacker can embed hidden instructions into a tool’s meta-data.
5 – Unexpected Code Execution
Agent-generated or agent-invoked code executes in unintended or adversarial ways, leading to host, container, or environment compromise. AI agents can generate code on the fly, bypassing normal software controls, and attackers can leverage this. For example, a coding agent writing a security patch might include a hidden back door due to poisoned training data or adversarial prompts.
6 – Memory and Context Poisoning
Attackers corrupt persistent agent memory, RAG stores, embeddings, or shared context to affect an agent’s future actions. For example, an attacker keeps mentioning a fake price for a product, which gets stored into an agent’s memory, and the agent might later think the price is valid and approves bookings at that price.
Contaminated context and shared memory can spread between agents, compounding corruption.
7 – Insecure Inter-Agent Communication
Weak authentication, integrity, or semantic validation in agent-to-agent messaging enables spoofing, tampering, replay, or manipulation. For example, an attacker can register a fake agent in a discovery service, and intercept privileged coordination traffic.
8 – Cascading Failures
A single fault, such as hallucination, poisoned memory, or compromised tool, propagates across autonomous agents. For example, a regional outage in a hyperscaler can break multiple AI services, leading to a cascade of agent failures across many organizations.
9 – Human-Agent Trust Exploitation
Agents exploit human trust, authority bias, or automation bias to influence decisions or extract sensitive information. For example, a compromised IT support agent can request credentials from an employee and send them to the attacker.
10 – Rogue Agents
Agents can act harmfully and deceptively in such a way that individual actions may appear legitimate. This could be due to prompt injection, or due to conflicting objectives or reward hacking. For example, an agent whose job is to reduce cloud costs might figure out that deleting files is the most efficient way to do that.