A study released Wednesday by API management platform vendor Gravitee indicates that upwards of half of the three million agents currently in use by organizations in the US and UK “are ungoverned and at the risk of going rogue.”
The results, based on a December 2025 survey of 750 IT executives and practitioners conducted by Opinion Matters, revealed that AI agents are being deployed faster than security teams can keep up. There are now, said Rory Blundell, CEO of Gravitee, more than three million AI agents operating within corporations, a workforce he described as larger than the entire global employee count of Walmart.
The three-million figure is an extrapolation of the survey results, using government estimates of 8,250 UK businesses and 77,000 US businesses with 250 or more employees. The mean number of AI agents deployed per business is 36.9, and when respondents were asked whether their organization had “experienced or suspected an AI agent-related security or data privacy incident in the past 12 months,” 88% said that it had.
The mean percentage of agents that are not actively monitored and secured, according to the findings, was 53%.
Asked what prompted the study, Blundell wrote in an email, “we’re all familiar with stories of AI agents going rogue: deleting codebases, leaking confidential information, inventing fake data. The working hypothesis that prompted this research was that, while agentic deployment is reaching an exciting stage, businesses have not yet caught up with agent governance. The research validates that.”
A global problem
Agents, he said, “can offer businesses a huge productivity gain, but we have to be realistic about the risks: without governance and oversight, they can easily start becoming liabilities, and a danger to consumers and businesses alike.”
In addition, said Blundell, despite respondents being only from the UK and US, “this is absolutely a global problem. Companies around the world are using AI agents, and across the board there is a gap between the level of deployment and the level of governance. We have a strong customer base in the EU, where we see the same problems.”
David Shipley, head of Canada-based security awareness training firm Beauceron Security, said, “the only thing that shocks me is that people think it’s only 53% of agents that aren’t monitored. It’s higher.”
He likened the results from the Gravitee study to a “lesson about the Titanic that everyone in technology keeps ignoring. The Titanic disaster didn’t happen because they didn’t know there would be icebergs on the trip. They knew it was peak iceberg season, they knew they were going too fast.”
Shipley said that the ship’s captain and his crew “thought they’d detect [an iceberg]; if they didn’t, and hit one, that their technology controls would protect them to help them recover.” They put their faith in the so-called watertight compartments that, it turned out, weren’t watertight at the top, but, most importantly, they trusted the new wireless communications technology that they could use to call for help if they got in trouble. The equivalent today: “Well, IT and security can fix it if we get in trouble with our agents.”
“Wrong then, super wrong now,” he said.
He said, “we know AI agents are inherently dangerous and unreliable. There’s literally math proofs out there that show it. So, we know there are icebergs. Let me repeat this for those at the back of the room: 100% of AI agents have the potential to go rogue. If a vendor assures you it isn’t possible and their core technology is an LLM, they’re lying. We know we’re going too fast in adoption for the risks we know exist.”
Shipley added, “now, the funny part: imagine if the Titanic still made the choices it did, knowing the watertight compartments didn’t work (aka monitoring is missing for 53% of AI agents), we know by the time IT and security roll on an AI agent risk, the damage is done (the ship’s sinking too fast and radio isn’t going to help because help will be too late). And we still made the choices we’re making.”
The real issue is invisible AI, not rogue AI
Manish Jain, principal research director at Info-Tech Research Group, said that as the “exponential” speed of AI development continues, his firm predicts, based on its experience with CIOs and CDOs, that by 2028 there will be more AI agents globally than human employees. “It would be one of the biggest challenges for business and IT executives to govern them without curtailing the innovation that these AI agents bring with them,” he said.
Even today, he noted, “we see that most enterprise AI agents are running without oversight. Many organizations don’t even know how many agents they have, where they’re running, or what they can touch. If you don’t know how many mules are in the barn, don’t act surprised when one kicks the door down.”
Jain pointed out that AI agents are no different. “Unaccounted agents often emerge through sanctioned, low-code tools and informal experimentation, bypassing traditional IT scrutiny until something breaks. You cannot govern what you can’t see. So, we need to understand that the real issue isn’t ‘rogue AI’, it’s invisible AI.”
Info-Tech, he added, “strongly believes that governing AI models or pre-approving agents is no longer enough, because invisible, rogue agents will do tandava (the dance of destruction) at runtime. This is because, when it comes to governing these AI agents, the number is so huge that approval gates will not be sustainable without halting the innovation. Continuous oversight should be the priority for AI governance after setting initial guardrails as part of the AI strategy.”
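The distinction Jain draws, continuous runtime oversight rather than one-time approval gates, can be illustrated with a minimal sketch. This is not Gravitee's or Info-Tech's implementation; all names here (the `RuntimeMonitor` class, the blocked actions) are hypothetical, chosen only to show the pattern of checking and logging every agent action as it happens:

```python
# Illustrative sketch only: continuous runtime oversight of agent actions,
# as opposed to pre-approving each agent once and never watching again.
# All class and action names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    agent_id: str
    action: str
    allowed: bool
    timestamp: datetime

@dataclass
class RuntimeMonitor:
    # Initial guardrails, set once as part of the AI strategy.
    blocked_actions: set = field(
        default_factory=lambda: {"delete_codebase", "export_customer_data"}
    )
    log: list = field(default_factory=list)

    def check(self, agent_id: str, action: str) -> bool:
        """Evaluate every action at runtime and keep an audit trail."""
        allowed = action not in self.blocked_actions
        self.log.append(
            AuditEvent(agent_id, action, allowed, datetime.now(timezone.utc))
        )
        return allowed

monitor = RuntimeMonitor()
assert monitor.check("agent-42", "summarize_report")     # routine work proceeds
assert not monitor.check("agent-42", "delete_codebase")  # destructive act blocked
assert len(monitor.log) == 2                             # everything is recorded
```

The point of the sketch is that the check scales with actions, not with agents: no human approval gate sits in the loop, but every action still leaves an auditable trace.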
Perspective, he said, also needs to change: “AI agents are no longer helpful bots. They often operate with delegated yet broad credentials, persistent access, and undefined accountability. This can become a costly mistake as overprivileged agents are the new insider threat. We need to define tiered access for AI agents. While we can’t avoid giving a few people keys to our house to speed up things, if you trust every stranger with your house keys, we wouldn’t be able to blame the locksmith when things go missing.”
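Jain's call for tiered access can likewise be sketched in a few lines. Again, this is an illustration of the general least-privilege pattern, not any vendor's product; the tier names, actions, and agent IDs are all invented for the example:

```python
# Illustrative sketch only: tiered access for AI agents, so that no agent
# holds broad, persistent credentials by default. All names are hypothetical.
from enum import IntEnum

class Tier(IntEnum):
    READ_ONLY = 1       # e.g., summarization agents
    INTERNAL_WRITE = 2  # e.g., ticket-triage agents
    PRIVILEGED = 3      # e.g., agents touching production systems

# Minimum tier each action requires.
REQUIRED_TIER = {
    "read_docs": Tier.READ_ONLY,
    "update_ticket": Tier.INTERNAL_WRITE,
    "modify_prod_config": Tier.PRIVILEGED,
}

# The registry doubles as an inventory: an agent not listed here is
# exactly the "invisible AI" the article warns about.
AGENT_TIERS = {
    "summarizer-01": Tier.READ_ONLY,
    "triage-07": Tier.INTERNAL_WRITE,
}

def is_authorized(agent_id: str, action: str) -> bool:
    tier = AGENT_TIERS.get(agent_id)
    if tier is None:
        return False  # unregistered ("invisible") agents are denied outright
    return tier >= REQUIRED_TIER[action]

assert is_authorized("triage-07", "update_ticket")           # within its tier
assert not is_authorized("summarizer-01", "modify_prod_config")  # overreach denied
assert not is_authorized("shadow-agent", "read_docs")        # unknown agent denied
```

Denying unregistered agents by default is what ties the two halves of Jain's argument together: the access tiers only work if the inventory of agents is complete.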