CISOs have a burgeoning identity crisis on their hands.

According to Verizon’s 2025 Data Breach Investigation Report, cyber attackers have switched up their initial access vectors of choice, with stolen credentials a leading cause of data breaches, triggering 22% of all intrusions and 88% of basic web application attacks. These findings followed Varonis researchers’ conclusion that 57% of cyberattacks in 2024 started with compromised identities.

No matter the source of research, it is undeniable that many, if not most, significant cyber intrusions increasingly begin with an identity failure.

These failures will likely get worse and more frequent — and at machine speed — as use of agentic AI rises. This radical technology shift, in which AI agents increasingly act autonomously, impersonate humans, and make decisions faster than existing governance processes and practices can accommodate, will force cybersecurity leaders to overhaul how they monitor and manage identity systems in their organizations.

Or as Jim Alcove, CEO of Oleria and head of the SINET Identity Working Group, recently wrote: “Our current frameworks, protocols, and operational processes for identity and access were never intended to handle the speed, scale, and complexity of AI.”

Experts say that CISOs must quickly revamp their approach to managing identity by going beyond login or access authorization, instead placing identity management at the core of their enterprises.

The collapse of traditional identity models

Current identity and access models were built to grant access and authorization levels to human beings and not autonomous software, such as the proliferating number of AI agents. Experts say the human-centric identity models that hand out usernames, roles, and access levels will likely collapse when faced with thousands of autonomous agents making requests every second.

“We are inviting something that simulates human behavior into our environment, and no one is thinking about how to authenticate and authorize this new individual,” Ric Smith, president of products and technology at Okta, tells CSO. “The analogous thing would be you just take a random person off the street, walk them into your building, and let them loose, because technically that’s what people are doing as a result of developing LLMs or developing on LLMs.”

Worse, incorporating AI agents into existing identity models only adds a layer of complexity to identity environments that have already proved problematic, says Steve Stone, SVP of threat discovery and response at Sentinel One.

“The direction AI is taking is going to accelerate the already existing identity challenge,” Stone tells CSO. “So we’re going to take a problem that’s currently fairly difficult and widespread and we’re just going to really throw gas on that fire when it comes to AI.”

Compounding the problem are the typical interaction layers involved with AI, according to Stone. “There’s a real machine identity problem because you’re interacting with AI and those technologies often through APIs and other mechanisms,” he says. “That identity piece is not just how you log into the machine; it is also how your machines are communicating with the machines.”

Not only that, but few organizations are equipped to deal with how quickly identity challenges will emerge. “We talk about intrusions now, and it used to be months into weeks and then it was weeks into days, and now we’re really into hours,” Stone says. “When we talk about AI agents, we’re going to have to make decisions that impact companies in seconds. There is not going to be time to go through the incident response playbook.”

“Suddenly, we have these tools now that can aggregate tens and hundreds of thousands of components of information,” Pete Clay, CISO of aerospace company Aireon and former CISO of Deloitte, tells CSO. “Identity was really designed just to make sure that you could see the Word document that I sent you. It was never designed to work at the speed and with the velocity that we’re asking identity to work with in the AI era.”

Identity as a trust fabric

Most organizations currently rely on a welter of identity and access management systems for a variety of reasons. Some systems might be tied to a specific vendor’s technology; some might be legacy systems from mergers or acquisitions; some might be in place due to legal or regulatory requirements.

“What happens even before we get to the agentic AI era is that identity today is actually in silos,” Vijay Gajjala, VP of product at identity security platform Oleria, tells CSO. “You have people who are still using on-prem identity, Active Directory, whatever. You also have people using cloud identity like Entra, Google Identity, and Okta. There isn’t a single way to answer the question of who has access to what. This is itself a fundamental problem.”

That’s why the SINET Identity Working Group — which includes a host of internet infrastructure and security pioneers, including Heather Adkins, VP of security engineering at Google; Jason Lee, former CISO of Zoom and Splunk; Michael Montoya, CTO at F5; and many others — lays out a vision for what it calls an AI Trust Fabric, an “autonomous, self-healing system [that] depends entirely on trust.”

This fabric consists of robust identity and protocols, where every entity has a unique and proofed identity. The protocols that are part of this fabric “must cryptographically prove both the ownership of a token and the origin of the identity in a sound, verifiable manner.”

The group’s vision involves dynamic access and authorization that does away with static bearer tokens that often prove to be a liability. At the same time, the group suggests that authorizations should be finely grained and configurable via APIs for least-privileged agent access to tools, systems, and data.

Moreover, access should be configurable on the fly and should not be a simple yes or no, but instead should reflect a dynamic composition based on all relevant entities in the chain. Finally, the fabric should make delegations of access explicit when an AI agent acts on behalf of a human or another AI agent and be built on specific revocation and just-in-time access policies.

In essence, “We don’t want to give agents agency” when it comes to identity, Carey Frey, VP and CSO of TELUS and a SINET working group member, tells CSO.

“We think of a human having access to something maybe for days, months, or years,” he adds. “But these agents could literally come and go in seconds or hours, and then they might spawn sub-agents and be in a whole network of other agents all around the world, and they could go off and start doing things which humans may never be able to catch up with.”

Better identity management to address AI’s known risks

An identity trust fabric could go a long way to preventing AI’s known risks. According to the SINET group, better identity management could be a proactive risk mitigation against several emerging AI threats, including:

  • CI/CD pipeline vulnerabilities, which consist of malicious code injected in LLMs that could poison an AI from inception
  • Prompt injection, where attackers craft subtle, malicious inputs to manipulate an AI agent’s behavior
  • AI takeover/manipulation, which gives a threat actor control over an AI model’s output or decision-making
  • Data poisoning, where attackers deliberately inject corrupted or misleading data into an AI model’s training dataset
  • Model and training data disclosure, which is when attackers use carefully crafted prompts to trick AI agents into revealing sensitive information such as proprietary code, confidential business data, or personal information that the model was never meant to share
  • Model extraction or IP theft, where attackers continuously query APIs to reconstruct model behavior, stealing IP or disclosing proprietary, sensitive training data

Of all these threats, experts point to prompt injection as the most likely risk. “We do have the prompt injection problem,” Ely Kahn, chief product officer at SentinelOne, tells CSO. “It’s extremely easy for an adversary to find some exposed web asset or resource, put a malicious prompt in it, and then wait for an AI system to read that malicious prompt.”

“Then that AI system is tricked into starting to expose sensitive data,” he adds. “And I think we’re on the precipice of where we’re going to start seeing AI security-related attacks like prompt injections every week in the news headlines.”

How CISOs should prepare for the new identity era

The need for CISOs to implement improved identity systems or build something akin to an identity fabric will arrive quickly, although experts say it’s critical to have fundamental cybersecurity hygiene measures in place before even thinking about tackling a more comprehensive identity program.

“The analogy I use is if you don’t have good hygiene, then anything new that you do would be bad,” Oleria’s Gajjala says. “If you don’t have good body hygiene and all of a sudden you bought a thousand-dollar suit, that doesn’t change the fact that you have bad hygiene.”

Once the security basics are in place, preparing for the coming AI identity challenges should be a deliberate process that is not to be rushed. “You literally have to start from ground zero and think about how I am granting access to the data that I care about and how I measure that, and then how do I automate that in a way that I stay on top of this problem all the time,” Aireon’s Clay says.

As is always the case when introducing new security programs into the organization, CISOs should work with decision-makers to pave the way for changes. “What we want CISOs to do is to work with their enterprises to say, we really need to have these solutions and put in place those security standards and models for identity and authentication before adopting new solutions,” says Frey, of TELUS.

Like any other major security effort, “it always starts in the most boring and horrible place ever, which is governance,” Clay says. “You have to really start to understand what I am trying to protect and how I am trying to protect it before you start building tools and processes and everything else. Then that governance process is: A user can do this, this administrator can do that, this person can do this.”

Read More