Move over, shadow IT; shadow AI is the new risk on the scene. The explosion of available AI tools, leadership’s enthusiasm for the new technology, the push for employees to do more with less, nascent governance and the sheer speed at which AI is evolving have created the perfect environment for shadow AI to flourish.

“Every CISO I talk to has discovered some form of shadow AI,” says Andrew Walls, vice president analyst at Gartner.

Vendors are turning AI capabilities on in their products, often without communicating that to their customers. Employees are using these embedded AI capabilities, whether CISOs are aware or not. And, of course, employees are turning to AI tools that have either not been vetted or have been explicitly banned by their employers.

CISOs might learn about these cases through staff reporting or through tools designed to detect AI use. But discovering shadow AI is just the first step. CISOs need to understand the context in which it is being used, the attendant risks and how to adapt governance going forward.
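Detection tooling varies widely, but one common starting point is scanning egress or proxy logs for traffic to known AI services. The sketch below is a minimal, hypothetical illustration of that idea; the domain watch list and log fields (`user`, `host`) are assumptions, not a reference to any specific product.

```python
# Hypothetical sketch: flag outbound requests to known AI-tool domains
# in web-proxy log records. The domain list and record format are
# illustrative assumptions only.

from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def find_shadow_ai(log_rows):
    """Count requests per (user, domain) for domains on the watch list."""
    hits = Counter()
    for row in log_rows:
        domain = row["host"].lower()
        if domain in AI_DOMAINS:
            hits[(row["user"], domain)] += 1
    return hits

# Usage with log records carrying 'user' and 'host' fields:
rows = [
    {"user": "alice", "host": "claude.ai"},
    {"user": "bob", "host": "intranet.example.com"},
    {"user": "alice", "host": "claude.ai"},
]
print(find_shadow_ai(rows))  # Counter({('alice', 'claude.ai'): 2})
```

In practice, such a scan only surfaces candidates; the context-gathering and risk assessment described below still have to follow.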

Assess the risk

Once CISOs become aware of a specific instance of shadow AI, the first step is to understand the associated risk. “The first instinct is to react. And that’s never a good thing in cybersecurity,” says Olivia Rose, IANS faculty member and founder of Rose CISO Group. “You need to think through your answer holistically and look at the level of risk to the organization before you respond and address the issue.”

How sensitive is the data? What is the AI tool provider doing with that data? How is it being stored? Is it being used to train an AI model? “It’s not the AI part of shadow AI that concerns them. It’s the data that’s being provided to an AI by the employee,” says Walls.

And the ultimate question for CISOs: Did this instance of shadow AI lead to a breach?

While discovering a breach is never the ideal outcome, CISOs aren’t entirely in uncharted waters. Their organizations should have defined incident response plans to follow, even if the breach in question stems from the use of shadow AI.

“I’ve managed a number of security incidents and major incidents and data breaches. They’re never the same, even before AI. So, how do you handle an AI breach? Depends on what was breached, how it was breached, what type of data was breached, legal, regulatory impact of that breach,” says Vandy Hamidi, CISO of BPM, a tax, advisory and accounting firm.

While data breaches are a prominent concern, they aren’t the only potential outcome of shadow AI use. “AI risk is not only digital risk, it can become physical very, very quickly,” says Pablo Ballarin, co-founder and vCISO at Balusian and ISACA member. Does the use of shadow AI open the door to operational disruption, wasted resources or safety issues? Answering these questions is also a part of the necessary risk assessment.

Understand why AI is being used

If CISOs want to manage shadow AI effectively, they need to understand why it keeps popping up. The immediate reaction may be to shut down the use of shadow AI, but there must be more to the response than that.

“Our focus is understanding why they’re using it, educating them on the risks of using an unapproved AI tool, identifying whether or not we already have tools in the organization that can meet those needs and then, obviously, redirecting them with a…serious reminder of if it’s not approved for use,” says Hamidi.

Employees are likely engaging with shadow AI because it is making them more productive. Could the business benefit from dragging that AI out of the shadows and into the light? Are employees using an unapproved tool because they don’t know that something similar, and already vetted by the business, is available?

CISOs at companies that take a more draconian stance on AI may find themselves struggling to manage just how many instances of shadow usage pop up.

“If a company as a whole is slow on the adoption curve, it effectively forces the use of shadow AI,” says Hamidi.

Shut it down or integrate it

Once CISOs have a grasp on the risk introduced by shadow AI and why it is being used, they can work with other enterprise leaders to determine whether to shut it down or pursue its approval for use.

If the tool needs to be shut down (and if it caused a breach, this will almost certainly be the case), CISOs will need to figure out how to do so and prevent the same use case from recurring.

“You have to look at mitigation strategies to prevent recurrence, whether that’s education of the employee, more coherent policy and acceptable use guidelines, or whether it’s a technological fix through some sort of blocking or filtering mechanism,” says Walls.
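The technological fix Walls mentions often amounts to an egress policy: allow vetted tools, block known unapproved ones, and steer employees toward the sanctioned alternative. The sketch below is a hedged illustration of that policy logic; the host names and the approved alternative are invented for the example, not real infrastructure.

```python
# Hypothetical sketch: a simple egress policy that blocks unapproved
# AI tools and points employees at a vetted alternative. All host
# names here are illustrative assumptions.

APPROVED = {"copilot.internal.example.com"}
BLOCKED = {"chat.openai.com", "claude.ai"}
ALTERNATIVE = "copilot.internal.example.com"

def filter_request(host: str) -> str:
    """Return a policy decision for an outbound request to `host`."""
    host = host.lower()
    if host in APPROVED:
        return "allow"
    if host in BLOCKED:
        # Block, and redirect the employee toward the approved tool.
        return f"block; use {ALTERNATIVE} instead"
    return "allow"  # non-AI traffic passes through untouched

print(filter_request("claude.ai"))  # block; use copilot.internal.example.com instead
```

A real deployment would enforce this at the proxy or DNS layer and pair it with the education and policy measures Walls describes, since blocklists alone lag behind new tools.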

If shadow AI represents a potentially valuable use case for the business, it is time for that tool to undergo a formal review process by more than just the security team.

“Our PMO process includes a formal information security review, a legal review, data privacy review. It includes a return on investment as well to see if this tool makes sense. And then it either gets approved or it doesn’t,” Hamidi shares.

The use of AI, shadow or otherwise, is a delicate balancing act with risk on one side and benefit on the other. Shadow AI may have a legitimate business use case that is boosting productivity, but if the risk outweighs that benefit, CISOs must protect their enterprises. And productivity may take a hit.

“Depending on the risk level of the tool they’re using, sometimes that’s a cost that we have to bear,” says Hamidi.

Review and update AI governance

Every instance of shadow AI uncovered is an opportunity, even if it does cause a breach. CISOs can advocate for more resources to support the ongoing task of shadow AI management. “Never let a good breach go to waste. You can leverage it to get budget, resources, support for the cybersecurity organization,” says Rose.

Whether shadow AI ends up blocked or integrated, its discovery calls for employee education. Do employees know what AI tools are already available to them? Do they know what counts as shadow AI? Do they know the risks of using it?

That information should be clearly communicated to employees. “If there are not clear guidelines as to what’s okay and what’s not okay, well then, employees are going to do what they think is best,” says Walls.

While clear communication is important, so is its delivery. CISOs do not want to create a culture in which employees are afraid to use AI. Instead, they can push enterprises to have clear pathways for employees to introduce potential tools for evaluation.

“What I don’t want to do is punish people for using AI for increasing their productivity. If they have legitimate business reasons and they want to use it, we have processes in place for them to get it approved,” says Hamidi.

Punishment is not the frontline response to managing shadow AI, but accountability needs to be a part of how the technology is used in an organization. Employees need to understand the consequences of using AI tools, both approved and unapproved. They need the training to use AI tools responsibly and a clear picture of what can happen if they continue to turn to unapproved tools. “Work with your HR team to define repercussions for repeat behavior,” says Rose.

The committee tasked with AI governance needs to build that culture of accountability; it cannot be solely the responsibility of the security team. “If that responsibility is not clearly assigned to every single person who touches an AI, then it’s very possible that when the blame game starts, there’s no obvious home for it. And the CISO might be in that line of fire,” says Walls.

For now, AI governance may require its own set of policies, but Walls anticipates that approach will change with time. “Build the AI policy, build the AI security policy, the generative AI policy, the agentic AI policy and guidelines and so forth. They’re necessary right now but in two to three years they will merge with all other technology governance and become one piece,” he says.

Even as AI governance evolves, CISOs will remain at the center of the conversation. Shadow AI is a security risk, and it isn’t going away anytime soon.

“Shadow AI, like shadow IT to a certain extent, cannot be fully avoided. It has to be managed,” says Hamidi.
