AI is now everywhere within enterprises. Many CISOs I speak with feel stuck between wanting to move forward and not knowing where to begin. The fear of getting it wrong, both in how security uses AI and in how the organization secures AI, often stalls progress before it even begins. That said, unlike other big technology waves such as cloud, mobile and DevOps, we actually have a chance to put guardrails around AI before it becomes fully entrenched in every corner of the business. It’s a rare opportunity, one we shouldn’t waste.

From AI fatigue to some much-needed clarity

A big part of the confusion comes from the word “AI” itself. We use the same label for a chatbot drafting marketing copy and for an autonomous agent that generates and implements incident response playbooks. Technically, they’re both AI, but the risks are nowhere near the same. The easiest way to cut through the hype is to break AI into categories based on how independent the system is and how much damage it could do if something went wrong.

On one end, you have generative AI, which doesn’t act on its own. It responds to prompts. It creates content. It helps with research or writing. Most of the risk here comes from people using it in ways they shouldn’t — sharing sensitive data, pasting in proprietary code, leaking intellectual property and so on. The good news is that these problems are manageable. Clear acceptable-use policies, training on what not to put into GenAI tools and enforceable technical controls will handle a big chunk of the security considerations with generative AI.

The risk grows when companies let GenAI influence decisions. If the underlying data is wrong, poisoned or incomplete, then the recommendations built on top of that data will be wrong too. That’s where CISOs need to pay attention to data integrity, not just data protection.

Then there’s the other end of the spectrum: agentic AI. This is where the stakes are raised. Agentic systems don’t just answer questions — they take actions. They sometimes make choices. Some can trigger workflows or interact with internal systems with very little human involvement. The more independent the system, the bigger the potential impact. And unlike GenAI, you can’t rely on “better prompts” to fix the problem.

If an agentic AI drifts into “bad behavior,” the consequences can land extremely fast. That’s why CISOs need to get ahead of this category now. Once the business starts depending on autonomous systems, trying to bolt on safeguards afterward is almost impossible.

Why CISOs actually have an opening here

If you’ve been in security long enough, you’ve probably lived through at least one technology wave where the business moved ahead and security was asked to play catch-up. Cloud adoption is one recent example. And once that train left the station, there was no looking back and there was certainly no slowing down.

AI is different. Most companies – even the most forward-thinking ones – are still figuring out what they want from AI and how to best deploy it. Outside of tech, many executives are experimenting without any real strategy at all. This creates a window for CISOs to set expectations early.

This is the moment to define the “unbreakable rules,” shape which teams will review AI requests and put some structure around how decisions are made. Security leaders today have more influence than they did in earlier technology shifts, and AI governance has quickly become one of the most strategic responsibilities in the role.

Data integrity: Foundational to AI risk

When people talk about the CIA triad, “integrity” usually gets the least airtime. In most organizations, applications handle integrity quietly in the background. But AI changes how we think about it.

If the data feeding your AI systems is compromised, incomplete, incorrect or manipulated, then the decisions built on top of that data can affect financial processes, supply chains, customer interactions or even physical safety. The job of the CISO now includes making sure AI systems rely on trustworthy data, not just protected data. Those two aren’t the same thing anymore.
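
To make that distinction concrete, here is a minimal integrity check in Python. It assumes the pipeline records a SHA-256 hash for each source file in a simple JSON manifest at ingestion time; the file names, paths and manifest format are illustrative placeholders, not a prescription for any particular data platform.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose current hash no longer matches the hash
    recorded at ingestion time (a signal of possible tampering or drift)."""
    manifest = json.loads(manifest_path.read_text())  # {"orders.csv": "<sha256>", ...}
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]

# Example gate before an AI pipeline run (paths are illustrative):
# bad = verify_against_manifest(Path("data"), Path("data/manifest.json"))
# if bad:
#     raise RuntimeError(f"Data integrity check failed for: {bad}")
```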

A simple, tiered approach to AI governance

To make sense of all the different AI use cases, I recommend a tiered approach. It mirrors how many companies already handle third-party risk: the higher the risk, the more scrutiny and controls you apply.

Step 1: Categorize AI usage

A practical AI governance program begins by categorizing each use case according to two core metrics: the system’s level of autonomy and its potential business impact. Autonomy spans a spectrum, from reactive generative AI to assisted decision-making, to human-in-the-loop agentic systems and ultimately to fully independent AI agents.

Each AI use case must also be evaluated for its impact on the business, categorized simply as low, medium or high. Low-impact, low-autonomy systems may require only lightweight oversight, whereas high-autonomy, high-impact use cases demand formal governance, rigorous architectural review, continuous monitoring – and in some cases, explicit human oversight or the addition of a kill switch. This structured approach allows CISOs to quickly determine when stricter controls are needed and when concepts such as zero-trust principles should be applied inside AI systems themselves.
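
To show how simple the tiering can be, here is one possible sketch in Python. The autonomy levels, impact scale and scoring thresholds are illustrative assumptions to be tuned to your own risk appetite; the point is that an explicit, transparent rule lets anyone in the review process see why a use case landed in a given tier.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Rough autonomy scale, from reactive GenAI to fully independent agents."""
    REACTIVE = 1        # prompt in, content out
    ASSISTED = 2        # recommends, a human decides
    HUMAN_IN_LOOP = 3   # acts, but a human approves each step
    INDEPENDENT = 4     # acts with little or no human involvement

class Impact(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def governance_tier(autonomy: Autonomy, impact: Impact) -> str:
    """Map a use case to a governance tier using an illustrative score."""
    score = int(autonomy) * int(impact)
    if score <= 2:
        return "lightweight oversight"
    if score <= 6:
        return "formal review plus monitoring"
    return "full governance: architecture review, continuous monitoring, human oversight or kill switch"

# Example: a fully independent agent touching a high-impact process
# governance_tier(Autonomy.INDEPENDENT, Impact.HIGH)
# -> "full governance: architecture review, continuous monitoring, human oversight or kill switch"
```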

Step 2: Define table-stakes controls for all AI

Once risk tiering is in place, CISOs must ensure that foundational controls are consistently applied across all AI deployments. Regardless of the technology’s sophistication, every organization needs clear and enforceable acceptable-use policies, security awareness training that addresses AI-specific risks and technical controls that prevent data leakage and undesirable behavior. Basic monitoring for anomalous AI activity further ensures that even low-risk generative AI use cases operate within safe and predictable boundaries.
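
As one example of such a technical control, the sketch below screens outbound prompts for obviously sensitive patterns before they ever reach a GenAI tool. The patterns and the helper functions named in the comments are hypothetical; a production control would lean on existing DLP tooling and data classification labels rather than regexes alone.

```python
import re

# Illustrative patterns only; extend with your own classifiers and DLP rules.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

# Example gate in front of a GenAI call (helpers are hypothetical):
# findings = screen_prompt(user_prompt)
# if findings:
#     log_and_block(user_prompt, findings)
# else:
#     send_to_model(user_prompt)
```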

Step 3: Determine where AI review will occur

With these foundations established, organizations must determine where AI governance will actually occur. The right forum depends on organizational maturity and existing structures. Some companies may integrate AI reviews into an established architecture review board or a privacy or security committee; others may need a dedicated, cross-functional AI governance body. Regardless of the structure chosen, effective AI oversight requires input from security, privacy, data, legal, product and operations. Governance cannot be the responsibility of a single department — AI’s impact reaches across the entire enterprise, and so must its oversight.

Step 4: Establish unbreakable rules and critical controls

Finally, before any AI use case is approved, the organization must articulate its non-negotiable rules and critical controls. These are the boundaries that AI systems must never cross, such as autonomously deleting data or exposing sensitive information. Some systems may require explicit human oversight, and any agentic AI that can bypass human-in-the-loop mechanisms must include a reliable kill switch.
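
One way to make those boundaries executable is a single policy gate that every proposed agent action must pass through before it runs. The sketch below is a minimal illustration, not tied to any particular agent framework; the action names, the file-based kill switch and the helpers in the comments are all assumptions.

```python
from pathlib import Path

NEVER_ALLOWED = {"delete_data", "exfiltrate_data", "disable_logging"}   # unbreakable rules
REQUIRES_HUMAN_APPROVAL = {"send_payment", "modify_firewall_rule"}      # human-in-the-loop actions

# Operations staff can create this file to halt the agent immediately.
KILL_SWITCH = Path("/etc/ai/agent.disabled")

def authorize(action: str, approved_by_human: bool = False) -> bool:
    """Return True only if the proposed action passes every guardrail."""
    if KILL_SWITCH.exists():
        return False    # the kill switch always wins
    if action in NEVER_ALLOWED:
        return False    # non-negotiable boundary
    if action in REQUIRES_HUMAN_APPROVAL and not approved_by_human:
        return False    # human approval has not been given
    return True

# Example (executor and escalation path are hypothetical):
# proposed = "send_payment"
# if authorize(proposed, approved_by_human=False):
#     run_action(proposed)
# else:
#     escalate_to_human(proposed)
```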

Least-privilege access and zero-trust principles should also apply within AI systems, preventing them from inheriting more authority or visibility than intended. These rules should be dynamic, evolving as AI capabilities and business needs change.
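
Least privilege can be expressed just as plainly: each agent role gets an explicit allowlist of tools, and anything not granted is denied by default. The roles and tool names below are illustrative.

```python
# Deny-by-default tool grants per agent role (names are illustrative).
TOOL_GRANTS = {
    "support_triage_agent": {"read_ticket", "summarize_ticket"},
    "ir_playbook_agent": {"read_alert", "open_ticket", "notify_oncall"},
}

def tools_for(agent_role: str) -> set[str]:
    """An unknown role gets no tools at all."""
    return TOOL_GRANTS.get(agent_role, set())

def can_use(agent_role: str, tool: str) -> bool:
    """True only if the tool is explicitly granted to this role."""
    return tool in tools_for(agent_role)

# Examples:
# can_use("support_triage_agent", "notify_oncall")  -> False (never granted)
# can_use("ir_playbook_agent", "notify_oncall")     -> True
```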

AI isn’t optional anymore, but good governance can’t be optional either

CISOs don’t have to become machine-learning experts or slow the business down. What they do need is a clear, workable way to judge AI risks and keep things safe as adoption grows. Breaking AI down into understandable categories, pairing that with a simple risk model and getting the right people involved early will go a long way toward reducing the sense of overwhelm.

AI will reshape every corner of the enterprise. The question is who will shape AI. For the first time in a long time, CISOs have the chance to set the rules, not scramble to enforce them.

Carpe diem!

This article is published as part of the Foundry Expert Contributor Network.