AI is being leveraged across organizations to boost productivity, accelerate innovation and optimize business processes. The problem is that adoption has outpaced discipline. Only a minority of organizations (23.8%) have formal AI risk frameworks in place, which is precisely how unauthorized “shadow AI” takes root, leading to untracked data exposure, compliance friction and poor decisions built on unreliable outputs.

An AI risk assessment and management methodology, such as the NIST AI Risk Management Framework, paired with visibility into your environment, is critical for safe AI use. Together, they surface shadow AI and put the necessary controls in place to enable mature AI adoption.

We noticed something was off when a new security tool started lighting up with alerts. Our first thought was that we misconfigured a rule, until we dug a little deeper and realized the alerts all pointed to the same issue: production API keys in outbound traffic.

The source wasn’t a compromised system or a malicious actor. It was one of our own product managers, trying to troubleshoot a production issue with the help of an AI tool, and unknowingly pasting production API keys into prompts.

We had invested heavily in education around safe AI usage. We had trained our developers extensively to avoid using public LLMs for sensitive data, especially secrets and credentials. What we didn’t do was include product managers in that training.

Why? Because they “weren’t supposed to be writing code.”

With AI tools lowering the barrier to coding and debugging, non-engineering roles now have the ability to interact with production data in ways that used to be unlikely. The risk didn’t come from bad intent or negligence. It came from a gap between how we thought work happened and how it actually does today.

Here’s a five-step approach to put a robust AI-risk management framework in place:

1. Uncover and inventory shadow AI

Employees often use public model APIs, browser-based prompt tools and unsanctioned or ungoverned internal chatbots to boost productivity without considering the risk of exposing sensitive data.

AI usage is not difficult to identify; you just need to look in the right places and ask the right questions. Targeted questionnaires paired with traffic analysis and inspection can uncover usage and provide visibility.

Start by preparing a comprehensive inventory to gain visibility into the AI systems in use. This is already becoming a regulatory expectation, e.g., the EU AI Act. Then prepare questionnaires on AI use cases relevant to different business units (e.g., financial reporting, contract reviews, resume parsing, marketing ideation) to identify areas of risk, such as AI being used for decision-making. Map these use cases to actual network calls through traffic inspection or log analysis. This helps quantify the volume and types of calls crossing your organization’s perimeter, enabling a concrete governance model.
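As a rough illustration of mapping usage to actual network calls, the sketch below tallies outbound proxy-log lines against a short watchlist of AI API hostnames. The domain list and log format here are assumptions for illustration; substitute your own egress logs and an up-to-date watchlist.

```python
import re
from collections import Counter

# Illustrative watchlist of AI service endpoints; extend for your environment.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_traffic(log_lines):
    """Tally outbound requests to known AI endpoints from proxy log lines.

    Assumes each line contains a full URL; adapt the regex to your
    proxy's actual log format.
    """
    hits = Counter()
    host_re = re.compile(r"https?://([^/\s:]+)")
    for line in log_lines:
        m = host_re.search(line)
        if m and m.group(1) in AI_DOMAINS:
            hits[m.group(1)] += 1
    return dict(hits)
```

The per-domain counts give you the "volume and types of calls" baseline the governance model needs; grouping the same tally by user or business unit is a natural next refinement.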

2. Standardize assessment via industry benchmarks

After discovery, the goal is to assess exposure in a way that business leaders can act on. The NIST AI Risk Management Framework gives you a practical lens through its four functions: govern, map, measure and manage.

Start with governance by assigning clear ownership, decision rights and acceptable-use rules for data handling and AI outputs. Next, map real usage, including how the AI model is used, who uses it, what data it is fed and the workflows or decisions it influences.

From there, you measure risk in practical terms by looking at three inputs together: the most likely ways things fail (prompt-driven data leakage, hallucinations that introduce false facts, biased outputs that create compliance or reputational exposure), the potential business impact if those failures occur (fines, contractual exposure, IP loss, litigation, churn, plus the time and spend required to remediate), and the likelihood of occurrence (how often users submit high-risk data, overall prompt volume and usage spikes during peak workloads).

Finally, manage priorities by applying security protocols proportionate to the risk. Enforce tighter guardrails where impact and likelihood are high; apply lighter guidance where they are lower. For instance, a finance team uploading forecast models into a free AI service is a clear high-impact, high-likelihood case.
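The impact-times-likelihood logic above can be reduced to a simple scoring sketch. The 1-to-3 scales, thresholds and tier names below are illustrative assumptions, not part of the NIST framework itself; calibrate them against your own risk appetite.

```python
def risk_tier(impact, likelihood):
    """Combine impact and likelihood (each rated 1=low .. 3=high)
    into a proportionate control tier."""
    score = impact * likelihood
    if score >= 6:
        return "block-or-sanitize"   # tight guardrails
    if score >= 3:
        return "monitor-and-warn"    # advisory controls
    return "log-only"                # light guidance
```

The finance example above (forecast models in a free AI service) would rate impact 3, likelihood 3, landing squarely in the tightest tier.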

3. Implement a layered defense strategy

People, process and technology working in sync are an effective bulwark against AI risk. Train teams on data classification and leave no ambiguity about not sharing PII or confidential information in public AI tools. Reinforce this behavior with tabletop exercises that show how AI hallucinations can quietly derail decisions, for example by inventing “growth drivers” that distort a forecast and trigger real financial mistakes.

Next, roll out prompt and data-sharing governance incrementally. Begin in “advice mode,” which flags risky prompts and helps you tune data-sharing thresholds. As you learn from usage patterns and reduce false positives, standardize the controls and transition to blocking or sanitizing flagged prompts where appropriate.
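One way to picture the advice-versus-enforce progression is a small prompt reviewer. The two regex patterns below (a generic API-key shape and a US SSN format) are placeholder assumptions; a real DLP layer would use far richer, tuned detection rules.

```python
import re

# Placeholder patterns for illustration only; tune for your environment.
RISK_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def review_prompt(prompt, mode="advise"):
    """Flag risky content in a prompt before it leaves the perimeter.

    In 'advise' mode, return the prompt unchanged plus a list of findings
    (to warn the user and tune thresholds). In 'enforce' mode, also redact
    the matched spans.
    """
    findings = [name for name, pat in RISK_PATTERNS.items() if pat.search(prompt)]
    if mode == "enforce":
        for pat in RISK_PATTERNS.values():
            prompt = pat.sub("[REDACTED]", prompt)
    return prompt, findings
```

Running in advise mode first lets you measure false-positive rates on real traffic before flipping individual rules to enforce, which mirrors the incremental rollout described above.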

Finally, implement the platform layer to control and monitor at scale. Start with DLP coverage for AI traffic, then add AI-specific monitoring and intrusion-prevention capabilities that analyze prompt syntax and semantics, score risk in real time and alert or intervene when interactions look suspicious.

4. Enforce human-in-the-loop oversight

As AI adoption accelerates, the risk we often lose sight of is bad outputs moving straight into production workflows.

The NIST framework emphasizes ‘human-in-the-loop’ oversight to guard against failures caused by plausible but incorrect AI outputs. If these outputs influence legal positions, financial decisions or customer communications without human review, the result can be a cascade of bad decisions across key business functions.

The recommended approach is to have a qualified human gatekeeper with explicit accountability for specific outputs, for example:

  • Legal: route drafts to counsel for verification of clauses, obligations, definitions and jurisdiction-specific wording before anything is shared externally.
  • Finance: have senior analysts sign off to validate assumptions, formulas, source data and version control before the numbers inform forecasts or reporting.
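The gatekeeper idea can be sketched as a hold-until-approved wrapper around AI output. The `Draft` class and role names below are hypothetical, shown only to illustrate that nothing releases without the accountable reviewer's sign-off.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated output held until a qualified human signs off."""
    content: str
    required_reviewer: str                  # e.g. "counsel" or "senior_analyst"
    approvals: list = field(default_factory=list)

    def approve(self, reviewer_role):
        """Record a reviewer's sign-off."""
        self.approvals.append(reviewer_role)

    def releasable(self):
        # Nothing leaves the gate without the accountable reviewer's approval.
        return self.required_reviewer in self.approvals
```

The key design point is that the release check lives in the workflow, not in the reviewer's memory: an unapproved draft simply cannot move downstream.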

5. Translate risk reduction into business growth

McKinsey research on digital trust suggests that companies leading on trust are about 1.6 times more likely than others to achieve a 10% or higher annual growth rate in both revenue and EBIT.

Ideally, AI risk governance should be pitched as a critical business initiative with clear operational value. Effective assessment means fewer shadow AI tools in use, fewer sensitive-data prompt events, fewer incidents, fewer audit findings to remediate and less rework caused by unreliable outputs.

When you translate these improvements into hours saved, reduced external counsel/audit effort and incident-response costs not incurred, AI risk management makes business sense.

A practical risk management framework

Treating shadow AI risk management as a strategic imperative is the right mindset for implementing a practical risk management framework. Start your shadow AI risk management journey by:

  • Inventorying AI usage
  • Applying a structured risk assessment methodology
  • Establishing and enforcing layered controls
  • Ensuring human oversight
  • Measuring continuously

This approach gives you clear visibility into AI usage and enforces layered defenses to help your team make the best of AI. You move from pilot-stage AI experiments to enterprise-scale adoption backed by discovery, risk mapping and scalable defenses.

This article is published as part of the Foundry Expert Contributor Network.