When generative AI (GenAI) hit the consumer market with the release of OpenAI’s ChatGPT, users worldwide flocked to the product and started experimenting with its capabilities across industries. The release also sent an instant panic through the hearts of information security professionals, whose job is to protect organizations from risks such as the loss or theft of sensitive data, including personally identifiable information (PII), protected health information (PHI), sensitive corporate data and intellectual property.
Before we jump into protection mode, we must first ask ourselves: “What is it we are trying to protect with GenAI?” I see three primary risk areas: 1) sensitive corporate data and intellectual property, 2) PII and PHI, and 3) malware and maliciously generated code.
What’s wrong with the tools we have?
Traditional enterprise data loss prevention (DLP) tools (such as Fortra, Symantec, Netskope, Trellix and Microsoft) have been around for years, but they are expensive, cumbersome to implement and require constant care and feeding by IT professionals to be effective in an organization. They offer comprehensive solutions typically built around data-centric and network-centric DLP, which integrates with data sources and monitors the network and its egress points. As a result, only large organizations with plenty of resources have the capability to deploy legacy DLP tools.
Fast forward to today and the combined risks associated with GenAI solutions. Unmanaged GenAI solutions and the consumer products offered by GenAI leaders, such as OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and Anthropic’s Claude, allow users to upload documents, analyze information and generate a variety of outputs (text, audio, video, graphics, etc.). The risk to organizations is simple: staff uploading and analyzing sensitive data, including PII, PHI or company proprietary information and intellectual property, puts that data outside the organization’s control.
Most organizations today have GenAI policies and guidelines, but most lack the technology tools to enforce them. I see two good options for protecting sensitive data and mitigating cybersecurity risk in the GenAI world:
Solution 1: GenAI enterprise model
Implement enterprise licenses for approved GenAI solutions (such as ChatGPT Enterprise or Microsoft 365 Copilot, which integrates into existing Microsoft 365 tenants). Enterprise GenAI solutions typically include a robust set of built-in security tools that allow organizations to secure their data and implement DLP controls within the enterprise GenAI solution itself.
That said, these licenses are expensive, typically running between $30 and $40 per user per month. For an organization of 4,000 staff, that’s $1.44 million to $1.92 million per year. With this approach, training can be optimized for the specific approved enterprise tools.
And of course, to reduce the risks of other non-approved GenAI tools, block them with modern internet content-filtering tools like Cisco Umbrella, iBoss, DNSFilter or WebTitan. The downside of this option is that organizations risk locking out solutions that staff want, potentially stifling innovation. IT organizations must learn to read the room on what helps the business succeed and then figure out how to secure it. I consider this to be the risk-averse option.
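To make the blocking approach concrete, here is a minimal, vendor-neutral sketch of the kind of allow/block policy a DNS-layer content filter applies; the domain lists and the enterprise endpoint name are hypothetical illustrations, not any specific vendor’s configuration.

```python
# Minimal sketch of DNS-layer blocking of non-approved GenAI tools.
# Domain lists and policy logic are illustrative only.
ALLOWED_GENAI_DOMAINS = {
    "yourcompany.openai.azure.com",  # hypothetical approved enterprise endpoint
}

BLOCKED_GENAI_DOMAINS = {
    "chat.openai.com",   # consumer ChatGPT
    "gemini.google.com",
    "claude.ai",
}

def dns_policy(domain: str) -> str:
    """Return 'allow' or 'block' for an outbound DNS query."""
    if domain in ALLOWED_GENAI_DOMAINS:
        return "allow"
    if domain in BLOCKED_GENAI_DOMAINS:
        return "block"
    return "allow"

print(dns_policy("claude.ai"))                      # -> block
print(dns_policy("yourcompany.openai.azure.com"))   # -> allow
```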
Solution 2: GenAI open model
Implement GenAI DLP controls in your XDR/MDR (extended detection and response/managed detection and response) security solution to detect, analyze and respond to sensitive data loss risks. The core difference between modern XDR and traditional DLP solutions is that XDR combines multiple tools (endpoint protection, network security and threat intelligence) with DLP in a single security solution, typically delivered via an endpoint agent.
This option allows more innovation to occur within your organization by not limiting staff to one or two enterprise GenAI solutions and instead opening up their options. That said, economies of scale for training go out the window, as it’s difficult to train staff on dozens of different solutions across the enterprise.
Tier-1 solutions like SentinelOne, Microsoft and CrowdStrike offer robust DLP modules as part of their cybersecurity platforms, leveraging AI engines to detect and prevent sensitive data leaks from non-enterprise GenAI tools, or any other tools for that matter. These platforms can also secure agentic AI by defining guardrails, combining threat and data protection with automated response across the full AI attack surface.
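As a rough illustration of the detection idea, the sketch below shows simple pattern matching against outbound content before it reaches a GenAI tool. The patterns and function names are hypothetical and deliberately simplified; real XDR DLP engines combine machine-learning classifiers, exact-data matching and file fingerprinting rather than bare regexes.

```python
import re

# Simplified stand-ins for DLP content inspection rules.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the PII categories detected in text bound for an external GenAI tool."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = scan_outbound("Summarize: John Doe, SSN 123-45-6789, jdoe@example.com")
if hits:
    print(f"Upload blocked; detected: {', '.join(hits)}")  # detected: ssn, email
```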
This approach shifts the data loss control point from the enterprise tool implementation to the endpoint. It also relieves the burden of leveraging an internet content-filtering tool to block non-enterprise GenAI solutions, allowing innovation to occur with less risk. XDR DLP is also much more cost-effective, typically running between $30,000 and $50,000 per year for an organization of 4,000 staff. I consider this the risk-aware option.
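Putting the two options’ figures side by side makes the gap plain; this uses the illustrative per-user and flat-rate numbers cited above, and actual pricing varies by vendor and negotiation.

```python
# Rough annual cost comparison for a 4,000-person organization,
# using the figures cited in this article; real pricing varies.
STAFF = 4_000

# Solution 1: enterprise GenAI licenses at $30-$40 per user per month
enterprise_low = 30 * STAFF * 12    # $1,440,000
enterprise_high = 40 * STAFF * 12   # $1,920,000

# Solution 2: XDR DLP module, roughly $30k-$50k per year flat
xdr_low, xdr_high = 30_000, 50_000

print(f"Enterprise GenAI licensing: ${enterprise_low:,} - ${enterprise_high:,} / yr")
print(f"XDR DLP module:             ${xdr_low:,} - ${xdr_high:,} / yr")
```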
Software solutions and vendors continue to innovate and evolve. The shift from enterprise DLP and internet content filtering or blocking solutions to XDR DLP modules as part of a cybersecurity platform demonstrates the integration of tools and capabilities as we enter 2026.
CIOs and CISOs must keep their focus on emerging tools that foster innovation (such as GenAI), while implementing policies and technologies to mitigate the risk of untamed or non-enterprise GenAI solutions. The remaining risks of GenAI (malware and maliciously generated code) can be handled by a combination of XDR and code security scanning solutions. As a result, XDR/MDR DLP is a solid, cost-effective option for the bulk of GenAI risks.
This article is published as part of the Foundry Expert Contributor Network.