Dale Hoak found himself asking a question that has become familiar to CISOs over the decades: What am I missing?
More specifically, Hoak, CISO at software firm RegScale, was wondering what he might be missing around his company’s AI deployments.
“The business was moving so fast in using AI, so initially we had some visibility gaps,” he says.
Hoak believed his monitoring capabilities weren’t strong enough to identify all the risks and threats associated with the company’s newest AI uses. So he repositioned existing tools and invested in new ones, including products that themselves use AI to monitor enterprise AI use, to gain the visibility he needed — a process that took about six months.
“Over time I figured out what to look for using logging and SIEM and AI tools, and I feel like we now have the gaps covered,” he notes.
Still, he remains apprehensive.
“I’m always a little wary,” he admits, about what his security operations might not see.
CISOs are right to be concerned. AI is expanding the organization’s attack surface while introducing new types of risk such as those stemming from prompt injection and data poisoning attacks. Security leaders know that. But, as Hoak points out, CISOs are also contending with AI-related security blind spots as their organizations race to implement and scale the technology.
According to the AI Security Exposure Survey 2026 Report from security software maker Pentera, 67% of CISOs report limited visibility into where and how AI is operating across their environments.
Additionally, 48% of CISOs cited limited visibility into AI usage as a top challenge in securing AI systems, making it their second biggest challenge in this space. (Lack of internal expertise, cited by 50%, came in at No. 1.)
Myriad blind spots
Nitin Raina, global CISO of consultancy Thoughtworks, highlights multiple scenarios that create such visibility gaps. One is shadow AI.
“Initially about 12 to 18 months back, we saw people using [unsanctioned versions of] ChatGPT or Gemini or buying their own niche AI tool. That has slowed down, but it’s still one of the risks,” Raina says.
Another is the introduction of AI capabilities by software makers whose products are already in use at the company. “The vendors we use are adding AI capabilities and sometimes we don’t have entire visibility into that,” he says, despite his security team’s work to learn how those vendors are handling data and AI-related vulnerabilities.
The models supplied by providers also create blind spots, Raina adds: CISOs can typically perform some level of review but cannot dive deep into the models themselves to determine whether there are issues that could skew outcomes to unacceptable levels or send data where it shouldn’t go.
Yet another, Raina says, is agentic AI, whose risks include hallucinations and prompt injections, as well as failures that, because of their speed and autonomous actions, can be difficult to detect with conventional security tools.
Many compare the security situation around AI to the early days of cloud, when CISOs similarly experienced shadow deployments, unknown risks, and visibility challenges.
The challenges today are more significant, says Nick Kakolowski, senior research director at IANS Research. Executives are scared of falling behind in the race to use AI for competitive advantage, so they’re willing to take more risks, he says. That has led to rapid-fire AI implementations and deployments outside of normal procurement channels. As a result, “blind spots are kind of everywhere.”
CISOs also often lack full visibility into fourth-party AI systems and the risks that such use entails.
Ditto for the accuracy of the outcomes that employees are getting with some AI engines. “No one understands fully how to assess the outcomes of AI and the quality of the content being created by AI,” Kakolowski says. “We’re not going to be able to evaluate the quality and trustworthiness of the outputs of AI, and we don’t know how to equip our people to do so effectively.”
Likewise for AI-generated code, which is increasingly being created outside of development teams thanks to the ease of using AI for such purposes. “They’re using vibe coding, and CISOs may not know where that AI-generated code is being integrated,” Kakolowski says.
CISOs also may not know if AI agents grant access privileges to other agents as they execute workflows, creating yet another blind spot.
And security execs may be in the dark about the ethical implications of their organization’s AI capabilities. “CISOs often get pulled into things that are on the ethical side of risk, and this issue of ethical AI is starting to emerge as one of them,” Kakolowski adds.
Another area where CISOs may not have a clear view: where their organizations draw the line on blind spots introduced by their AI strategies. “Guessing at the organization’s risk tolerance is a high-level blind spot,” Kakolowski says, noting that CISOs wanting to close visibility gaps need to start by defining “what the organization considers reasonable versus unreasonable. That helps CISOs figure out the next step.”
Gaining visibility
CISOs say they’re aware of the consequences of having blind spots, with data leaks and problematic AI outputs being common ones.
They’re now working to gain the needed visibility to prevent such issues, says Aaron Momin, CISO and chief risk officer for Synechron, a digital consulting and technology services firm.
“The business has a mandate to adopt AI, but the trouble with this is that the business has been moving at lightspeed and CISOs are just catching up,” Momin adds.
Like other security chiefs, Momin is leaning on a well-formed security strategy, security and AI frameworks, and a clear understanding of the company’s risk appetite and risk tolerance to do that work. He’s also leaning on people, process, and technology to secure his organization’s AI deployments and improve visibility.
Still, he acknowledges blind spots could remain, explaining that traditional security tools, such as URL filtering and data loss prevention (DLP) solutions, provide a layer of control but don’t deliver the comprehensive view of AI use that CISOs need.
“They’re not necessarily sufficient. They could get to maybe 80% or 90% of what you need, but to get higher visibility, you have to add additional tools,” Momin says.
That, though, presents another challenge for CISOs.
“Those tools have to be matured, have to be extended, have to be broader to get full visibility,” Momin says. “Now some vendors are upgrading the capabilities [offered in their security tools], and new tools are coming on the market. And they’re starting to give you full visibility.”
Thoughtworks’ Raina has a similar take on improving visibility, endorsing a multiprong approach to ensure his security team has a full picture of the organization’s AI deployments, their vulnerabilities, and their risks. That approach combines administrative, governance, and technology controls — a combination with a long history of success in security.
But experts say that tried-and-true combination is not enough to gain full visibility when it comes to AI.
According to Pentera’s survey, no CISOs reported having both full visibility and no shadow AI. One-third said they had good visibility, with shadow AI likely; 66% said they had limited visibility, with shadow AI a known issue; and 1% said they had no visibility at all.
Full visibility may not be possible — at least not at present, says Jared Oluoch, professor and director of Eastern Michigan University’s School of Information Security and Applied Computing. Today’s tools and security strategies limit blind spots but do not eliminate them completely. “They can minimize the negative effects,” he adds.
That’s the goal, says Tal Hornstein, CISO of Cast & Crew, a provider of production software, payroll, and services for the entertainment industry.
Like others, Hornstein relies on longstanding security principles, citing the confidentiality, integrity, and availability (CIA) triad as the foundation for his approach to ensure that AI works within established guardrails and that he can observe its behavior.
Hornstein is also looking to emerging technologies to deliver better observability and enforcement. But he acknowledges that security tech doesn’t enable full visibility at this time. “They are not fully mature yet,” he says.
That has to be enough for now, he adds, saying CISOs can’t let visibility challenges slow down AI adoption.
“AI is the most amazing technology, and whoever doesn’t use it will be left behind,” Hornstein says. “So, it’s important for me as a CISO and as a business leader to not put up barriers and block AI but to build up guardrails that allow the organization to move at the velocity it wants and the amount it wants while providing risk mitigation.”