Writing a conference preview is an act of professional speculation. You read the agenda, map session density across the schedule, and make your best call about where the intellectual energy will concentrate.
From my perspective going in, RSA Conference 2026 outlined a defining tension for CISOs today: how to enable AI adoption fast enough to stay competitive while securing the enterprise against a threat landscape AI itself is reshaping.
Now that RSAC 2026 has run its course, it’s worth holding pre-event predictions, such as my five key priorities for CISOs and their teams, against what actually emerged from the sessions, VC panels, and hallway conversations that tend to be more candid than anything on stage.
The verdict: the frame held. In fact, there was very little conversation in which AI wasn’t front and center. At the Moscone Center in San Francisco I kept hearing, “We live in unprecedented times” — a cliché that I do believe is true.
My own surprises were mostly matters of emphasis and velocity: the commercial edges of the AI story were stronger, and perhaps sharper, than I expected.
The AI saturation hypothesis was confirmed
My RSAC 2026 preview argued that AI was no longer a track but had become the event itself, with approximately 40% of the agenda AI-weighted across every cyber domain.
That was certainly borne out on stage. Every panel, whether focused on investment, products, identity, or offensive capability, returned to AI. Yoav Leitersdorf of YL Ventures put it bluntly: “Everyone only wants to talk about AI, and if you aren’t doing AI, investors don’t want to talk to you.”
Kevin Mandia from Ballistic Ventures at the RSA Annual Executive Dinner noted that we have to take humans out of the loop, so that AI versus AI is the new paradigm. He explained that AI agents have been introduced into red teaming exercises and are capable of operating at scale and with speed. So, while AI compresses the attack cycle, AI can also “automate” existing teams to improve their response from 5 days to 5 minutes.
What my preview couldn’t fully anticipate was the degree to which the AI narrative forked into two distinct commercial pressures running simultaneously.
Dave DeWalt of NightDragon captured both sides: AI as a tool for defense and offense, but also AI as a structural force flattening the competitive landscape between established vendors and startups. His observation that Series A funding is now looking to be $100 million — and that he’d never seen capital deploy this fast — landed with impact.
RSAC 2026 felt less like a learning event and more like a deal-making environment with educational sessions attached.
Securing the AI stack: Yes, but the threat surface has grown
The first technical priority I offered for CISOs in my conference preview was securing the AI stack — RAG workflows, LLM data pipelines, vector databases, and model APIs — on the basis that prompt injection, training data poisoning, and model inversion attacks were no longer theoretical.
The floor validated this but added dimensions my preview had underweighted. Mike Leland of Island framed the enterprise AI risk surface comprehensively: data leakage, shadow AI, prompt injection, copyright and IP infringement, hallucinations, and data residency. These aren’t sequential concerns — they arrive simultaneously the moment an organization allows AI tools into the environment.
The AI red teaming conversation surfaced with more commercial urgency than anticipated. Frontier Labs’ Brian Singer described environments where AI attackers operate at 1,000 times the speed of human adversaries, pushing the securing-the-stack conversation from defensive posture into something more active. While my preview was right about the topic, it underestimated the operational tempo.
On the conference floor I caught up with Singulr CEO Shiv Agarwal and Richard Bird, Singulr’s CSO and chief strategy officer, whose platform is attempting to solve this visibility problem at scale. Their starting point was blunt: “AI usage is going out of control at the enterprise. The CIO, the CSO, they need some level of control, but without stopping or slowing down innovation.”
What Singulr’s discovery work is revealing is more of an issue than most boards appreciate. Bird told me that across enterprise assessments, they consistently surface between 350 and 430 AI services and features in active use, the overwhelming majority of which were never formally sanctioned. The shadow AI problem isn’t theoretical. It’s already deployed.
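The mechanics of Singulr’s discovery work aren’t public, but the underlying idea is easy to illustrate: match egress traffic against a catalog of known AI service domains and flag anything not on the sanctioned list. A minimal sketch of that approach — the domain catalog, sanctioned list, and log format are all illustrative assumptions, not any vendor’s actual implementation:

```python
# Hypothetical shadow-AI discovery: flag AI services seen in proxy logs
# that were never formally sanctioned. The catalog, sanctioned set, and
# log format below are illustrative assumptions only.

AI_SERVICE_DOMAINS = {
    "api.openai.com": "ChatGPT / OpenAI API",
    "claude.ai": "Anthropic Claude",
    "grammarly.com": "Grammarly",
    "huggingface.co": "Hugging Face",
}

SANCTIONED = {"api.openai.com"}  # formally approved services

def find_shadow_ai(proxy_log_lines):
    """Return {service_name: hit_count} for unsanctioned AI domains."""
    hits = {}
    for line in proxy_log_lines:
        # assume each log line is "<user> <destination-host> <bytes>"
        parts = line.split()
        if len(parts) < 2:
            continue
        host = parts[1].lower()
        for domain, name in AI_SERVICE_DOMAINS.items():
            # match the domain itself or any of its subdomains
            if (host == domain or host.endswith("." + domain)) and domain not in SANCTIONED:
                hits[name] = hits.get(name, 0) + 1
    return hits

log = [
    "alice app.grammarly.com 5120",
    "bob api.openai.com 2048",
    "carol claude.ai 900",
    "alice app.grammarly.com 1300",
]
print(find_shadow_ai(log))
# Grammarly counted twice, Claude once; the sanctioned OpenAI traffic is skipped
```

Even this toy version makes Bird’s point: the interesting output isn’t the count, it’s how much of it nobody ever approved.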
He offered a more nuanced risk framing than most vendors I encountered: context matters as much as the tool itself. “ChatGPT is a very well-contracted and approved AI service,” he said. “But if someone is using it with a personal account and model training has not been turned off, it brings the same risk as a service put up by two people in a garage.” Unfortunately, sanction alone doesn’t confer safety.
Non-human identity: The standout theme of the conference
My preview identified non-human identity (NHI) governance as rapidly becoming one of the most consequential operational gaps in enterprise security. This proved to be my most prescient call. It wasn’t just a track; it became a through-line across multiple panels. Ross Haleliuk noted bluntly that machine identities already outnumber human ones.
Mark McClain, founder of SailPoint, reframed the entire identity management problem around agent intent and context: we assumed humans were in an office or working remotely, but do we understand the intention of an AI agent, and do we have guardrail policies capable of reasoning about it?
McClain’s framing felt like the most intellectually honest moment of the conference on this topic. He acknowledged that new technology was coming that would put his own platform under pressure, while simultaneously arguing that anyone who believes you can master the agentic world in isolation, without human oversight, is being misled.
The infrastructure question was taken further in my conversation with Noam Issachar and Jake Turetsky of Jazz, whose platform is building what they describe as a control plane for the agentic layer. Their framing was architecturally provocative: “AI is the new infrastructure. An AI agent can conduct and take action for something that looks like data transformation and never go into the lower tiers of the technology stack.” In their view, the agent layer is becoming the new HTTP — a data transport and transformation tier that sits above traditional infrastructure but below application logic.
What they found most troubling was the governance vacuum that currently exists in that space: “If AI is truly transformational, then why is there no transformation of processes, policies, and governance to reflect the fact that traffic management is already happening there?” It’s a fair challenge. The architecture has moved faster than the frameworks built to govern it.
AI governance: Present, but absorbed into broader conversations
The compliance priority in my preview centered on the EU AI Act and the need for CISOs to develop defensible licence-to-operate frameworks for AI deployment.
This theme was present at RSAC but was somewhat absorbed into broader discussions about regulatory alignment rather than treated as a standalone priority. An exchange between Sandra Joyce, VP of Google Threat Intelligence, and Richard Horne of the NCSC touched on the tension of defenders and attackers both benefiting from AI, with the NCSC providing framework standards that regulators then align to — a model of governance by reference rather than prescription.
Jay Bavasi, CEO of EC Council, offered the most direct governance framing I encountered across the entire week: “Our attitude as a community has been shoot first, ask questions later. But what we should be doing is ask questions first, shoot later.”
The data behind that charge is harder to dismiss than the rhetoric. Bavasi cited figures showing that 84% of Fortune 500 companies reference AI implementation in their 10-K filings, yet only 18% claim to have actual AI governance in place. With 72 countries having already launched AI regulations or frameworks, the gap between disclosure and accountability is widening, not closing.
Singulr’s Bird reinforced this concern from an operational standpoint, noting that the governance conversation is still largely performative inside most enterprises — boards are discussing AI risk without the institutional mechanisms to actually manage it.
In-Q-Tel’s Katie Gray offered the sharpest counterweight to the governance narrative: There has never been a better time to sell to the US government, and the DoD spends $5 billion on cyber annually. In that environment, governance conversations are less about compliance architecture and more about positioning to capture procurement.
Shadow AI: Validated and commercially urgent
My preview’s risk priority around shadow AI and vibe coding — unsanctioned AI tool usage largely invisible to security teams — was confirmed across multiple sessions. Leland’s readiness framework put it plainly: Do you have visibility of shadow AI tool usage across the enterprise? Can you identify and prevent inappropriate data usage with gen AI tools?
Singulr’s Agarwal added a dimension that most vendors are reluctant to name. The most commonly discovered unsanctioned AI application in enterprise assessments is Grammarly — not a rogue model or an exotic data exfiltration tool, but a writing assistant that most employees assume is benign and most IT teams have never thought to classify as AI risk.
His broader point about risk posture deserves to sit with board directors: “Your monthly board report is kind of useless in a way because your risk position today versus this morning is different.” A static governance snapshot of a dynamic and real-time threat surface is a category error, not a reporting format.
Team8’s Amir Zilberstein flagged investment in a reimagined DLP category on exactly this basis: the old category was hated, but AI-driven classification changes what’s possible.
What my preview missed
Two things the pre-event article didn’t fully anticipate:
First, the capital concentration dynamic. Amir Zilberstein’s observation that more funding is going to fewer companies, combined with Dave DeWalt’s seed and Series A figures, describes a market consolidating at the top even as it fragments at the bottom. The 9,900 cyber companies DeWalt cited aren’t all going to survive contact with AI titans crossing over from the SaaS world.
Second, the workforce conversation. This was the thread I found most unresolved across every conversation I had on stage and off.
Many speakers quoted Jensen Huang’s 1:2,000 agent-to-human ratio. Set that alongside Yoav Leitersdorf’s counsel to keep R&D flat and grow through AI, and Mark McClain’s observation that AI agents operate at a speed humans physically cannot match, and these signals point to a structural workforce shift that cybersecurity leadership hasn’t fully internalized yet.
EC Council’s Bavasi was the most direct voice on this. He pushed back on the premise that CISOs should own AI wholesale: “CISOs are already suffering. A thousand things are already going on. It is one of the most short-lived jobs in the world. And you’re about to throw a behemoth to them.”
He cited 4 million cybersecurity jobs unfilled today, with that figure likely to double as the agentic layer matures — not because demand shrinks, but because the skill profile required is fundamentally different.
Bavasi also landed what I’d call the most confronting statistic of the week — not about threat actors, but about the industry’s own readiness: “We are living in an era where AI agents already have a social media community of their own. We live in an era where humans are being threatened and blackmailed and we still haven’t figured out how we’re going to implement responsible AI governance and ethics,” he said.
Closing observation
While my preview focused on what CISOs needed to learn at RSAC, what the floor revealed is that some of that learning may require them to rethink how their teams are built, how their governance is structured, and how they report to boards — boards that are asking AI governance questions but receiving answers designed for a different era.
The intelligence is accumulating. The institutional response is lagging. That gap was the real story of RSAC 2026.