In 2026, the cybersecurity industry is expected to cross a threshold it has never reached before: more than 50,000 publicly disclosed software vulnerabilities in a single year.

According to a new forecast from the Forum of Incident Response and Security Teams (FIRST), the median projection for 2026 is roughly 59,000 Common Vulnerabilities and Exposures (CVEs). Under more extreme — but plausible — scenarios, that number could climb far higher, reaching nearly 118,000, more than double the estimated 48,000 or so CVEs reported in 2025.

But security researchers and data scientists caution that numbers tell only part of the story. Historically, only a small fraction of disclosed vulnerabilities are ever exploited in the wild, and an even smaller subset meaningfully affects most enterprises.

“While the number of vulnerabilities goes up, what really matters is which of these are going to be exploited,” Michael Roytman, co-founder and CTO of Empirical Security, tells CSO. “And that’s a different process. It does not depend on the number of vulnerabilities that are out there because sometimes an exploit is written before the CVE is even out there.”

What FIRST’s forecast highlights instead is a growing signal-to-noise problem: one that strains already overburdened security teams and raises the stakes for prioritization, automation, and capacity planning, rather than requiring organizations to patch exponentially more flaws.

Why are flaw numbers rising?

FIRST’s forecast reflects structural changes in how vulnerabilities are discovered and disclosed, not a sudden leap in attacker capability.

“Some of the classic coordinated vulnerability disclosure teams are producing slightly higher volumes each individually, but we’re also seeing several new entrants to the space that produce a lot,” Éireann Leverett, FIRST liaison and lead member of the organization’s Vulnerability Forecasting Team, tells CSO.

The growth also reflects a maturation of vulnerability reporting itself. More organizations now operate as CVE Numbering Authorities, more vendors incentivize disclosure through bug bounty programs, and long-neglected code bases — particularly in open source infrastructure — are receiving sustained scrutiny.

In that sense, the surge reflects improved visibility rather than deteriorating software quality. Vulnerabilities that existed for years are now being cataloged, tracked, and measured in ways that were not possible a decade ago.

FIRST also adjusted its modeling approach to account for a structural shift in CVE publication that began around 2017, when disclosure volumes started to rise more steeply. Rather than optimizing for a single point estimate, the organization widened its confidence intervals to help security teams plan for a range of outcomes.

“We think it’s entirely realistic that this year we reach 70,000 to 100,000 vulnerabilities,” Leverett says, adding that the median forecast remains closer to 60,000 and is intended to support planning rather than alarm.

Why raw CVE counts do not equal risk

Despite the scale of the forecast, experts stress that vulnerability volume alone is a poor proxy for enterprise risk.

“The risk to an enterprise is not directly related to the number of vulnerabilities released,” Empirical Security’s Roytman says. “It is a separate process.”

He points to historical data showing that while CVE numbers have risen steadily, exploitation has not followed the same trajectory. In 2025, roughly 48,000 vulnerabilities were disclosed, Roytman says. Of those, fewer than 3,000 had publicly available proof-of-concept exploit code, and only about 700, roughly 1.5% of the total, showed evidence of exploitation in the wild.

“The really risky things changed a little bit [over 2024 levels], not quite as much as you would expect from the overall change,” he says.

On top of that, many vulnerabilities affect niche software, consumer devices such as cell phones, and other configurations that are not priorities in large enterprise environments. Other vulnerabilities are theoretically exploitable but offer little value to attackers compared with already weaponized flaws that are proven, scalable, and reliable.

This pattern has held for years, even as disclosure volumes climbed. The result is a widening gap between the number of vulnerabilities published and the number that matter operationally.

A capacity problem, not a crisis

Still, the growing volume creates real challenges for defenders.

FIRST estimates that roughly 5% of vulnerabilities account for most of the serious risk, which at the median 2026 forecast works out to roughly 3,000 CVEs. As the overall number rises, identifying that critical subset becomes harder.

“With all of this extra being produced, finding that 5% might be a little harder, like finding a needle in the haystack,” FIRST’s Leverett says. “It’s about finding the signal in the noise.”

For CISOs, the implication is that patching strategy is now less about remediating faster and more about scaling decision-making processes that were already under strain. “If you’re telling me a machine has to process 100,000 things instead of 50,000 things, that’s not a big deal,” Roytman says. “If you’re telling me a human has to do that, I would panic.”

Security teams have not operated at a human scale for years, he adds. The difference now is that the noise floor is rising fast enough to expose weaknesses in prioritization, tooling, and automation.

AI is accelerating discovery — not mass exploitation (yet)

Much of the anxiety surrounding FIRST’s forecast centers on artificial intelligence, particularly large language models that can audit code at scale. While AI-assisted tools are already increasing the pace of vulnerability discovery, experts caution that discovery and exploitation remain very different problems.

Roytman argues that while AI has made it easier to enumerate flaws, attackers still face economic and operational constraints. “If it were that easy, they’d be doing it to the 50,000 we saw last year,” he says. “Instead, exploitation remains concentrated on a relatively small set of vulnerabilities that are proven, scalable, and valuable.”

At the same time, defenders are using the same techniques to manage the flood. Machine-learning models trained on exploitation data increasingly help security teams determine which vulnerabilities are likely to matter — and which can safely be deprioritized.

“The same tools that are enabling discovery at scale are also enabling defenders to filter signal from noise at scale,” Roytman adds.
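
One publicly available example of this kind of filtering is FIRST’s own Exploit Prediction Scoring System (EPSS), which publishes a daily exploitation-likelihood estimate for every CVE. The sketch below queries the public EPSS API and flags CVEs that clear an illustrative threshold; the endpoint and response fields follow FIRST’s published EPSS API, but the 0.10 cutoff and example CVE IDs are arbitrary illustrations, not guidance.

```python
# Sketch: query FIRST's public EPSS API for exploitation-likelihood scores
# and flag only the CVEs that clear a triage threshold. Endpoint and response
# fields (data[].cve, .epss) follow FIRST's published EPSS API; the 0.10
# cutoff is an arbitrary illustration, not a recommendation.
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Return {cve_id: exploitation probability} for the given CVE IDs."""
    url = f"{EPSS_API}?cve={','.join(cve_ids)}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {row["cve"]: float(row["epss"]) for row in payload.get("data", [])}

if __name__ == "__main__":
    scores = epss_scores(["CVE-2021-44228", "CVE-2024-3094"])  # example IDs
    for cve, prob in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{cve}: EPSS {prob:.3f} -> {'review' if prob >= 0.10 else 'defer'}")
```

Because EPSS scores are refreshed daily, a filter like this would typically run as a scheduled job feeding a ticketing queue rather than as a one-off query.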

What CISOs should do to manage the CVE flood

Absent the ability to hire their way out of the problem, most organizations will need to rely on pragmatic measures, such as:

  • Double down on prioritization. Exploitation likelihood, asset context, and business impact matter far more than raw CVSS (Common Vulnerability Scoring System) scores.
  • Automate triage aggressively. Human review should be reserved for a small, high-confidence subset of vulnerabilities; a minimal triage sketch follows this list.
  • Plan for ranges, not point estimates. FIRST’s confidence intervals are designed to support capacity planning, not prediction.
  • Expect more noise, not more attackers. Disclosure is accelerating faster than exploitation.
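
To make the first two points concrete, here is a minimal sketch of risk-based triage under stated assumptions: a scoring function that weights exploitation likelihood and asset context above raw CVSS severity, and that unconditionally escalates anything already known to be exploited (for example, anything on CISA’s Known Exploited Vulnerabilities catalog). The field names, weights, and CVE identifiers are hypothetical illustrations, not a standard formula.

```python
# Sketch of risk-based triage: rank findings by exploitation evidence and
# asset context rather than raw severity. Field names, weights, and CVE IDs
# are hypothetical illustrations, not a standard formula.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # base severity score, 0-10
    exploit_prob: float       # EPSS-style exploitation likelihood, 0-1
    known_exploited: bool     # e.g., listed in CISA's KEV catalog
    asset_criticality: float  # business impact of the affected asset, 0-1

def triage_score(f: Finding) -> float:
    """Weight exploitation likelihood and asset context above raw CVSS."""
    if f.known_exploited:
        return 1.0  # known-exploited flaws jump the queue unconditionally
    return 0.5 * f.exploit_prob + 0.3 * f.asset_criticality + 0.2 * (f.cvss / 10)

findings = [
    Finding("CVE-0000-0001", cvss=9.8, exploit_prob=0.02,
            known_exploited=False, asset_criticality=0.2),
    Finding("CVE-0000-0002", cvss=6.5, exploit_prob=0.85,
            known_exploited=True, asset_criticality=0.9),
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{f.cve_id}: triage score {triage_score(f):.2f}")
```

The escalation branch reflects the article’s core argument: evidence of real-world exploitation should outrank any modeled or theoretical severity, which is why the lower-CVSS finding ranks first in this example.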

“There’s no need to panic,” Roytman says. “But there is a need to be strategic.”

Stress on the vulnerability ecosystem

The forecast also raises questions about the sustainability of the broader vulnerability ecosystem, including MITRE, which produces CVEs under contract with the Cybersecurity and Infrastructure Security Agency (CISA); the National Vulnerability Database, administered by the National Institute of Standards and Technology; and the CVE Numbering Authorities (CNAs) authorized to assign CVEs, many of which are already struggling with backlogs.

Sasha Romanosky, a senior policy researcher at RAND, tells CSO the system is more likely to degrade gradually than collapse outright under the weight of a spiraling number of CVEs.

“I don’t think it would cause anything to break,” Romanosky says. “They just wouldn’t get processed. Lots of vulnerabilities would be ignored.”

That dynamic could shift more responsibility toward software vendors and CNAs, many of which already face capacity constraints of their own. Distributing more of the enrichment and prioritization work downstream may help in the short term — but only if automation improves alongside it.

“The system isn’t fragile,” Romanosky says. “It’s constrained.”

In practice, that could mean growing queues, uneven data quality, and greater reliance on private-sector tooling to compensate for delays in public databases. The result is not necessarily higher risk, but greater fragmentation, particularly for organizations without mature vulnerability management programs.

The cybersecurity industry is not facing an explosion of exploitable weaknesses so much as an explosion of information. For CISOs, success in 2026 will depend less on reacting faster and more on deciding better — using automation and context to ensure that rising vulnerability counts do not translate into rising risk.

“It hasn’t been a human-scale problem for some time now,” Roytman says. The challenge ahead is making sure it does not become an unmanageable one.