AI giant Anthropic has unveiled Project Glasswing, a cybersecurity initiative built around Claude Mythos Preview, a model the company describes as delivering “cybersecurity in the age of AI” by autonomously identifying software vulnerabilities at scale.
Rather than release the model publicly, Anthropic is restricting access to a closed consortium of more than 40 organizations, including Amazon, Microsoft, Apple, Alphabet-owned Google, and the Linux Foundation, along with a small group of security vendors such as CrowdStrike, Palo Alto Networks, and Cisco.
“Mythos makes the first domino clearer: Once frontier AI can do large-scale bug hunting, the logic of paying humans for routine discovery starts to break down,” says Jeff Williams, founder of OWASP and CTO of Contrast Security.
According to Anthropic, the goal is to apply these capabilities in a controlled, defensive setting, enabling participating organizations to test and improve the security of widely used software and infrastructure.
The economics of bug hunting shift
In early testing, Anthropic claims the model identified thousands of high-severity vulnerabilities across operating systems, browsers, and other widely used software. Some had persisted despite extensive prior review — including a 27-year-old flaw in OpenBSD, long considered one of the most security-hardened operating systems and widely used in critical infrastructure.
As with many early AI capability claims, the results are largely self-reported and only partially externally verifiable, but they point to a clear direction: Vulnerability discovery is becoming more automated and scalable.
That shift raises questions about how security work is organized and valued.
For OWASP’s Williams, the disruption begins with economics. If AI systems can perform large-scale vulnerability discovery, the rationale for relying on human-driven bug hunting — particularly for routine discovery — erodes.
But the implications extend beyond bug bounty programs. “This does not just threaten bug bounties,” he says. “It threatens the whole idea that security can remain a find-and-fix afterthought. The era of the security backlog is coming to a welcome end.”
From backlog management to exposure-window risk
The issue, as Williams frames it, is not simply how many vulnerabilities exist, but how they are managed. “Mythos makes one thing painfully clear,” he says. “This is not a prioritization problem. It’s an exposure-window problem.”
Traditional vulnerability management has been built around prioritization — ranking issues by severity, exploitability, and business impact, then working through remediation over time.
Williams argues that the limiting factor is no longer how well organizations prioritize, but how long vulnerabilities remain exposed.
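The distinction can be made concrete with a toy metric. The sketch below is purely illustrative — the finding IDs, dates, and field layout are hypothetical, not drawn from any tool or dataset mentioned in this article — but it shows the difference between counting what sits in a prioritized queue and measuring how long issues actually stay exposed:

```python
from datetime import date

# Hypothetical backlog entries: (id, severity, discovered, remediated or None)
findings = [
    ("VULN-1", "critical", date(2025, 1, 6), date(2025, 1, 10)),
    ("VULN-2", "medium",   date(2025, 1, 6), date(2025, 3, 1)),
    ("VULN-3", "critical", date(2025, 2, 1), None),  # still open
]

def exposure_days(discovered, remediated, today=date(2025, 3, 15)):
    """Days a flaw was (or has so far been) exposed."""
    return ((remediated or today) - discovered).days

# Prioritization view: how many critical issues remain in the queue?
criticals_open = sum(
    1 for _, sev, _, fixed in findings if sev == "critical" and fixed is None
)

# Exposure-window view: how long do issues stay exposed on average?
mean_exposure = sum(
    exposure_days(d, r) for _, _, d, r in findings
) / len(findings)

print(criticals_open)  # count of open criticals
print(mean_exposure)   # mean exposure window in days
```

Under the prioritization view, one open critical looks manageable; under the exposure-window view, the medium-severity flaw that sat unpatched for 54 days is the number that matters.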
Adapting to AI-powered cyber defense
Anthony Grieco, SVP and chief security and trust officer at Cisco, places the development in a broader operational context. In a blog post, Grieco argues that organizations must “rise to the era of AI-powered cyber defense,” reflecting a shift in both the threat landscape and the capabilities required to respond.
Cisco is among the organizations participating in Project Glasswing, joining what Anthropic describes as a collaborative effort to apply advanced AI capabilities to defensive security use cases. Grieco emphasizes that security programs will need to evolve alongside rapidly advancing AI capabilities.
“AI capabilities will continue to advance, the threat surface will evolve, and the organizations that protect the internet will need to operate at the speed of machines and the scale of networks,” Grieco says. “Much of what we are now experiencing would have been unimaginable just a few years ago. There is no finish line, only a commitment to do everything possible to stay ahead of adversaries.”
For security leaders, that combination — more scalable discovery and the need to operate at greater speed — challenges longstanding assumptions about how risk is handled. Backlogs, long treated as an unavoidable operational reality, become harder to justify if vulnerabilities can be identified more quickly and comprehensively.
A shift upstream — and open questions about control
“The future belongs to software factories that can reliably produce secure code and the assurance case to prove it,” Williams says, pointing to a model in which security is built into development processes rather than addressed primarily after deployment.
Grieco’s emphasis on adapting to AI-powered threats aligns with that direction, underscoring the need for organizations to evolve both their tools and their assumptions about how quickly security-relevant conditions can change.
At the same time, questions remain about how broadly these capabilities will spread. Anthropic has chosen to limit access to Mythos Preview, reflecting the dual-use nature of systems that can identify software vulnerabilities at scale but could also accelerate their exploitation.
“It’s highly questionable that Anthropic will be able to limit the malicious uses of this model,” Williams says.
Anthropic has committed $100 million in model usage credits to Project Glasswing, with participants expected to contribute additional usage during the research preview. Claude Mythos Preview will be available through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
The company has also pledged funding to open-source security efforts, including donations to Alpha-Omega, OpenSSF, and the Apache Software Foundation to support maintainers responding to these changes. Maintainers interested in access can apply through the Claude for Open Source program.