The US government is preparing to authorize a version of Anthropic’s Claude Mythos model for use by major US federal agencies, amid concerns that the AI model can rapidly spot cybersecurity vulnerabilities and provide the means to exploit them.

Federal Chief Information Officer Gregory Barbaccia at the White House Office of Management and Budget (OMB) told officials at Cabinet departments on Tuesday that the OMB was setting up protections to allow federal agencies to begin using the model, reported Bloomberg, citing an internal memo.

The memo did not commit specific agencies to deployment or provide a timeline, the report said.

“We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies,” Barbaccia wrote in the email, according to the report.

The OMB move comes while the Department of Defense’s supply-chain risk designation against Anthropic, issued on March 3, remains in force. The D.C. Circuit refused to stay the designation on April 8, keeping Anthropic barred from defense contracts while civilian agencies are now being positioned for access.

The White House and Anthropic did not immediately respond to requests for comment.

Defining the guardrails

The memo’s reference to a modified version of the model points to open questions about what agency deployment would actually look like. Anthropic announced Claude Mythos Preview on April 7 under Project Glasswing, a controlled-access program for select technology and financial organizations.

At the time, the company said the model had identified thousands of zero-day vulnerabilities across every major operating system and browser in internal testing, and stated that it did not plan to make the model generally available.

“For a federal deployment to be defensible, the modifications must cover specific assurance dimensions,” said Neil Shah, VP for research and partner at Counterpoint Research. “The software code base being scanned should remain sovereign within an isolated and air-gapped environment, and the data should not be used to retrain the base model.” Additional steps could include transparency requirements and human-in-the-loop review before any bug fix is applied, he said, to make the deployment more controlled.

Enterprise implications

Those same assurance questions translate directly to enterprise procurement. The OMB move signals that federal cyber defense is pivoting toward frontier models that can find vulnerabilities faster than human teams can patch them, Shah said, and the rift between the Pentagon and the White House carries a lesson for private-sector buyers.

“The rift between the two government entities is a lesson on how important it is to control the deployment of potent AI capabilities which could be misused,” he said, calling for a multi-layered control framework spanning discovery, classification, security, assurance, and action.

The asymmetry extends beyond US borders. European agencies have largely been locked out of early access, with only the UK AI Security Institute granted the ability to test the model. If the OMB authorization proceeds on the terms Barbaccia described, defensive AI capability inside the US federal government would advance ahead of European counterparts, even as the Pentagon designation against the same vendor continues to move through the courts.

A civilian workaround to the Pentagon ban

The modified-version approach is how Anthropic is navigating around the Pentagon’s position without losing control of the model, Shah said.

“The Anthropic modified version thereby circumvents the Pentagon’s black and white approach and helps other entities adopt the model as a security enclave for civilian and enterprise sovereignty with agreed-upon guardrails,” Shah said. He added that the arrangement sets a precedent for Anthropic’s future adoption across other government entities and enterprises.

Federal access to Anthropic has been in flux for weeks. A US District Court in California granted Anthropic a preliminary injunction on March 26 against a parallel civilian designation, a ruling that gave contractors breathing room to reassess AI supply chains.

Anthropic is now simultaneously blacklisted from military procurement, enjoined from removal across civilian systems, and under discussion for expanded access through OMB. Contractors face operational difficulty identifying where specific AI models sit inside their stacks, a challenge that has reshaped supply-chain risk across federal AI deployments.