For years, US cybersecurity guidance rested on a reassuring premise: New technologies introduce new wrinkles, but not fundamentally new problems. Artificial intelligence, according to that view, is still software, just faster, more complex, and more powerful.

The controls that protect traditional systems, the thinking went, can largely be adapted to protect AI, too. That assumption surfaced at a recent National Institute of Standards and Technology (NIST) workshop on AI and cybersecurity.

“AI systems in many ways are just smart software, fancy software with a little bit extra,” Victoria Pillitteri, supervisory computer scientist in the Computer Security Division at NIST, told attendees as she summarized that long-standing view. “That means we can leverage the robust body of [cybersecurity] knowledge that already exists with some modifications, with some considerations, but we do not and should not start from scratch,” she added.

But as discussions during the event turned to AI agents and adversarial manipulation, that concept began to fray. Experts described ways in which AI strains the fundamental assumptions those frameworks rely on, namely that systems behave deterministically, that boundaries between components are stable, and that humans remain firmly in control.

Those concerns are now moving beyond internal discussion and into public standards development. On Jan. 8, NIST’s Center for AI Standards and Innovation (CAISI) issued a formal Request for Information (RFI) on the secure practices and methodologies of AI agent systems, one of the most challenging aspects of AI when it comes to identity management and cybersecurity.

The RFI focuses on AI systems capable of taking autonomous actions that affect real-world environments and explicitly asks for input on novel risks, security practices, assessment methods, and deployment constraints.

For CISOs, what should matter is that NIST is shifting from a broad, principle-based AI risk management framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST’s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation’s standards-setting body is trying to tackle in a multifaceted way.

NIST’s wide-ranging cybersecurity and AI portfolio

Although the purpose of the workshop was to solicit feedback specifically on NIST’s preliminary Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), which is an adaptation of the community profiles emerging from NIST’s Cybersecurity Framework, experts addressed many other NIST practices and methodology initiatives that deal with AI-related threats and security opportunities.

These efforts show how NIST is attacking AI security from multiple angles — development, deployment, identity, privacy, and adversarial abuse — and include:

AI Risk Management Framework. Released on Jan. 26, 2023, NIST’s AI RMF was developed to better manage risks to individuals, organizations, and society associated with AI. “What we’re trying to do with the AI Risk Management Framework is understand how we trust AI, which operates in many ways differently in some of these tasks that we know very well,” particularly regarding how high-impact applications affect cybersecurity, Martin Stanley, principal researcher for AI and cybersecurity at NIST, said at the workshop.

Center for AI Standards and Innovation (CAISI). NIST’s CAISI serves as the “industry’s primary point of contact within the US government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems,” said Maia Hamin, a technical staff member of CAISI, the center that develops best practices and standards for improving AI security and collaboration. It also “leads evaluations and assessments of US and adversary AI systems, including adoption of foreign models, potential security vulnerabilities, or potential for foreign influence,” she told workshop attendees.

NIST AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. This NIST report, published in March 2025,provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). “Adversarial machine learning or adversarial AI is the field that studies attacks on AI systems that exploit the statistical and data-driven nature of this technology,” NIST research team supervisor Apostol Vassilev said at the workshop. “Hijacking, prompt injection, indirect prompt injection, data poisoning, all these things are part of the field of study of adversarial AI,” he clarified.

Dioptra. Dioptra is a NIST software test platform for assessing the trustworthy characteristics of AI. “You have multiple dimensions along which you want to analyze these as you want to identify how accurate they are for a particular task,” Harold Booth, NIST supervisory computer scientist, said at the event. “You want to be able to identify how robust they are to various kinds of attacks,” Booth said. “You want to know how well they do against various kinds of data sets.”

NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile. The AI SSDF community profile adds “practices, tasks, recommendations, considerations, notes, and informative references that are specific to AI model development throughout the software development life cycle.” NIST’s Booth told the workshop attendees, “This particular profile is very focused on what is new with respect to doing development for AI systems. So all the concerns that exist for normal software development still pertain. But what we were really focused on was what’s new.”

PETs Testbed. NIST’s PETs Testbed provides the capability to investigate privacy-enhancing technologies (PETs) and their respective suitability for specific use cases, helping organizations evaluate and manage privacy risks. Gary Howarth, who leads the privacy engineering program at NIST, said that within a few weeks, NIST will release a new version of its privacy framework that is complementary to AI risk management and cybersecurity threat modeling.

NIST Special Publication 800-63 Digital Identity Guidelines. NIST recently updated its 2017 guidelines on digital identity to better embrace the process and technical requirements for meeting digital identity assurance levels, given the rapid pace of digital technical change. Ryan Galluzzo, identity program lead for NIST Applied Cybersecurity Division, stressed at the workshop that “AI agents are starting to change the kind of context and conversation around traditional cybersecurity controls. Within the context of this project, our intent is really to focus on those issues of access, those issues of how to identify agents that are operating within my enterprise.

The limits of ‘AI is just software’

NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts — risk assessment, access control, logging, defense in depth — rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle.

But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance.

For CISOs, the risk is not that AI is unrecognizable, but that it appears recognizable enough to lull organizations into applying controls mechanically. Treating AI as “just another application” can obscure new failure modes, particularly those involving indirect manipulation through data or prompts rather than direct exploitation of code.

“AI agent systems really face a range of security threats and risks,” CAISI’s Hamin said at the workshop. “Some of these overlap with traditional software, but others kind of arise from the unique challenge of combining AI model outputs, which are non-deterministic, with the affordances and abilities of software tools.”

CISOs should watch out for framework fatigue

In kicking off the workshop, NIST senior policy advisor Katerina Megas explained that NIST reached out to the CISO community to ask them what they need in terms of AI security guidance.

“Before we started down any path, we spoke to the CISO community, and we asked them, ‘So how are you all dealing with artificial intelligence? How is this affecting your day-to-day? Is this something that keeps you up at night?’ And overwhelmingly, the answer was yes, this is absolutely something that is top of mind for us. Our leadership is asking us, what are we doing?” she said at the event.

But the CISOs also told NIST that they were overwhelmed with AI documentation. A lot of these publications had some overlap, but were not identical, Megas said. “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

“If the guidance is super long, then people may not actually use it,” one workshop attendee, Naveen Konrajankuppam Mahavishnu, co-founder and CTO at Aira Security, tells CSO, suggesting that much of the material can be reduced to more digestible components.

“We can have a very detailed version, maybe a hundred pages long, but also have some sort of checklist that kind of summarizes the entire 100-page paper or something into a few pages where people can easily consume it, and then they can start implementing it,” Mahavishnu says.

Read More