With prompt injection and other attack pathways consistently surfacing across agentic AI deployments, security watchdogs have stepped in, collectively, to draw some hard boundaries.

A joint advisory from the US Cybersecurity and Infrastructure Security Agency (CISA) and international partners has called for tighter control over permissions, stronger monitoring, and a more deliberate rollout strategy, urging organizations to treat agentic AI with caution.

“Organizations cannot just drop agents into production and hope the guardrails hold,” said Piyush Sharma, CEO and co-founder of Tuskira, agreeing with CISA’s instructions. “They need to understand what each agent can access, how it behaves, what systems trust its outputs, and which attack paths become reachable if it is manipulated.”

The advisory outlined design and development guidelines for organizations to follow before the implementation of AI agents. A few of these included strong authentication using Secure by Design principles, system transparency to flag deceptive indicators, least privilege across workflows, secure development principles as per DevSecOps fundamentals, and regular testing of incident response plans, among a host of others.

The advisory was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre, Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

Least privilege and tight boundaries

One of the clearest through-lines in the advisory was the need to constrain what agentic AI can access.

“Privilege risks are a key concern for agentic AI, and strict adherence to the principle of least privilege is critical,” CISA said in the advisory. “Privileges assigned to agents directly determine the level of risk they can introduce. Poor management of privileges can expose organisations to privilege compromise, scope creep, identity spoofing, and agent impersonation.”

The agencies emphasized enforcing least-privilege principles, isolating agent capabilities, and rigorously defining what data, tools, and systems each agent can interact with.

This is easier said than done, especially as agents are increasingly wired into APIs, internal systems, and external services. “Every tool, data source, memory store, and permission an agent touches becomes another possible way in for attackers,” Sharma noted.

To tackle this, the advisory recommends organizations maintain a clear inventory of agent capabilities and dependencies, while also validating how agents interpret and act on inputs. This includes guarding against prompt injection and ensuring that agents don’t blindly trust external content or instructions.

Continuous monitoring with human-in-the-loop control

While the first half of the advisory focused on limiting what agents can do, the second was about watching what they actually do, reacting quickly when things go sideways.

“Operators should implement continuous monitoring and auditing to maintain awareness of AI agent operation and ensure traceability for decisions and actions,” CISA added. “Continuous auditing processes improve security measures and ensure alignment with governance standards (such as risk management, oversight, and usage restrictions).”

CISA and its international partners also recommended integrating human control and oversight into agentic AI workflows to ensure they are approved for non-sensitive, low-risk tasks. For this, the agencies suggested live monitoring during task execution, human approval for decision-making steps, and auditing upon task execution.

Experts agree that visibility is critical. “Security teams need continuous visibility into how agents behave, what systems they touch, and when their actions deviate from expected patterns,” said Nick Tausek, Lead Security Automation Architect at Swimlane. “Building human approval into high-risk workflows and automating containment is paramount for taking action when agent behavior crosses a line.”

Putting it all together, the advisory detailed core risk areas, from prompt injection and data exposure to tool misuse and privilege creep, urging organizations to lock down privileged access, validate inputs and outputs, monitor agent behavior, and tightly control how these systems interact with data, tools, and other services.

Read More