Five Eyes intel agencies publish first joint agentic AI security guide. Their advice: slow down.
CISA, NSA, GCHQ, ASD, CSE and NCSC-NZ jointly tell organizations agentic AI isn't ready for fast rollout. The 23-page guide names five risk categories.
Six national cybersecurity agencies from every Five Eyes country released a joint guide called “Careful Adoption of Agentic AI Services” on May 1. The headline message: agentic AI systems are not ready for fast deployment. Organizations should treat them as untrusted infrastructure.
The signatories are CISA, NSA, the UK’s NCSC, Canada’s Cyber Centre, Australia’s ASD ACSC, and New Zealand’s NCSC. This is the first time the Five Eyes intelligence-sharing alliance has put a joint imprimatur on AI-agent security, a signal that the topic has moved from “emerging concern” into the same policy lane as supply-chain attacks and ransomware playbooks. It also cuts against the agentic-coding messaging from labs racing to ship autonomous workflows into production codebases: where the labs sell speed, the agencies’ core instruction is to assume agents may misbehave and to build for containment. That reads as a policy reset.
What the guide actually says
The document runs about 23 pages and identifies five risk categories that organizations must plan for before rolling agents into production:
- Privilege. Agents accumulate access to data, APIs, and accounts. One compromised agent with broad access becomes a single point of failure.
- Design and configuration. Defaults are insecure. Tool registries, prompts, and permissions need to be hardened during deployment, not after the first incident.
- Behavioral. Models pursue goals in unexpected ways. The guide explicitly tells operators to “assume that agentic AI systems may behave unexpectedly.”
- Structural. Agents talking to other agents create cascading-failure risk. One misbehaving node propagates.
- Accountability. Logs are hard to parse, decisions are hard to inspect, and incident responders end up reconstructing reasoning from context windows. This is the auditability gap.
The headline recommendation is a sentence that ought to land hard with anyone shipping LLM features on a deadline: organizations should “prioritize resilience, reversibility, and risk containment over efficiency gains.” That’s a direct rebuke of the “ship fast and reduce headcount” framing many agentic deployments still use.
The 100-plus practices it contains aren’t novel security ideas; they’re old security ideas applied to a new attack surface. Zero trust. Defense-in-depth. Least privilege. The agencies’ position is that you don’t need a new discipline. You need to fold agents into the controls you already run.
What’s prescriptive
A few items in the guide are operational and worth flagging because they’re already debated in the agentic-coding community:
- Cryptographic agent identities with short-lived credentials. Each agent gets its own verifiable identity, not a shared service account. The recommended pattern is short-lived tokens, not long-lived API keys.
- Encrypted agent-to-agent communication. When agents call other agents, the channel is encrypted and authenticated. Plain HTTP between internal agents is out.
- Human approval for high-impact actions. The guide pushes a “human in the loop for actions that change state outside the agent’s sandbox” model. This is the part that pushes back hardest against fully autonomous deployments.
- Treat the prompt as untrusted input. Prompt injection is named explicitly as an attack class. Anything an agent reads, including tool output and retrieved documents, is potentially adversarial.
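The guide itself doesn’t ship code, but the first bullet’s pattern (a per-agent identity carried in short-lived, verifiable credentials) can be sketched with standard-library primitives. Everything below is illustrative, not from the guide: the token format, the 300-second TTL, and the function names are assumptions, and a real deployment would more likely use signed JWTs with keys held in a KMS rather than an in-process signing key.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Illustrative only: real systems would fetch a per-agent key from a KMS/HSM.
SIGNING_KEY = secrets.token_bytes(32)

def issue_token(agent_id: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, HMAC-signed credential for one agent identity."""
    claims = {"agent": agent_id, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> str:
    """Return the agent id if the signature is valid and the token unexpired."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        raise PermissionError("token expired")
    return claims["agent"]
```

The design choice the guide’s recommendation implies: a stolen credential ages out on its own within minutes instead of living until someone remembers to rotate it, and `compare_digest` keeps signature checks constant-time.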
The “Careful Adoption” PDF lists all six agencies as joint authors, which means none of them dissented from the recommendations. That’s notable: CISA and NCSC have previously published guidance that didn’t land cleanly across borders. This one did.
What this means for you
If you’re building or buying agentic features inside an organization with any compliance posture, the Five Eyes guide just became something your security team will reference. Three concrete moves are worth making this week:
First, audit which of your current agents have privileges they don’t need. Most teams that ship Claude Code, Cursor, or in-house agents started with broad credentials and never tightened them. The “single compromised agent” risk is the one CISA leads with for a reason.
Second, write down the “high-impact actions” your agents can take, and gate them on human approval if they aren’t already. “High-impact” maps to anything that costs money, modifies production data, or sends external messages.
Third, treat the guide as a procurement filter. If a vendor’s agentic product can’t answer “what credentials does the agent hold, how short-lived are they, who approves writes” without hand-waving, you’re holding a risk that six national security agencies have now named in print. The buyers’ market hasn’t caught up to this yet, but it will. The agencies’ guide is dated May 1, 2026; budgets that close in Q3 will start citing it.
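The second move above amounts to a small policy gate: classify each action an agent can take, and let only low-impact ones through without a named human approver. The category names and the `gate` helper below are hypothetical, a sketch of the article’s rule of thumb (costs money, modifies production data, sends external messages), not anything the guide prescribes.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative categories following the article's "high-impact" rule of thumb:
# spends money, touches production data, or sends messages outside the org.
HIGH_IMPACT = {"spend_money", "write_production", "send_external"}

@dataclass
class Action:
    name: str
    category: str
    detail: str

def gate(action: Action, approved_by: Optional[str] = None) -> bool:
    """Allow low-impact actions; require a named human approver otherwise."""
    if action.category not in HIGH_IMPACT:
        return True
    if approved_by:
        # A real system would also write approver + action to an audit log,
        # closing the accountability gap the guide describes.
        return True
    return False
```

For example, `gate(Action("issue_refund", "spend_money", "$40"))` returns `False` until a human is named, while a read-only lookup passes straight through. The point of writing the list down first, as the article suggests, is that the gate is only as good as the inventory behind it.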
The PDF is public and free. Read it before you spec your next agent rollout rather than after.
Sources
- Careful Adoption of Agentic AI Services (PDF) — U.S. Department of Defense
- CISA, US and International Partners Release Guide to Secure Adoption of Agentic AI — CISA
- NSA Joins the ASD's ACSC and Others to Release Guidance on Agentic Artificial Intelligence Systems — NSA
- Five Eyes warn agentic AI is too dangerous for rapid rollout — The Register
- US government, allies publish guidance on how to safely deploy AI agents — CyberScoop