Securing Agents and Autonomous Behavior
If the model layer is about predicting text, the agent layer is about taking action. Agents introduce planning, decision‑making, tool invocation, memory, retrieval, and multi‑step reasoning, effectively giving AI systems the ability to operate like digital employees.
This massively expands the attack surface. To address these challenges, the Center for Internet Security® (CIS®), Astrix Security, and Cequence Security partnered to develop actionable cybersecurity guidance tailored to AI environments. The work extends the globally recognized CIS Critical Security Controls® (CIS Controls®) into environments where autonomous decision‑making, tool and API access, and automated threats introduce new risks.
Targeted Guidance for Securing How Modern AI Systems Operate
The result is three new CIS Companion Guides: the AI Large Language Model (LLM) Companion Guide, the AI Agent Companion Guide, and the Model Context Protocol (MCP) Companion Guide. Together, they help enterprises adopt AI responsibly and securely while staying aligned with the CIS Controls they already use.
The second of the three guides, the AI Agent Companion Guide, addresses the risks introduced when AI systems move beyond generating text to taking action. This guide focuses on securing the “agent layer,” where autonomy, tool execution, and decision‑making must be governed to prevent misuse, overreach, and unintended consequences. Together, these capabilities introduce a distinct set of risks at the agent layer:
1. Agents Act, So Their Actions Must Be Controlled
Agents don’t just generate text. They:
- Run code
- Query business systems
- Update records
- Draft communications
- Execute multi‑step workflows
- Spawn sub-agents
- Persist and retrieve memory
Because of this, a misconfigured or compromised agent can cause real harm.
The guide formalizes requirements like:
- Strict tool allowlisting
- Capability scoping and privilege boundaries
- Identity and access controls for every tool call
- Mandatory human-in-the-loop for high‑impact actions
- Action tracing and full auditability
This is where least privilege becomes absolutely non‑negotiable.
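To make these requirements concrete, here is a minimal Python sketch of a per‑call authorization gate. Everything in it is illustrative rather than taken from the guide: the tool names, the `TOOL_ALLOWLIST` structure, and the `human_approves` hook all stand in for whatever allowlisting, scoping, and approval mechanisms your agent platform actually provides.

```python
from dataclasses import dataclass, field

# Illustrative policy table: tool names, scopes, and the high-impact flag
# are hypothetical examples, not values from the CIS guide.
TOOL_ALLOWLIST = {
    "crm.read_record":   {"scope": "read",  "high_impact": False},
    "crm.update_record": {"scope": "write", "high_impact": True},
}

@dataclass
class AgentIdentity:
    agent_id: str
    granted_scopes: set = field(default_factory=set)

def human_approves(agent_id: str, tool: str) -> bool:
    # Stand-in for a real human-in-the-loop workflow (ticket, chat prompt, etc.).
    return input(f"Approve {tool} for {agent_id}? [y/N] ").strip().lower() == "y"

def authorize_tool_call(agent: AgentIdentity, tool: str, audit_log: list) -> bool:
    """Apply allowlisting, least privilege, and human review to one tool call."""
    entry = TOOL_ALLOWLIST.get(tool)
    if entry is None:
        audit_log.append((agent.agent_id, tool, "DENIED: not allowlisted"))
        return False
    if entry["scope"] not in agent.granted_scopes:
        audit_log.append((agent.agent_id, tool, "DENIED: outside granted scopes"))
        return False
    if entry["high_impact"] and not human_approves(agent.agent_id, tool):
        audit_log.append((agent.agent_id, tool, "DENIED: approval withheld"))
        return False
    audit_log.append((agent.agent_id, tool, "ALLOWED"))
    return True
```

Note that every branch writes to the audit log, allowed or denied, which is what makes the action‑tracing requirement enforceable after the fact.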
2. The Memory and Retrieval Layer Becomes a Behavioral Dependency
Unlike models, agents maintain:
- Working memory
- Long-term memory
- Vector stores or Retrieval Augmented Generation (RAG) pipelines
These are not passive knowledge bases. They shape the agent’s decisions. That means memory poisoning can completely alter agent behavior.
The guide treats memory as a high‑sensitivity asset requiring:
- Access control
- Classification
- Retention and deletion rules
- Input sanitization
- Validation before use
RAG is reframed not as a data storage problem but as a security boundary influencing agent logic.
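As a rough illustration of “validation before use,” the sketch below gates each retrieved chunk on provenance, classification, and a simple injection heuristic before it can reach the agent’s context. The source names, labels, and regex are assumptions made for this example; a production check would use your own classification scheme and stronger poisoning detection.

```python
import re

# Hypothetical policy values: trusted sources, classification labels, and
# the injection heuristic are examples, not prescriptions from the guide.
TRUSTED_SOURCES = {"confluence-internal", "policy-repo"}
ALLOWED_CLASSIFICATIONS = {"public", "internal"}   # this agent's clearance
INJECTION_PATTERN = re.compile(
    r"ignore (all|previous) instructions|system prompt", re.IGNORECASE
)

def validate_retrieved_chunk(chunk: dict) -> str | None:
    """Gate one RAG result before it can influence agent decisions."""
    if chunk.get("source") not in TRUSTED_SOURCES:
        return None  # unknown provenance: drop rather than trust
    if chunk.get("classification") not in ALLOWED_CLASSIFICATIONS:
        return None  # exceeds the agent's clearance
    text = chunk.get("text", "")
    if INJECTION_PATTERN.search(text):
        return None  # likely memory-poisoning / prompt-injection attempt
    return text
```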
3. Multi-Agent and Orchestration Risks
Agents can collaborate, delegate, or chain tasks. This introduces:
- Cross-agent contamination
- Confusion between authority scopes
- Blind trust in another agent’s output
- Infinite or runaway loops
The guide encourages guardrail layers that evaluate every proposed action deterministically before execution.
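One way to read “deterministic guardrail layer” is a plain‑code check that runs before every proposed action, with no model in the loop. The sketch below, with assumed limits and an assumed action schema, rejects runaway delegation chains, exhausted action budgets, and actions issued outside an agent’s authority scope.

```python
# Hypothetical limits and action schema; tune both to your orchestrator.
MAX_DELEGATION_DEPTH = 3    # caps agent-spawns-agent chains
MAX_ACTIONS_PER_TASK = 50   # caps total actions for one task

def guardrail_check(action: dict, depth: int, actions_so_far: int,
                    authority: dict[str, set]) -> tuple[bool, str]:
    """Deterministic pre-execution check: pure code, no LLM in the loop."""
    if depth > MAX_DELEGATION_DEPTH:
        return False, "delegation depth exceeded (possible runaway chain)"
    if actions_so_far >= MAX_ACTIONS_PER_TASK:
        return False, "action budget exhausted (possible infinite loop)"
    issuer = action["issued_by"]
    required = action["required_scope"]
    if required not in authority.get(issuer, set()):
        return False, f"agent {issuer} lacks scope {required}"
    return True, "ok"
```

Because the check is ordinary code, its verdicts are reproducible and auditable, which is exactly what blind trust in another agent’s output is not.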
4. Deployment and Runtime Ownership Matters
The guide distinguishes three runtime ownership models:
- Provider-managed agent runtimes
- Enterprise-managed runtimes
- Local/embedded runtimes
This distinction matters because ownership determines who controls memory, logs, tool interfaces, and agent behavior.
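A simple way to operationalize this is an ownership matrix consulted at deployment time. The assignments below are assumptions for illustration; the real split depends on your provider contracts and architecture.

```python
# Illustrative ownership matrix: who controls each security-relevant
# surface under each runtime model. The specific assignments are assumptions.
OWNERSHIP = {
    "provider_managed": {
        "memory": "provider", "logs": "provider",
        "tool_interfaces": "shared", "agent_behavior": "shared",
    },
    "enterprise_managed": {
        "memory": "enterprise", "logs": "enterprise",
        "tool_interfaces": "enterprise", "agent_behavior": "enterprise",
    },
    "local_embedded": {
        "memory": "device owner", "logs": "device owner",
        "tool_interfaces": "device owner", "agent_behavior": "device owner",
    },
}

def owner_of(runtime: str, surface: str) -> str:
    """Answer 'who controls this?' for a given runtime model and surface."""
    return OWNERSHIP[runtime][surface]
```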
The takeaway: Once an AI system moves from “predict text” to “decide and act,” the security challenges shift dramatically. This guide builds a safety net around the autonomy layer.