Applying the CIS Controls to Real‑World AI Environments

Artificial intelligence (AI) is not arriving quietly. It is showing up everywhere at once. Models now power internal copilots, agents handle multi‑step tasks, and new integration protocols let AI systems interact with tools, application programming interfaces (APIs), and business data. For many organizations, this feels like defending a moving target as the attack surface expands. Behaviors shift with every model update, and AI systems operate with a level of autonomy traditional controls were never designed to manage.

To address these challenges, the Center for Internet Security® (CIS®), Astrix Security, and Cequence Security partnered to develop actionable cybersecurity guidance tailored to AI environments. The work extends the globally recognized CIS Critical Security Controls® (CIS Controls®) into environments where autonomous decision‑making, tool and API access, and automated threats introduce new risks.

The result is three new CIS Companion Guides: the AI Large Language Models (LLM) Companion Guide, the AI Agent Companion Guide, and the Model Context Protocol (MCP) Companion Guide. Together, they help enterprises adopt AI responsibly and securely while staying aligned with the CIS Controls they already use.

Why AI Needed More Than a Single Guide

AI systems are not a single technology layered onto existing applications. They consist of multiple components, each with its own security challenges.

LLMs determine how information is processed and generated. AI agents add reasoning, planning, memory, and autonomous action across workflows. MCP defines how AI systems interact with external tools, services, and data through a structured protocol.

Treating these as one security surface would blur critical boundaries. CIS intentionally separated the guidance into three Companion Guides, each answering a distinct question security teams already ask:

  • What security controls apply to AI systems?
  • Why publish three guides instead of one?
  • How do the guides work together in practice?

Together, the guides provide full coverage without duplication or gaps between layers.

The partnership between CIS, Astrix, and Cequence focused on securing AI as it operates in real production environments.

Astrix contributed expertise in securing AI agents, MCP servers, and non‑human identities (NHIs), such as API keys, service accounts, and OAuth tokens. This ensured strong emphasis on identity, authorization, and credential lifecycle management.

Cequence brought deep experience securing enterprise applications and APIs, shaping guidance around visibility, governance, and control over what AI systems can access and execute.

Combined with CIS’s standards leadership, the collaboration produced guidance grounded in real operational needs.

Answering the Questions Security Teams Are Asking

The structure of the Companion Guides reflects the practical questions enterprises often ask.

What Security Controls Apply to AI Systems?

The CIS Controls continue to apply, but AI systems behave differently than traditional applications. The Companion Guides evaluate each CIS Control through an AI‑aware lens, documenting how CIS Safeguards apply to models, agents, and MCP environments along with where traditional assumptions no longer hold.

Why Publish Three Guides Instead of One?

AI risk exists across multiple layers:

  • Model Layer: Inputs, outputs, context, and data exposure
  • Agent Layer: Memory, tool use, and autonomous workflows
  • MCP Layer: Protocol boundaries where tools and data are accessed

Each Companion Guide secures what the others cannot.
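At the MCP layer, the core control is a deny-by-default boundary: a tool request from an agent only crosses into external systems if the tool is known and the requested scopes fall within policy. A minimal sketch of that check follows; the tool names, scope strings, and `authorize_tool_request` function are hypothetical, not drawn from the MCP specification or the guides.

```python
# Hypothetical MCP-layer policy: which tools an AI system may invoke,
# and the maximum set of scopes each tool is permitted to use.
ALLOWED_TOOLS = {
    "read_ticket": {"scopes": {"tickets:read"}},
    "update_ticket": {"scopes": {"tickets:read", "tickets:write"}},
}

def authorize_tool_request(tool: str, requested_scopes: set[str]) -> bool:
    """Deny-by-default: unknown tools and excess scopes are both rejected."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return False
    return requested_scopes <= policy["scopes"]
```

The deny-by-default shape matters: an agent that hallucinates a tool name, or asks for broader scopes than policy allows, is stopped at the protocol boundary rather than at the downstream system.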

How Do the Guides Work Together in Practice?

Across the three Companion Guides, CIS defines a shared AI security lifecycle:

  • Inputs are sanitized at the model layer (AI LLM Companion Guide)
  • Context and memory are protected across model and agent layers (AI LLM and AI Agent Companion Guides)
  • Reasoning is constrained by guardrails at the agent layer (AI Agent Companion Guide)
  • Tool requests are validated and authorized through MCP (MCP Companion Guide)
  • Actions are logged, bounded, and auditable (MCP Companion Guide)
  • Outputs are reviewed, redacted, or minimized (AI LLM and AI Agent Companion Guides)

No single layer can enforce security end to end. Security holds only when controls span all three surfaces.
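The lifecycle stages above can be sketched as a set of small, composable checks, one per layer. Everything here is an assumption for illustration: the blocked phrase, the step budget, and the key pattern are stand-ins for whatever policies an enterprise actually adopts.

```python
import re

# Illustrative stand-in for input sanitization at the model layer:
# strip one well-known prompt-injection phrase (real filters are broader).
BLOCKED_PATTERN = re.compile(r"ignore (all )?previous instructions",
                             re.IGNORECASE)

def sanitize_input(prompt: str) -> str:
    """Model layer: neutralize a known injection phrase (illustrative only)."""
    return BLOCKED_PATTERN.sub("[removed]", prompt)

def within_guardrails(planned_actions: list[str], budget: int = 3) -> bool:
    """Agent layer: bound how many autonomous steps a plan may take."""
    return len(planned_actions) <= budget

def redact_output(text: str) -> str:
    """Output stage: mask anything shaped like an API key (illustrative)."""
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[REDACTED]", text)
```

The point of the sketch is the layering, not any single check: a sanitized input can still yield an over-broad plan, and a well-bounded plan can still produce output that needs redaction, which is why the guides insist controls span all three surfaces.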

Why This Guidance Matters Now

Enterprise AI adoption continues to accelerate, often outpacing the security programs meant to govern it. Enterprises are deploying AI into production workflows that touch sensitive data and systems, frequently without clear answers about which controls apply.

AI risks are already material:

  • Models can leak sensitive data through embeddings or logs
  • Agents can execute unauthorized code or corrupt records
  • Memory stores can accumulate confidential information indefinitely
  • RAG pipelines can be poisoned to manipulate decision-making
  • MCP servers can introduce unsafe capabilities through silent updates
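One of the risks above, memory stores accumulating confidential information indefinitely, has a direct mitigation: attach a time-to-live to every memory entry and prune on a schedule. The sketch below is a minimal illustration under an assumed 24-hour retention policy; the function and data shape are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mitigation for unbounded agent memory: a retention policy
# so confidential context cannot accumulate indefinitely.
MEMORY_TTL = timedelta(hours=24)  # illustrative retention window

def prune_memory(entries: list[tuple[str, datetime]],
                 now: datetime) -> list[tuple[str, datetime]]:
    """Keep only (text, stored_at) entries younger than the TTL."""
    return [(text, ts) for text, ts in entries if now - ts < MEMORY_TTL]
```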

These guides recognize that AI is no longer experimental; it is operational and must be secured with the same rigor applied to cloud‑native apps, containerized workloads, and microservices.

By extending the CIS Controls into AI environments, the Companion Guides give security teams practical, prioritized guidance for securing AI systems that are already operational without introducing a new framework.

From Partnership to Practice

The partnership between CIS, Astrix, and Cequence reflects a shared goal: helping enterprises innovate with AI responsibly and securely. By combining standards leadership with real‑world expertise securing agents, identities, protocols, and execution paths, the final release delivers guidance that can be put into practice immediately.

The three Companion Guides mark a turning point in enterprise AI security. Instead of treating AI systems as ungovernable, CIS brings them into the familiar structure of the CIS Controls while addressing the unique risks they introduce.

  • The AI LLM Guide secures the model layer.
  • The AI Agent Guide secures autonomy and action.
  • The MCP Guide secures how AI interacts with tools and data.

Together, they provide a practical, layered framework for building AI systems that are both operational and secure. AI is no longer just a research project or a productivity boost; it’s becoming infrastructure, and infrastructure needs controls.
