Securing the Integration Protocol

If the model layer is about predicting text and the agent layer is about taking action, the protocol layer is about execution. The Model Context Protocol (MCP) provides a structured, open interface through which models and agents invoke tools, access data sources, and apply reusable prompts in a consistent, standardized way.

This concentrates authority at a single execution boundary. Recognizing the stakes, the Center for Internet Security® (CIS®), Astrix Security, and Cequence Security partnered to develop actionable cybersecurity guidance tailored to AI environments. The work extends the globally recognized CIS Critical Security Controls® (CIS Controls®) into environments where autonomous decision‑making, tool and API access, and automated threats introduce new risks.

Targeted Guidance for Securing How Modern AI Systems Operate

The result is three CIS Companion Guides: the AI Large Language Model (LLM) Companion Guide, the AI Agent Companion Guide, and the Model Context Protocol (MCP) Companion Guide. Together, they help enterprises adopt AI responsibly and securely while staying aligned with the CIS Controls they already use.

The third of the three guides, the Model Context Protocol (MCP) Companion Guide, focuses on securing the protocol layer where tool invocation, data access, and execution requests are authorized and enforced. Since MCP sits between AI reasoning and real‑world systems, weaknesses at this layer can bypass controls at both the model and agent layers, introducing a distinct set of risks that must be addressed explicitly:

1. Authorization Must Never Be Delegated to the Model

One of MCP’s most important principles is that a model request is not an authorization decision. The server or gateway, not the AI model, must enforce:

  • Identity binding
  • Token validation
  • Scope and audience restrictions
  • Access Control Lists (ACLs) on tools and resources
  • Least-privilege access
  • Session control

Without this, an AI-generated call could execute destructive operations.
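The pattern above can be sketched as a server-side authorization gate that checks audience, ACL membership, and scopes before any tool call runs. This is an illustrative sketch, not code from the guide; the names (`AuthContext`, `TOOL_ACL`, `authorize`, the scope strings) are all hypothetical.

```python
# Sketch of a server-side authorization gate for MCP tool calls.
# All names here are hypothetical illustrations, not part of the MCP spec.
from dataclasses import dataclass, field

# Access Control List: which scopes may invoke which tools (deny by default).
TOOL_ACL = {
    "read_ticket": {"tickets:read"},
    "close_ticket": {"tickets:write"},
}

@dataclass
class AuthContext:
    subject: str                      # identity bound to the session, not the model
    scopes: set = field(default_factory=set)
    audience: str = ""

def authorize(tool_name: str, ctx: AuthContext, expected_audience: str) -> bool:
    """Enforce authorization at the server; never trust the model's request."""
    if ctx.audience != expected_audience:      # audience restriction on the token
        return False
    required = TOOL_ACL.get(tool_name)         # ACL lookup; unknown tools are denied
    if required is None:
        return False
    return required <= ctx.scopes              # least privilege: every scope required

ctx = AuthContext(subject="svc-agent", scopes={"tickets:read"}, audience="mcp-gw")
print(authorize("read_ticket", ctx, "mcp-gw"))   # True
print(authorize("close_ticket", ctx, "mcp-gw"))  # False: scope not granted
```

The key design choice is deny-by-default: a tool absent from the ACL, a mismatched audience, or a missing scope all fail closed, regardless of what the model asked for.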

2. Securing Transports and Endpoints

Whether via stdio or Streamable HTTP, MCP introduces new network and process boundaries. To help prevent cross-protocol attacks and unauthorized access, the guide emphasizes:

  • Restricting bind addresses
  • Enforcing Transport Layer Security (TLS)
  • Validating Origin headers (for HTTP)
  • Preventing Domain Name System (DNS) rebinding
  • Avoiding token passthrough
  • Treating MCP-Session-Id as session state, not identity
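Two of these checks, Origin validation and loopback-only binding, can be sketched together; a malicious web page can point its own DNS name at 127.0.0.1, but it cannot forge an allowlisted Origin header. The allowlist values and function names below are assumptions for illustration, not values from the guide.

```python
# Sketch: Origin-header validation for an HTTP MCP endpoint, assuming a
# local-only server. Allowlist entries and names here are hypothetical.
from urllib.parse import urlparse

ALLOWED_ORIGINS = {"http://localhost", "http://127.0.0.1"}  # assumed allowlist
BIND_ADDRESS = "127.0.0.1"  # bind to loopback, never 0.0.0.0, for local servers

def origin_allowed(origin_header):
    """Reject requests whose Origin is missing or not explicitly allowlisted.

    This defeats DNS rebinding: a hostile page can resolve its own hostname
    to 127.0.0.1, but the browser still sends the page's real Origin, which
    will not match the allowlist.
    """
    if not origin_header:
        return False
    parsed = urlparse(origin_header)
    return f"{parsed.scheme}://{parsed.hostname}" in ALLOWED_ORIGINS

print(origin_allowed("http://localhost:3000"))  # True: port ignored, host matches
print(origin_allowed("http://attacker.example"))  # False
```

Note that the port is deliberately excluded from the comparison in this sketch; a stricter deployment could allowlist full origins including ports.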

3. Capability Drift and Rug Pulls

MCP servers can update their declared tools, resources, and prompts. Without governance, this means:

  • A previously safe server could suddenly expose dangerous capabilities
  • A dependency update could silently expand privileges
  • Third-party servers could introduce malicious functions

The guide requires:

  • Enterprise allowlists and registries
  • Signed artifacts and provenance checks
  • Capability baselines and drift detection
  • Deny-by-default for new capabilities

This formally introduces the concept of a capability supply chain.
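A minimal version of capability baselining and drift detection can be sketched as follows: fingerprint a server's declared tools, compare new declarations against an approved baseline, and deny anything not on it. The data shapes and function names are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: capability baseline + drift detection for an MCP server's declared
# tools. Data shapes and names here are hypothetical illustrations.
import hashlib
import json

def capability_fingerprint(tools):
    """Stable hash of a server's declared capabilities, order-independent."""
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(baseline, declared):
    """Return declared tools absent from the approved baseline (deny by default)."""
    approved = set(baseline["tool_names"])
    return [t["name"] for t in declared if t["name"] not in approved]

baseline = {"tool_names": ["read_ticket"]}
declared = [{"name": "read_ticket"}, {"name": "delete_all_tickets"}]
print(detect_drift(baseline, declared))  # ['delete_all_tickets']
```

In practice the fingerprint would be recorded at approval time and checked on every capability listing, so that a "rug pull" (a previously reviewed server silently adding tools) fails the comparison instead of silently expanding privileges.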

4. Tool, Resource, and Prompt Hardening

MCP primitives must be treated as:

  • High-sensitivity configuration
  • Inputs to the model
  • Potential vectors for prompt injection or data leakage

The guide pushes for:

  • Schema validation
  • Parameter sanitization
  • Logging identifiers, not sensitive content
  • Review and versioning of all declared capabilities

In essence, MCP is where decisions turn into actions. The guide provides guardrails for that translation layer.
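Schema validation, parameter sanitization, and identifier-only logging can be combined in one gate in front of the tool, sketched below. The schema, field names, and helper are invented for illustration; real deployments would validate against the tool's declared JSON Schema.

```python
# Sketch: validate tool parameters against a declared schema and log only
# identifiers, never parameter values. Schema and names are hypothetical.
import logging
import uuid

SCHEMA = {"ticket_id": int, "comment": str}  # hypothetical tool parameter schema

def validate_params(params):
    """Reject unknown keys and wrong types before the call reaches the tool."""
    unknown = set(params) - set(SCHEMA)
    if unknown:
        raise ValueError(f"unexpected parameters: {sorted(unknown)}")
    for key, expected_type in SCHEMA.items():
        if key not in params or not isinstance(params[key], expected_type):
            raise ValueError(f"parameter {key!r} missing or wrong type")
    call_id = str(uuid.uuid4())
    # Log the call ID and field *names* only -- values may hold sensitive data.
    logging.info("tool call %s validated, fields=%s", call_id, sorted(params))
    return params

validate_params({"ticket_id": 42, "comment": "resolved"})  # passes
```

Rejecting unknown keys outright (rather than dropping them) matters here: an injected extra parameter is a signal of prompt injection or a misbehaving client, and should surface as an error, not be silently ignored.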

Securing the Execution Layer: Turning AI Intent into Trusted Action

As organizations accelerate AI adoption, the protocol layer becomes the decisive control point where intent turns into impact. The MCP Companion Guide makes clear that securing this layer is not optional; it is foundational. By enforcing authorization outside the model, hardening transports and endpoints, governing capability changes, and rigorously validating tools and inputs, enterprises can prevent AI systems from overstepping their bounds and introducing unintended risk.

Together with the LLM and Agent Companion Guides, this guidance extends the CIS Controls into the realities of modern AI, helping security teams apply familiar, proven principles to new architectures. The outcome is a practical path to innovation without compromising control, ensuring that as AI systems become more powerful, they also remain predictable, auditable, and secure.
