Securing the AI Ecosystem Begins at the Model Layer

For many organizations, securing the AI ecosystem is a daunting task, short on clarity and demanding specialized guidance for each layer of the stack. The foundation of any AI-enabled system is the model: the Large Language Model (LLM) or Small Language Model (SLM) responsible for generating responses, transforming text, writing code, handling data, or powering downstream workflows. Unlike traditional software, however, models aren’t deterministic. Their behavior shifts with prompts, context windows, retrieval inputs, fine-tuning, temperature settings, or even silent provider updates.

These characteristics fundamentally change the threat model. Attackers no longer need to exploit code paths or vulnerabilities. Instead, they can manipulate inputs, context, data sources, or configuration to influence model behavior in ways that are difficult to detect and even harder to reproduce.

Securing the AI Ecosystem

To address these challenges, the Center for Internet Security® (CIS®), Astrix Security, and Cequence Security partnered to develop actionable cybersecurity guidance tailored to AI environments. The work extends the globally recognized CIS Critical Security Controls® (CIS Controls®) into environments where autonomous decision‑making, tool and API access, and automated threats introduce new risks.

Targeted Guidance for Securing How Modern AI Systems Operate

The result is three new CIS Companion Guides: the AI Large Language Model (LLM) Companion Guide, the AI Agent Companion Guide, and the Model Context Protocol (MCP) Companion Guide. Together, they help enterprises adopt AI responsibly and securely while staying aligned with the CIS Controls they already use.

The first of the three guides, the AI LLM Companion Guide, tackles the new classes of risk that LLMs create and that security teams cannot ignore. It focuses on what it takes to secure the “model layer,” addressing challenges such as:

1. Context Integrity

Models rely heavily on whatever input they are given. If that context becomes poisoned, whether deliberately (prompt injection) or accidentally (bad data passed in from an upstream system), the model’s behavior can change dramatically. The guide emphasizes:

  • Treating all model inputs as untrusted
  • Sanitizing retrieved context
  • Hardening system prompts
  • Preventing indirect prompt injection
  • Governing Retrieval Augmented Generation (RAG) data as a high‑trust input channel

This elevates context itself to a security boundary, which is a new idea for many teams.
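To make that boundary concrete, here is a minimal sketch of sanitizing retrieved context before it reaches the model. The injection patterns and helper names are illustrative assumptions, not prescriptions from the guide:

```python
import re

# Illustrative patterns that often signal indirect prompt injection
# hidden inside retrieved documents (an assumption, not an exhaustive list).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
    re.compile(r"you are now (a|an) ", re.I),
]

def sanitize_passage(text: str) -> str | None:
    """Treat a retrieved passage as untrusted input.

    Returns None (drop and quarantine the passage) if it matches a
    known injection pattern; otherwise wraps it in delimiters so the
    model can distinguish data from instructions.
    """
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            return None  # quarantine for human review instead of passing through
    # Delimit untrusted content; a hardened system prompt can then
    # instruct the model to treat anything inside the tags as data only.
    return f"<retrieved_data>{text}</retrieved_data>"

def build_context(passages: list[str]) -> str:
    """Assemble the RAG context from sanitized passages only."""
    return "\n".join(p for p in map(sanitize_passage, passages) if p)
```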

2. Data Sensitivity and Leakage

Prompts, completions, logs, embeddings, and RAG content often contain sensitive information, even if users never intended to provide it. The guide stresses:

  • Data classification for model‑related data
  • Strict retention and deletion policies
  • Encryption and access controls for embeddings, logs, and caches
  • Avoiding “data drift” in uncontrolled model memory

In short: everything a model touches must be handled like the sensitive data it often is.
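As a sketch of what “classify before you store” can look like, the example below redacts common sensitive values from a prompt before it is logged, cached, or embedded. The patterns and helper names are hypothetical, not taken from the guide:

```python
import logging
import re

# Illustrative classification rules; a real deployment would derive
# these from the organization's data classification policy.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive values with typed placeholders so only the
    redacted form ever reaches logs, caches, or embedding stores."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

def log_prompt(prompt: str, logger: logging.Logger) -> None:
    # Persist the redacted form only; the raw prompt stays in memory.
    logger.info("prompt=%s", redact(prompt))
```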

3. Deployment Differences

The guide differentiates between:

  • Endpoint-hosted models (local notebooks, desktop clients, local inference)
  • Enterprise-hosted models (private cloud, Graphics Processing Unit (GPU) clusters)
  • SaaS-hosted models (provider Application Programming Interfaces (APIs))

Each has radically different security obligations.
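One way to operationalize those differing obligations is to pin required controls to the hosting model in a review checklist. The control names below are assumptions chosen for illustration, not the guide’s own taxonomy:

```python
from enum import Enum

class Hosting(Enum):
    ENDPOINT = "endpoint"      # local notebooks, desktop clients, local inference
    ENTERPRISE = "enterprise"  # private cloud, GPU clusters
    SAAS = "saas"              # provider APIs

# Hypothetical mapping of hosting model to required controls.
REQUIRED_CONTROLS = {
    Hosting.ENDPOINT: {"device encryption", "local model allowlist", "egress monitoring"},
    Hosting.ENTERPRISE: {"network segmentation", "tenant isolation", "artifact signing"},
    Hosting.SAAS: {"API key rotation", "data processing agreement", "provider log review"},
}

def missing_controls(hosting: Hosting, implemented: set[str]) -> set[str]:
    """Return the controls still outstanding for a given deployment."""
    return REQUIRED_CONTROLS[hosting] - implemented

# Example: a SaaS deployment that has only rotated its API keys.
print(missing_controls(Hosting.SAAS, {"API key rotation"}))
```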

4. Model Supply Chain and Provenance

Enterprises increasingly mix open‑weight models, SaaS-hosted models, and fine‑tuned variants. Without clear provenance, version control, and support guarantees, it becomes impossible to manage vulnerabilities or behavioral drift.

The guide pushes enterprises to treat models like software artifacts, with version pinning, registries, integrity checks, and retirement policies.
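A minimal sketch of that “models as software artifacts” discipline follows, assuming a hypothetical pinned manifest; the digest shown is truncated for illustration:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned manifest: model name -> (pinned version, expected SHA-256).
# In practice this would live in a signed model registry.
MODEL_MANIFEST = {
    "summarizer": ("1.4.2", "9f2c4e...<full 64-hex-digit digest>"),
}

def verify_model(name: str, artifact: Path) -> None:
    """Check a local model artifact against its pinned digest before loading.

    Raises RuntimeError on mismatch so a tampered or silently updated
    artifact never reaches inference.
    """
    version, expected = MODEL_MANIFEST[name]
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(
            f"Integrity check failed for {name} v{version}: "
            f"expected {expected}, got {actual}"
        )
```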

Strengthening AI Security Without a New Framework 

Extending the CIS Controls to AI systems gives organizations clear, risk-reducing actions tailored to how they actually use AI, without requiring them to adapt to a new framework. Teams can keep the skills and processes they already have; there is no new methodology to learn.

The AI LLM Guide reframes model security as a data, configuration, and supply-chain problem, not just a safety or red‑teaming issue. It sets the stage for everything that comes later.
