Artificial Intelligence and Large Language Models Companion Guide

Published on April 20, 2026

As enterprises rapidly integrate Large Language Models (LLMs), Small Language Models (SLMs), and other generative artificial intelligence (AI) systems into business workflows and IT operations, these systems introduce security and operational risks that differ from those of traditional applications. Because they are probabilistic, prompt-driven, and often connected to retrieval systems, memory stores, and external tools that can take real actions, the primary attack surface shifts toward context integrity, tool misuse, data exposure, model-specific supply chain risks, and the challenge of deterministically controlling probabilistic outputs. These differences require security teams to interpret existing best practices through an AI-aware lens to ensure that essential safeguards continue to provide strong coverage.

The CIS Critical Security Controls® (CIS Controls®) remain a globally trusted, prioritized set of defensive actions for reducing cybersecurity risk, but they were written before generative AI became both an enterprise platform in its own right and a component of existing enterprise platforms. Many CIS Safeguards map directly to AI-enabled systems (asset management, secure configuration, identity, logging, vulnerability management, and supplier governance). However, implementation must be interpreted through an AI-aware lens to address risks unique to LLM ecosystems, including direct and indirect prompt injection, retrieval poisoning, over-permissioned tool integrations, and provider-driven model updates. Applying the CIS Controls in this context therefore requires understanding how AI systems behave operationally and where their risks diverge from those of traditional software or cloud services.

This guide adapts CIS Controls v8.1 for text-centric generative AI by translating the intent of each CIS Control into practical expectations for AI-enabled systems across the full lifecycle: training/fine-tuning, deployment, inference, monitoring, and retirement. It also highlights AI-specific risk domains that require explicit operational controls, including prompt and guardrail change control, context boundary enforcement, model and dataset provenance, and the containment levers required to respond quickly when AI-driven workflows behave unexpectedly.

By interpreting the CIS Controls through the lens of text-based generative AI, this guide provides practitioners with a practical, defensible way to secure emerging AI capabilities using the same prioritized framework already used across thousands of enterprises worldwide. The result is a consistent approach that strengthens the security of LLM- and SLM-enabled systems while preserving the flexibility to align with rapidly advancing AI technologies.


As of June 23, 2025, the MS-ISAC has introduced a fee-based membership. Any potential reference to no-cost MS-ISAC services no longer applies.