Prompt Injections: The Inherent Threat to Generative AI
Published on March 18, 2026
Prompt injections are an attack technique that manipulates generative artificial intelligence (GenAI) Large Language Models (LLM) and their task-specific agents to engage in malicious behavior. AI prompt injections are likely an increasing threat to U.S. State, Local, Tribal, and Territorial (SLTTs) government organizations based on the community’s widescale adoption of GenAI, its susceptibility to these attacks, and cyber threat actors’ (CTAs’) intent to target GenAI in their attacks. CTAs are already testing prompt injection attacks to steal sensitive information, gain unauthorized system and network access, establish persistence, and otherwise modify or prevent the functionality of LLMs and their agents, according to open-source reporting.
To better secure organizational GenAI deployments, SLTT organizations should establish a GenAI acceptable use policy and refer to the recommendation section at the end of this white paper.
As of June 23, 2025, the MS-ISAC has introduced a fee-based membership. Any potential reference to no-cost MS-ISAC services no longer applies.