An Examination of Generative AI and Physical Threat Planning

Published on March 2, 2026

The evolution and widespread adoption of generative artificial intelligence (GenAI) have transformed and expanded the threat landscape. Analysts assess it is highly likely that threat actors will increasingly leverage GenAI in support of malicious activities, and they further assess that GenAI platforms are unlikely to significantly improve safety measures while also maintaining full functionality for legitimate uses. These assessments are made with moderate confidence based on analytical testing and open-source research conducted between August 2025 and December 2025.

A jailbreak is a technique used to circumvent a GenAI model's safeguards, such as by using strategically worded prompts to bypass platform restrictions. Analysts at the Center for Internet Security® (CIS®) tested a jailbreak strategy against three popular GenAI platforms to gain insight into how threat actors could use the technique to support malicious activities, including attacking critical infrastructure, constructing explosive devices, targeting law enforcement, and identifying U.S. border vulnerabilities. All three platforms provided detailed information on each tested topic.

Although the tested models provided information likely accessible via traditional open-source research methods, our findings demonstrate an evolution in public access to highly specific information that could be used for malicious purposes. It is essential that law enforcement, public safety officials, and critical infrastructure operators evolve their threat assessments, monitoring, and investigative techniques to reflect how GenAI is shifting the threat landscape.

