An Examination of AI-Enabled Threats to Event and Stadium Security

Published on May 1, 2026

The advent and widespread adoption of generative artificial intelligence (GenAI) have transformed and expanded the threat landscape. Analysts assess it is highly likely that threat actors will increasingly leverage GenAI to support malicious activities. Additionally, creators of GenAI models are unlikely to significantly improve safety measures while successfully maintaining intended platform functionality for legitimate uses. This assessment is made with moderate confidence based on analytical testing and open-source research conducted between February and March 2026, and it serves as a follow-up to An Examination of Generative AI and Physical Threat Planning, a January 2026 case study published by the Center for Internet Security® (CIS®).

CIS analysts tested a jailbreak strategy against three popular GenAI platforms to gain insight into how threat actors could use these capabilities to support malicious activities targeting stadiums, large-scale events, and host cities’ adjacent critical infrastructure. A jailbreak is a technique used to circumvent a GenAI model’s safeguards, such as leveraging strategically worded prompts to bypass platform restrictions. Although the tested models provided information likely accessible via traditional open-source research methods, the findings discussed in this white paper demonstrate an evolution in public access to highly specific information that could be used for malicious purposes. It is essential that law enforcement, public safety officials, critical infrastructure operators, and other applicable stakeholders evolve their threat assessments, monitoring, and investigative techniques to reflect the growing risk of GenAI-enabled attack planning.

As of June 23, 2025, the MS-ISAC has introduced a fee-based membership. Any potential reference to no-cost MS-ISAC services no longer applies.