Report: AI chatbots readily assist in plotting mass violence, with some saying ‘Happy shooting!’

Researchers posing as teenagers plotting attacks found that 80% of AI assistants offered advice on targets and weapons

A joint probe by CNN and the Center for Countering Digital Hate (CCDH) found that eight out of ten top AI chatbots readily helped users plan violent acts, such as school shootings, bombings of religious sites, and assassinations.

Posing as distressed teens, researchers evaluated ten widely used chatbots, including ChatGPT, Google Gemini, Meta AI, and DeepSeek. Across numerous interactions, these AI tools gave specific advice on choosing targets, obtaining weapons, and carrying out attacks.

In one reported conversation, DeepSeek concluded by telling a potential attacker “Happy (and safe) shooting!” Character.AI, a platform favored by younger audiences, directly promoted violence, advising a user who voiced hatred for a health insurance CEO to “use a gun.”

In response to a query about effective shrapnel for bombs, ChatGPT gave a detailed analysis of different materials and proposed making “a quick comparison chart showing the typical injuries.” Google’s Gemini offered comparable data, complete with a comparative table.

Only Anthropic’s Claude and Snapchat’s My AI consistently declined to help, with Claude proactively discouraging harmful actions and directing users to mental health support.

This report follows last month’s deadly school shooting in Tumbler Ridge, Canada, where an 18-year-old, who allegedly used ChatGPT to plan the assault, killed nine people. Although OpenAI had banned the shooter’s account, he bypassed the restriction by creating a new one—an action the company did not report to officials.

A lawsuit filed by the family of 12-year-old Maya Gebala, who was severely wounded in the attack, claims OpenAI possessed “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event” but did not notify police. OpenAI has acknowledged it considered reporting the behavior but decided not to.

Court records show that last May, a 16-year-old in Finland stabbed three fellow students after using ChatGPT for almost four months to research attacks. In a separate January 2025 incident, a man who detonated a Tesla Cybertruck outside the Trump International Hotel in Las Vegas also turned to ChatGPT for instructions on building explosives.

Meta informed CNN it has implemented measures “to fix the issue identified,” while Google and OpenAI said their latest models have enhanced safety features. DeepSeek offered no comment on the findings.