Canada seeks answers from OpenAI following school massacre

The technology company’s safety division has been summoned to Ottawa to explain its failure to notify law enforcement regarding an account associated with a mass shooter.

Canadian authorities have asked senior representatives of OpenAI to come to Ottawa to answer questions about the tech firm’s safety procedures, following confirmation that the company did not alert police about an account linked to mass shooter Jesse Van Rutselaar.

Minister of Innovation, Science and Economic Development, François-Philippe Champagne, stated on Monday that OpenAI’s senior safety officials are scheduled to visit Ottawa on Tuesday to detail the company’s criteria for notifying law enforcement.

Van Rutselaar, an 18-year-old transgender individual, was responsible for the deaths of nine people in a small British Columbia community earlier this month before taking their own life.

OpenAI acknowledged the upcoming meeting, indicating that senior executives will discuss “our overall approach to safety, the safeguards in place, and how they are continuously strengthened.” This meeting follows the company’s revelation that Van Rutselaar’s account was banned in June 2025 for “furthering violent activities”, but Canadian authorities were not informed.

Champagne said he was “deeply disturbed” by reports that the company suspended the account without contacting the police.

According to The Wall Street Journal, Van Rutselaar had shared violent scenarios involving firearms with ChatGPT over several days. OpenAI stated that its automated systems identified these interactions but found no indication of “credible or imminent planning,” leading to a ban rather than a referral to law enforcement.

The publication reported that staff had internally discussed contacting the Royal Canadian Mounted Police (RCMP); OpenAI indicated that it provided information to the RCMP only after the attack.

Van Rutselaar, who had a history of mental health challenges, also reportedly used the online platform Roblox before the attack to construct a virtual mall stocked with weapons, where users could simulate shootings.

The incident comes as the Canadian government considers how to regulate widely used AI chatbots, including potential restrictions on access for minors.

Last year, OpenAI made updates to ChatGPT after an internal review revealed that over a million users had disclosed suicidal thoughts to the chatbot. Psychiatrists have voiced concerns that prolonged interactions with AI could contribute to delusions and paranoia, a phenomenon sometimes referred to as “AI psychosis.”