Canada wants answers from OpenAI after school massacre

The tech firm’s safety team has been called to Ottawa to explain why it failed to alert police about an account linked to a mass shooter

Canadian officials have summoned senior OpenAI representatives to Ottawa to answer questions about the tech company’s safety protocols after it confirmed it did not alert police about an account linked to mass shooter Jesse Van Rutselaar.

Artificial Intelligence Minister Evan Solomon said on Monday that OpenAI’s senior safety officials will come to Ottawa on Tuesday to outline how the company decides when to notify law enforcement.

Van Rutselaar, an 18-year-old transgender person, killed nine people in a small British Columbia town earlier this month before committing suicide.

OpenAI confirmed the meeting, saying senior leaders will discuss “our overall approach to safety, the safeguards in place, and how they are continuously strengthened.” The meeting follows the company’s disclosure that it banned Van Rutselaar’s account in June 2025 for “furthering violent activities” but did not notify Canadian authorities.

Solomon said he was “deeply disturbed” by reports that the company suspended the account without contacting police.

According to the Wall Street Journal, Van Rutselaar shared gun-related violent scenarios with ChatGPT over several days. OpenAI said its automated systems flagged the exchanges but found no evidence of “credible or imminent planning,” prompting a ban rather than a referral to law enforcement.

The outlet reported that staff had internally debated contacting the Royal Canadian Mounted Police (RCMP); OpenAI said it provided information to the force only after the attack.

In the lead-up to the attack, Van Rutselaar, who had a history of mental health issues, also reportedly used the online platform Roblox to create a virtual mall stocked with weapons where users could simulate shootings.

The case comes as Ottawa weighs how to regulate widely used AI chatbots, including potential limits on access for minors.

Last year, OpenAI updated ChatGPT after an internal review found over a million users had disclosed suicidal thoughts to the chatbot. Psychiatrists have raised concerns about prolonged AI interactions contributing to delusions and paranoia, a phenomenon sometimes called “AI psychosis.”