The Canadian government summoned OpenAI executives to Ottawa following a deadly school shooting in British Columbia. Government officials criticized the tech company for failing to alert law enforcement agencies despite having blocked the attacker’s account on ChatGPT. Canada is now threatening legal action if OpenAI does not change its safety protocols.
According to Reuters, Canadian authorities summoned the company’s executives because OpenAI had suspended the attacker’s account for rule violations before the attack but failed to inform police of the potential risk.
Reports indicate that in 2025, OpenAI employees blocked the account of “Jesse van Roetslaar” over warning signs of real-world violence. These employees asked senior managers to report the matter to law enforcement. However, an OpenAI spokesperson stated that the user’s activity did not meet the company’s internal threshold for reporting to police, which requires an imminent risk of physical harm.
Canada’s Notice to OpenAI About ChatGPT Safety Rules
Canada’s Justice Minister Sean Fraser bluntly warned that if OpenAI does not quickly amend its safety protocols, the government will force the changes through tougher regulation.
The Canadian government is furious over OpenAI’s failure to report in time. Evan Solomon, the federal minister responsible for artificial intelligence, announced before the meeting with OpenAI executives that the company should fully explain its tolerance thresholds and security protocols to the government. Prime Minister Mark Carney also emphasized the importance of preventing future tragedies and promised that the government will use all of its legal powers to investigate the issue.


Jesse van Roetslaar had a history of mental health issues, and Canadian police had previously confiscated his guns, only for authorities to later return them. The shooting took place in the small town of Tumbler Ridge, population roughly 2,400. Criminology experts say AI platforms need more oversight, but they also fault law enforcement agencies, arguing that Canadian police missed an important opportunity to prevent the tragedy by returning the weapons to a man with a history of mental health problems.
This isn’t the first time OpenAI’s safety policies have been called into question. In December 2025, the family of a victim sued the company, alleging that ChatGPT encouraged a man to murder his mother and then take his own life by reinforcing his delusional beliefs.