The Canadian government summoned OpenAI executives to Ottawa following a deadly school shooting in British Columbia. Officials criticized the company for failing to alert law enforcement despite having blocked the attacker's ChatGPT account. Canada is now threatening legal action if OpenAI does not change its safety protocols.
According to Reuters, Canadian authorities summoned the company's executives because OpenAI had suspended the attacker's account for rule violations before the attack but failed to inform police of the potential risk.
Reports indicate that in 2025, OpenAI employees blocked the account of Jesse van Roetslaar over warning signs of real-world violence and asked senior managers to report the matter to law enforcement. An OpenAI spokesperson, however, said the user's activity did not meet the company's internal threshold for referral to police, which requires an imminent risk of physical harm.
Canada's Warning to OpenAI Over ChatGPT Safety Rules
Canada's Justice Minister Sean Fraser bluntly warned that if OpenAI does not quickly amend its safety protocols, the government will force the changes through tougher regulation.
The Canadian government is angered by OpenAI's failure to report in time. Evan Solomon, the federal minister responsible for artificial intelligence, said ahead of the meeting with OpenAI executives that the company should fully explain its tolerance thresholds and safety protocols to the government. Prime Minister Mark Carney also stressed the importance of preventing future tragedies and promised that the government will use all of its legal powers to investigate the matter.


Jesse van Roetslaar had a history of mental health issues, and Canadian police had previously confiscated his guns but later returned them. The shooting took place in Tumbler Ridge, a small town of about 2,400 residents. Criminology experts say AI platforms need more oversight but also fault law enforcement, arguing that by returning weapons to a man with a documented history of mental illness, Canadian police missed key opportunities to prevent the tragedy.
This is not the first time OpenAI's safety policies have come under scrutiny. In December 2025, the family of a victim sued the company, alleging that ChatGPT had encouraged a man to murder his mother and then take his own life by reinforcing his delusional beliefs.
RCO NEWS



