A cybersecurity law expert says Canada could introduce laws requiring artificial intelligence companies to notify police of online threats, but the process would not be a simple one, since reporting every suspicion is “just not workable.”
Emily Laidlaw, a Canada Research Chair in cybersecurity law at the University of Calgary, said every AI company sets its own policy on when to inform police about what happens online. She said Canada considered introducing laws in the past but did not follow through.
The issue is under scrutiny again in the wake of the mass killings in Tumbler Ridge, B.C., by a shooter who was banned by OpenAI from its ChatGPT platform at least seven months ago.
OpenAI did not inform police about the problematic behaviour of Jesse Van Rootselaar until after the Feb. 10 killings. The firm has been called to Ottawa to meet with federal Artificial Intelligence Minister Evan Solomon, Public Safety Minister Gary Anandasangaree and Culture Minister Marc Miller on Tuesday evening to explain its safety procedures and decisions.
The company banned Van Rootselaar’s account in June but said the activities didn’t meet the threshold for informing law enforcement at the time because they didn’t identify credible or imminent planning.
The Wall Street Journal reported Friday that Van Rootselaar’s account was banned over troubling posts, including some that included scenarios of gun violence.
Speaking to reporters before the Liberal cabinet meeting on Tuesday, Solomon said he does not know details of the shooter’s posts and that he is not seeking that information from the company.
“We are not talking about any details of the case. It’s a criminal investigation,” he said on his way into a cabinet meeting on Parliament Hill.
He added that he wants to understand how the company’s safety protocols and technology work. He would not say whether the federal government intended to regulate AI chatbots like ChatGPT.
“Our response is, all options are on the table when it comes to understanding what we can do about AI chatbots,” Solomon said.
Asked how the government can determine if things need to change without knowing details of the case, Justice Minister Sean Fraser said law enforcement is gathering that information and “there may be an opportunity to review what specifically took place” in the Tumbler Ridge case.
“That’s the kind of systemic information that we need to understand: what conversations are taking place that law enforcement is currently blind to, that would be very informative, that would help us prevent tragedies in the future,” he said.
Laidlaw said the federal government could introduce legislation that sets a baseline standard on when companies have to report behaviour to police, but added that “the chances are it’s not going to differ very much” from the policy OpenAI says it had in place.
She said such legislation would have to be drafted narrowly so that it protects the privacy of online users but requires police be informed if there is a deep concern about threats to safety.
Laidlaw said it appears the company had concerns about Van Rootselaar, and while there may have been no indication the threat was imminent, it may have seen signs of a more general one.
“And what we want to see is, if you have real concerns that there is a threat to the safety and security of people, even if it’s not imminent, that’s something law enforcement should investigate,” she said.
“So how do we write that as an appropriate law that doesn’t just open up the floodgates that any possible suspicion is required to be sent to the police? Because that’s just not workable.”
In 2021, when the federal government was considering online safety legislation, some suggested introducing a requirement to report to law enforcement, but after significant pushback the idea was not acted on, Laidlaw said.
“You can’t have every possible suspicion for any type of behaviour reported to law enforcement. They don’t have the capacity to receive all of that and it also means you start capturing a whole bunch of behaviour that isn’t necessarily problematic,” she said.
The Liberal government confirmed last month that it was working on new legislation to address online harms.
In 2024, the government introduced rules that would have required social media companies to explain how they plan to reduce the risks their platforms pose to users, and would have imposed on them a duty to protect children. Those rules never became law before the 2025 election was called.
“I think there is the need to have legislation to make sure that platforms are behaving responsibly, but what that looks like is still to be determined,” said Miller, whose department is leading the development of online harms legislation, on Tuesday.
B.C. Premier David Eby said Monday that it “looks like” OpenAI had an opportunity to prevent the Tumbler Ridge killings. He supported setting a consistent national threshold requiring AI companies to report individuals plotting and threatening violence.
Laidlaw said that sort of reporting provision should be on the table.
“But what I don’t want is this to be viewed as just something so easy that was overlooked, because in fact it’s really hard to write this appropriately without all kinds of knock-on effects,” she said.
This report by The Canadian Press was first published Feb. 24, 2026.
—With files from Sarah Ritchie in Ottawa
Ashley Joannou, The Canadian Press