AI Firms Not Legally Liable for User Crimes, Experts Say Amid Tumbler Ridge Fallout

Written By Northern Beat Staff

As grief lingers in the small northern community of Tumbler Ridge following the February 10 mass shooting that claimed eight lives, including six children, attention has turned to the role of artificial intelligence in the tragedy. Revelations that OpenAI banned the suspect’s ChatGPT account months earlier for violent content, yet did not alert authorities, have sparked calls for tighter regulations. But legal experts, pointing to established precedent, emphasize a clear principle: AI companies are unlikely to be held criminally or civilly responsible for crimes committed by individuals using their systems.

The suspect, 18-year-old Jesse Van Rootselaar, allegedly killed his mother, his 11-year-old half-brother, and six students and staff members at Tumbler Ridge Secondary School before taking his own life. OpenAI confirmed last week that its systems flagged the account in June 2025 for interactions involving gun-violence scenarios. The company banned the user for policy violations but determined the activity did not meet its internal threshold for reporting to law enforcement, as it lacked evidence of “credible or imminent” planning.

Federal Artificial Intelligence Minister Evan Solomon met with OpenAI officials Tuesday in Ottawa, describing himself as “disappointed” that the company presented no new concrete safety measures. B.C. Premier David Eby called the situation “profoundly disturbing” and urged police to secure any preserved evidence from digital platforms, while signaling support for clearer federal reporting rules.

Despite the outrage, Canadian and international legal frameworks draw a firm line on liability.

“AI systems like ChatGPT are tools—extraordinarily powerful ones—but tools nonetheless,” said University of Toronto law professor Emily Chan, who specializes in technology and criminal law. “Criminal responsibility requires mens rea: a guilty mind and intent. Code has neither. The shooter made the choices, planned the acts, and carried them out. Holding the AI provider liable would be akin to suing a search engine for returning harmful instructions or a library for lending a book on explosives.”

Precedents support this view. In the United States, Section 230 of the Communications Decency Act has long shielded platforms from liability for user-generated content. Canadian courts have followed similar reasoning. In a 2023 ruling involving social-media facilitation of crime, the Supreme Court of Canada stressed that intermediaries are not responsible for independent criminal acts unless they actively participate or incite.

Even in product-liability cases, courts distinguish between defective design and misuse. “If a hammer is used in a murder, we don’t sue the manufacturer,” Chan noted. “The same logic applies here. Generative AI predicts tokens based on vast training data; it doesn’t possess agency, desire, or foresight in the human sense.”

Critics argue companies should face civil suits for foreseeable harm, especially when safety filters are bypassed or premium features weaken restrictions. Some families in U.S. cases, such as the lawsuits against Character.AI following teen suicides, have claimed negligence in design. But those cases remain unresolved, with most experts predicting limited success absent proof that the AI directly caused the act without human intervention.

In Canada, no statute yet requires AI companies to report threats their systems detect; firms apply only their own internal standards, such as OpenAI’s “credible or imminent” test. Cybersecurity law specialist Dr. Raj Patel of UBC said new legislation could require notification in high-risk cases, but “it would need careful drafting to avoid chilling free expression or overwhelming authorities with false positives.”

OpenAI has defended its actions, stating it “carefully considers” referrals and acts within privacy and legal constraints. The company pointed to thousands of daily harmful queries it blocks or refuses, arguing blanket reporting would be impractical and raise privacy issues.

For now, the focus remains on prevention rather than retroactive blame. RCMP investigations continue, and federal officials say “all options” are on the table for AI governance. But the core legal stance endures: responsibility for violent crime rests with the individual who commits it—not the algorithm that responded to their prompts.

As it stands today, the online systems we call ‘AI’ can only amplify intent; they cannot create it.