OpenAI banned the gunman's ChatGPT account 8 months before the attack but did not report him to the police; he went on to kill 8 people, and Altman has apologized.

Tumbler Ridge, British Columbia, Canada, was the scene of a shooting this February that left 8 people dead. It later emerged that OpenAI had banned the shooter Van Rootselaar’s ChatGPT account eight months earlier over “gun violence-related scenarios” but did not report him to the police, on the grounds that the case “did not meet the immediate threat threshold.” OpenAI CEO Sam Altman issued a letter of apology last week.
(Background: Elon Musk sues OpenAI, asking the court to revoke its conversion to a for-profit company, remove Altman, and award $134 billion in damages)
(More background: OpenAI releases five constitutional principles for AGI: AI must not be monopolized by a few, and sacrifices can be traded for greater resilience)

Table of Contents

  • 8 months of silence
  • Who sets the reporting threshold?
  • What changed, and what didn’t

On February 10, 2026, a shooting broke out in Tumbler Ridge, a remote town in British Columbia, Canada. Eighteen-year-old Jesse Van Rootselaar first killed his mother and his 11-year-old brother at home, then attacked the town’s middle school; 8 people died in total, and he died by suicide.

Subsequent investigation revealed that OpenAI had banned his ChatGPT account eight months earlier because Van Rootselaar had “described gun violence scenarios” in conversations, but the company did not notify any law enforcement agency.

8 months of silence

In June 2025, ChatGPT’s automated abuse detection system and human reviewers jointly flagged Van Rootselaar’s account, and it was banned. According to OpenAI’s later account, the reviewers also discussed internally at the time whether to report him to the police.

Their conclusion: do not report.

OpenAI says the review determined that the account’s behavior “did not meet the threshold for an immediate and credible serious threat of harm to others,” so no report to law enforcement was made. After the ban, the matter stayed inside the company: the account was closed, but no signal went out.

It wasn’t until the 2026 shooting incident that OpenAI proactively contacted Canadian authorities.

Who sets the reporting threshold?

“Immediate and credible threat” is a reporting threshold OpenAI set for itself; the company’s internal safety, legal, and policy teams decide when a case warrants notifying the police. The standard is not subject to external regulatory review, and there is no public explanation of how it is calibrated.

Van Rootselaar’s account cleared the bar for “needs to be banned” but not the bar for “needs to be reported.” In OpenAI’s internal logic, those are two separate thresholds.

On April 23, Sam Altman published an apology letter, which first appeared in the local newspaper TumblerRidgeLines. In it, he wrote:

The pain your community has endured is unimaginable.

Over the past few months, I have been thinking about you all. I deeply apologize that we did not notify law enforcement when the account was banned in June.

Before the letter was made public, Altman had personally communicated with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby. Eby shared the letter on social media on April 24, along with a response: “An apology is necessary, but it is far from enough considering the devastation experienced by those families in Tumbler Ridge.”

What changed, and what didn’t

OpenAI announced that it will relax its criteria for reporting to law enforcement, allowing more account activity to trigger the reporting process. Canada’s federal AI minister, Evan Solomon, said Altman agreed to establish a direct line of contact with the Royal Canadian Mounted Police (RCMP) and to add mechanisms that steer users showing crisis signals in conversation toward local support services.

The Canadian government is currently evaluating whether further legislation is needed to regulate AI platform reporting obligations.

There is a tension here, though: many people say extreme things in moments of emotional instability (you’ve probably blurted out “I want to die” just to vent, right?) and could end up reported, which may make people hesitant to share their feelings with AI (though perhaps that’s not a bad thing?).

At this point, the issue is not one of OpenAI’s capability. ChatGPT detected the signals, and human reviewers saw them too. The question is: who decides the threshold? Who interprets it? And how do you strike the right balance?
