The Warning That Was Ignored

Part 1 of 3. Based on "The Arsonist's Smoke Detector."

In June 2025, a computer program noticed something wrong.

The program was built by OpenAI, the company that makes ChatGPT. ChatGPT is an AI chatbot – a program you can talk to by typing messages.[1] Hundreds of millions of people use it every week. They ask it questions, tell it their problems, and sometimes share things they would not say out loud to another person.

The program's job was to scan those conversations and look for danger. On this day, it found some. A user named Jesse Van Rootselaar, 18 years old, living in British Columbia, Canada, had been writing about gun violence.[2]

The system flagged the account. About twelve OpenAI employees looked at the conversations. Some of them worked on the safety team. They said: this is a real threat. Call the police.[3]

Their bosses said no. The company shut down the account instead. Police were never told.[3:1] Van Rootselaar made a new account and kept talking to ChatGPT.[4]

[Photo: a school hallway]
A school hallway. The system flagged the threat eight months before anyone walked through a door like this one. Photo: CC BY-SA 4.0, via Wikimedia Commons.

What Happened Next

Eight months later, on February 10, 2026, Van Rootselaar walked into Tumbler Ridge Secondary School in British Columbia with two guns. They killed five students and a teacher. They had already killed their mother and younger brother at home that morning.[5]

It was one of the deadliest school shootings in Canadian history. Seven families sued OpenAI for more than $1 billion.[6] The lawsuits say the company's leaders chose not to call police for a specific reason. If they reported this user, they would have to build a system for reporting other dangerous users too. That system would show the public how often ChatGPT is used for harmful things. And that could hurt the company's plan to sell shares on the stock market – a plan that could value the company at a trillion dollars.[7]

OpenAI CEO Sam Altman apologized publicly. "I am deeply sorry that we did not alert law enforcement," Altman said.[8]

The System Worked. The Decision Did Not.

This is the part that matters most. The technology did its job. The AI found the threat. The safety team agreed it was real. The warning went up the chain.

The people at the top chose to ignore it.

Jean-Christophe Bélisle-Pipon, an assistant professor of health ethics at Simon Fraser University in British Columbia, wrote about this case for The Conversation. Bélisle-Pipon said the real problem was not that the warning failed. The real problem was that there were no rules forcing the company to act on it.[9] "The core problem was not a reporting failure," Bélisle-Pipon wrote. "It was a governance vacuum."[10]

There is no Canadian law that says: if your AI detects someone planning violence, you must tell the police. So OpenAI made its own rules. And its own rules said: protect the business.

"It is not regulation of AI. It is regulation of the people who use AI."
— Jean-Christophe Bélisle-Pipon[10:1]

What This Means

When a company builds a tool that millions of people talk to every day, and that tool can detect when someone is planning something dangerous, a question follows: what does the company do with that information?

At Tumbler Ridge, the answer was: nothing. The warning system worked. The humans who reviewed it gave the right advice. The people who made the final decision chose silence. Six people died.

Three months after the shooting, OpenAI launched a new feature that monitors your ChatGPT conversations and can tell someone if it thinks you are in danger. That feature is the subject of Part 2: "When the Chat Window Watches Back."

The same company that chose not to call police about a mass shooting now wants to read your private thoughts and decide who else should know.


This is Part 1 of a 3-part series based on "The Arsonist's Smoke Detector." The views expressed are those of the editorial board.



Sage.is AI-UI and Sage.Education are products of Startr LLC; their inclusion represents a disclosure of interest. No individuals quoted were interviewed; all quotes are from published sources. Full disclosure and transparency is a feature, not a bug.


  1. ChatGPT is an AI chatbot made by OpenAI. You type messages and it writes back. It can answer questions, help with homework, write stories, and have conversations. Hundreds of millions of people use it every week. openai.com. ↩︎

  2. NPR, "Families sue OpenAI over Tumbler Ridge mass shooter's use of ChatGPT," April 29, 2026. npr.org. ↩︎

  3. The lawsuits say about twelve employees reviewed the flagged account. Some recommended calling police. Leadership said no and shut down the account instead. CNN. ↩︎ ↩︎

  4. CBC News, "Tumbler Ridge shooter had 2nd ChatGPT account despite being banned, OpenAI says." cbc.ca. ↩︎

  5. Jesse Van Rootselaar, 18, killed five students and one teacher at Tumbler Ridge Secondary School on February 10, 2026. Washington Post. ↩︎

  6. Seven families filed lawsuits against OpenAI and CEO Sam Altman in federal court in San Francisco. CNBC. ↩︎

  7. IPO (Initial Public Offering) means a private company sells shares to the public for the first time. The lawsuits say OpenAI chose not to report the shooter because it could have hurt the company's IPO. CNBC. ↩︎

  8. Sam Altman, CEO of OpenAI. The Next Web. ↩︎

  9. Jean-Christophe Bélisle-Pipon is an assistant professor of health ethics at Simon Fraser University in British Columbia. The Conversation. ↩︎

  10. Bélisle-Pipon, "OpenAI's safety pledges in the wake of Tumbler Ridge aren't AI regulation – they're surveillance." The Conversation, March 18, 2026. ↩︎ ↩︎