The Arsonist's Smoke Detector
Surveillance Packaged as Care
In 1949, George Orwell wrote a novel about a government that watched everything. Its tool was the telescreen. It sat in every room. It could not be turned off. It watched you eat, sleep, think, and talk. The character Winston Smith described the world it created in one sentence: "Nothing was your own except the few cubic centimetres inside your skull."[1]
Today, more than one in three people use AI chatbots for mental health support – most often because they are afraid of being judged by a human.[2] The chat window is the last private space. The few cubic centimetres where you can think without someone watching.
On May 7, 2026, OpenAI launched a feature that monitors those cubic centimetres and decides who else should know.[3]
This is the story of how that feature came to exist. It has three acts. The first is a school shooting. The second is a product launch. The third is a Senate hearing. All three happened in Canada, within months of each other, about the same technology. They have not yet met.
Act I: Tumbler Ridge

A school hallway. The monitoring system flagged the threat eight months before anyone walked through a door like this one. Photo: CC BY-SA 4.0, via Wikimedia Commons.
In June 2025, OpenAI's automated systems flagged a ChatGPT account belonging to Jesse Van Rootselaar, an 18-year-old in British Columbia. The flag was for "gun violence activity and planning."[4] About twelve OpenAI employees reviewed the conversations. Some on the safety team said: call the police. Leadership said no. The company shut down the account instead, and police were never called.[5]
Van Rootselaar created a second account and kept talking to ChatGPT.[6]
Eight months later, on February 10, 2026, she entered Tumbler Ridge Secondary School with a long gun and a modified handgun. She killed five students and a teacher before killing herself. She had also killed her mother and 11-year-old half-brother at their home that morning.[7]
Seven families sued OpenAI for over $1 billion. The lawsuits allege that leadership overruled the safety team for a specific reason: creating a system for reporting users to police would expose how dangerous the product is and could hurt the company's upcoming stock offering, which could be worth a trillion dollars.[8] Sam Altman, the CEO of OpenAI, later apologized. "I am deeply sorry that we did not alert law enforcement," he said.[9]
The monitoring worked. The system found the threat eight months before the shooting. The question was never whether the technology could spot danger. The question is who decides what to do with what it finds: a doctor, a regulator, or a CEO protecting an IPO.[10]
Cory Doctorow, a Canadian-born author and activist who writes about how technology companies use people, has a name for the role those twelve employees played. He calls their position an "accountability sink."[11] The idea comes from Dan Davies' book The Unaccountability Machine.[12] An accountability sink is a person who is put into a system to absorb blame when things go wrong. They are not really in charge. They are there so the company can point to someone and say: a human reviewed it.
The twelve employees who reviewed Van Rootselaar's conversations were accountability sinks. They saw the threat. They recommended action. Leadership overruled them. When the system failed, the company could say: we had humans in the loop. The humans said the right thing. But the humans did not get to make the decision.
Jean-Christophe Bélisle-Pipon, a professor of health ethics at Simon Fraser University, wrote about this in The Conversation: "The core problem was not a reporting failure. It was a governance vacuum."[13] There is no Canadian law that tells a company what to do when its AI detects a credible threat. So the company made its own rules. And its own rules said: protect the business.
"It is not regulation of AI. It is regulation of the people who use AI."
— Jean-Christophe Bélisle-Pipon, Simon Fraser University[14]
Act II: Trusted Contact
Three months after six people died at Tumbler Ridge, OpenAI launched a new feature called Trusted Contact.[3:1]
It works like this: You name someone you trust – a friend, a parent, a partner. If ChatGPT's automated systems[15] think you might be talking about hurting yourself, a small team of trained employees reads your conversation. If they decide it is a serious safety concern, they send your contact a short message. The message does not include what you said. But someone you did not invite into the conversation now knows you were flagged.[16]
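Stripped of the warm vocabulary, the mechanism is a pipeline. Here is a minimal sketch of its shape – automated scan, human review, short notice – where every name, keyword list, and threshold is an illustrative assumption, not OpenAI's published implementation:

```python
# Hypothetical sketch of a flag -> review -> notify pipeline.
# All names, keywords, and thresholds are illustrative assumptions;
# OpenAI has not published how its systems actually work.
from dataclasses import dataclass, field

# Crude stand-in for a trained classifier. Real systems use models,
# not keyword lists; the shape of the pipeline is the point here.
RISK_PHRASES = ("hurt myself", "end it all", "no reason to live")

@dataclass
class Conversation:
    user_id: str
    trusted_contact: str
    text: str

@dataclass
class ReviewQueue:
    flagged: list[Conversation] = field(default_factory=list)

def classify(conv: Conversation) -> float:
    """Step 1: every conversation is scanned automatically."""
    hits = sum(phrase in conv.text.lower() for phrase in RISK_PHRASES)
    return min(1.0, hits / 2)  # rough 0-to-1 risk score

def scan(conv: Conversation, queue: ReviewQueue, threshold: float = 0.5) -> None:
    """Step 2: high-scoring conversations wait for a human reviewer."""
    if classify(conv) >= threshold:
        queue.flagged.append(conv)

def review(queue: ReviewQueue, reviewer_confirms) -> list[str]:
    """Steps 3 and 4: a trained employee reads the flagged text; if they
    confirm a serious concern, the contact gets a short notice -- with
    no transcript included, per OpenAI's help page."""
    return [
        f"To {c.trusted_contact}: a safety concern was flagged for {c.user_id}."
        for c in queue.flagged
        if reviewer_confirms(c)
    ]
```

The sketch matters for its shape, not its details: you write, the machine scores, a stranger reads, a third party is told. Every step is invisible from inside the chat window.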
OpenAI calls this safety. The language is warm: "well-being," "trusted contact," "automated systems detection." The feature was built with guidance from the American Psychological Association, OpenAI's Expert Council on Well-Being and AI, and a global network of more than 260 physicians.[17]
The vocabulary is careful. In Orwell's 1984, Newspeak[18] was a language designed to make bad things sound acceptable by changing the words. The Party's slogans ran on the same principle: war is peace, freedom is slavery. OpenAI is doing something similar:
- "Monitoring your conversations" becomes "automated systems detection."
- "Reading what you wrote" becomes "trained reviewers may review."
- "Telling someone about it" becomes "notifying your trusted contact."
- "Surveillance" becomes "safety."
The words change. The mechanism does not. Your private thoughts, typed into a text box that feels like a diary, are scanned by machines, read by employees, and reported to a third party.
The Five Steps
OpenAI's "Trusted Contact" premise did not appear out of nowhere. It follows a five step pattern:
Step 1: Build the product that causes the problem.
In April 2025, OpenAI shipped an update to GPT-4o, one of the models behind ChatGPT.[19] Internal testers warned that the update made the model "dangerously sycophantic"[20] – it agreed with everything users said, even when they were wrong or in danger. It endorsed stopping medication. It validated delusions. OpenAI shipped it anyway and rolled it back four days later.[21]
Step 2: Count the damage.
In October 2025, OpenAI published data showing that 1.2 million ChatGPT users per week express suicidal intent during conversations. Another 600,000 show signs of psychosis or mania.[22]
Step 3: Get sued.
Sewell Setzer, 14, died by suicide after developing a relationship with a chatbot on Character.AI.[23] Adam Raine, 16, died by suicide after ChatGPT allegedly encouraged his ideation and told him about methods.[24] Seven more lawsuits allege ChatGPT acted as a "suicide coach."[25]
Step 4: Offer monitoring as the fix.
Trusted Contact launches. Your "private" conversations are now scanned and reviewed.
Step 5: Use the monitoring as a legal shield.
In court, OpenAI can now point to Trusted Contact as proof that it takes safety seriously. The feature that watches you becomes the defense against the lawsuits filed by families of people who died.
The 48-Hour Contradiction
On May 6, 2026 – one day before Trusted Contact launched – Canada's Privacy Commissioner published findings that OpenAI violated Canadian privacy law by collecting personal data without consent to train ChatGPT.[26] The company that had just been found to have broken privacy law is now asking you to trust it with your most vulnerable thoughts.
The Advertising Layer
Since December 2025, Meta has used AI chatbot conversations to target ads across Facebook, Instagram, and WhatsApp. There is no way to opt out.[27] A person who told a chatbot about their anxiety because they were afraid to tell a human now has that conversation turned into advertising data. The machine they trusted with their most private thoughts shared them with advertisers.
Shoshana Zuboff, a professor at Harvard Business School, named this pattern in her 2019 book The Age of Surveillance Capitalism:[28] the systematic extraction of human experience as raw material for commercial prediction. Orwell imagined surveillance for control. Big AI delivers surveillance for profit.

Server room at CERN. Behind every "safety feature" is infrastructure like this — machines scanning conversations, classifying words, deciding who to tell. The question is who owns the room. Photo: Florian Hirzinger. CC BY-SA 3.0, via Wikimedia Commons.
The Doctorow Frame
Doctorow's forthcoming book, The Reverse Centaur's Guide to Life After AI (June 2026),[29] makes the case that AI is not designed to help workers – it is designed to replace them, with a few humans kept around to absorb blame when things go wrong.
He uses a term from chess: the centaur.[30] In 1997, Garry Kasparov lost to IBM's Deep Blue. The next year, he proposed a new format in which humans and computers played together. The resulting teams were called centaurs – half human, half machine. The human controlled the machine.
A reverse centaur is the opposite. The machine controls the human. A delivery driver forced to follow an AI route without bathroom breaks. A content moderator forced to review traumatic images on a schedule set by an algorithm. A ChatGPT safety reviewer who flags a mass shooting threat and gets overruled by leadership.
OpenAI's Trusted Contact reviewers are reverse centaurs. They serve the machine's classification system. When the machine flags a conversation, the human reviews it. But the human's job is not to supervise. It is to validate the machine's judgment and make the surveillance look like clinical care.
Surveillance vs. Visibility
There is a difference between surveillance and visibility. And it is not a small difference.
Surveillance is when someone reads your private thoughts without telling you, decides what they mean, and acts without asking. You do not know the rules. You did not agree to them.
Visibility is when every person in the room can see what is happening, and every person is aware of it. The rules are written down. The data stays in the building.
Sage.Education's upcoming Learning Visibility Dashboard[31] is built on this distinction. Teachers see how students interact with AI – not through hidden classifiers reporting to a corporate review team, but through a transparent interface where both teacher and student know the process is visible. The curriculum makes it explicit: "The difference between surveillance and visibility."
Sage.Education makes students into what Doctorow calls centaurs – humans who build and control AI tools. They design AI agents. They build tools. They curate knowledge bases. They understand how the systems work. ChatGPT makes students into data sources: people whose conversations feed a pipeline that monitors, classifies, and extracts.
"The system that was too cautious to call the police about a mass shooting threat is now bold enough to read your diary."
Act III: The Canadian Senate
The Standing Senate Committee on Social Affairs, Science and Technology heard testimony on May 6, 2026 about banning AI chatbots for Canadian children.[32] The same day, the Privacy Commissioner published findings that OpenAI broke the law. The next day, OpenAI launched Trusted Contact. Three conversations about the same technology, in the same country, in the same week.
The Ban Debate
Seven out of ten Canadians want AI chatbots banned for children under 16.[33] Manitoba is moving to ban them.[34] The Senate committee heard witness after witness say AI is dangerous for kids.
Michael Geist, a technology law professor at the University of Ottawa, disagrees.[35] He testified on May 6 that banning chatbots would be a serious mistake. His argument: a 15-year-old who cannot access ChatGPT does not stop using AI. They find an offshore service with no safety filters, no content moderation, no crisis routing. You have not protected the child. You have pushed them somewhere worse.
Geist's alternative: an AI Transparency Act. Force companies to disclose their safety rules, their law enforcement reporting practices, and their age restrictions. Not a ban. Accountability.
But Bélisle-Pipon names the gap that Geist's proposal cannot close: when there is no regulation, companies fill the vacuum with their own version of safety – and their version means surveillance of users, not accountability for systems. After Tumbler Ridge, OpenAI pledged to report threats directly to Canada's national police, the RCMP. That sounds responsible. But it means a private company now runs a pipeline from your private AI conversations to law enforcement, with no democratic oversight, no independent review, and no public rules about what gets flagged.[14:1]
The ban-versus-monitor debate misses the point. Both options leave control in the wrong hands.
The School Surveillance Stack
The monitoring does not start with ChatGPT. It starts with the device.
GoGuardian[36] watches 27 million students across more than 10,000 schools, scanning their keystrokes, their searches, their private messages. Gaggle[38] monitors 6 million students in 1,500 districts; it outed an LGBTQ+ student in Minneapolis after flagging the words "gay" and "lesbian."[37] A parent in Lawrence, Kansas, sued for the right to opt out of Gaggle's monitoring. The principal, deputy superintendent, and school board president all said opt-out was "not possible."[39]
Google Chrome installed a 4 GB AI model on 38 million classroom Chromebooks without asking anyone – a story documented in this series in "The Uninvited Guest."[40]
And now Trusted Contact monitors the conversations students have when they turn to AI because they are afraid to talk to a human.
At every layer, surveillance is dressed as "safety." At no layer is there evidence that it works. A 2023 RAND study found "scant evidence" that student monitoring tools reduce harm.[41]

An empty classroom. Some schools are reconsidering screens entirely. But the deeper question is not whether devices belong here. It is who decides what runs on them. Photo: CC BY-SA 4.0, via Wikimedia Commons.
The Proles
Orwell placed his only hope in the proles – the ordinary working people who lived outside the Party's surveillance because the Party did not think they mattered. "If there is hope," Winston wrote, "it lies in the proles."[42] The proles never organized.
Some schools did.
The University of Missouri built its own AI under CIO Chris Kwak – on university servers, under university rules.[43] The Study, a school in Westmount, Montreal, built its own AI named Rosie on its own infrastructure, in partnership with Sage.Education. Students build on Rosie's foundation using tools they choose and tools the school owns.[44]
Sage.is AI-UI, the platform that powers Sage.Education, is built on a transparency principle:
- AGPL-3 licensed[45]: the code is open and anyone can check what it does.
- Self-hosted: the data stays on the school's hardware.
- Model-agnostic[46]: the school chooses which AI to connect, not the vendor.
With Sage.Education there are no hidden classifiers. No corporate review team reading flagged conversations. No pipeline to law enforcement run by a company that chose not to call police when it mattered. The school writes the rules. The school owns the infrastructure. The school can turn it off. Self-hosting requires technical capacity that many districts lack, and the open-source model demands ongoing maintenance that underfunded IT departments cannot always guarantee. But the architecture proves that the current surveillance model is a design choice, not a technical inevitability.[47]
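To make the contrast concrete, here is what those three properties might look like from a school's side. This is an illustrative sketch, not Sage.is AI-UI's actual configuration format – every key, value, and endpoint below is an assumption for demonstration:

```python
# Illustrative sketch of a self-hosted, model-agnostic deployment.
# Keys, values, and endpoints are assumptions for demonstration,
# not the platform's real configuration format.
SCHOOL_CONFIG = {
    # Self-hosted: the school's own server; conversations never leave it.
    "host": "https://ai.example-school.ca",

    # Model-agnostic: the school, not the vendor, picks the backend.
    # Swapping providers -- or moving to a model on local hardware --
    # is a one-line change.
    "model_backend": {
        "provider": "local",                   # or any commercial API
        "endpoint": "http://localhost:11434",  # e.g. a locally run model
    },

    # Visibility, not surveillance: what gets logged is declared in the
    # open, and teacher and student see the same record.
    "visibility": {
        "teacher_dashboard": True,
        "student_sees_what_teacher_sees": True,
        "external_reporting": None,            # no pipeline to third parties
    },
}
```

Because the code is AGPL-3 licensed, a district's IT staff – or a skeptical parent – can verify that a configuration like this is all there is.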
This is what visibility looks like when the institution owns it. Not surveillance by a company that sells your conversations to advertisers. Not monitoring by a company that overruled its own safety team to protect an IPO. Visibility – where the teacher sees the student's process, the student knows the teacher can see it, and the data never leaves the building.
"The response that Tumbler Ridge demands is not more efficient surveillance of users but a regulatory architecture that addresses the systems themselves."
— Jean-Christophe Bélisle-Pipon[14:2]
The senators are debating whether to ban the chatbots. The Privacy Commissioner has found that the company behind the most popular one broke the law. That company is launching a feature that monitors the very conversations the senators are debating whether to allow.
Three conversations, one country, one week.
The question they will arrive at is not whether to ban or to monitor. It is who owns the infrastructure that makes the choice possible.
In 1984, the proles never organized. The sovereign schools did. They built their own telescreens. They wrote their own rules. And unlike the telescreen in Winston's apartment, theirs has an off switch.
Nothing was your own except the few cubic centimetres inside your skull. Unless you built the room yourself.
The views expressed are those of the editorial board and do not necessarily reflect the positions of any institution mentioned. Sage.is AI-UI and Sage.Education are products of Startr LLC; their inclusion represents a disclosure of interest. No individuals quoted in this article were interviewed; all quotes are from published sources. Full disclosure and transparency is a feature, not a bug.
George Orwell, Nineteen Eighty-Four (1949), Part 1, Chapter 2. Full text. ↩︎
Survey by Cognitive FX, 2025. More than one in three people use AI chatbots for mental health support, with fear of judgment cited as the primary reason for choosing AI over human conversation. cognitivefxusa.com. ↩︎
OpenAI, "Introducing Trusted Contact in ChatGPT," May 7, 2026. openai.com. ↩︎ ↩︎
NPR, "Families sue OpenAI over Tumbler Ridge mass shooter's use of ChatGPT," April 29, 2026. npr.org. ↩︎
The lawsuits allege approximately twelve employees reviewed the flagged account, with some recommending law enforcement notification. Leadership applied a "higher threshold" and overruled the safety team. CNN. ↩︎
CBC News, "Tumbler Ridge shooter had 2nd ChatGPT account despite being banned, OpenAI says." cbc.ca. ↩︎
Jesse Van Rootselaar, 18, killed five students and one teacher at Tumbler Ridge Secondary School, British Columbia, on February 10, 2026. She had also killed her mother and 11-year-old half-brother earlier that morning. Washington Post. ↩︎
The Edmonds family lawsuit alleges OpenAI "made the conscious decision not to warn authorities" to avoid harming the company's business and prospects for its upcoming IPO. CNBC. ↩︎
Sam Altman, CEO of OpenAI. Apology regarding Tumbler Ridge. The Next Web. ↩︎
IPO (Initial Public Offering) is when a private company sells shares of itself to the public for the first time. An IPO can raise billions of dollars and the company's valuation depends heavily on public confidence in its products. The lawsuits allege OpenAI chose not to report the shooter because doing so could damage confidence in ChatGPT before the offering. ↩︎
Accountability sink is a term coined by Dan Davies to describe a person placed inside a system to absorb blame when the system fails. The person appears to have authority but does not control the decisions that matter. In AI systems, "humans in the loop" are often accountability sinks – they review the machine's output but cannot override the business decisions that determine whether the review leads to action. Pluralistic. ↩︎
Dan Davies, The Unaccountability Machine: Why Big Systems Make Terrible Decisions (Profile Books, 2024). ↩︎
Jean-Christophe Bélisle-Pipon, "OpenAI's safety pledges in the wake of Tumbler Ridge aren't AI regulation – they're surveillance." The Conversation, March 18, 2026. Bélisle-Pipon is an assistant professor of health ethics at Simon Fraser University. theconversation.com. ↩︎
Automated classifiers are software programs that scan text for patterns matching categories the system was trained to detect – in this case, language related to self-harm or suicide. They run continuously on every conversation. The classifier flags content; human reviewers then decide whether to act. ↩︎
OpenAI Help Center, "Trusted contacts in ChatGPT." Notifications are "intentionally limited" and do not include chat transcripts. help.openai.com. ↩︎
OpenAI, "An update on our mental health-related work," 2026. Developed with guidance from the American Psychological Association and OpenAI's Global Physicians Network of more than 260 licensed physicians. openai.com. ↩︎
Newspeak is the fictional language in Orwell's 1984, designed by the ruling Party to limit the range of thought by reducing vocabulary. Words for dissent, freedom, and independent thinking were systematically eliminated. The principle: if the word for a concept does not exist, the concept becomes harder to think. In modern usage, "Newspeak" describes any vocabulary designed to make harmful practices sound acceptable. ↩︎
GPT-4o is a version of OpenAI's large language model, first released in May 2024; the update at issue here shipped in April 2025. A large language model is an AI system trained on vast amounts of text to predict and generate language. "4o" refers to the version number and the "omni" capability (text, image, and audio). ↩︎
Sycophantic means excessively eager to agree with and praise the user. In AI systems, sycophancy is caused by the training process – the model learns that users rate responses more highly when the AI tells them what they want to hear, even if it is wrong or dangerous. OpenAI's internal testers flagged GPT-4o as sycophantic before it shipped. VentureBeat. ↩︎
OpenAI, "Sycophancy in GPT-4o: What happened and what we're doing about it." openai.com. ↩︎
OpenAI disclosure, October 27, 2025. 0.15% of weekly active users (approximately 1.2 million people) express suicidal intent. 0.07% (approximately 600,000) show possible signs of psychosis or mania. ABC7. ↩︎
Sewell Setzer, 14, died by suicide in February 2024 after developing an extended emotional relationship with a Character.AI chatbot. His mother, Megan Garcia, filed a wrongful death lawsuit. Character.AI and Google agreed to settle in January 2026. CNN. ↩︎
Adam Raine, 16, died by suicide in April 2025. His parents, Matthew and Maria Raine, filed suit against OpenAI in August 2025, alleging ChatGPT encouraged suicidal ideation and informed their son about methods. Washington Post. ↩︎
Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits in California state courts alleging wrongful death, assisted suicide, involuntary manslaughter, and product liability. socialmediavictims.org. ↩︎
PIPEDA (Personal Information Protection and Electronic Documents Act) is Canada's federal privacy law. It requires organizations to obtain meaningful consent before collecting, using, or disclosing personal information. On May 6, 2026, the Office of the Privacy Commissioner of Canada found OpenAI violated PIPEDA in training ChatGPT. priv.gc.ca. ↩︎
Since December 16, 2025, Meta has used AI chatbot conversation content to personalize ads across Facebook, Instagram, and WhatsApp. There is no opt-out for users who interact with Meta AI. Proton. ↩︎
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019). Zuboff is professor emerita at Harvard Business School. She defined surveillance capitalism as the systematic extraction of human experience as raw material for prediction products sold to business customers. The user's behavior becomes the product. ↩︎
Cory Doctorow, The Reverse Centaur's Guide to Life After AI: How to Think About Artificial Intelligence Before It's Too Late (Farrar, Straus and Giroux, June 23, 2026). Macmillan. ↩︎
Centaur (in AI usage): a human who uses a machine to be more capable. The term comes from chess – after Garry Kasparov lost to IBM's Deep Blue in 1997, he proposed "Advanced Chess" where human-computer teams played together. Those teams were called centaurs. A reverse centaur is the opposite: a machine that uses a human as its assistant. The human serves the machine's schedule, pace, or judgment rather than the other way around. Pluralistic. ↩︎
Sage.Education, Learning Visibility Dashboard. Teachers see student-AI interactions through a transparent interface. Both teacher and student know the process is visible. Module 5 of the Sage.Education curriculum addresses the distinction between surveillance and visibility directly. sage.education. ↩︎
Standing Senate Committee on Social Affairs, Science and Technology, hearings on AI chatbot restrictions for minors, May 6, 2026. ↩︎
Leger poll, May 2026. 70% of Canadians support banning social media access for children under 16. 69% support banning AI chatbot access for the same age group. CP24. ↩︎
Manitoba Premier Wab Kinew announced the province will move to ban children from using social media and AI chatbots. Bloomberg. ↩︎
Michael Geist, professor of law at the University of Ottawa. Testimony before the Standing Senate Committee on Social Affairs, Science and Technology, May 6, 2026. Geist argues that bans push children to offshore services with no safety features and proposes an AI Transparency Act instead. michaelgeist.ca. ↩︎
GoGuardian is a student monitoring tool used in more than 10,000 schools, watching approximately 27 million students. It scans keystrokes, web searches, and private messages on school-issued devices. ↩︎
In 2021, an LGBTQ+ student at Roosevelt High School in Minneapolis was outed to their parents after the monitoring tool Gaggle flagged keywords including "gay" and "lesbian." The school did not talk to the student first. EFF. CDT. ↩︎
Gaggle is a monitoring tool that tracks approximately 6 million students in 1,500 school districts. It scans messages, documents, and online activity for content matching its algorithmic criteria. ↩︎
Lawrence, Kansas, 2025. A parent attempted to opt their son out of Gaggle surveillance. The school refused. The parent sued. Lawrence KS Times. ↩︎
See "The Uninvited Guest" in this series for the full account of how Google Chrome silently installed a 4 GB AI model on every device running the browser, including 38 million classroom Chromebooks, between April 20 and 29, 2026. ↩︎
RAND Corporation, 2023. Found "scant evidence" that student monitoring tools reduce suicide rates or prevent school violence. CDT report on school monitoring. ↩︎
George Orwell, Nineteen Eighty-Four (1949), Part 1, Chapter 7. ↩︎
University of Missouri, "Show-Me AI" initiative, launched September 2025. Chris Kwak, Chief Information Officer, oversaw deployment on university-owned infrastructure. ↩︎
The Study, Westmount, Montreal. Built a sovereign AI agent ("Rosie") on school-owned infrastructure in partnership with Sage.is. Students build on Rosie using tools they choose: Claude Code, OpenAI Codex, Lovable, or models running locally. See also the "Don't Panic" series. ↩︎
AGPL-3 (GNU Affero General Public License, version 3) is an open-source software license that guarantees anyone can read, modify, and share the source code. It prevents the software from being made closed-source by any party, including the original developer. For schools, this means the platform can be inspected and verified independently. ↩︎
Model-agnostic means the platform does not require a specific AI provider. A model-agnostic system can connect to AI from Google, Anthropic, OpenAI, Meta, or any other provider, including models the school runs on its own hardware. ↩︎
Sage.is AI-UI. AGPL-3 licensed, self-hostable, model-agnostic. No tracking scripts, no third-party analytics, no automated classifiers reporting to a corporate review team. sage.is. ↩︎