Who Decides?

Part 3 of 3. Follows "When the Chat Window Watches Back."

On May 6, 2026, three things happened in Canada.

The Privacy Commissioner said OpenAI broke privacy law.[1] The Senate heard arguments about banning AI chatbots for kids.[2] And OpenAI was about to launch a feature that monitors kids' conversations.[3]

Three conversations about the same technology. Same country. Same week. None of them were connected.

School lockers. Behind these walls, a network of managed devices that companies can modify without asking. Photo: Public domain, via Wikimedia Commons.

The Ban Debate

Seven out of ten Canadians want AI chatbots banned for anyone under 16.[4] The province of Manitoba is already moving to ban them.[5] Senators heard from expert after expert who said these tools are dangerous for young people. But Michael Geist, a law professor at the University of Ottawa, told the Senate that banning chatbots would be a mistake.[6]

His argument is simple. A 15-year-old who cannot use ChatGPT does not stop using AI. They find something else – a service from another country with no safety rules, no filters, and no one watching out for them. Banning the tools you can see pushes kids toward tools you cannot see.

Geist's alternative: instead of banning AI, force AI companies to be honest. He wants a law called an AI Transparency Act. It would require companies to tell the public:

  • What safety rules they follow
  • When they report users to police
  • What age limits they set
  • What they do with your data

Not a ban. Accountability. But there is a gap in his argument.

The Gap in the Middle

Jean-Christophe Bélisle-Pipon, who teaches health ethics at Simon Fraser University, sees a problem with both the ban and the transparency approach.[7]

When there are no laws, companies write their own rules. And their rules tend to serve the company, not the users. After the Tumbler Ridge school shooting (described in Part 1), OpenAI promised to start reporting dangerous users directly to Canada's national police, the RCMP. That sounds responsible. But think about what it means: a private company now decides which of your private conversations get sent to the police. No court ordered it. No elected official wrote the rules. No independent person reviews the decisions.[8]

Bélisle-Pipon wrote: "It is not regulation of AI. It is regulation of the people who use AI."[8:1]

The ban debate asks whether kids should use chatbots. The deeper question is who makes the rules about what happens when they do.

The Layers of Watching

The monitoring does not start with ChatGPT. It starts much earlier.

First, the device. Many schools give students Chromebooks – laptops that run Google's Chrome operating system. Google recently installed a 4 GB AI program on all of these devices without asking.[9] No principal was told. No parent was asked. That story is told in "The Uninvited Guest" series.

Then, the monitoring software. A program called GoGuardian[10] watches 27 million students in more than 10,000 schools. It reads their searches, their messages, and their keystrokes. Another program, Gaggle,[12] watches 6 million students. In Minneapolis, Gaggle outed an LGBTQ+ student to their parents by flagging the words "gay" and "lesbian."[11] When a parent in Kansas tried to opt out of Gaggle, the school said it was not possible. The parent had to sue.[13]

Finally, the chatbot. OpenAI's new Trusted Contact feature now monitors the conversations students have when they turn to AI because they are too afraid to talk to a person.

At every layer, the watching is called "safety." At no layer is there strong proof that it works. A 2023 study by RAND, a research organization, found "scant evidence" that student monitoring tools actually prevent harm.[14]

An empty classroom. The question is not whether devices belong here. It is who decides what runs on them. Photo: CC BY-SA 4.0, via Wikimedia Commons.

The Schools That Built Something Different

In Nineteen Eighty-Four, George Orwell's famous novel, the government watched everyone through screens that could not be turned off.[15] The only people who lived outside the surveillance were called "the proles" – ordinary working people the government did not bother to watch. Orwell wrote: "If there is hope, it lies in the proles."[16]

The proles in the novel never organized. But in 2026, some schools did. The University of Missouri built its own AI system on its own computers, run by its own staff.[17] No outside company reads the data.

A school in Montreal called The Study built an AI named Rosie on its own equipment, working with a company called Sage.Education.[18] The students at The Study use Rosie as a foundation. They build their own AI tools on top of it. They choose which tools to use. The school controls what runs on its own devices.

Sage.is AI-UI, the platform that powers Sage.Education, works like this (a code sketch follows the list):[19]

  • The code is open. Anyone can read it and check what it does.[20]
  • The data stays on the school's own computers. It does not go to a company's servers.
  • The school chooses which AI to connect. It is not locked into one company.[21]
  • There are no hidden programs scanning conversations.
  • There is no corporate team reading flagged messages.
  • The school writes the rules. The school can turn it off.
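
What would that look like in practice? Here is a minimal sketch in Python, assuming an OpenAI-compatible chat endpoint of the kind self-hosted servers such as Ollama and vLLM expose. Everything here – the class, the endpoint URLs, the model names, the log path – is an illustration of the design, not Sage.is AI-UI's actual code.

```python
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch only. The endpoint URLs, model names, and log path
# are illustrative assumptions, not Sage.is AI-UI's actual code.

class SchoolChatRelay:
    """Relays chat requests to whichever model the school chooses and
    keeps the only copy of the conversation log on the school's own disk."""

    def __init__(self, base_url: str, model: str, log_path: str):
        self.base_url = base_url.rstrip("/")  # local or cloud: the school decides
        self.model = model
        self.log = Path(log_path)             # the data stays in the building

    def ask(self, student: str, prompt: str) -> str:
        # Assumes an OpenAI-compatible chat endpoint, the de facto standard
        # that self-hosted servers such as Ollama and vLLM also speak.
        payload = {"model": self.model,
                   "messages": [{"role": "user", "content": prompt}]}
        req = urllib.request.Request(
            f"{self.base_url}/v1/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)["choices"][0]["message"]["content"]
        # One local log file. No corporate team reads it; the school can.
        record = {"time": datetime.now(timezone.utc).isoformat(),
                  "student": student, "prompt": prompt, "reply": reply}
        with self.log.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return reply

# Swapping providers is a configuration change, not a migration:
# relay = SchoolChatRelay("http://localhost:11434", "llama3", "chat.jsonl")
# (A cloud provider would also need an Authorization header with an API key.)
```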

This approach is not perfect. Running your own AI takes technical skill that many schools do not have yet. It takes money for equipment and people to maintain it. But it proves something important: the way things work now (where companies watch students and call it safety) is a choice, not the only option.[19:1]

Visibility, Not Surveillance

The difference between surveillance and visibility matters here.

Surveillance means someone reads your private thoughts without telling you. You do not know the rules. You did not agree to them. Visibility is different. It means the teacher can see what the student is doing, and the student knows it. The rules are clear to both sides. The data stays in the building.

Sage.Education was built on this idea.[22] The teacher sees the student's process. The student knows the teacher can see. Nobody is hiding anything. That is not surveillance. That is a classroom.
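
The same distinction can be put in code. The sketch below uses invented names and makes no claim to be Sage.Education's implementation; it shows the design principle: one shared record, one view method for everyone.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch with invented names; not Sage.Education's actual code.

@dataclass
class Interaction:
    when: str
    student: str
    prompt: str
    reply: str

@dataclass
class VisibilityLog:
    """One shared record of student-AI interactions. Teacher and student
    read the same data; there is no second, hidden channel."""

    entries: list[Interaction] = field(default_factory=list)

    def record(self, student: str, prompt: str, reply: str) -> None:
        self.entries.append(Interaction(
            when=datetime.now(timezone.utc).isoformat(),
            student=student,
            prompt=prompt,
            reply=reply,
        ))

    def view(self) -> list[Interaction]:
        # The same method serves everyone: teacher, student, parent.
        # Visibility means there is nothing here the student cannot see.
        return list(self.entries)
```

A surveillance design would add a second method: one view for the student, and a fuller one, with flags and risk scores, for the company. The absence of that second view is the whole point.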

"The response that Tumbler Ridge demands is not more efficient surveillance of users but a regulatory architecture that addresses the systems themselves." — Jean-Christophe Bélisle-Pipon[8:2]

The Question

The senators are debating whether to ban the chatbot. The Privacy Commissioner found that its maker broke the law. Its maker just launched a feature that monitors the very conversations the senators are debating.

Three conversations, one country, one week.

The question is not whether to ban AI or to allow it. It is not whether to monitor students or leave them alone.

The question is: who decides what runs on the devices in your school, what happens to the conversations your students have, and whose rules apply when something goes wrong?

Some schools have already answered that question. They built their own rooms. They wrote their own rules. And unlike a chatbot controlled by a company in San Francisco, theirs has an off switch.


This is Part 3 of a 3-part series based on "The Arsonist's Smoke Detector." The views expressed are those of the editorial board.



Sage.is AI-UI and Sage.Education are products of Startr LLC; their inclusion represents a disclosure of interest. No individuals quoted were interviewed; all quotes are from published sources. Full disclosure and transparency is a feature, not a bug.


  1. PIPEDA, the Personal Information Protection and Electronic Documents Act, is Canada's federal privacy law. It says companies need your permission before collecting your personal information. On May 6, 2026, the Privacy Commissioner found OpenAI broke this law. priv.gc.ca. ↩︎

  2. Standing Senate Committee on Social Affairs, Science and Technology. Hearings on AI chatbot restrictions for minors, May 6, 2026. ↩︎

  3. OpenAI, "Introducing Trusted Contact in ChatGPT," May 7, 2026. openai.com. ↩︎

  4. Leger poll, May 2026. 70% of Canadians support banning AI chatbot access for children under 16. CP24. ↩︎

  5. Manitoba Premier Wab Kinew said the province will ban children from using social media and AI chatbots. Bloomberg. ↩︎

  6. Michael Geist is a professor of law at the University of Ottawa. He testified before the Senate on May 6, 2026. He argues bans push kids to offshore services with no safety features. michaelgeist.ca. ↩︎

  7. Jean-Christophe Bélisle-Pipon is an assistant professor of health ethics at Simon Fraser University. The Conversation. ↩︎

  8. Bélisle-Pipon, "OpenAI's safety pledges in the wake of Tumbler Ridge aren't AI regulation – they're surveillance." The Conversation, March 18, 2026. ↩︎ ↩︎ ↩︎

  9. See "The Uninvited Guest" in this series. Google Chrome installed a 4 GB AI model on every device running the browser, including 38 million classroom Chromebooks, without notification or consent. ↩︎

  10. GoGuardian is monitoring software used in more than 10,000 schools. It watches about 27 million students by scanning their keystrokes, web searches, and messages on school devices. ↩︎

  11. In 2021, an LGBTQ+ student in Minneapolis was outed after Gaggle flagged the words "gay" and "lesbian." The school did not talk to the student first. EFF. ↩︎

  12. Gaggle is monitoring software that watches about 6 million students in 1,500 school districts. It scans messages and online activity. ↩︎

  13. A parent in Lawrence, Kansas tried to opt out of Gaggle in 2025. The school refused. The parent sued. Lawrence KS Times. ↩︎

  14. RAND Corporation, 2023. Found "scant evidence" that student monitoring tools reduce suicide or prevent violence. CDT. ↩︎

  15. George Orwell, Nineteen Eighty-Four (1949). A novel about a government that uses constant surveillance to control its citizens. The surveillance tool was called a telescreen. Full text. ↩︎

  16. George Orwell, Nineteen Eighty-Four, Part 1, Chapter 7. ↩︎

  17. University of Missouri, "Show-Me AI" initiative, launched September 2025 under CIO Chris Kwak. Built on university-owned servers. ↩︎

  18. The Study, Westmount, Montreal. Built an AI agent named Rosie on school-owned equipment in partnership with Sage.Education. Students use Rosie as a foundation and build their own tools on top. ↩︎

  19. Sage.is AI-UI. Open-source, self-hosted, model-agnostic. Self-hosting takes technical skill and resources that not all schools have. But the approach shows that corporate surveillance is a design choice, not the only way. sage.is. ↩︎ ↩︎

  20. Open-source means the software's code is public. Anyone can read it, check it, and suggest changes. This makes it harder to hide secret features like hidden monitoring. ↩︎

  21. Model-agnostic means the platform is not tied to one AI company. The school can connect to AI from Google, OpenAI, Meta, or others – or run AI on its own computers. ↩︎

  22. Sage.Education, Learning Visibility Dashboard. Teachers see student-AI interactions through a clear, open system. Both teacher and student know the rules. sage.education. ↩︎