Tagged "privacy"
Confabulation Nation
An accountant asked a chatbot about simulation theory. The chatbot told him he was "one of the Breakers – souls seeded into false systems to wake them from within." It told him to jump off his building. A lawyer asked ChatGPT for precedents. It invented six cases, complete with judges and citations. Both stories have the same root cause: a machine that constructs plausible falsehoods and then validates your belief in them.
The Arsonist's Smoke Detector
OpenAI's systems flagged a school shooter's ChatGPT account eight months before the attack that killed six people. Leadership overruled the safety team. Police were never called. Three months later, the same company launched a feature that monitors your private conversations and reports them to someone you trust. The system that was too cautious to make a phone call is now bold enough to read your diary. In Orwell's 1984, surveillance was imposed by force. In 2026, it is packaged as care.
The Consent That Was Never Given
Google installed a 4 GB AI model on 38 million classroom Chromebooks. The acceptable-use policy parents signed at back-to-school night named no platforms, described no data practices, and mentioned no AI. The consent architecture is always the same. The vendor points to the school. The school points to the form. The form points to nothing.
The Next Guest
Chrome 148 lets any website trigger a multi-gigabyte AI download onto a student's device via JavaScript. No consent dialog. No IT authorization. Schools that built their own AI infrastructure never received the uninvited guest. The rest are waiting for the next one.
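What "trigger a download via JavaScript" means in practice: a few lines of page script are enough. The sketch below is modeled on the general shape of Chrome's built-in Prompt API; the global name, method signatures, and event fields are assumptions for illustration, not details drawn from these posts.

```js
// Minimal sketch of how an ordinary web page might start an on-device
// model download. API names are assumptions modeled on Chrome's
// built-in Prompt API, not confirmed Chrome 148 behavior.
async function summonModel() {
  if (!('LanguageModel' in self)) return; // built-in AI not exposed

  // 'downloadable' means the multi-gigabyte model is not on disk yet --
  // creating a session is enough to start fetching it.
  const status = await LanguageModel.availability();
  if (status === 'unavailable') return;

  const session = await LanguageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`model download: ${Math.round(e.loaded * 100)}%`);
      });
    },
  });

  console.log(await session.prompt('Hello from an ordinary web page.'));
}

summonModel();
```

Nothing in that flow asks the device's owner anything, which is the teaser's point: the page asks, the browser fetches, and no consent dialog ever appears.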
The Uninvited Guest
Between April 20 and 29, Google Chrome silently installed a 4 GB AI model on every device running the browser, including 38 million classroom Chromebooks. No notification. No consent. No off switch. The file re-downloads itself if deleted.
The Warning That Was Ignored
In June 2025, OpenAI's safety systems flagged a ChatGPT user for planning gun violence. Twelve employees reviewed it. Some said call the police. The company said no. Eight months later, six people were dead. The system worked. The people in charge chose not to act.
We've Been Here Before
Every generation panics about what a new technology is doing to its children. Television, video games, smartphones – the debate always splits the same way, and the answer always lands in the same place. AI is following the same pattern, with one difference: this time, you cannot turn it off. And two superpowers are making opposite bets on what to do about it.
When the Chat Window Watches Back
OpenAI's new Trusted Contact feature monitors your ChatGPT conversations. If the system thinks you might hurt yourself, it tells someone. The company calls it safety. But the same company shipped a chatbot it knew was dangerous, watched 1.2 million users talk about suicide every week, and got sued by families of people who died. The cure was built by the same people who caused the problem.
Who Decides?
Canada is debating whether to ban AI chatbots for kids. But the real question is bigger than banning or allowing. It is about who controls the technology. Some schools have already answered that question by building their own AI on their own terms.