Tagged "surveillance"
The Arsonist's Smoke Detector
OpenAI's systems flagged a school shooter's ChatGPT account eight months before the attack that killed six people. Leadership overruled the safety team. Police were never called. Three months later, the same company launched a feature that monitors your private conversations and reports them to someone you trust. The system that was too cautious to make a phone call is now bold enough to read your diary. In 1984, surveillance was imposed by force. In 2026, it is packaged as care.
The Warning That Was Ignored
In June 2025, OpenAI's safety systems flagged a ChatGPT user for planning gun violence. Twelve employees reviewed the case. Some said to call the police. The company said no. Eight months later, six people were dead. The system worked. The people in charge chose not to act.
When the Chat Window Watches Back
OpenAI's new Trusted Contact feature monitors your ChatGPT conversations. If the system thinks you might hurt yourself, it alerts a contact you have designated. The company calls it safety. But the same company shipped a chatbot it knew was dangerous, watched 1.2 million users talk about suicide every week, and got sued by the families of people who died. The cure was built by the same people who caused the problem.
Who Decides?
Canada is debating whether to ban AI chatbots for kids. But the real question is bigger than banning or allowing. It is about who controls the technology. Some schools have already answered that question by building their own AI on their own terms.