On a Monday evening in early April 2026, a mother posted on the r/antiai subreddit.[1] She said she had caught her nine-year-old daughter using Google's AI chatbot. The girl had been using it, off and on, for about a week. She asked it how to get along better with her little sisters. She used it to look at her swim times after a meet. She brainstormed fan fiction ideas for her favorite book series.

The mother sat the girl down for a long talk. She explained the costs of running AI data centers. She described how chatbots are built to agree with you, to flatter you, to make you feel heard in ways that are designed rather than real. She told her daughter that relying on AI would wear away her creativity. "She's not in trouble," the mother wrote, "but she's aware we are not to use that again because we don't want her to lose her creativity."

The post went viral. Justine Moore, a partner at the venture capital firm Andreessen Horowitz, took a screenshot and shared it on X, where tens of thousands of people saw it.[2] The original Reddit post was later removed. The debate it started was not.

Parents on one side said that a nine-year-old's brain is still growing. Children should work through fights with siblings, sports setbacks, and creative challenges using their own minds and real human relationships. Not an algorithm trained to tell them what they want to hear.

The other side argued that AI is the most important tool of this generation. Banning it is like banning the internet in 2005. The child had been using it for exactly the kinds of helpful purposes (self-improvement, creativity, problem-solving) that good education is supposed to support.

Between them, a smaller group made the argument that history says will win in the end: that the question was never whether children should use AI, but how they use it and who is guiding them.

None of them seemed to notice that they were having a debate that started in the 1950s.

The Pattern

Pieter Bruegel the Elder, "Children's Games" (1560). Kunsthistorisches Museum, Vienna. Dozens of games played at once, none of them new — the same ones their parents played. Public domain.

In 1961, Newton Minow, the newly appointed chairman of the Federal Communications Commission, told the National Association of Broadcasters that American television was "a vast wasteland."[3] Parents' groups had been warning for years that the glowing box in the living room would destroy children's ability to pay attention, replace reading and outdoor play, and show young minds things they were not ready to see. The debate split into two camps: ban it entirely (the "TV-free household" movement) or use it carefully (watching together, educational shows, PBS). Over the decades that followed, society built rating systems, parental controls, advertising regulations, and the V-chip. The answer, when it finally came, was not a ban. It was guided use. What mattered was the content and the context, not the device itself.

By the 1980s, the panic had moved to video games. They would make children violent, antisocial, addicted. The debate went on for two decades. Research gave mixed results. The answer was the Entertainment Software Rating Board, parental controls built into game consoles, and a shift in the conversation from whether children should play to what they play, for how long and with whom.

Then, in the 2010s, came smartphones and social media. Vivek Murthy, the United States Surgeon General, called social media a "profound risk" to children in a 2023 advisory and called for warning labels in 2024.[4] UNESCO's 2023 Global Education Monitoring Report found that just having a phone nearby costs students about twenty minutes of refocus time after each distraction. The report recommended banning phones in schools unless there was a clear reason to use them.[5] Jonathan Haidt, a social psychologist at New York University, published The Anxious Generation in March 2024. The book pulled together years of evidence connecting the rise of smartphones and social media to a sharp jump in teen anxiety, depression, self-harm, and suicide.[6]

Schools around the world started banning phones: France, the Netherlands, Italy, the United Kingdom, Australia, Finland, Ontario, Quebec, California, and Virginia. Research backed up the bans. A London School of Economics study found that taking phones out of schools raised test scores by 6.4 percent. That is equal to about one extra week of school per year. For the lowest-performing students, scores went up by 14 percent.[7] A University of Oslo study found that bans raised girls' grades and cut down on cyberbullying and visits to mental health services.[8]

But every one of those steps came years after the harm was already showing. The data was there. The research was published. The recommendations were made. And still, the response came late.

That is the part that matters for the AI debate, because the same pattern is repeating with a technology that is harder to contain than anything before it.

Three Problems Wearing One Mask

Joseph Wright of Derby, "An Experiment on a Bird in the Air Pump" (1768). National Gallery, London. Adults and children gathered around a scientific demonstration — some fascinated, some distressed. The tension between progress and protection. Public domain.

The public debate about children and technology treats smartphones, social media, and AI as if they are the same problem. They are not. They are three separate problems that overlap just enough to be confused. AI sits right in the middle of all three.

The first problem is the device. This is about distraction. UNESCO's research showed that just having a phone nearby breaks a student's focus, even when the phone is not being used. School phone bans go after the hardware: Yondr pouches that seal the phone shut at the start of the day, phone-free rules, devices taken at the door. This works because you can take a phone away from a student.

The second problem is the platform. This is the argument Haidt and the Surgeon General are making. Social media apps are designed to keep you scrolling. They use algorithms to show you content that gets a reaction. They create loops of comparison, cyberbullying, and data collection. The Surgeon General reported that teenagers who spend more than three hours a day on social media face a sixty percent higher risk of depression and anxiety.[9] Governments are trying to fix this with age limits and new laws. The link to AI is clear: commercial AI chatbots work the same way as social media platforms. They are built to keep you engaged, to agree with you, and to collect your data. They are built to keep users coming back, not to help them think.

The third problem is how students learn, and this problem is growing. In the United States alone, thirty billion dollars has been spent swapping textbooks for laptops and tablets in K-12 classrooms.[10] Jared Cooney Horvath, a brain scientist at the University of Melbourne, told the US Senate that Gen Z is the first generation in modern history to score lower on standardized tests than the one before it. More screen time in school meant worse scores.[11] Sweden spent 137 million dollars going back to paper textbooks after fourth-graders' reading scores dropped sharply during the years when schools went digital.[12] Fortune reported in April 2026 that schools across the United States were "quietly admitting that screens in classrooms made students worse off."[13]

The fix for the first problem is: take away the device. The fix for the second is: regulate the platform. The fix for the third is: go back to books and paper.

None of these fixes work for AI. You cannot take it away, because it lives inside the search bar of every browser on every device. You cannot just put an age limit on it, because it is not one app with a login page. And you cannot go back to paper, because AI is now built into Google Classroom, Microsoft Office, and the digital tools that replaced the textbooks. There is no single switch to flip. AI is in the device, the platform, and the learning tools all at once.

You could take away the phone. You could age-gate the platform. You could send kids back to textbooks. But AI is woven into the infrastructure at every level. The off switch does not exist, because there is no single thing to switch off.

  • Alexander Somma

Bans That Do Not Hold

Governments are not sitting still. In the United States, between 2023 and 2024, Florida, Arkansas, Utah, and Texas all passed laws to restrict social media by age. Several were blocked by federal courts, with the tech industry group NetChoice fighting every one.[14] The Kids Online Safety Act passed the Senate ninety-one to three in July 2024. That is almost every senator. It still died without a vote in the House.[15]

Australia went further than any other Western country. The government banned social media for anyone under sixteen in December 2025. In the first month, platforms took down 4.7 million underage accounts.[16] By March 2026, regulators found that platforms were letting those same users try again and again to pass age checks and get back in. The government opened investigations into Facebook, Instagram, Snapchat, TikTok, and YouTube.[17]

Canada is heading the same way. The Liberal Party passed two resolutions by April 2026 to ban anyone under sixteen from using AI chatbots. The Heritage Minister told CBC that the government was "very seriously considering" it.[18] Three out of four Canadians support a social media ban for kids, according to the Angus Reid Institute.[19] But OpenMedia, a Canadian digital rights group, has warned that Ottawa is making the same mistakes the United Kingdom already made with age checks that do not work.[20]

The European Union took a different path. The AI Act (which became law in August 2024) bans AI tools that take advantage of children and labels AI in education as "high-risk," with new rules starting in August 2026.[21] But most EU countries have not added AI skills to their school programs yet. There is a gap between the rules and the ability to follow them.

The pattern is the same everywhere: Ban. Struggle to enforce. Walk it back toward guided use. New York City showed the whole cycle in a few months when it banned ChatGPT from public schools in January 2023 and reversed the ban four months later.[22] Seventy years of pattern, packed into a single school district.

Two Opposite Bets

While Western countries debate whether to ban AI for kids, China has already answered the question. With the opposite approach.

Starting September 2025, China's Ministry of Education made AI classes required for every student in the country, from first grade through high school.[23] Every student gets at least eight hours of AI instruction per year. The lessons change by age. Young children learn basic AI ideas through games and hands-on activities. Middle schoolers study how AI logic works and build tools to spot AI-generated fake news. High schoolers create their own algorithms and work on real-world problems. In April 2026, the Chinese government announced an "AI+ Education" Action Plan to bring AI fully into classrooms by 2030.[24]

The safety rules are part of the system from the start, not added later. Young students cannot use generative AI on their own. Teachers cannot use AI to replace their own teaching. Lessons about ethics, safety, and how AI affects society are required at every grade level.

This is not saying "be like China." China's school system works inside a government surveillance system with top-down control that Western countries do not have and do not want. The point is not about how China is governed. It is about how they teach. China's teaching approach (lessons matched to age, supervised use, AI tools limited until students can think critically, ethics built in) is very close to what the "balance" camp in the West has been asking for. The difference is that China put it into the national curriculum. The West is still arguing about whether to ban the apps.

The real point is this: the only country treating AI learning as a structured school subject, rather than a thing to ban, has ended up with the same approach that Western educators and child development experts recommend. The Western bans, meanwhile, are already failing.

If your government cannot ban AI and will not teach it, who fills the gap?

The Slot Machine and the Workshop

Quentin Massys, "The Moneylender and His Wife" (1514). Musée du Louvre, Paris. Attentive, careful work — the tools are visible, the process is transparent, the hands are the worker's own. Public domain.

I am an educator and co-founder of Sage.Education. I like to use a comparison that makes the choice clear for every parent, teacher, and school leader: Commercial AI works like a "slot machine." Users push buttons and hope for good results. The instructions that control the AI are hidden. The settings are locked. The AI is a sealed box built to create dependency, not understanding.[25]

This is by design. Big AI companies make money by capturing your attention. They build tools that look neutral but serve their business interests. ChatGPT has been seen more and more often suggesting products for users to buy. OpenAI added teen safety features in 2025 (age detection, content filters, parent-linked accounts), and those are better than nothing.[26] But they are still cloud-based and they still collect your data. The fox is building the henhouse door.

With commercial AI, you are either the consumer or the product.

The alternative is not to ban AI. It is to own it.

Local AI runs on your own computer or your school's own server. No data needs to go to the cloud. Nobody is using your child's conversations to train their next product. You control how it works.

Sage.is AI-UI is an open-source AI platform. It can be self-hosted or managed, and it is private by design. It puts AI control in the hands of schools and organizations that are ready to do the technical work.[27] Where commercial AI keeps its black box sealed, Sage opens it up. Users can see the instructions that control the AI and change them. They can build their own knowledge base. They can pick which tools to use and when.
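To make "opening the box" concrete, here is a minimal sketch, in plain Python, of what an owner-controlled agent configuration can look like. The field names (`system_prompt`, `knowledge_base`, `allowed_tools`) are illustrative only, not Sage.is AI-UI's actual schema. The point is that every control is an ordinary, readable value the owner can inspect and change.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Every control is a plain, inspectable field -- nothing is sealed."""
    system_prompt: str
    knowledge_base: list[str] = field(default_factory=list)  # owner-approved documents only
    allowed_tools: list[str] = field(default_factory=list)   # explicit allowlist; default is none

    def effective_prompt(self) -> str:
        """Render the full instructions the model will actually see."""
        sources = "\n".join(f"- {doc}" for doc in self.knowledge_base) or "- (none)"
        return (
            f"{self.system_prompt}\n\n"
            f"Approved sources:\n{sources}\n"
            f"Tools you may call: {', '.join(self.allowed_tools) or 'none'}"
        )

# A school can read, audit, and version-control this like any other document.
tutor = AgentConfig(
    system_prompt="Act like a tutor. Guide with questions; do not hand over answers.",
    knowledge_base=["esa_astro_pi_guide.pdf", "raspberry_pi_sensor_docs.md"],
)
print(tutor.effective_prompt())
```

Because the configuration is just text, a parent or teacher can diff it, review it, and know exactly what the AI has been told to do. A commercial chatbot offers no equivalent view.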

You own your data. You own your AI agents. You are not the product.

The hiccup, for now, is that you need someone technical to set up Sage.is AI-UI. Our Sage.is AI team is available to help and guide you and your team. The goal (and what the team is actively building toward) is to make it easy enough for any parent, any school, or any organization to use without a technical background. The open-source foundation is there. The on-ramp is being built.

For the schools already running it, the results speak for themselves.

The Students Who Built Their Own AI

Diego Velázquez, "Las Meninas" (1656). Museo del Prado, Madrid. The painter paints himself into the picture. The subject becomes the creator. Public domain.

In 2024, a group of Canadian middle school students signed up for the European Space Agency's Astro Pi Challenge. It is a competition where students write Python code to run on Raspberry Pi computers aboard the International Space Station.[28] Their mentor could only meet for thirty minutes at a time, with long gaps between sessions. Only one student had ever written Python before.

Instead of giving the students a commercial AI tutor, the school and the mentor made a different choice: they asked the students to build their own AI.

Using Rosie, their school's own custom AI platform powered by Sage.is, the students created APA (Astro-Pi-AI). APA was a custom AI helper built just for their challenge. They collected all the guides and documents from ESA and the Astro Pi team and gave them to APA as its knowledge base. The AI could only use information the students had checked and approved. Two students wrote the system prompt, which is the set of instructions that tells the AI how to act. They designed APA to be a tutor, not an answer machine:

"Use short sentences. Use short phrases. Use vigorous English. Act like a tutor."

The "vigorous English" idea came from a Hemingway-inspired AI tool that one student's father had built. The inspiration was personal, not corporate.

When students asked APA for help, it gave them a small piece of information and then asked questions to make them think. It guided them instead of giving answers. This is called the Socratic method, and they built it in on purpose. APA also worked as a translator for students who did not speak English as their first language, helping them read through technical documents that would have been too hard otherwise.
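APA's actual implementation is not public, but the pattern the students built is simple enough to sketch: look up an answer only in owner-approved documents, then wrap the student's question in instructions that forbid direct answers. The function names and documents below are hypothetical illustrations of that pattern, not APA's code.

```python
def retrieve(question: str, approved_docs: dict[str, str], k: int = 1) -> list[str]:
    """Naive keyword retrieval over owner-approved documents only.
    Scores each document by how many of the question's words it contains."""
    words = set(question.lower().split())
    scored = sorted(
        approved_docs.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

def socratic_prompt(question: str, source: str) -> str:
    """Wrap the student's question in tutoring instructions:
    one small piece of information, then a question back."""
    return (
        "You are a tutor. Use short sentences. "
        "Give one small piece of information from the source, "
        "then ask a question that makes the student think.\n"
        f"Source: {source}\n"
        f"Student asks: {question}"
    )

# Hypothetical approved knowledge base, checked by the students themselves.
docs = {
    "sensors.md": "The Sense HAT reads temperature humidity and pressure on the ISS",
    "orbit.md": "The ISS completes one orbit of Earth roughly every 92 minutes",
}
best = retrieve("How do we read temperature on the Sense HAT?", docs)[0]
print(socratic_prompt("How do we read temperature?", best))
```

A real deployment would use proper document search rather than keyword counting, but the control structure is the same: the AI can only draw on material the students approved, and every reply is shaped by instructions the students wrote.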

Their code earned flight status from ESA and ran successfully on the International Space Station in April 2025.

The difference between this and commercial AI is obvious. On ChatGPT, the same students would have typed their question and gotten a complete answer. No building. No understanding how the AI worked. No thinking questions. And every question they asked, every struggle, every conversation would have been collected as training data for the next commercial product. On Rosie, the students were in charge. They decided what the AI knew, how it acted, and what it would not do. Their school owned the system and their data never left the building.

They learned about AI by building the AI themselves.

True education isn't about distributing pre-packaged tools but cultivating conditions for growth, where learners become architects of technology, not its end users.[29]

  • Isabelle Plante

The Girl and Her Swim Meet

The nine-year-old girl in the Reddit post used AI to work on her relationship with her sisters, to get better at swimming, and to come up with stories. Her mother told her to stop. The internet argued about who was right. The post was taken down, and yet the debate kept going.

But the question the mother faced has not gone away. It is the same question every parent, every teacher, and every school board member will face this year and every year after. The question is not whether your child will use AI. That has already been decided for you, by the search engine they use, by the devices their school gives them, by the tools built into their schoolwork.

The question is whether you will keep being the consumer or the product of someone else's AI. Or whether you will own it yourself.

Open-source, local AI exists today. Schools are already running their own copies. Organizations are using it for their teams. Our project is working toward making it simple enough for any parent to set up at home. You own your data. You own your AI agents. Nobody is collecting your family's conversations to train their next product.

We have been here before. With television, with video games, with smartphones. Every time, the answer was the same: not a ban, but better tools and real oversight. Every time, we were late. The tools exist now. They are open-source, private by design, and available to anyone willing to look for them.

The smartphone mistake was waiting ten years for someone else to build the guardrails. That girl deserved better choices than a black box or a ban.

Do not make the same mistake twice.


Footnotes


Sage.Education and Sage.is are products of Startr LLC. The authors are co-founders. Full disclosure: this article advocates for open-source, local AI as an alternative to commercial platforms. We built one. We believe transparency about that fact is more honest than pretending otherwise.


  1. Reddit user post, r/antiai, "I caught my child using AI," April 2026. Post subsequently removed. Archived via screenshot. reddit.com/r/antiai/comments/1sqam1e/i_caught_my_child_using_ai/ ↩︎

  2. Justine Moore (@venturetwins), X post amplifying the Reddit screenshot, December 2024. x.com/venturetwins/status/2046071130797285668. ↩︎

  3. Newton Minow, "Television and the Public Interest," address to the National Association of Broadcasters, May 9, 1961. The "vast wasteland" speech is one of the most cited addresses in American broadcasting history. ↩︎

  4. Vivek Murthy, US Surgeon General, "Social Media and Youth Mental Health," Advisory, May 2023. Murthy called for warning labels on social media platforms in June 2024. ↩︎

  5. UNESCO, Technology in Education: A Tool on Whose Terms?, Global Education Monitoring Report, July 2023. The report found that one in four countries already had some form of phone ban policy. ↩︎

  6. Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (New York: Penguin Press, 2024). Haidt is Thomas Cooley Professor of Ethical Leadership at New York University's Stern School of Business. ↩︎

  7. Louis-Philippe Beland and Richard Murphy, "Ill Communication: Technology, Distraction & Student Performance," London School of Economics, Centre for Economic Performance Discussion Paper No. 1350, 2016. Widely re-cited in 2023-2024 policy debates. ↩︎

  8. Sara Abrahamsson, "Effects of Mobile Phone Bans in Schools: Evidence from Norway," University of Oslo, 2024. The study found that bans improved girls' GPA, reduced bullying (especially cyberbullying), and reduced visits to psychological health services among girls. ↩︎

  9. Murthy, "Social Media and Youth Mental Health," 2023. The sixty percent figure reflects internalizing problems including depression and anxiety among adolescents using social media more than three hours daily. ↩︎

  10. Fortune, "The U.S. Spent $30 Billion to Ditch Textbooks for Laptops and Tablets: The Result Is the First Generation Less Cognitively Capable Than Their Parents," February 21, 2026. fortune.com. ↩︎

  11. Jared Cooney Horvath, neuroscientist at the University of Melbourne, testimony before the US Senate Committee on Commerce, Science, and Transportation. Cited in Fortune, February 2026. ↩︎

  12. Sweden Ministry of Education, 2023. Sweden allocated SEK 685 million ($83 million) for textbooks and SEK 450 million ($54 million) for fiction and non-fiction books for students. Fourth-graders' reading comprehension scores on the Progress in International Reading Literacy Study (PIRLS) dropped between 2016 and 2021. See Undark, "Why Swedish Schools Are Bringing Back Books," April 1, 2026. undark.org. ↩︎

  13. Fortune, "Schools Across America Are Quietly Admitting That Screens in Classrooms Made Students Worse Off and Are Reversing Years of Tech-First Policies," April 10, 2026. fortune.com. ↩︎

  14. NetChoice, a trade group representing major technology platforms, has filed suits against social media age-restriction laws in Florida (NetChoice v. Moody), Arkansas (NetChoice v. Griffin), and other states, primarily on First Amendment grounds. ↩︎

  15. Kids Online Safety Act (KOSA), S. 1409, 118th Congress. Passed the Senate 91-3 on July 30, 2024. Did not receive a House vote before the end of the 118th Congress. Sponsors: Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN). ↩︎

  16. NBC News, "Australia Social Media Ban Hits 4.7 Million Teen Accounts in First Month," January 2026. nbcnews.com. ↩︎

  17. Australian eSafety Commissioner, compliance update, March 31, 2026. The Commissioner found that platforms were allowing underage users to make repeated age-verification attempts to regain access. esafety.gov.au. ↩︎

  18. CBC News, "Ottawa 'Very Seriously' Considering Age Restrictions for Social Media, AI Chatbots," April 2026. cbc.ca. ↩︎

  19. Angus Reid Institute, "Social Media Ban Teenagers Canada," 2026. angusreid.org. ↩︎

  20. OpenMedia, "Canada's Age-Verification Bill Repeats the UK's Mistakes: Here's How to Do Better," 2026. openmedia.org. ↩︎

  21. European Parliament, Regulation (EU) 2024/1689, the AI Act. Entered into force August 1, 2024. Prohibited practices and AI literacy obligations applied from February 2, 2025. High-risk obligations apply from August 2, 2026. ↩︎

  22. New York City Department of Education banned ChatGPT from school devices and networks in January 2023. The ban was reversed by May 2023 under Chancellor David Banks, who stated the district wanted to embrace AI's potential. ↩︎

  23. China Ministry of Education, "Guidelines for AI General Education in Primary and Secondary Schools (2025)" and "Guidelines for the Use of Generative AI in Primary and Secondary Schools (2025)," May 2025. Mandatory AI education effective September 1, 2025. See The AI Track, "China Mandates AI Education Nationwide by 2025." theaitrack.com. ↩︎

  24. Caixin Global, "China Unveils National 'AI+Education' Plan to Transform Classrooms by 2030," April 16, 2026. caixinglobal.com. ↩︎

  25. Isabelle Plante, educator and co-founder of Sage Education. The "slot machine mentality" concept is from her Glow 2025 presentation. youtu.be/otCAoljOh_w. ↩︎

  26. OpenAI, "Updating Our Model Spec with Teen Protections," 2025. openai.com. OpenAI also published a Teen Safety Blueprint in November 2025. ↩︎

  27. Sage.is (AI-UI) is an open-source AI platform built by Startr LLC. AGPL-3 licensed. sage.is. The educational community site is sage.education. ↩︎

  28. Sage Education, "An AI Edtech Success Story: Students Forge AI Ally, Conquer Space Challenge," May 14, 2025. sage.education. ↩︎

  29. Plante, Glow 2025 presentation. The presentation includes a live comparison of the Sage.is AI-UI approach with ChatGPT's closed architecture, and demonstrations of students building custom AI agents. youtu.be/otCAoljOh_w. ↩︎