
The Standing Senate Committee on Social Affairs, Science and Technology is one of several committees in the House and Senate conducting hearings on artificial intelligence. I appeared before the committee yesterday (my fourth appearance on the issue in recent months), but rather than reiterate previous testimony on privacy, copyright, and transparency, I focused on the big issue of the moment: bans on social media and AI chatbots for children. The committee had been hearing from many witnesses supportive of a ban, who emphasized the risks of harm associated with AI. Indeed, one Senator asked the panel before mine to raise their hands if they supported a ban, and virtually all hands went up. I was unsure about how my comments would be received, but I found the Senators open to debate on the issue. A video of my opening remarks, together with the transcript, is posted below. A future Law Bytes podcast episode will delve into the discussion that followed.
Appearance before the Standing Senate Committee on Social Affairs, Science and Technology, May 6, 2026
Good afternoon and thank you for the invitation. My name is Michael Geist. I’m a law professor at the University of Ottawa where I hold the Canada Research Chair in Internet and E-commerce Law. I appear in a personal capacity representing only my own views.
I have appeared before several parliamentary committees on AI policy in recent months, focusing on three priorities: privacy, copyright, and transparency. I touch on some of these in my opening remarks, but given the growing political momentum behind banning young Canadians from AI chatbots, alongside similar proposals for social media, I want to use my opening time to address that directly. The case for these bans is weak, the harms they would create are significant, and they should be rejected in favour of broad-based AI regulation.
The concerns motivating these proposals are real. But the discussion conflates two different questions: how to regulate AI chatbots and whether to layer a kids-specific access ban on top.
AI chatbots are not social media. The category itself is not well-defined: the same underlying models are accessible not just through ChatGPT and Claude but through APIs and AI features now standard in Google search, Microsoft Office, and Apple operating systems. These definitional questions matter for regulatory purposes.
Further, the inputs to and outputs from AI chatbots raise different regulatory problems. The input side – AI prompts – resembles search queries or private messages, not public posts. Treating prompts as something companies must monitor and report on builds a system of corporate surveillance over interactions that users reasonably expect to be private.
The output side – AI responses – is where the focus should lie: accuracy, safety on topics like self-harm, and design choices that draw users into emotionally intense interactions. Those are best addressed through regulation, not a ban. Other jurisdictions have already chosen this path. California rejected an age ban but has passed legislation requiring disclosure, crisis-response protocols, and restrictions on sexually explicit content for known minors.
I have written about how a social media ban for kids raises a host of concerns, including the failure to address risks affecting all users, the privacy and surveillance risks of age verification, and the demonstrated ineffectiveness of such bans to date.
But a kids-specific AI chatbot ban would be worse than the social media version on every relevant factor. Age verification extends a surveillance infrastructure across an open-ended and growing set of services, effectively requiring all Canadians to verify themselves in ways that sacrifice privacy by sending IDs to services that are at risk of security breaches and that may evade Canadian privacy law. Further, age estimation frequently relies on surveillance of users by monitoring their friends and messages, and opens the door to bias against racialized minorities. Don’t take my word for it. Hundreds of scientific experts have said the same. Moreover, the costs of cutting young Canadians off from AI are concrete: the tools have demonstrated educational, productivity, and accessibility benefits that no comparable social media analysis can match.
Canada should move forward with effective AI regulation. First, an AI Transparency Act mandating disclosure of corporate safety policies, training-data inclusion, government and law-enforcement demands, and the age-related restrictions that major commercial chatbots already apply. It shouldn’t take the AI minister meeting with executives to get this information. All Canadians should be able to see what is already happening before Parliament legislates around it.
Second, a modernized privacy law that addresses both the inputs to AI systems and the outputs. Data sovereignty concerns are not solved by Canadian data centres. They are solved by Canadian privacy law that actually applies with real penalties. And we need privacy laws that directly address the risk posed by re-identifying de-identified data, a risk that is exacerbated by the power of AI inference and that was scarcely addressed by today’s Privacy Commissioner finding on OpenAI.
Third, an enforceable duty to act responsibly tailored to the chatbot context. The architecture of chatbots, where output is generated in response to prompts rather than pushed by an algorithmic feed, makes age-tiered design genuinely feasible. A duty that mandates and audits developmentally appropriate design across different ages is the version of age-related regulation that fits the technology. A binary access cutoff borrowed from social media is not.
The political appeal of bans is obvious. But the case for them on AI is weak. We need to move on to the harder and more useful work of building an effective Canadian model for AI regulation. I look forward to your questions.