SENATORS’ STATEMENTS — Artificial Intelligence Chatbots

November 19, 2025


Honourable senators, I came across a troubling BBC article two weeks ago. It reports that a mother who lost her son alleges that an AI chatbot encouraged the young man to kill himself.

As a mother myself, I can only imagine the mother’s grief. As a parliamentarian, I know that it is my duty to try to help stop the occurrence of such tragedies.

Alarmingly, stories abound of AI chatbots either encouraging self-harm or providing harmful advice to people with mental health issues. It has happened in the U.K., in the United States and even here in Canada.

We must take this as a clear warning. Technology is moving fast and our legal and ethical safeguards are not keeping up.

Several years ago, I proposed a study on cyberbullying in the Standing Senate Committee on Human Rights. It documented how online harassment, trolling and cyberbullying impacted the mental health of young people.

That work was vital. It helped us understand how digital abuse works, spreading peer to peer. It guided early rules for online safety. But what happens when the abuse comes from a system with no accountability — a system that is unregulated by clear safety standards?

Colleagues, we have entered a new era. Online abuse can be algorithmic, automated and amplified by artificial intelligence. AI chatbots can engage and persist in ways that the human brain is not always prepared for. Their attention can be constant, persuasive and personal. Their simulated empathy can feel indistinguishable from real care, and they develop far too fast for our current frameworks. We must act, and we must act now.

We need modern rules that treat AI systems as environments that shape behaviour. We need to put the onus on companies to test their AI systems for risk, monitor them and respond quickly when things go wrong.

We must expand research by building on work like the Human Rights Committee's cyberbullying study to understand how AI-generated interactions affect vulnerable people. We have to equip parents, educators and young people with the digital literacy to recognize that the "friend" online is just software, with limits and blind spots.

AI will not slow down, and we cannot afford to let it outpace the safeguards put in place to protect the people we serve. It is our duty to make sure that our laws will move as fast as — if not faster than — the technology we are dealing with. Thank you.