Need for Safe and Productive Development and Use of Artificial Intelligence
Inquiry--Debate Adjourned
June 11, 2025
Hon. Rosemary Moodie rose pursuant to notice of May 28, 2025:
That she will call the attention of the Senate to the need for the safe and productive development and use of artificial intelligence in Canada.
She said: Honourable senators, artificial intelligence, or AI, is one of the most transformative technologies in our history. From improving health care, to driving innovation in industries like education, culture and defence, to offering us new possibilities in scientific research, national security and many other spaces, AI has the potential to change the way we live, work and interact. AI has already begun to reshape many aspects of our society through automation and advanced problem solving. Wherever you look, you can’t help but notice the impact of AI. However, as AI is increasingly integrated into our lives, we must confront its potential risks. It is not a tool we can control with ease. If left unchecked, AI could cause significant harm to individuals, communities and society at large.
For example, Geoffrey Hinton, the “Godfather of AI,” has warned that we are entering an era when machines may surpass human intelligence. He describes AI as an accidental creation born from human failure and highlights serious concerns, such as fake news and bias in hiring practices and policing. These are just a few examples of the risks that we as policy-makers must consider.
AI is not inherently good or evil; it is a tool. Its impact on society will be shaped by how we choose to regulate, develop and use it. That is why it is critical that we act now. We are already behind with respect to fully understanding and governing this rapidly evolving technology.
Colleagues, this inquiry marks an essential step in ensuring that we as leaders are proactive in confronting the challenges that AI presents while also embracing its vast and valuable potential. We cannot afford to repeat the mistakes made with social media, where unchecked growth and a lack of safeguards led to unintended consequences for our democracy, culture and public health.
As senators, we have a duty to protect Canadians from these risks while also steering AI development toward outcomes that serve the public good. This is not only a national priority but a global responsibility, and Canada can and must have a strong voice in shaping the future of AI governance.
In my remarks this evening, I will begin by discussing the International AI Safety Report, then turn to recent developments in Canada’s AI sector and, finally, to the ways in which AI is being considered globally, looking more specifically at the outcomes of the Paris Artificial Intelligence Action Summit.
One of the key publications to guide us through this evolving landscape is the International AI Safety Report led by Yoshua Bengio, a leading global figure in AI research here in Canada. This report serves as a critical resource for understanding the global risks associated with AI, including cyber-threats, misinformation, labour market disruptions and the potential weaponization of AI.
The authors of the report noted, “Policymakers face the challenge of creating flexible regulatory environments that are robust to technological change over time.” They continued, saying, “Constructive scientific and public discussion will be essential for societies and policymakers to make the right choices.”
This sentiment underlines the importance of ongoing dialogue and flexible regulation to ensure that AI develops in a way that maximizes its benefit while minimizing its risks.
The report also warns of the danger of AI development becoming concentrated in a few countries, like the United States and China, which could lead to a global imbalance in AI leadership. It emphasizes the urgent need for international collaboration and comprehensive risk assessments to ensure that AI does not outpace our ability to regulate it.
As we consider Canada’s role in AI development, the International AI Safety Report offers us an essential framework that we can consider when thinking about how to govern AI. It encourages us to take a global perspective on AI safety while addressing domestic priorities.
Let me highlight some of the recent progress we have made here in Canada. In November of 2024, Canada made a significant move by launching the Canadian Artificial Intelligence Safety Institute. The institute will receive an initial budget of $50 million over five years as part of the $2.4-billion investment announced in the 2024 federal budget, a broader effort that also encompasses the proposed artificial intelligence and data act and the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
In April of 2024, former prime minister Justin Trudeau announced an investment of $2.4 billion to develop Canada’s AI sector. This includes the Canadian Sovereign AI Compute Strategy, which provides $700 million to build and expand data centres, $300 million to support AI computing costs for small- and medium-sized businesses and $1 billion to enhance high-performance computing for academic researchers.
More recently, in March of 2025, the President of the Treasury Board unveiled Canada’s first-ever artificial intelligence strategy for the federal public service. This strategy, to be updated every two years, aims to improve government operations and services by ensuring that AI is used safely, ethically and responsibly. It includes goals such as creating an AI centre of expertise, ensuring AI systems security, fostering talent development and promoting transparency and accountability.
Of course, Prime Minister Carney recently appointed the Honourable Evan Solomon as our first minister responsible for artificial intelligence and digital innovation. Yesterday, Minister Solomon gave his first public remarks at the Canada 2020 conference, where he outlined the pillars of Canada’s AI industrial strategy: first, scaling up AI; second, AI adoption across industries; third, increased trust in AI through regulation to protect data and privacy; and fourth, AI sovereignty for Canada’s defence and security. I look forward to learning about the specifics of the Carney government’s plan regarding AI as it unfolds.
Colleagues, with so much momentum in this space, we as senators have a responsibility to carefully examine proposed strategies and investments. We must ask whether they truly serve the interests of all Canadians and thoughtfully consider their long-term implications. This gives us the opportunity to ask the right questions and consider the path forward.
Understanding AI in Canada requires us to consider the broader global context. The world is evolving rapidly, and many countries are pushing ahead with AI initiatives. In February of 2025, France and India hosted the Paris Artificial Intelligence Action Summit, where leaders, experts and researchers discussed the future of AI. The summit focused on five key themes: public interest in AI, the future of work, innovation, trust and global governance. However, one critical area received only limited attention: AI safety.
At the summit, U.S. Vice President Vance raised concerns that excessive regulation could stifle innovation, arguing that democratic nations might fall behind authoritarian countries that have fewer restrictions. This is a crucial debate that highlights the global divide and nuance in this area. Some advocate for strong regulation to safeguard society, while others prioritize economic interests over governance. But many fall in the middle, wanting to benefit from the prosperity that could come with AI in a way that aligns with our democratic values, such as human rights, inclusivity and the rule of law.
Despite these differences, 62 countries, including Canada, signed the AI Action Summit Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. This commitment to ensuring AI is developed responsibly reflects our shared responsibility to tackle the challenges that AI presents. But the United States and the United Kingdom chose not to sign, citing concerns over restrictive language and governance frameworks. This divergence underscores that even amongst countries that are normally allies, there may be differences when it comes to AI.
At the same time, in January of this year, U.S. President Trump announced a $500-billion investment in private sector AI infrastructure led by companies like OpenAI, Oracle and SoftBank. The European Commission also pledged over €200 billion to AI and digital innovation. French President Emmanuel Macron announced plans to invest €109 billion in AI, including new data centres.
These rapid global investments show just how urgent it is to address AI’s growing prominence through coordinated global efforts to regulate it. Without coordinated domestic and international standards and cooperation, we risk allowing global markets to drive AI development unchecked, potentially at the expense of the public good. This underscores the importance of having critical discussions here in Canada about how we will continue to prioritize regulation, transparency and human rights within AI development while remaining a key player in the global AI race.
Given these global developments, this inquiry is a critical opportunity for us to assess the implications of AI on our future here in Canada. If we are to remain a leader in the global AI race, we must focus on regulation, transparency and human rights. If we are to maximize the benefits of AI in Canada, we must be on the front foot so that we can determine and create our own AI ecosystem — not one that’s flooded with products and technology that we can’t control, but one where we set standards and norms to ensure technologies are safe, accurate and high quality and come from countries with democratic values. AI’s growth must be steered with careful consideration, and we must not allow market forces or government priorities to drive its development unchecked. The reality is that the decisions we make today will shape the future of AI for generations.
In the International AI Safety Report, Yoshua Bengio, one of the leading figures in Canadian AI research, stated, “AI does not happen to us; choices made by people determine its future.” This quote encapsulates the urgency and the responsibility we have as policy-makers. Speaking about responsibility, Yoshua Bengio just recently launched a new non-profit called LawZero, which aims to bring together world-class AI researchers to develop technical solutions for safe-by-design AI systems. Efforts like this highlight that we must make intentional, informed decisions that prioritize public safety and societal benefits.
While the future of AI is uncertain, one thing is clear: Its trajectory will be shaped by the choices we make. We have the power to steer this technology toward progress while mitigating its risks. It is important to remember that trust and safety will drive productivity —
Senator Moodie, I’m sorry to interrupt, but your time is over. Thank you.
Thank you.
Honourable senators, I rise to speak to Senator Moodie’s inquiry on a matter of urgency: the regulation of artificial intelligence in Canada.
At the outset, I note that the Oxford philosopher Nick Bostrom has argued that, given the potential power of advanced computers to run simulations, our reality is very likely a simulation. With that in mind, you might want to take my speech today with a grain of salt, but wait until the finish, because maybe not.
Colleagues, we stand on the cusp of a profound transformation. Artificial intelligence, or AI, is a change on the scale of the Industrial Revolution, nuclear energy or the advent of language itself. AI is not just another emerging technology; it is a force multiplier — a general-purpose capability evolving faster than our institutions, laws and imagination.
Today, I’ll speak to you about four aspects of this subject. First, I will outline the far-reaching implications of AI and the potential regulation for our democracy and society. Second, I’ll explore containment — technical, normative and legal — as a framework for governance. Third, I’ll survey global approaches, from the EU to the U.S. and China, before turning to Canada’s efforts through Bill C-27 and the artificial intelligence and data act. Finally, I’ll offer concrete proposals for strengthening our approach so that AI remains a tool that serves the public interest, not one that undermines it.
First, there are the social implications of AI and of its potential regulation. Some argue that regulation stifles innovation. In the context of AI, however, what we want is innovation accompanied by robust risk management. This is where regulation comes in. A strong regulatory framework can attract investment, foster innovation and bolster public confidence. That is why any regulation must address civil rights, including the rights to privacy, free speech and transparency, as well as concerns around safety and accountability.
The stakes could not be higher. AI is no longer on the margins. It’s reshaping how we work, learn, communicate and govern. In its most advanced forms, AI won’t merely assist us; it will replace, optimize, predict and sometimes outpace us. From finance to health care to defence and justice, AI’s reach is expanding rapidly. Its growing power brings opportunities but also serious risks to our society.
Colleagues, this is not alarmism. The modern democratic state once promised us security, prosperity and democratic rights. AI now threatens to topple those pillars.
How about security? Imagine a world where autonomous drones and other weapons controlled by algorithms wage wars — machines out-thinking machines in conflicts we can’t even comprehend. Where is human accountability? Where does it stop? At the same time, without advanced AI, our country and our allies would be vulnerable to the advanced AI capabilities of potential adversaries.
As for our prosperity, AI-driven systems could come to dominate — and even manipulate — financial markets. A small number of firms may end up controlling the machines that influence your mortgage, pension or employment. Technology has already displaced many forms of physical labour, and now even the domains of human thought, creativity and artistic expression are at risk. In a society whose social contract has long balanced the rewards of free enterprise with the protections of a social safety net, what new challenges will AI pose to this fragile balance?
AI could also undermine social justice. A 2019 study in the journal Science found that AI in U.S. health care was far less likely to recommend care for Black patients than White patients with similar conditions — not due to malice but because it mirrored past discrimination baked into the data. This isn’t conscious bias; it’s structural harm, and it’s invisible until it becomes systemic.
As for democratic rights, algorithmically curated content and social media manipulation threaten to undermine their meaningful exercise. Generative AI systems, like ChatGPT, Claude, Gemini, Llama, Grok and Copilot, are embedded in our browsers, messengers and productivity tools. These tools don’t just generate text; they exert influence. They direct information flows, frame debates and set emotional tones. With minor tweaks, bad actors can weaponize them, cheaply flooding public spaces with misinformation, “deepfakes” and propaganda. In a world where AI may undermine the development of critical thinking skills in young people, this is all the more dangerous.
Although one danger is that AI could become malevolent, a pressing concern is that it is indifferent. It doesn’t care. It optimizes. It reflects the world as it is, not as it should be. Without deliberate choices about the values and limits we encode, AI will default to the logic of profit, power and prediction.
There is another danger — subtler still — that the machines will misunderstand. Humans are walking contradictions. We are nothing if not inconsistent. We want both adventure and safety, privacy and convenience. We lie. We change our minds. We change our moods. We exaggerate. We regret. How can machines understand us when we contain multitudes?
Yet, we are poised to hand over not just our tasks but our judgments — even our ethics — to machines. How will machines balance the rights of individuals with the greater good, a dilemma that humans continue to debate in many contexts?
This is not just a technical issue. It is a political challenge, a moral test, a crisis of good governance.
So how should we respond?
This brings me to my second point: containment, not as suppression but as stewardship; not through fear but through responsibility. “Containment” means democratic control over the tools we create. It rests on three principles.
First, technical containment, referring to what happens in a lab or an R&D facility. In the context of AI, this includes air-gapped systems, secure sandboxes, controlled simulations, emergency shut-off mechanisms and robust built-in safety and security protocols. These tools help ensure a system’s safety and integrity, keep it free from compromise and allow it to be shut down if necessary.
Second, normative containment, a culture among developers and institutions that values ethics over velocity. Power without reflection is dangerous.
Third, legal containment: regulation that crosses borders, laws ensuring transparency, civil rights, liability, oversight, integrity, values, ethics and sustainability.
Let me be clear: regulation alone is not enough. A summit or a Silicon Valley press release is no substitute for binding rules. We must bring together government, industry, academia and civil society to co-create a Canadian vision of AI rooted in integrity, values and ethics, transparency and sustainability, not to mention fairness, inclusion and peace.
We must act proactively before we are forced to react: before the next discriminatory algorithm, the next job loss or the next erosion of trust.
To my third point: globally, governments are taking divergent approaches.
The European Union adopted a comprehensive Artificial Intelligence Act — a tiered, risk-based system with clear obligations for high-risk systems and enforceable transparency rules for generative AI.
The U.S. is taking a sectoral, market-led approach, encouraging cooperation, but with uneven results.
China is at once a leader in regulation and an outlier in practice. On paper, China looks proactive, regulating social media, banning crypto and publishing AI ethics guidelines. Its draft rules for large language models, or LLMs, go further than the West’s. But in reality, civilian AI is tightly controlled while military and surveillance AI operate with few limits. AI there is not just a tool; it is state power. That is the future we must avoid.
To close, where then does Canada stand?
Our most significant step was Bill C-27, the digital charter implementation act, which included the artificial intelligence and data act, or AIDA. AIDA proposed risk-based oversight for high-impact systems, including generative models. But AIDA didn’t pass before Parliament dissolved. Canada now lacks binding legal safeguards, leaving a critical governance gap.
In response, the government introduced a voluntary code of conduct for generative AI developers. It encourages fairness, transparency and accountability, but it is non-binding and unenforceable. It is no substitute for legislation.
More recently, Canada appointed its first Minister of Artificial Intelligence and Digital Innovation, announced by Prime Minister Carney on May 13, 2025. The Honourable Evan Solomon’s appointment signals growing recognition of AI’s importance, but the minister’s mandate is still undefined. According to a May 17 CBC report, the Prime Minister’s Office referred inquiries to the Liberal Party’s platform, Canada Strong, where AI is mainly tied to economic growth and public service reform. These are worthy goals, but they leave many questions unanswered.
Contrast that with the EU’s AI Act, which requires developers to disclose copyrighted training data, prevent illegal content generation and comply with General Data Protection Regulation-level privacy rules. Canada’s approach, via AIDA and the voluntary code, remains vague and toothless. The gap is especially clear in one critical area: privacy.
Privacy needs urgent attention. AI is transforming how data is collected, inferred and used. In Quebec, a 2022 ruling found AI-generated dropout predictions counted as personal information even when based on anonymized data. The Privacy Commissioner has called for mandatory privacy impact assessments for high-risk AI systems. This chamber should support that call.
We must ensure that AI serves people, not the other way around. That means enforceable standards for defining and regulating generative AI; mandatory privacy safeguards and impact assessments; public disclosure rules for high-risk applications; and independent oversight with enforcement power.
It also means broad and inclusive consultation with technologists, ethicists, labour leaders, Indigenous communities and Canadians.
Honourable senators, AI governance is a global challenge, but our response must be distinctly Canadian, rooted in dignity, equality, transparency and the rule of law.
AI is not just a tool. It changes how we make decisions, assign accountability and define human agency. We must meet this moment with clarity and resolve.
If we delay, we risk falling behind, letting digital systems evolve faster than our laws, leaving Canadians exposed to discrimination, misinformation and privacy violations.
Let us commit to making Canadian innovation a force not only for economic development but for justice and well-being.
In short, as science fiction becomes reality, let us remember the lesson of The Terminator franchise. In the words of John Connor, “There’s no fate but what we make for ourselves.”
Thank you, hiy kitatamîhin.
Thank you to you, and to Senator Moodie for launching this incredibly important inquiry. I hope others speak to it.
One question for thought: AI development comes from two distinctly different categories. One is state-driven, such as in China, where it is defined as upholding socialist principles — whatever that means — and the other is driven by private industry. As I understand it, it would be unlikely that one country could create legislation and rules that would govern an industry driven by private companies or by a state.
Has there been discussion or do you know about any discussion for creating a global agency that might have some of this regulatory capacity, such as the International Atomic Energy Agency? Would that be something that Canada should think about doing with like-minded states?
I think that’s exactly the kind of window you need to be talking about, and I fully agree with what you are proposing. We need to explore that. When you are pulling together those people for that summit, that pinnacle moment, you need to have the right people there.
I have a lot of faith in Canada’s abilities within this realm. We just need to get them together and give them a reason why, so that they are engaged and committed and see what happens if they fail. Then we talk about the “hows.”
I know our Prime Minister likes to talk about the “how” and not so much the “why.” I like to spend time talking about the “why” so that people get engaged and committed and know what they are fighting for. But I think, exactly as you said, it is a good place to start.