Need for Safe and Productive Development and Use of Artificial Intelligence
Inquiry--Debate Adjourned
November 26, 2024
Rose pursuant to notice of June 13, 2024:
That she will call the attention of the Senate to the need for the safe and productive development and use of artificial intelligence in Canada.
She said: Honourable senators, I rise to begin this inquiry on the safe and productive development and use of artificial intelligence here in Canada.
Artificial intelligence, or AI, is the collection of technologies that allows machines to perform tasks that are usually associated with human intelligence — tasks such as learning, perceiving and creating. AI holds the potential to change our society and economy in many ways, unleashing productivity and innovation — for many, evoking excitement and promise for the future.
But there is a growing concern that we risk losing control of this very powerful tool, that people will soon be left behind and that some may even be harmed by the uncontrollable evolution of this emerging technology.
Colleagues, that is why I put forward this inquiry. Artificial intelligence is already having a wide and pervasive impact on every aspect of our lives. We cannot be mired in inaction. We cannot let this technology move forward while standing on the sidelines and hoping for the best.
We, as senators, legislators and leaders, have the duty and the tools to actively examine, understand and have an impact on the development and use of AI, especially to ensure that it is safe and productive.
We must reflect upon the lessons of the past. We now know how dangerous the effects of social media have been. Despite some of the many good things that have come from it, social media has had extraordinary influences and, in some instances, negative impacts upon our culture, democracy, physical and mental health, economy and more.
What if there were efforts 20 years ago to understand how this technology could evolve and where there needed to be protections and cautions? What would governments and parliaments do if they had the opportunity to start over again? While we can’t go back in time, we can focus on the future. We can all agree that artificial intelligence is a pivotal, transformative technology like electricity, antibiotics or the internet before it. It will change everything about our way of life.
While we work to unleash AI’s potential for the economy and society, we must understand where the opportunities lie to protect Canadians from the harms of AI.
This inquiry is an important step in our dialogue in seeking our collective understanding of the impact we would seek to have as parliamentarians. I urge as many of you as possible to participate.
AI is impacting our health, culture, human rights, parliaments, democracies, health care, the arts, education, scientific research, economic growth, national defence and security, international relations and many more sectors. I trust that you will have much to say, honourable colleagues, about how AI is impacting those and other sectors as well as the benefits and risks that this landmark technology presents.
But for the remainder of my time, I will focus on the governance of AI. The rapid advancements in AI come with significant challenges related to ethical considerations, accountability and its disruptive impacts upon society. AI governance has emerged as a critical area of focus with the ultimate aim of mitigating the risks of AI while maximizing its benefits.
Governance related to the technology is the responsibility of everyone — governments, regulatory bodies, industry and developers themselves. Already, we have seen the emergence of existing governance mechanisms; for example, AI technologies may fall within the boundaries of existing laws and regulations. Legislation related to discrimination, data protection and privacy already exists. As well, AI is already subject to existing regulatory compliance regimes. Additionally, industry will look to govern AI themselves, to manage and mitigate risks, so that, with their products, they ensure their commercial interests are protected and that their core services function well.
Many have sought to apply an ethical lens to AI. For example, UNESCO has proposed a human rights approach to AI based on 10 principles, including transparency, “explainability,” human oversight, and multi-stakeholder and adaptive governance and collaboration. Also, some major technology firms, such as Microsoft, Lenovo and Salesforce, have adopted UNESCO’s AI ethical framework. Yet core to any and every effective governance mechanism for AI must be transparency and accountability.
When considering AI technologies, it’s important to understand the following: Where does the data that trains AI systems come from? How is it controlled and maintained to ensure accuracy, quality and privacy? Who is involved in the development and creation of AI systems? Is there diverse representation around their decision-making tables? Are there challenge functions embedded within the organizations? It is crucial to identify mechanisms to ensure true transparency and accountability in the development and deployment of AI.
By now, the enormous challenge before us must be increasingly obvious to all of us. How do we govern a technology so complex, so ubiquitous and developing so rapidly, even as we speak? We can say without a doubt that doing nothing is really not an option, but it is clear that governance will be as complex as AI technology is.
What is Canada doing? The Canadian government has made a few moves to regulate AI both within its own operations and within Canadian society. The government introduced the Guide on the Use of Generative Artificial Intelligence, which serves as a crucial resource for federal public service organizations. This guide outlines key principles and practices that should be followed when implementing generative AI systems. It emphasizes the significance of ethical considerations and accountability. It focuses on the principles of fairness, security and relevance.
Another significant step taken by the Canadian government on AI governance is the introduction of Bill C-27. Officially known as the Digital Charter Implementation Act, 2022, it represents a significant legislative effort to address the complexities and challenges posed by digital and emerging technologies in Canada, aiming to enhance privacy protections for Canadians while promoting innovation in the digital economy. Bill C-27 proposes measures to ensure accountability and guidelines to mitigate risks. It addresses specific concerns on generative AI and seeks to safeguard individual rights and values while, at the same time, recognizing the need to foster innovation. This bill represents an important evolution in our governance environment.
The government also introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which aims to serve as a guideline for organizations developing generative AI technologies and emphasizes the importance, again, of ethical principles and responsible innovation. Signatories to this code commit to working toward accountability, safety, fairness, equity, transparency, human oversight and monitoring, validity and robustness in AI systems. We’re seeing a notable number of signatories to this code, including TELUS, Lenovo, the Council of Canadian Innovators and IBM.
Just this month, the Canadian government launched the Canadian Artificial Intelligence Safety Institute, a new research institute expected to advance our understanding of the risks associated with AI and to develop measures to address these risks. It will collaborate with other global safety institutes as part of the work that it undertakes.
Globally, the governance of AI is also an emerging priority. International organizations are increasingly involved in examining the ethical considerations that we’ve talked about. The Organisation for Economic Co-operation and Development, or OECD, has an AI Incidents Monitor that I encourage you all to visit. This monitor reports any event where an AI system produces a negative outcome, whether due to biases, errors or misalignments with human values. Such tools are valuable because tracking incidents allows organizations to learn from their experiences and to refine the AI policies and practices accordingly.
The World Economic Forum published a report on governance of generative AI earlier this year, outlining the best practices for managing this technology. In fact, they have done a significant amount of work on AI governance and have created tools to help those interested consider the complex impacts of AI on various sectors of society.
The UN High-level Advisory Body on Artificial Intelligence published its final report in September entitled Governing AI for Humanity. This report serves as a call to action for a balanced approach to harnessing AI’s potential while addressing its challenges. Crafted by a diverse team — government, tech and human rights leaders, all of whom were engaged with more than 2,000 experts worldwide — it emphasizes the need for a global framework to ensure the responsible and ethical development of technologies.
Finally, various governments are working on AI regulations. The EU offers an important example: it recently passed the Artificial Intelligence Act, which is probably the most significant legal development on AI regulation globally at this point in time. The act establishes a comprehensive framework on AI and includes the creation of an EU AI office, which will oversee the implementation and enforcement of this act, including the power to impose significant penalties when regulations are not respected.
Colleagues, to conclude, the governance of AI is an enormous but necessary enterprise. Today, I’ve given you a very high-level scan of some of the efforts currently ongoing, but I’d like to leave you with my greatest concerns.
First, AI’s development is significantly exclusionary. Access to this technology and its benefits is so far reserved for significantly affluent countries within the Global North. This concerns me because it means that, almost certainly, AI will have unintended consequences that will hit those already impacted by systemic marginalization and racism. It also means that these people will not gain from the productive and innovative benefits of AI in an equitable way.
Second, I’m concerned about the lack of transparency within the industry. A 2022 MIT Sloan Management Review article, which surveyed over 1,000 managers responsible for the development and deployment of AI globally, gave us a peek inside: only 25% believed they had fully mature governance processes in place. This reflects a troubling reality that private industry — the principal generator and user of these tools — has a lot to improve and a long way to go. We need to be able to see into these processes in a way that both respects and maintains the commercial interests of industry while, at the same time, ensuring they are challenged to ensure their work is responsible.
Finally, I’m concerned about the implementation of AI policies within Canada and globally. We need policies that are aligned, consistent and carry appropriate penalties for non-compliance. Without this, we risk stifling industry or allowing it to run free to our own detriment.
Colleagues, it’s almost bedtime. I thank you for your attention. The safe and productive development of AI is one of the most important issues of our time. I hope to hear from as many of you as possible on this inquiry and look forward to our ongoing debate.