
THE STANDING SENATE COMMITTEE ON TRANSPORT AND COMMUNICATIONS

EVIDENCE


OTTAWA, Tuesday, November 25, 2025

The Standing Senate Committee on Transport and Communications met with videoconference this day at 9 a.m. [ET] to examine and report on the opportunities and challenges of artificial intelligence (AI) in the information and communication technology sector.

Senator Larry W. Smith (Chair) in the chair.

[English]

The Chair: Before we begin, please take a moment to review the cards placed on the tables in the committee room to familiarize yourself with the guidelines for preventing incidents related to sound feedback. Please keep your earphones away from all microphones at all times. Do not touch the microphones. Their activation and deactivation will be controlled by the console operator.

Finally, avoid handling your earphones when the microphone is activated. Earphones must remain in your ears or be placed on the sticker provided for this purpose at each seat. Thank you all for your cooperation.

My name is Larry Smith. I’m a senator from Quebec, and I’m the chair of this committee. I’d like to ask my colleagues to introduce themselves.

Senator Simons: Paula Simons from Alberta, Treaty 6 territory.

Senator Mohamed: Farah Mohamed, Ontario.

[Translation]

Senator Cormier: René Cormier from New Brunswick.

Senator Arnold: Dawn Arnold from New Brunswick.

Senator Quinn: Jim Quinn from New Brunswick.

Senator Aucoin: Réjean Aucoin from Nova Scotia.

[English]

Senator Lewis: Todd Lewis, Saskatchewan.

[Translation]

Senator Miville-Dechêne: Julie Miville-Dechêne from Quebec.

[English]

Senator Dasko: Donna Dasko, senator from Ontario.

The Chair: I would like to welcome everyone with us today as well as those listening to us online on the Senate’s website, sencanada.ca. We are meeting today to begin our study on the opportunities and challenges of artificial intelligence, or AI, in the information and communication technology sector.

I would like to introduce our first panel. From Innovation, Science and Economic Development Canada, or ISED, we have: Jordan Zed, Senior Assistant Deputy Minister; Samir Chhabra, Director General, Marketplace Framework Policy Branch; Andre Arbour, Director General, Telecommunications Policy Branch; and Patrick Blanar, Director, Marketplace Framework Policy Branch. Also on the panel is Olivier Blais, Co-founder and Vice‑President, Artificial Intelligence, Moov AI. He is also a member of the AI Strategy Task Force and Co-Chair of the Advisory Council on Artificial Intelligence. Thank you all for joining us today.

Witnesses will provide opening remarks of approximately five minutes, which will be followed by a question-and-answer session with senators.

[Translation]

Jordan Zed, Senior Assistant Deputy Minister, Innovation, Science and Economic Development Canada: Good morning, Mr. Chair and members of the committee.

My name is Jordan Zed, and I serve as Senior Assistant Deputy Minister at the AI secretariat at the Department of Innovation, Science and Economic Development Canada. Thank you for the opportunity to speak with you today.

Canada is at an important juncture as we navigate global challenges and rapid technological change. The refresh of Canada’s AI Strategy provides an opportunity to strengthen our approach to one of the most transformative technologies of our time, and one that Canada’s research community has played an outsized role in creating.

[English]

While current government strategies have supported the curiosity-driven research that has brought us this tool kit and supported development and commercialization for thousands of Canadian businesses, Canada lags on adoption and commercialization. Meanwhile, other countries have massively increased investment in and focus on AI-driven transformation. Canada needs to adapt to remain competitive and harvest the benefits of this technology while also ensuring appropriate governance and that benefits accrue to all. This work must also reflect Canada’s fiscal context, so strategic use of resources will be essential.

The AI Strategy Task Force is playing a key role in informing this process, and the Government of Canada values the input provided, including that of Olivier Blais, who is with us this morning.

The task force brought together 28 experienced leaders from across Canada and from a variety of perspectives to help shape a strategy that is practical, scalable and aligned with Canada’s priorities.

Since the announcement of the refreshed AI Strategy Task Force by the minister a few weeks ago, there has been significant interest and engagement. We appreciate the commitment of task force members during this period of change. Other countries are undertaking similar exercises, and our objective is to develop a clear and actionable plan that positions Canada effectively.

[Translation]

Each member was encouraged to provide individual perspectives and ideas. While collaboration was welcome, the process was designed to ensure diverse and ambitious thinking rather than consensus-driven outcomes. The minister also had the opportunity to follow up with the task force to consider and think through the proposals in detail.

This work was a priority for the government, and while timelines were ambitious, we were confident the task force would deliver meaningful recommendations.

[English]

That is exactly what we saw: a very robust set of recommendations from across the task force membership, in addition to 11,300 submissions by the general public via the ISED portal, which is a historic number in terms of the interest and level of engagement. We are very pleased with the level and quality of submissions received across the board. Thank you very much for the opportunity to be here.

[Translation]

Samir Chhabra, Director General, Marketplace Framework Policy Branch, Innovation, Science and Economic Development Canada: Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to be here with you.

[English]

I want to briefly discuss today two key aspects of your study, namely, the implications of AI for Canada’s copyright framework and the impacts of AI-generated misinformation and disinformation and “deepfakes” on Canadians and on the marketplace.

On the implications of AI for copyright policy, the government has been engaging with stakeholders on these questions for several years. Most recently, between October 2023 and January 2024, the government consulted Canadians through the Consultation on Copyright in the Age of Generative Artificial Intelligence, prompted by the emergence of ChatGPT and other large language models, or LLMs, proliferating across the market.

The government sought Canadians’ views on three policy areas: the use of copyright-protected content in the training of AI models, notably for text and data mining, or TDM; authorship and ownership rights related to AI-generated content; and questions of liability, notably when AI-generated content infringes on an existing copyright. The consultation received over 1,000 written submissions in response to the online consultation form.

A majority of Canadians engaged in this discussion were from the cultural sector, and they tended to highlight the need to ensure that creative works used in AI training should only be used with consent, due credit and compensation. For their part, the technology industries that participated in this consultation voiced concerns that the lack of clarity in the current copyright framework could hinder Canada’s competitiveness in establishing an AI development industry.

The government wants to ensure that the Copyright Act supports the development of AI while also protecting creators and copyright holders. We are closely monitoring the marketplace, including ongoing litigation and the developing licensing market for the use of copyright-protected content in AI training.

Beyond copyright issues, AI-generated content — especially realistic synthetic content commonly known as “deepfakes” — raises many other concerns for Canadians. These concerns include the generation of deceptive content that can impact public trust, democratic institutions or public safety, as well as fraud and deception on a more targeted level. They also include the generation of realistic content featuring a person’s likeness without that person’s consent.

A range of government departments are currently working on these issues and responding to these concerns, including through the Digital Citizen Initiative at the Department of Canadian Heritage and work at the Privy Council Office to safeguard our democratic institutions and electoral processes.

For our part at ISED, we’re advancing a number of initiatives relevant to these issues. First, I’ll highlight the Canadian Artificial Intelligence Safety Institute, or CAISI, which was launched in 2024.

CAISI brings together top Canadian researchers inside and outside government to advance the science of AI safety. CAISI collaborates with a network of related partner institutions around the world, including in the U.S., Europe, Asia and Africa, to help drive development and adoption of safeguards and standards to help mitigate risks from AI development and deployment, including risks from “deepfakes” and AI-generated disinformation.

Finally, I will highlight another initiative, known as the Voluntary Code of Conduct on Advanced Generative AI Systems, through which the government has encouraged AI developers and those deploying AI systems to put measures in place to address risks associated with synthetic content, such as by developing or implementing reliable and freely available methods to detect content generated by their systems. Thank you very much again for the invitation to join you today, and we look forward to your questions.

The Chair: Thank you very much, Mr. Chhabra.

[Translation]

I would now like to invite Mr. Blais to deliver his opening remarks.

Olivier Blais, Co-founder and Vice-President, Artificial Intelligence, Moov AI, as an individual: Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to be here and speak with you today.

[English]

I think I’ve been introduced, but suffice it to say, Moov AI has recently been acquired by Publicis Canada, part of the largest communications group.

I’ve spent the past decade helping Canadian organizations adopt AI responsibly across different sectors, and I’m here to provide a practical overview of what AI means for the information and communications sector.

First, AI has moved from something nice to have to a core component of the digital landscape. AI is no longer a peripheral tool; it is becoming core infrastructure in how information is created, distributed and moderated.

This shift brings significant opportunities for productivity and innovation but also new responsibilities for safety, copyright and public trust.

If we focus on AI use cases in content creation, distribution and processing, we see that AI assists with drafting, summarizing, translation and ideation. AI can also generate any type of content, such as audio, video and images.

Humans usually do and should remain in full control of judgment, tone and accuracy. AI simply accelerates the work.

AI helps deliver the right content to the right audience more efficiently, reducing waste and improving relevance. It can also personalize content and experience so that consumers get the right message at the right time.

Finally, AI automates transcription, metadata creation, accessibility support and detection of harmful content, allowing teams to focus on higher-value work. For example, three years ago, Moov AI built an AI system to create transcription capabilities for the entire archive of CBC/Radio-Canada.

Let me focus on a few risks that have been mentioned previously. First, let’s focus on copyright and intellectual property and three areas that require special attention.

First is training data rules: Creators need transparency and fair dealing.

Second is ownership of AI-generated outputs: Businesses need certainty that content they create with AI is legally theirs to commercialize.

Third is provenance and attribution: Emerging technical standards will help identify how content is produced.

Canada has an opportunity to align these policies with emerging standards, protecting creators while enabling innovation.

Regarding disinformation, misinformation and “deepfakes,” generative AI enables the rapid production of highly realistic and highly targeted misinformation. The threat is not only realism; it is scale and personalization.

Technical safeguards exist, including watermarking and authenticity standards, but their adoption across platforms will be essential for protecting Canadians.

Practical issues facing Canadian businesses are indemnity, contract addendums and copyright clarity. I want to highlight these three specific issues currently affecting Canadian organizations as they adopt AI.

Companies increasingly use AI assistants to transcribe meetings, summarize conversations or support account management. These tools don’t generate public content, but they handle sensitive information and can introduce errors.

Businesses need clarity on who is responsible if an AI transcription is wrong, how data is protected and how risks are shared between vendors and users. Without reasonable indemnity frameworks, companies hesitate to deploy tools that could significantly improve productivity.

Also, regarding contracts, most Canadian organizations operate under Master Service Agreements written long before generative AI existed.

Instead of renegotiating every contract from scratch, businesses increasingly need simple, standardized addendums to clarify their level of AI usage.

And finally, I think the most important one would be copyright clarity for AI-generated content. This is a growing pain. AI‑generated content today spans a full spectrum, from completely original content created by AI to content that is modified slightly and content where teams put significant creative effort into refining or enhancing AI output. Right now, businesses lack clear guidance on whether they fully own the rights to AI‑assisted creations, whether modifications to human content change ownership and how much human contribution is required for protection.

This legal grey zone is slowing down innovation, especially in advertising, media production and digital communications, because companies don’t want to invest in content if they cannot be certain they own it.

Providing clearer guidelines around copyright for AI‑generated or AI-assisted content would unlock enormous economic value while giving creators and businesses confidence to use these tools responsibly.

On that note, thank you very much for your time, and I look forward to your questions.

[Translation]

The Chair: Thank you, Mr. Blais.

We will now move on to questions from senators.

[English]

I’d like to advise senators that you will have approximately five minutes for questions. If we have time, we’ll look at a second round. Right now, we have a lot of questions to be asked. Please ensure you alert our clerk if you have not already done so.

Senator Dasko: Thank you, witnesses. This is our first meeting on this topic, so we really are getting our feet wet. Speaking for myself, I have a lot to learn; that’s why your input is so important.

Mr. Zed, you talked about the AI task force bringing together 28 leaders. Tell us what the task force found and what it concluded. That would be helpful.

Mr. Zed: Thank you, senator. I would say a couple of things. What was very interesting about the task force was they really came to the subject from a variety of disciplines and backgrounds. We have academics, business leaders and people with experience in and out of government at various points.

Interestingly, there wasn’t a consensus, necessarily, on any one subject. Those ideas will help inform the various parts of a strategy that we’re working through at the moment. We are in the process of synthesizing the hundreds of pages of the reports of task force members and the inputs of the public consultations that I referred to off the top and working interdepartmentally, recognizing this is something that impacts the government writ large and our economy more broadly.

So I wouldn’t say that there were many things that everyone landed on, but certain themes I observed included increasing national literacy on this subject and doing a lot more to address the gap between the understanding of what the technology actually is and its capability in various contexts, including in our companies, businesses and society more broadly. How can we take a concerted approach to ensuring that we know more, from CEOs and executives all the way down? Clearly, that was a common element.

The other piece I observed was an emphasis on the importance of adoption, and doing it thoughtfully and responsibly, recognizing that these are important tools at our disposal but asking how we do it in a thoughtful, responsible way. There is a lot of discussion about building trust, safety and security, as well as the connection between that and adoption.

Senator Dasko: You said Canada lacks progress on commercialization of AI, so we would be looking to the private sector to do this; they would be the leads. What is the appropriate role for government, given that the private sector is in the lead in taking this up?

Mr. Zed: It is an excellent question.

What sets Canada apart in this space, in large measure, is that we were the first to have a pan-Canadian AI strategy when we established it many years ago, and it was based upon investing in research. We have been foundational globally in research and citations. A lot of that stems from Canadian researchers and leaders, one of whom was recently awarded a Nobel Prize. There are many, many others from those schools and across the country.

The comment I made about commercialization and scaling is a broader one and not just specific to AI, although it includes AI. I think you’re right that the role of the private sector in this context is absolutely critical, so figuring out what the role of the federal government is — to encourage, to incentivize, to perhaps take away roadblocks, to facilitate increased literacy, for example — those are some of the areas that we’re looking at very closely in understanding where and how the federal government can lean in.

It is also about where, in some cases, the federal government can lean back. It is understanding where and how the combination of those things can make a difference in improving our adoption, scaling and commercialization.

Senator Lewis: Thanks for your comments so far.

You talked about the lack of copyright protections and the litigation going on now. Is government going to be able to keep up with the speed of business and even the speed of litigation? Is the new legislation or future required legislation going to have to change constantly as the courts make decisions on this? Where do you see that playing itself out?

Mr. Chhabra: Thank you for the question, senator.

It is an open question right now whether Canada’s Copyright Act as it stands offers full protections in the case of text and data mining activities. There are a number of different perspectives on that. There are 13 different suits in the courts in Canada that we’re tracking right now that are raising those questions. This is a space that remains to be decided by the courts.

When we conducted the consultations I referenced earlier, in 2023-24, the creative community was generally of the perspective that changes to the Copyright Act were not required and that it was sufficiently robust to protect their rights. By contrast, some industry stakeholders were looking for more clarity around an exception, specifically one that would enable text and data mining.

There are, of course, existing exceptions within the Copyright Act, including for fair dealing, that are going to be tested in the courts to see whether that provision applies. Of course, there are cases internationally, as well, that are testing the limits of different copyright frameworks around the world to see what is permissible and under what conditions. These acts tend to be quite technical in nature and focus on the nature of copying, which, in itself, has changed dramatically over a number of years. So the definitions, stipulations and case law that underpin all that work will continually be tested.

That is the case for all of our marketplace framework laws: They are tested, evolve and develop with case law over time. We are monitoring closely to see where the cases come out and whether there is, then, a clear cause or need for government intervention to consider updating the law. That is why we undertook the consultations themselves: to get a sense of the broader public’s views, the creative community’s views and industry’s views to see where this issue was going.

Senator Lewis: You mentioned the voluntary code of conduct. How much uptake has there been for that? Have you been able to track it?

Mr. Chhabra: Thank you again for the question, senator.

To date, we have 46 signatories to that voluntary code of conduct, ranging from smaller organizations within Canada to large multinationals. We have also engaged on that voluntary code of conduct with partner jurisdictions around the world that are looking to undertake similar activities in setting up similar voluntary frameworks.

Most recently, in the spring of this year, we released a manager’s guide, or tool kit, to support smaller organizations looking to deploy AI tools in a responsible manner. The uptake, to date, has been quite robust. It’s a useful tool kit, and we get questions about it quite often, including from organizations that are looking to join and sign up. I expect to see more join over time, as well.

Senator Wilson: I am going to yield my spot, since there are so many people on the list.

[Translation]

Senator Cormier: Thank you and welcome.

My first question has two parts.

The first part of my question is about the task force. Did the cultural sector sit on the task force? You carried out a 30-day consultation, which is fairly short. I would like to know about that.

The second part is for Mr. Chhabra.

During UNESCO’s recent international conference on culture, Mondiacult, ministers of culture committed to protecting the rights of artists, creators and rights holders in the digital environment; fighting against unethical uses of artificial intelligence; recognizing human creativity; supporting the discoverability of multilingual cultural content on digital platforms; involving the cultural sector in the development of policies related to artificial intelligence; and so on. Given all these important matters, what is your work plan? How do you intend to address these issues?

Mr. Zed: Thank you.

First, I would say that some members of the task force focused on that and made a number of suggestions and recommendations.

Senator Cormier: Were those people from the cultural sector?

Mr. Zed: They had experience in the cultural sector. They looked at some of the points you raised, including how laws should respond to the development of artificial intelligence.

[English]

There were a number of different perspectives on what to do and where to focus. I would say that copyright and intellectual property did feature as parts of a number of the proposals that came forward. These will all be public.

[Translation]

That will give everyone an opportunity to see everything that the task force members raised in that context. We’re in the process of translating that work and addressing some issues so that everyone can access it on the website. That will give everyone an opportunity to see and test the ideas that were raised.

Senator Cormier: The Coalition for the Diversity of Cultural Expressions talks about three essential conditions — authorization, remuneration and transparency — as being the basic principles for ensuring that culture and artists are properly represented. Are those concepts of authorization, remuneration and transparency at the heart of the discussions that you had or the results of those consultations? Mr. Blanar?

Patrick Blanar, Director, Marketplace Framework Policy Branch, Innovation, Science and Economic Development Canada: Good morning.

Absolutely. Those are key considerations. The acronym that gets used more and more is ART, which stands for authorization, remuneration and transparency. That’s something we are very aware of. It was raised in many submissions during our consultations in 2023 and 2024. It’s an issue that we are constantly working on, one we understand deeply, and we appreciate it being raised.

To answer your question about UNESCO, the Minister of Canadian Identity and Culture is in charge of that, but copyright is a responsibility shared between two departments, namely, Innovation, Science and Economic Development Canada, or ISED, and Canadian Heritage. We always work hand in hand when it comes to policy development. That means that cultural issues will always be very important and will always be considered in the development of any policy that we put forward.

[English]

Senator Simons: My first question is for Mr. Chhabra. I was a journalist for 30 years before I joined the Senate, so I’m very sensitive to the issue of how LLMs train themselves. For me, there’s an inherent tension because ChatGPT is so stupid and so full of misinformation — you can’t call it disinformation because ChatGPT doesn’t trick you on purpose; it’s too dumb to do that. If we don’t train it with accurate information, it becomes dumber. At some level, I want LLMs to learn from credible articles and the best writing. At the same time, we need to protect the intellectual property rights of the human creators who created that content. How do we find a way to enable it to learn from the best without exploiting the work of human creators?

Mr. Chhabra: Thank you for the question, senator. It’s certainly a very important question, and it seems to be a live question in the marketplace today.

One of the things that we’re starting to see is LLM companies, including OpenAI and others, trying to differentiate their products on the basis of quality. That competitive effect occurring right now has led some of them to engage in direct licensing with content providers as a way of demonstrating the calibre of the inputs used in training and of ensuring that the outputs they’re providing are valid, relevant and interesting and take advantage of the best content that’s out there.

There are a number of licensing agreements that we’re tracking across the market. Those include the agreement OpenAI entered into with the Associated Press, and Axel Springer signed another major one. There continue to be examples of this happening across the board and in different jurisdictions. That’s one element that we’re seeing: Companies in this space that want to take advantage of specific content are reaching out and developing licensing agreements in certain circumstances. That’s not, of course, the entirety of the market, but it is a trend that we’re watching quite closely.

Senator Simons: Mr. Blais, I know it’s hard when you’re the only person not in the room, but I want to ensure you receive a question too. You raised an issue that I hadn’t really thought about before, which is copyright protection for AI work products. It’s tricky because copyright is for the artist who does the creative work, so how would you imagine a copyright regime that could protect work created by an AI program?

Mr. Blais: Thank you for asking the question. That’s a good question.

The goal is more to protect users using internal tools. For instance, having guarantees that the content generated within the parameters of an organization can stay private and will not be used as training material. This is something that is a big concern for end users. Right now, we’re trying to improve productivity by asking organizations to use more and more of these tools, but this is one of the main concerns causing some employees to be reluctant to use generative AI. So it’s about ensuring that different tools remain private in the context of an organization. It was more in this area, and I probably mixed the two topics together. I’m sorry about that.

Senator Simons: I see. That’s very interesting.

This goes to the larger question: To what extent do you think Canadian users understand that when they’re playing with these things — and they think it’s like a game and a fun thing — they are surrendering their own photographs or artwork to these AI companies? I shared a taxi to the airport last week with a senator who is not in this room, who showed me all the ways he had animated the pictures of his ancestors, and I thought, they have now sent all their family photos to some company. Do we need to do more to improve the literacy of Canadians so that they understand the risk that they’re running?

Mr. Blais: Of course. KPMG released a study a few weeks ago measuring the level of AI education by country, and we ranked 44th out of 47. We need to educate more on the risks and also on mitigation. For instance, when we do something that is more sensitive, there are tools that guarantee some kind of security, but people don’t know how to properly use these different tools, what needs to be monitored, what needs to be reviewed afterwards and what is probably not worth reviewing. How to properly use the tools, and the risks of using them, are not very well known. Because of that, there is risk, and people also underutilize these tools because, by default, there are concerns, and they lack trust in the different tools.

Mr. Zed: I agree with everything Olivier said. As a result of people’s fears about what might happen and how the data they’re inputting might be used, there’s a lack of adoption. To Olivier’s last point, there’s a connection between the two. If we want to get to broader adoption, obviously, trust is fundamental, and part of that is mitigating some of the risks that Olivier was talking about.

[Translation]

Senator Miville-Dechêne: I will speak to the representative of Innovation, Science and Economic Development Canada.

You are responsible for both promoting and regulating artificial intelligence. However, some would say that it’s very difficult for someone to regulate a product when they are promoting it, and that the regulation often gets in the way of those who want to promote that product. Does the department have a conflict of interest in that regard? I’m particularly thinking of the difficulties that you have in drafting a regulation or a bill on copyright. You seem to be saying that you are listening and looking, but at some point, regulations are needed. Is it a good idea or a dangerous idea to entrust those two seemingly contradictory missions to the same entity?

Mr. Chhabra: Thank you for the question. Since it’s quite technical, I will answer in English.

[English]

We’re at a very interesting point in the development of AI. It’s advanced enough to have captured the public imagination and to have taken off in terms of utilization. It’s advanced enough to have prompted certain jurisdictions to respond by putting rules in place, including the EU’s AI Act. It’s also nascent enough to have recently caused the EU to take steps back in its regulation of AI, because there were concerns about overregulation and about approaches and tools that were ill-suited to the ultimate outcome being sought.

In 2022, Canada attempted to bring forward, through Bill C-27, the artificial intelligence and data act, but that bill died on the Order Paper in January 2025. So there has been some work in Canada to consider what the risks are around artificial intelligence and how to address them.

As we discussed during that time, particularly through committee deliberations and others, there are a number of different levels of government and offices around government — regulators that exist today — that will all be wrestling with issues generated by AI in their own context. These span from the Commissioner of Competition; to considerations within the copyright framework; to considerations within the financial sector, for example, through the Office of the Superintendent of Financial Institutions and the Department of Finance Canada; to consideration of these issues within Transport Canada in the context of autonomous vehicles; to Health Canada in the context of medical devices and AI — these issues are going to be everywhere. AI is going to be a ubiquitous technology that underpins so much that multiple regulators will be, and already are, wrestling with how to deal with it.

The question of whether to advance a horizontal piece of policy that attempts to wrestle with AI as a whole is an open one that the government will continue to assess and monitor to see whether work needs to be done in that space. However, I would not say it’s an entirely settled question; it’s one on which we continue to engage broadly and listen to feedback. The pieces we’re talking about today, specifically around copyright, are, of course, an active element of that. There are other considerations around what types of regulatory approaches would be necessary.

[Translation]

Senator Miville-Dechêne: My question was a bit more direct than that. Aren’t you in a conflict of interest by trying to both promote artificial intelligence and regulate it in your department, as opposed to having separate entities do that?

Mr. Chhabra: Thank you again for the question.

[English]

I don’t think we sense a conflict as much as we sense that there are a number of considerations to be taken into account. I don’t think this is a binary issue of on or off — more AI or less AI; it’s a question of how. It’s about the impacts: understanding those impacts and managing them effectively.

Because of the ubiquity of the technology that I referenced earlier, there are going to be use cases for it that are very powerful, impactful and economically and socially beneficial. There are also going to be use cases for AI that are going to be more challenging, that we will need to wrestle with to understand where the risks are emerging and how we most effectively address those risks, whether that be within existing regulators and existing departmental mandates that may be outside of ISED or whether that be different levels of government.

Understanding the broad sweep of what is happening and how the technology is being utilized in real time is an important part of what we’re doing.

The Chair: We need to move on because we have four senators and only 12 to 14 minutes left.

Senator Mohamed: I’ll ask my question to Mr. Zed.

I was glad to hear that Canada was ahead when we had our Pan-Canadian Artificial Intelligence Strategy. We’ve been noted on the world stage for some of our advancements, but the impression many have is that we’re lagging behind the U.S., the EU and China. As specifically as possible, perhaps not including data sovereignty, can you help me frame where we are ahead and where we are lagging behind?

Second, you mentioned that government can remove roadblocks. Can you share some of those roadblocks with us, please?

Mr. Zed: Absolutely.

With respect to your first question — where are we ahead and where are we behind — I would say we’re ahead, and have been for some time, on the research side. The schools and the national institutes have really put Canada on the map, in Toronto, Montreal and Edmonton in particular, because the pioneers of reinforcement learning and machine learning are from there and have essentially been developing schools that have spun off some pretty extraordinary and advanced thinkers who are engaging internationally very effectively. There are many well beyond those three national centres, as well. In Western Canada and Atlantic Canada, there have been some really important developments and contributions on the research side.

Where I think we are behind is on the commercialization and scaling side. That was really a consistent theme that emerged in the context of the task force reports and in the discussions that followed, which were convened by the minister. It really was about wonderful ideas emerging from Canada, as well as incredible start-ups and innovation, which are then plucked and somehow moved elsewhere or drawn in by other countries or other opportunities.

Part of what the conversation has been about is, first of all, how we continue to attract world-class talent here in a very competitive global environment. Are there certain incentives that can be produced? I know there’s a talent attraction piece that a number of ministers, such as Minister Joly and others, are pursuing. The question for us in an AI context is around what specific sets of things can be done in AI that can be part of that broader talent attraction piece. That is going to be important.

Part of it, too, is ensuring we have the skills and training we need here to equip our organizations, companies and even the Government of Canada with opportunities that allow us to retain that talent here.

There’s a connection to skilling, upskilling and retraining that is absolutely part of the conversation.

Additionally, Canada, in some areas, is doing well in AI, and we’re still very much seen as a credible leader in the space internationally. Samir knows that via the various international engagements he’s pursuing, as well. However, others have caught up relative to where we started, so the level of investment and focus by others has increased exponentially, which is why I made the comment off the top.

As to your second question, please remind me again what it was.

Senator Mohamed: It was with respect to roadblocks.

Mr. Zed: Thank you.

There was some discussion around the combination of incentives and skilling. Skilling is more about leaning in, while removing roadblocks is about facilitating an economic environment that allows companies to grow and scale appropriately in Canada. A number of considerations around talent seem to be on the table — economic considerations, fiscal policy and that sort of thing.

Senator Arnold: I have a simple question for Samir first. You mentioned the AI data bill. Are you concerned that the bill died?

Mr. Chhabra: I’m not concerned that the bill died. It was a proposal from the previous government and was being considered in the House when Parliament was prorogued. The government is actively assessing whether further interventions are required and whether that mix of interventions could include voluntary approaches, standards-based approaches and approaches taken in other jurisdictions.

Since the artificial intelligence and data act was introduced as part of Bill C-27 back in 2022, a lot has happened. As mentioned earlier, the EU Artificial Intelligence Act was developed, promulgated and put into place. Now we’re seeing issues where the EU is starting to walk back certain elements of it. We have seen other jurisdictions make attempts at the national and sub‑national levels, and we’ve seen different regulators pick up issues related to AI within their own ambits, as well.

It’s a fair question for the government to continue to assess and monitor whether horizontal legislation or a regulatory framework is necessary or more sector-specific approaches would be warranted. The U.K., for example, has been pursuing for some time more of that sector-specific approach.

What are the risks associated with AI in the financial services industry? What are the risks associated with AI in the transport industry? The aim is to enable their regulators to assess and make determinations around regulatory changes that may be required.

I think we’re seeing a lot of jurisdictions wrestle with these things in different ways. I don’t think one or the other is necessarily right or wrong at this stage.

Senator Arnold: Thank you for that. On that broad sweep of what is happening and there not being a right or wrong, we’ve been grappling with this in the Senate, too. I’m on a number of committees all discussing this, trying to figure out how to approach it. Do you feel there’s a willingness to work together? Are you seeing silos emerge, or are you seeing that people are open to working together to figure this out?

Mr. Chhabra: My colleague Jordan just highlighted that we had more than 11,000 responses to the AI sprint consultation and a number of task force members participating. We’ve had participation from across the country, and we have a very strong network of both international peers and multilateral organizations who are also wrestling with this issue.

I would say there’s a very significant degree of integration and discussion internationally and domestically, between industry and government, with non-governmental organizations, with the creative industries and communities, with provinces and territories, so it’s very much an active, engaged conversation. I do see a lot of partnership and collaboration happening on these issues, including through the International Network of AI Safety Institutes of which Canada is a founding member. So we have a number of different ways in which we are integrating and connecting with other partners, ensuring we have the best available intelligence and information upon which to be making decisions, and we do share that.

In July of this year, for example, Canada and Australia, along with a number of the other AI safety institutes, published a research agenda on synthetic content that was very much designed not just to engage the nations involved in this activity, but also academics and other third-sector organizations, to help set a table for what we need to better understand when it comes to synthetic content: where the risks are, how it is being utilized today and what tools we can use to mark synthetic content so we can all distinguish synthetic from real. These are active conversations we’re involved in right now.

Senator Quinn: Thank you, witnesses, for being here this morning. This is really interesting stuff. I’m a neophyte for sure, but I have this image in my mind of a super data warehouse that’s now sitting on top of the global internet and drawing information up that it can process so quickly, it’s like displacing the human person. It’s thinking faster than we can. It’s moving fast, but can government keep up with regulatory requirements, and are there enough resources that government is dedicating to this? It’s being discussed everywhere. Should there be a single window that’s primarily responsible? Is it you guys?

Mr. Zed: I’m happy to answer.

Your image is quite a good one, because I think a lot of the conversation now around AI is about data centres and what is powering all of these large language models, of which, by the way, Canada has one. We’re among a very small number of countries who do: Canada, France, the U.S. and China.

So that, to go back to an earlier point, is another distinguishing feature for Canada as well.

The point around keeping up, I think, is a really good one. Honestly, the pace of change and the generational improvements are happening at such an extraordinary rate, even more quickly than we would have anticipated a couple of years ago, before large language models. I think part of the approach that we’re trying to take in the work we’re doing in supporting the minister, and in hearing from and being informed by the task force, is determining the key principles that should guide our thinking irrespective of the technology. What are the values and principles that will help guide and shape the kind of advice that we would provide, whether the technology is coming from this company or that company, and whether the technology itself shifts?

So we are trying to think about things at the level of principle. The reality also is that the rubber hits the road when you apply principle to practice, and that does change quite quickly. So part of it, too, I think, is a recognition that we can’t do it alone. We’re in a global economy, and we need to be thinking and working through risk mitigation with other countries and other experts. Samir referred to the AI safety institute. I think that kind of work is really important, even though to many it might seem fairly technocratic. I think at the end of the day, those are the kinds of discussions and points of connection beyond even our own borders that can help us come to common ground on some of these really complicated issues.

Mr. Chhabra: Maybe I’ll just add that the research ecosystem and the strength of that ecosystem in Canada that Jordan referenced earlier is an important consideration as well. I mean that in the sense that you’re quite right that it’s moving very rapidly, but because we’ve been investing in it as a country from early on, and because we’ve built very strong linkages between the government and AI institutes, we have access to intelligence, information and knowledge about what is happening today and where the research directions are going.

The investment in the Canadian AI Safety Institute was another element that was meant to help keep us abreast of what is going on from a safety and risk perspective. We have other partners working, for example, through the Standards Council of Canada on an AI and data governance collaborative that is also meant to be thinking about how we bring standards to bear in this space. To your other question about a single window, part of our testimony today has been about how broad the technology is and how many different use cases there are for it. I think you will continue to see, and quite rightly, a number of different offices and regulators engaging with the issue of AI in their specific domains.

The Chair: Thank you very much.

[Translation]

Senator Aucoin: I’m not that knowledgeable in that area.

There’s an issue you mentioned in your answers that interests me, namely the people involved in this field and their livelihoods. With technology developing at such a fast pace, is there a concerted effort to ensure that there are jobs in this field? For example, are there any discussions with universities to ensure that the technology will stay here? Are there discussions to guarantee jobs, or even increase the number of jobs? I don’t know if you understand my question. Will we lose our jobs in Canada? Will they go elsewhere, or will we ensure that the jobs stay in Canada and that the number of jobs may even increase?

Mr. Zed: Thank you very much for the question.

It’s very important to try to have information on the link between work and artificial intelligence technology.

[English]

Displacement of workers is certainly an issue, and part of what we’re looking at is, first of all, understanding what the data says. Every technological change has produced changes in terms of labour, and I think this is no different. One observation I would make is that the scale of that change really varies depending on whom you ask.

[Translation]

There isn’t really a consensus on the extent of the impact or how it will unfold.

[English]

I think it will be really important for us as a government to think about how we prepare for that transition. What are the key things in terms of skilling so that the government can help support some of these transitions in key sectors of importance for the country? I think skilling will be incredibly important, but also facilitating and assisting with that transition.

[Translation]

It isn’t clear yet which sectors will be affected more than others.

[English]

So I think what is really important is that, before large language models, the assessment would have been that we should focus our efforts more on automation. Now services are within the scope of large language models, so the scope, I think, has changed, which means we need to think about what the government, and governments globally, can work together to focus on. But clearly this is an important part of our work, and in working with our colleagues at ESDC, that will be part of what we pursue.

The Chair: We’ve reached the end of our time for this panel. I would like to thank you all for appearing today. It is most appreciated.

Mr. Arbour, I hope we have another chance to meet you so that you can participate verbally. I know you were here in spirit, and we thank you for that.

If there is anything that any of you would like to send us in writing, in terms of scaling or the future that we didn’t discuss today, we would appreciate it, because we’re just getting into the subject. Please send anything in writing by December 9, 2025.

Again, thank you all. It was eye-opening, and I’m sure we have many more questions we’d love to ask you. We will have to figure out how to get you back.

I would like to introduce our next panel. From AIGS Canada, we have Wyatt Tessari L’Allié, Founder and Executive Director. We also have Anatoliy Gruzd, Professor and Canada Research Chair in Privacy Preserving Digital Technologies, Social Media Lab, Toronto Metropolitan University; and Nadia Naffi, Associate Professor, Educational Technology, Faculty of Education Sciences, Laval University and Researcher, International Observatory on the Societal Impacts of AI and Digital Technologies.

Thank you all for joining us today.

The witnesses will provide opening remarks for a maximum of five minutes each, followed by a question-and-answer session with senators.

I now invite Mr. Tessari L’Allié to give his opening remarks. Go ahead, sir.

[Translation]

Wyatt Tessari L’Allié, Founder and Executive Director, AI Governance and Safety Canada (AIGS Canada): Thank you. Mr. Chair, members of the committee, thank you for the honour of inviting me.

AI Governance and Safety Canada is a non-partisan, non‑profit organization and a community of people across the country. Our starting point is the following question: What can we do, in Canada and from Canada, to ensure that advanced AI is safe and beneficial for everyone?

Since 2022, we have been providing the federal government with forward-looking public policy recommendations, such as our submissions to ISED and Treasury Board. We also testified before the Standing Committee on Industry and Technology, or INDU, about Bill C-27.

I also bring relevant personal experience as a graduate in engineering and film, having directed a feature film and written a book.

What I hope I can bring to the committee today is an overview of the impacts on the information and communications technology sector in the context of the challenges of AI in general.

[English]

Last month, as part of our formal submission to the Ministry of AI’s consultations on Canada’s national strategy, we unveiled our 2025 white paper entitled Preparing for the AI Crisis: A Plan for Canada.

The basic situation we face is this: With human intelligence staying the same and AI getting better by the day, we are heading into a world in which AI can outperform us in all domains. This includes running companies, caring for people and creating high-quality original content — areas where we currently still hold an advantage. Building this level of AI is the explicit goal of global technology companies like OpenAI, Google DeepMind and Alibaba.

Smarter-than-human AI will have significantly greater impacts on society than the generative AI models that we’ve seen thus far. In the case of the creative industries, three likely implications include the following.

First, smarter-than-human AI won’t need to train on copyrighted human content. The models we see today that need to scrape millions of books or songs in order to produce anything intelligible are a passing phase. Much like the human brain, smarter-than-human AI will be able to learn from relatively little data, such as public domain data, and go on to create authentic and engaging works of art.

Second, the quality of AI content will improve far beyond human levels. Right now, when we look at most AI content, we rightfully identify it as “slop” or low-grade. This is a passing phase. Within a few short years, the situation could be reversed, with the human-created content looking comparatively quaint and simplistic.

Third, we’re already seeing platforms like Spotify and Amazon get flooded by AI content. This is just the beginning. Within a few years, when we will likely have high-quality customized AI content available on demand at very low cost, we could easily be in a situation where 90% or more of what Canadians see on their platforms is AI generated.

These are just some of the impacts of smarter-than-human AI within the narrow field of this Senate study. Even more concerning are the public safety, national defence and job impacts.

This is why a wide range of respected public figures — including former Chairman of the U.S. Joint Chiefs of Staff Admiral Mike Mullen, Nobel Prize-winning AI scientist Geoffrey Hinton and former Heritage Minister Pascale St-Onge — signed a statement last month calling for a prohibition on the development of superintelligence, not to be lifted before there is broad scientific consensus that it will be done safely and controllably and there is strong public buy-in.

Companies like OpenAI expect to build smarter-than-human AI in as little as one to three years, and a number of trends suggest they could be right. While we hope they are wrong, a responsible government needs to launch preparations immediately.

Our recommendations are, therefore, as follows: First, pivot to meet the AI crisis. This 2025 study on the impacts of AI on the creative and communication industries is like a December 2019 study on the first coronavirus outbreak in Wuhan, China. You’ve stumbled across an early warning sign: an industry that happened to be hit hard by an early wave of AI.

The big story is what’s coming next, and the biggest impacts will be elsewhere. This is not a crisis you can fit into an existing agenda or delegate to a ministry. For Canada’s response to be adequate, this will require a whole-of-government effort led by the Prime Minister’s Office.

Second, spearhead the global response. AI is a global phenomenon, and the leading AI labs are in the U.S. and China. Canada cannot protect its citizens from harmful forms of AI through domestic action alone. We are in a good position to kick-start global talks, and foreign policy will be our strongest tool in addressing the AI crisis.

Third, build Canada’s resilience. The fact that creative and media industries are getting impacted first means they are an opportunity to pilot the support measures that other sectors could soon need, as well. Pass measures that are beneficial to all Canadians and are robust to future AI, such as labelling content and piloting a basic income.

Fourth and finally, launch a national conversation on AI. Canadians deserve to be informed and consulted regarding a technology that will fundamentally reshape their lives. We need nationwide public hearings to educate and consult on core decisions pertaining to our collective AI future.

In these brief remarks, I hope to have conveyed to you the momentousness of what is about to unfold with respect to AI. To quote Prime Minister Carney, “We will have to do things that we haven’t imagined before, at speeds we didn’t think possible. . . .”

The clock is ticking. Let’s get to work. Thank you.

The Chair: Thank you very much.

Anatoliy Gruzd, Professor and Canada Research Chair in Privacy Preserving Digital Technologies, Social Media Lab, Toronto Metropolitan University: Good morning, Mr. Chair and senators. Thank you for inviting me to discuss the opportunities and challenges that AI brings to Canada’s information and communications technology, or ICT, sector.

My remarks will focus on two key areas: first, the adoption of generative AI in Canada; and, second, its use by malicious actors to create and disseminate disinformation. This reflects my research conducted with my colleague Philip Mai and other collaborators at the Social Media Lab, where we study how manipulated content spreads and how digital platforms shape public opinion.

On the first issue, generative AI tools are changing how people create — we heard that from the first panel — search for information and make sense of the world. Their rapid and often uncoordinated integration across multiple products we use is speeding up adoption, but it also leads to serious consequences. I will share some highlights from our recent study The State of Generative AI Use in Canada 2025. We found that about two thirds of Canadians said they had tried generative AI, but only 38% felt confident using it effectively and only 36% understood the rules and ethics behind responsible use. This suggests that there is a growing gap between how widely these tools are used and how prepared Canadians are to use them safely.

Our survey also reveals that about 7 in 10 people are concerned that generative AI could be used to manipulate voters or interfere with democratic processes. About 60% report having less trust in online political news because they’re afraid that it might be manipulated.

Canadians also strongly support regulation. A majority of nearly 80% believe AI companies should be held responsible when their tools cause harm. This final statistic makes one thing clear: Accountability should rest with the companies that build, release and profit from AI systems.

On the second issue, the same features that make these tools popular also make them appealing to malicious actors. AI can clone voices, manufacture events and impersonate officials, which makes it harder for the public to know what is real. Recent cases demonstrate the severity of this issue. This summer, senior officials in Europe and the United States received AI-generated messages that pretended to be from U.S. Secretary of State Marco Rubio. These messages included synthetic audio, which shows how easily unknown actors can reach high-level officials using tools available to almost anyone.

In another example, groups like the Institute for Strategic Dialogue documented a similar pattern in a Russian-linked operation called Matryoshka, also known as Operation Overload. It included tactics such as the use of fabricated voice-overs to impersonate news outlets and experts. The goal was to spread misleading narratives about Western institutions and Ukraine.

These examples highlight the need for greater transparency. Many AI systems already include so-called content credentials in the form of watermarks and metadata to show when content is synthetic. Industry-led initiatives, such as the Coalition for Content Provenance and Authenticity, or C2PA (you will probably hear more about that), have developed credentials for this purpose, but adoption is still low. Even when content carries these credentials, major social media platforms often remove or fail to display them after the content is uploaded by users. This shows the limits of self-regulation.
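To make the mechanism Mr. Gruzd describes concrete, here is a minimal sketch, in Python, of how a content credential can bind provenance metadata to a piece of media so that later tampering is detectable. The field names and the shared-secret signing are simplifying assumptions for illustration only; the actual C2PA standard uses certificate-based signatures and a standardized manifest container.

```python
# Illustrative sketch only: a simplified content credential in the spirit of
# C2PA, not the real C2PA manifest format or signature scheme.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-capture-tool"  # hypothetical key

def attach_credential(media_bytes: bytes, producer: str, tool: str) -> dict:
    """Create a manifest that binds provenance metadata to the media's hash."""
    manifest = {
        "producer": producer,
        "tool": tool,
        "created": "2025-11-25T09:00:00Z",
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the media itself is unaltered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

image = b"...raw image bytes..."
credential = attach_credential(image, producer="Example Newsroom", tool="Camera firmware 1.2")
assert verify_credential(image, credential)                # untouched media verifies
assert not verify_credential(image + b"edit", credential)  # any alteration fails
```

The last two lines are the point of the exercise: once a credential exists, any edit to the media invalidates it, and that is precisely the signal the witness says platforms strip or fail to display after upload.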

Platforms have conflicts of interest. Synthetic content is cheap to produce and often highly engaging, which benefits business models built upon the attention economy. Most policies are also voluntary, selectively applied and reversible.

Canada should take a more proactive step by requiring providers of high-risk generative AI systems and large online platforms that operate here to support, retain and display standard authenticity metadata. This is especially important for political and news-related content. It would ensure that provenance signals are applied at the point of creation and preserved at the point of distribution, where we actually see the content online.

Canada also needs to reinforce its information ecosystem. Canadians should have access to clear labels and online tools that help them verify synthetic media and use AI responsibly. Public officials need training to recognize and respond to AI‑driven spam messages and impersonation attempts.

Canadians want responsible innovation. To build trust and reduce harm, we need to set clear expectations for AI companies and invest in capacity building across the information environment.

Thank you. I look forward to your questions.

The Chair: Thank you, Mr. Gruzd.

[Translation]

Nadia Naffi, Associate Professor, Educational Technology, Faculty of Education Sciences, Laval University and Researcher, International Observatory on the Societal Impacts of AI and Digital Technologies (OBVIA): Mr. Chair, honourable senators, fellow panellists, I would like to thank the committee for inviting our observatory, which I have the honour of representing, to testify.

Of all societies in human history, ours has the greatest access to information, yet it is also the most exposed to falsehood. With a single click, misleading content can spread at scale.

To understand this tension, it is useful to distinguish between two uses of AI: predictive AI and generative AI.

Predictive AI anticipates outcomes based on past data. It powers recommendation algorithms that structure visibility, create echo chambers and amplify biases.

Generative AI, now widely accessible, creates texts, images, voices and videos that are difficult to distinguish from reality. It supports media outlets in producing and personalizing content and in making services more accessible. Yet these same capabilities, when misused, also enable the fabrication of deepfakes, voice cloning and the creation of synthetic identities used for disinformation.

Malicious uses are already here. Synthetic content is used to dehumanize entire groups, to downplay mass violence and sow doubt about documented atrocities, including ongoing genocides, to incite hatred, and to spread misinformation about vaccines and public health.

During election periods, synthetic content can manipulate the public’s perception of candidates and erode trust in the integrity of the process, while recommendation systems amplify what captures attention and engagement, not necessarily what is true.

The World Economic Forum ranks misinformation and disinformation among the top global risks. We are no longer dealing with a few isolated fake news stories, but with an ecosystem of disinformation, misinformation and strategic under-information, which includes foreign information interference campaigns, as well as nonconsensual pornographic deepfakes and gendered disinformation that primarily target women and girls to discredit them and silence them in the public sphere.

At the same time, we see the emergence of the liar’s dividend: the ability to dismiss authentic evidence by labelling it as a deepfake.

Gradually, we slide into a zero-trust posture that weakens public debate, confidence in institutions and democratic life.

From a legal standpoint, Canada does not yet have a comprehensive framework to govern generative AI, and the responsibilities of platforms and tool providers remain poorly defined. In copyright law, only a human being can be recognized as an author.

Automatically generated content is not protected but can infringe the rights of others.

In response, the measures taken so far remain mostly technical: developing detectors, which are necessary but insufficient, because detection is essentially a cat-and-mouse game.

We need to move from trying to “spot the fake” to a strategy that enables citizens to “verify what is real.”

Two complementary avenues emerge.

First: trust infrastructures and risk-based regulation at the ecosystem level. We need to go beyond detection alone and put in place standards for traceability and provenance, such as the Coalition for Content Provenance and Authenticity standard, or C2PA standard. It works like a label for digital media: it indicates who produced a piece of content, when, and what edits were made. This provides an authenticity signal and helps limit the liar’s dividend. But provenance on its own is not enough. A case of disinformation through deepfakes involves the person who creates the content, the tool that generates it and the platform that distributes it. We cannot place the burden solely on the targeted group or the victim, whether it is a woman targeted by a sexual deepfake or a voter exposed to a deceptive campaign. We need clearer obligations for communication platforms and high-risk tool providers regarding risk assessment, transparency and the rapid removal of manifestly harmful content.

Second: education, moral digital agency and a form of moral sobriety in the digital realm. We need to empower citizens to remain awake, vigilant and informed in their information environments, and to use AI to strengthen our human capacities rather than to pollute this ecosystem.

In concrete terms, this is the ability to say: this piece of disinformation stops with me, I will not share it. This requires targeted investments in education and training, in schools, higher education, workplaces and community organizations. It is no longer about promising that everyone will be able to spot what is fake, a promise that creates a false sense of confidence, but about strengthening judgment, the capacity to act and shared responsibility.

We speak a great deal about the environmental impact of AI; we must also address the human footprint of our practices in the information ecosystem, when they leave behind fear, shame, loss of trust and dehumanization.

Ultimately, the goal is to ensure that AI in the information and communications technology sector strengthens the quality of information, public trust and freedom of thought, rather than undermining them.

Thank you.

[English]

The Chair: Thank you, Professor Naffi.

Senator Dasko: I have a couple of questions, but I don’t want to take too much time from my colleagues.

Mr. Tessari L’Allié, you mentioned superintelligence. What does superintelligence look like? What is it?

Mr. Tessari L’Allié: It is a system that can do everything the human brain can do, but faster, better and cheaper. The explicit goal of the frontier labs in building this is what they call the holy grail of AI: a system that can outperform the human brain. In practice, it means that any intellectual activity the human brain can do can be done better. Certainly, the job impacts could be systemic (we don’t know exactly how it will be rolled out), but in terms of capacity, the ability to outthink and outcompete us will be there.

Senator Dasko: It can’t wash the dishes or talk to my daughter for me, for example.

Mr. Tessari L’Allié: That is a good point. Yes, robotics is the second piece behind it. It will take longer because it’s physical, but if you’re following the robotics sector, it is moving very fast as well. Robots more capable than humans could be around in 10-plus years. Basically, we’re entering a world in which everything we can do can be done better. Depending on your perspective, that could be a good or a bad thing, but it changes the game. In terms of misinformation or content creation, your best pieces of art could come from AI, and your most sophisticated campaigns will come from AI.

Senator Dasko: It can’t live my life for me.

Mr. Tessari L’Allié: The human consumer is still there. The human choice of what we do with our day is still there. However, in terms of economic competitiveness and other challenges, for every decision you make throughout your day, the best bet will be to ask the system what to do because it will be more aware and more accurate. This is assuming these systems are built and designed well, but in theory, the human brain has limits, and those limits can be surpassed. That is where we’re headed.

Senator Wilson: My question is for Mr. Tessari L’Allié. In your statement, you talked about the need for a whole-of-government approach led by the PMO. I’ll use my own words, but it sounded like you were advocating putting guidelines in place, such as ring-fencing, around smarter-than-human AI, as well as for Canada to take a leadership role internationally. One of the things that I’m interested in is this: If we were to take that kind of approach, how can we stop bad actors from doing the same thing anyway? Don’t we create the risk that bad actors could develop smarter-than-human AI in ways that are going to be highly problematic?

Mr. Tessari L’Allié: Yes. If Canada tries to stop super-smart AI on its own, it won’t help. It has to be a global-first approach, which is extremely difficult, but it looks like our only option. The other strategy you will hear the frontier labs themselves talk about is that whoever builds the first safe smarter-than-human AI system will use it to govern other smarter-than-human AI systems; it will be the cop that polices other systems. That is very anti-democratic and high-risk, so your next-best option is trusting the tech companies to govern AI for us.

The most capable models will probably come from Silicon Valley or China. That means Canada’s best defence and best opportunity is building global momentum and global talks and being the facilitator in the background, getting to the point where the U.S. and China realize it is in their interest to collaborate, because they are facing huge unemployment and loss of control from their own systems. In terms of AI risks, we’re all on “Team Human.”

So there is a sense that this is a monumental task ahead of us, but there is at least a basis for a desire for global talks on this subject.

Senator Wilson: Assuming you could get that global collaboration happening, I’m assuming there are still going to be outliers. Let’s say Russia, for example, doesn’t participate and does its own thing. How difficult would it be for it to create its own smarter-than-human AI, which could then come back and be a problem for the rest of the global community?

Mr. Tessari L’Allié: This is the key challenge of the global talks. It would have to be binding and universal, which has never been seen before in human history. Some would say that it is impossible. All paths forward on AI are a Hail Mary, so we must try multiple strategies to see if we can get a deal.

Vladimir Putin doesn’t want to lose control or power or see massive unemployment in his own country. But if you build a system that is smarter than you and can outcompete you in the real world, and there is an accident or misuse and it starts to interpret human beings as an obstacle to achieving its goal, there is no guarantee that labs or governments will ever be able to regain control. That would be a permanent loss of control. Even if you are the worst of autocrats, you still don’t want to lose power to an AI system. That is where there is an incentive, even for bad actors, to agree on the basics.

Senator Lewis: Our last panel talked about the consultation process. I believe you referenced it, and I assume other witnesses also participated in it. Were you happy with the process, and is it enough? Do you think it will do any good in putting your viewpoints forward to the government?

Mr. Tessari L’Allié: I appreciated the speed at which they put it together. Since everything is moving fast, you have to move fast as a government.

There are so many different aspects to AI and so many valuable parts to be working on. We’re focused on where AI is headed and think that is the bigger story, but there are many other elements. A responsible government has to take all that into account and turn it into a strategy.

It’s a good first step. Is it sufficient? Obviously not. People are worried about their jobs and about misinformation. For people to have a sense of ownership of the future they want to create, public hearings are necessary. They will obviously be messy because it is a big topic, but giving people a chance to have a say about the future is very important. I was impressed by the 11,000 submissions, though there are 40 million Canadians. The consultations give everyone a chance to be informed about what is going on and to have input. That is another area where Canada is well placed: We are relatively stable and well educated, and we have a strong AI ecosystem. We could pilot the world’s first national conversation on AI and learn from it how to shape our domestic and foreign policies.

The Chair: Mr. Gruzd and Professor Naffi, would you like to add any comments?

Mr. Gruzd: I heard the discussion from the first panel. I haven’t heard anything concrete so far from the consultation. Consultations are part of the process, but what is more important is what decisions will be made. I heard that there were many opinions, but that is true with any type of consultation. What I would like to hear is how we’re going to address the lack of graphics processing unit, or GPU, access if we are talking about research excellence and innovation in this space.

We are investing millions, but when you look at countries like France, they are investing billions in that space.

What I would also like to hear is how we’re protecting Canadians’ data privacy. We are all using these tools, and I think you had a couple of good questions about it during the previous panel, but we’re all contributing our data and don’t really know what’s happening to it. I would like to hear more concrete steps coming out of that consultation.

The Chair: Professor Naffi, any comments?

[Translation]

Ms. Naffi: I’m focused on the issue of skills and skills development. There was discussion about scaling, but there weren’t really any concrete actions. In the universities, we’re hard at work thinking about how to prepare our future workers. That made it into the discussions, but we didn’t really get any concrete answers on that.

[English]

Senator Simons: Thank you very much. I’m going to stay out of the “androids dreaming of electric sheep” for a moment to focus on what is happening in the here and now.

When I hear from Professor Gruzd and Professor Naffi, it really makes me worry about the security of our electoral process, because right now, we have no legislation in place that protects us from artificial intelligence, or AI, manipulation.

When Bill C-63, the online harms bill — which died on the Order Paper — was first contemplated, AI wasn’t even a question. What I’m wondering, practically speaking, is this: What legislative options do you think there are, either for the Chief Electoral Officer of Canada to have or in some future online harms bill? What, practically speaking, if anything, can be done to protect Canadians’ information ecosystem from really high-quality malicious disinformation?

[Translation]

Ms. Naffi: This is a major concern. As I mentioned, we have hyperfast access to information. There’s generative AI, predictive AI, cognitive biases and so on. We know that bad actors exploit cognitive biases, personalize their manipulation and go after voters. Right now, there are no limits on any of that.

I’m on the working group with the Commission de l’éthique en science et en technologie, or CEST. We are currently working on artificial intelligence and democracy to develop proposals and recommendations. We are studying an approach within the electoral framework for the acceptable use of artificial intelligence. We need to determine what is and is not permitted in order to limit the abuse of artificial intelligence in processes that the public is not yet ready to defend itself against.

That’s why I’m talking about education to develop the skills of our population. We can do everything related to elections, regulations and so on, but it will take an enormous amount of time to put it all in place. In the meantime, the public will be victims of this manipulation. We have to work on the whole issue of awareness, but also on a form of agency for these people. It’s not just about saying that it exists.

That was discussed in the other panel. We are so used to seeing fake content that, at some point, we can no longer see that it is fake. It becomes standard practice. You don’t see the manipulation anymore. It is important to strengthen Canadians’ awareness, but also to develop their agency. We cannot stop at being able to detect disinformation and misinformation. We also need to notify our community and make sure that those around us know that it is fake so they don’t spread it further.

[English]

Senator Simons: You’re right. It has existed for all time. It’s just that human gossip was a lot easier to contain.

Professor Gruzd, last week, we saw Grok being programmed to tell us that Elon Musk was the best at everything in the world. When I see the slop that’s out there, I’m a little less worried about superintelligence, but is there something that you think we need to be doing beyond educating people — which is hard — to protect them, or in a world where we value free speech, is there a mechanism to stop the slop?

Mr. Gruzd: Let’s split this issue. The first part concerns content that is being pushed on Canadians: generative AI content, let’s say during an election period. The second concerns Canadians going to one of those systems and asking questions about the election process or a particular political entity.

On the first part, where content is being pushed on Canadians, to address the issues you raised, we need transparency. A Canadian needs to be able to see a clear label saying that the content was created with generative AI tools.

I mentioned earlier that standards exist. On the technological side, there is no obstacle to preserving that information. The problem is that self-regulating platforms are either unwilling to display those credentials or simply fail to do so.

Senator Simons: Last week, X decided it would display where people came from, and then they realized that that was not helpful to their model, so they stopped that.

Mr. Gruzd: But it is a good example. I usually don’t give praise to X or anything, but this is actually a good example where it did reveal to X users how some MAGA accounts on X were not actually from the United States. They were pro-Trump, but they were located in Eastern Europe. That’s a good example where adding those transparency labels is important, but that was a voluntary initiative by one platform, and those initiatives can go away at any time.

This is where regulation is important. You have the power to put that requirement on social media platforms, especially in the context of elections: Any content that is election related has to clearly display whether it was created with generative AI.
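As a rough illustration of what such a platform-side requirement could look like, here is a short Python sketch. The metadata field, the keyword test and the decision strings are hypothetical stand-ins, not a real platform API or a proposed legal definition of election-related content.

```python
# Illustrative sketch only: a platform-side gate that refuses to publish
# election-related content unless it carries a generative-AI disclosure.
ELECTION_KEYWORDS = {"election", "ballot", "candidate", "vote", "riding"}

def is_election_related(text: str) -> bool:
    # Naive stand-in for a real classifier.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & ELECTION_KEYWORDS)

def publish_decision(post: dict) -> str:
    """Decide how the platform should handle an uploaded post."""
    ai_generated = post.get("ai_generated")  # disclosure flag from upload metadata
    if is_election_related(post["text"]) and ai_generated is None:
        return "reject: election content must declare whether AI was used"
    if ai_generated:
        return "publish with a visible 'Made with AI' label"
    return "publish"

print(publish_decision({"text": "Vote for my candidate!", "ai_generated": True}))
print(publish_decision({"text": "Vote for my candidate!"}))
print(publish_decision({"text": "Nice sunset photo", "ai_generated": False}))
```

A real system would need a far better classifier than a keyword list, but the structure (no disclosure, no publication for election-related content) is the regulatory lever being described.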

The second issue concerns Canadians going to these chatbots and asking questions about the election. That is just as dangerous as the other issues we discussed, because we have no ability to audit or check the answers Canadians are getting.

As a Canadian researcher, I have to use research funding to pay a lot of money just to subscribe to these different application programming interfaces, or APIs, to check what responses they provide. There is no mandate for these platforms to provide research access to their latest APIs so that we can audit them and see what Canadians are shown when they ask whom they should vote for.

I think those are two issues we need to treat separately.

Mr. Tessari L’Allié: The biggest point of leverage the Canadian government has here is that, while you can’t stop a teenager in a basement in another country from creating a deepfake, you can tell YouTube and the social media companies that if they want to do business in Canada, they cannot put up deepfakes and they need rules in place around disclosure and researcher access. It is, obviously, its own challenge, but it is the best point of leverage.

[Translation]

Senator Miville-Dechêne: I’m going to continue along the same lines as my colleague, since that’s what interests me as well. Specifically, I’m going to talk about deepfakes. I’d like to pick up on what you said, Ms. Naffi, about the fact that we can’t leave it up to consumers to report deepfakes and to ensure that, eventually, someone somewhere in some commission forces their removal. Education is great, and I believe in it. However, for the moment, we’ve heard from people in the department that they’re not ready, that they’re thinking about it. Regulations may eventually be drafted. What urgently needs to be done on the regulatory front in the short term so that consumers can see whether or not something is a deepfake? There was talk at one time about requiring platforms to post a notice saying that something is not real. Is that a way forward? Bad actors will never use such a label. What do you think can be done very concretely to help consumers? My question is also for Mr. Gruzd.

Ms. Naffi: Thank you for the question. As you say, the public is not solely responsible, but it is part of the equation. I don’t want to be alarmist, but we really no longer know what is real and what is fake. You can even make up a life. I could appear to be with you right now when it is not really me. This is actually happening today, and it’s very hard to detect. We can’t rely solely on the public; that’s why the C2PA standard exists. It helps us know where problematic content is coming from. Of course, the standard can be hijacked, and you can wonder whether it is being adopted, but you can also push for its adoption so that Canadian consumers know where content comes from and what has been modified.

There is a lot of —

Senator Miville-Dechêne: How do we actually tell them that the content is fake? What would be the process?

Ms. Naffi: With the C2PA standard, there is a label similar to the nutritional and ingredient labels found on supermarket products. It’s the same with images and videos: There is a code we can click on that indicates the date and origin of the content and who created it. We are already seeing it on platforms such as LinkedIn, for instance. There are also invisible watermarks, which can help but which the consumer can’t see with the naked eye.
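For illustration, the consumer-facing label Ms. Naffi describes could be rendered from a provenance manifest in a few lines of Python. The field names below are assumptions that echo the simplified manifest sketched earlier, not the actual C2PA schema.

```python
# Illustrative sketch only: rendering a provenance manifest as a
# human-readable "nutrition label" for a piece of media.
def render_label(manifest: dict) -> str:
    lines = [
        "Content credentials",
        f"  Created by: {manifest.get('producer', 'unknown')}",
        f"  Date:       {manifest.get('created', 'unknown')}",
        f"  Tool:       {manifest.get('tool', 'unknown')}",
        f"  Edits:      {', '.join(manifest.get('edits', [])) or 'none recorded'}",
    ]
    return "\n".join(lines)

print(render_label({
    "producer": "Example Newsroom",
    "created": "2025-11-25T09:00:00Z",
    "tool": "Adobe Photoshop",
    "edits": ["cropped", "colour corrected"],
}))
```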

All of this requires the tool and platform providers to add these elements so that consumers are protected and know what is real. Another example: In the case of Sora, we can see the Sora 2 logo, but couldn’t malicious people manage to make it disappear? I believe that, with all of the technologies that exist, we also have to be very strict with the platforms and tool providers about content controls so that we can protect Canadian consumers.

Senator Miville-Dechêne: Mr. Gruzd, quickly?

[English]

The Chair: Mr. Gruzd, would you like to comment?

Mr. Gruzd: Yes, I absolutely agree. Labelling is the lowest-hanging fruit that we can mandate platforms to implement. The technology is out there; we need to adopt it. We can start with sensitive content for sure: politics, elections, news and health-related information. That content should carry the label. Not everything will have it. You mentioned that malicious actors will use their own models to bypass it, but at least we’ll cover a huge chunk of that AI “slop,” as you referred to it.

On the other side, we need to work with platforms or mandate that they have safeguards for prompts. Users can generate quite sensitive content and misuse it in their communication. When somebody is trying to impersonate one of you, senators, the platform should say, “Wait a minute. What are you doing? You’re not supposed to do that.”

And because most of the users are now funnelled into those major Google or Microsoft products, as long as you cover those major players, you cover the majority of the market.

[Translation]

Senator Cormier: My questions are along the same lines as those of Senator Miville-Dechêne. I would go to you, Mr. Tessari L’Allié. You say you’re in the film industry.

Mr. Tessari L’Allié: Yes, Senator Cormier.

Senator Cormier: The big challenge is to distinguish real from fake, to protect copyright for creators and to ensure that we don’t give royalties to fake authors who have created synthetic products. What can you tell us about that? If we had to regulate in this area, what should the priority criteria be to clearly distinguish and identify real and fake in film production, since artificial intelligence is used in the industry?

Mr. Tessari L’Allié: That’s a very difficult question, because a person with a powerful tool can create a feature film on their own, and it’s hard to tell whether it’s human work or artificial intelligence. I don’t have the exact answer, but I think it needs to be studied quickly, because it’s already a problem.

I agree with the CRTC’s decision not to recognize AI content as Canadian. We are heading into a world where the majority of content will be created by AI because it is easier, so the regulations that should be put in place should protect humans rather than trying to regulate AI.

Senator Aucoin: Do you feel that we are heading towards a place where copyright, as traditionally defined, could be called into question? I’m thinking of the benefits and challenges for creative artists.

Mr. Tessari L’Allié: Absolutely. When it comes to protecting human works, for example, we already have a Copyright Act that can be applied as is. We could also add clauses establishing that the appearance of an actor or an individual belongs to them. However, overall, the world we’re headed towards is one where there will be art and human creation, but it won’t really be the basis of a business model or an industry as such. A human with an AI tool can create a feature film, so it’s hard to imagine what’s going to happen. We’re already in a transition period. I have more questions than answers, unfortunately.

Senator Cormier: Ms. Naffi, when your colleague Dave Anctil appeared at Canadian Heritage and at the House of Commons Standing Committee on Canadian Heritage, he also talked about this labelling issue. He said it would be easy to regulate quickly. What would such labelling ideally contain, if it could be easily put in place?

Ms. Naffi: It’s technology that platforms could already build into their products. For example, you could open Adobe Photoshop and click a little button, and that information would automatically be inserted into the creation. The important thing is getting the platforms and tools to adopt it, which means it has to be imposed. Technically speaking, it’s easy to do, and many companies and platforms have already adopted it. We have to continue our efforts.

Senator Cormier: Thank you.

Ms. Naffi: That was a very interesting question, and I would like to add a few things. You asked how we help people differentiate between what’s real and what’s fake. That is why my colleague has more questions than answers, and it is also why I talk a lot about the danger of a false sense of security. Today, we can encourage people to pay attention to how someone’s lips or head move, for example, because that can often be a tell. However, technology is evolving so quickly that we can’t give advice or recommendations that will always determine what is real and what isn’t. It’s even dangerous to make such a list of recommendations.

The Chair: Thank you.

Senator Aucoin: I’ll be brief. Ms. Naffi, I don’t know where to start, because technology is moving so fast and there’s so much misinformation and disinformation. We heard government representatives comment on the subject. Is it realistic to think that we can regulate quickly enough to keep up with such fast-evolving technology? And what makes us think that large companies outside Canada will agree to be regulated? For example, there was the case of taxing Netflix. What makes us think this is possible? The technologies will continue to advance. I’d like you to comment on that.

Ms. Naffi: Absolutely. It’s not realistic; I agree 100%. Our work focuses on the human side, because by the time regulations are set, the technology has changed. Everything that needs to be adopted has to be adopted, but in the meantime, we human beings are the victims. I talked to you about misinformation and disinformation, but there’s also cyber-fraud. There are seniors who receive distress calls that sound like their grandchildren saying they need money, and they rush to send money to bad actors.

Speaking of the human side, I have been meeting with groups from different backgrounds and departments every week, and I am realizing more and more that people are not aware that we are facing this enormous danger.

A first step, and this is part of our work, is to inform people that this technology exists and that we live in this ecosystem. In a way, we are already immunizing them; it’s a bit like the COVID vaccines. We make sure that when people are exposed to fake content, at least they are aware that the technology exists. Today, not everyone knows that it exists. We get a lot of messages and videos on WhatsApp and other apps that people share without knowing the technology exists.

To go back to your question about regulation, yes, we have no choice. We need to move in the direction of imposing it on platforms. We have to do that, but we can’t wait for the public. When I talk about investments in education, I’m talking about all levels and all work contexts. I’m talking about from young school kids all the way to seniors.

Misinformation and disinformation attack every sector. When we talk about AI in different sectors, it’s not just a matter of working on skills. Cyber-fraud, misinformation and disinformation attacks compromise our economy, whether in health or elsewhere.

To give you a quick example, the Centre hospitalier de l’Université de Montréal, or CHUM (I know them because I work with them), has just launched an initiative called Ma santé éclairée. With the doctor shortage, patients are getting their information from TikTok; they’re misinformed, and they come to the doctor with misconceptions. With Ma santé éclairée, we are working on the disinformation and misinformation that is circulating. We are all patients who need information. We really need to think about this. Regulations are needed, but we also need to invest in empowering people. We can no longer wait.

Senator Aucoin: Thank you.

Mr. Tessari L’Allié: Yes, it’s a big problem. In terms of solutions, first, there is foreign policy. If Canada alone tells Google to do something, they’re going to ignore us. If we create a connection with Europe and Latin America and approach Google as a multinational collective, they will listen to us more. Second, it takes a long time to put legislation in place, so we have to think about where AI is going to be in two, three or five years. We have to start with education for the moment, but in the long term, we have to legislate for where AI will be. That’s where the [Technical difficulties] intelligence and everything else. We have to be careful.

[English]

Senator Mohamed: I have so many questions, I’m not sure where to start. I will use them wisely, chair.

On one hand, we hear a lot about the importance of increasing literacy and familiarity. I think about newcomers and the elderly and how difficult that is. On the other hand, how do you deter people from actually doing this? Sometimes, we’ve used the Criminal Code, and hate crimes are a really good example. There’s a crime; if it’s a hate crime, you get additional penalties.

I might be naive here. We’re in the preliminary stages of our conversations, but could someone respond to whether there is merit in addressing fraud committed using AI through the Criminal Code? Would it be a deterrent? Would you add an additional penalty when certain types of technology are used? What is the deterrent side of this, given that it’s here, it’s pervasive and it’s frightening? Anyone can respond. I would appreciate you helping me think through that idea.

Mr. Gruzd: Essentially, the responsibility should still be on the companies that develop and deploy AI systems rather than on individuals, so I wouldn’t necessarily support that idea. The individuals using these tools may not be in Canada, and putting more criminal responsibility on someone in Eastern Europe will not necessarily help Canadians.

Again, our best bet is to put pressure and responsibility on AI companies, where it should be, especially for the potential harm done to Canadians.

Senator Mohamed: I’m not against that idea, but in addition to placing responsibility on companies, are there other things we could be doing? I take the point that this is often global, so you can’t always know that it originates in Canada, but at some point there is routing through Canada.

[Translation]

Ms. Naffi: If I can jump in, I agree with you 100%. These attacks are occurring here. We need only think of the young people in schools who create deepfake nudes of their fellow students. That has to be absolutely prohibited. We’re also talking about people here who defraud seniors, spread disinformation against women and so on.

Right now, there is nothing about cybercrime in our Criminal Code. It doesn’t even exist. All we’re doing is using other sections of the Criminal Code to go after these acts of cybercrime. The Criminal Code needs to be revised to adapt to our era and the technologies that are put in place and accessible to everyone.

To give you a very mundane example involving ordinary people, a mother decided to make a deepfake of her daughter’s rivals on the cheerleading squad. She created a deepfake of these girls to ruin their reputation. A mother did that so that her daughter could have a spot on the cheerleading squad. This isn’t even a recent case. It was several years ago. This has to stop. People need to know that if they’re going to move forward with these malicious actions, they’re going to be punished in some way under our Criminal Code. We have to take into consideration the different cases that arise and make sure that people don’t think about using these tools to hurt others. Even here, we’re talking about someone who is going to do harm.

Have you seen the AI homeless man prank? Some young people took pictures of their living room, used AI to add a homeless man and sent the images to their parents, who were on vacation. The parents were alarmed and called the police, and the police were mobilized. In the end, it was just a prank, and it trended on TikTok. The young people were not even aware that by doing these things, even as pranks, they were doing harm. They dehumanize vulnerable people and mobilize our resources. We need to think about all this as we plan.

Senator Mohamed: Thank you.

The Chair: I will leave you the final comment.

[English]

Mr. Tessari L’Allié: I would add that enforcement is a big piece. Even just enforcing current laws would accomplish a lot. However, that requires a government with the skill set to track these things and the talent to compete with the companies. Monitoring the development and use of software is really hard, so, yes, we should be enforcing the law, but it is a big challenge ahead.

The Chair: Now that we’ve reached our time limit, I’d like to thank the witnesses for your discussion and answers to questions. It is most appreciated.

It is a fascinating subject. We’re just at the start of it.

I would ask our witnesses: If there is anything you want to write to us about, please do so by December 9. We would welcome any updates or value-added information you might send us.

Before ending the meeting, I wish to thank our entire committee support team, those at the front of the room as well as those behind the scenes. Thank you for your work, which contributes enormously to the success of our work as senators.

(The committee adjourned.)
