Skip to content
SECD - Standing Committee

National Security, Defence and Veterans Affairs


THE STANDING SENATE COMMITTEE ON NATIONAL SECURITY, DEFENCE AND VETERANS AFFAIRS

EVIDENCE


OTTAWA, Monday, May 1, 2023

The Standing Senate Committee on National Security, Defence and Veterans Affairs met with videoconference this day at 4 p.m. [ET] to examine and report on issues relating to national security and defence generally.

Senator Tony Dean (Chair) in the chair.

[English]

The Chair: Honourable senators, welcome to this meeting of the Standing Senate Committee on National Security, Defence and Veterans Affairs.

I’m Tony Dean, I represent the province of Ontario and I am the chair of the committee. I would invite my colleagues to introduce themselves, starting with our deputy chair.

[Translation]

Senator Dagenais: Jean-Guy Dagenais, Quebec.

[English]

Senator Oh: Victor Oh, Ontario.

[Translation]

Senator Boisvenu: Senator Boisvenu, Quebec.

[English]

Senator Richards: Dave Richards, New Brunswick.

Senator Anderson: Margaret Dawn Anderson, Northwest Territories.

Senator M. Deacon: Marty Deacon, Ontario.

Senator Dasko: Donna Dasko, a senator from Ontario.

Senator Boehm: Peter Boehm, Ontario.

Senator Kutcher: Stan Kutcher, Nova Scotia.

Senator Cardozo: Andrew Cardozo, from Ontario.

The Chair: Thank you, colleagues.

For those watching live across Canada, we focus our attention today on disinformation and national security. We have two strong panels of witnesses with us, so we’ll jump right in.

In our first panel, we’re pleased to welcome Farhaan Ladhani, Chief Executive Officer, Digital Public Square; and by video conference, Nicole Jackson, Associate Professor at the School for International Studies at Simon Fraser University.

Thank you both for joining us today. We will begin by inviting you both to provide your opening remarks, to be followed by questions from our members. Mr. Ladhani, you may begin whenever you’re ready.

Farhaan Ladhani, Chief Executive Officer, Digital Public Square, as an individual: Honourable senators, thank you for the privilege of speaking with you today.

I’m the chief executive officer of Digital Public Square. Since its inception, our team has been focused on connecting communities with reliable information. Over the last several years, we’ve studied disinformation in a number of countries.

I’d like to touch on three areas to hopefully set the table. The first is context that may be relevant to addressing supply of disinformation given the evolving ecosystem of its production, dissemination and consumption; second, to highlight recent evidence addressing the demand for disinformation content; and third, consideration for a more agile policy approach to actually tackling disinformation, conscious of the more significant, emerging challenges in front of us.

In our work, we see three different systems for governing the supply of information online. There’s the closed, highly centralized model that uses persistent surveillance to monitor information and censor content that’s incompatible with the prevailing narratives of the state. This also leaves the rules sufficiently opaque as to promote self-censorship. There’s an open but increasingly regulated system that is seeking greater accountability from platforms. Recent news from Brussels, which I’m sure many of you have seen, demonstrates a clear example where 19 technology companies will be subject to the Digital Services Act, one example of how governments are aiming to impose compliance and regulatory standards that are intended to promote safer online spaces. This will require these companies to take specific action on disinformation or suffer financial or other penalties. There also remains a decentralized model with limited central governance around content. Each of these three systems incorporates approaches to filtering, data storage, routing and the involvement of foreign companies for service delivery.

The other key actor in the information ecosystem, namely platforms, and their approach to content moderation has become the standard for how people experience the online world in each of these systems. Some define what content they will remove, including information that might incite physical harm or violence; others look at context, who the behaviour is directed towards, whether a report has actually been filed, the severity of the violation and whether the topic is of legitimate public interest. Each has differing appetites for external engagement and consultation with communities. These same companies are tangling with the current economic climate, increased automation and competition on AI innovation and consequential reductions or disbanding, frankly, of trust and safety teams. When you compare this with the scale of the challenge, where content has a half-life of 24 to 80 minutes on mainstream social platforms, addressing the supply of disinformation cannot alone be considered a solution to the problem.

So what have we learned? At Digital Public Square, we are interested in the demand for mis- and disinformation and what constitutes reliable information. People and their communities make up the most important part of this ecosystem. In looking at our recent work, including mis- and disinformation on COVID-19, vaccines and the conflict in Ukraine, we have found that 20 to 25% of Canadians may have an increased vulnerability to mis- and disinformation. There are some common characteristics, including those with low trust in institutions and in traditional media, as well as those with high levels of fear and grievance. We have found that our tools can have a positive impact on reducing the harm of mis- and disinformation. With COVID-19, we found that by increasing engagement with reliable information, we can indirectly increase the likelihood that individuals improve their preventative health behaviours and increase their vaccination intent. In our work on Ukraine, we found a positive effect on increasing knowledge on areas that are the very subject of mis- and disinformation, thereby reducing their harmful impact. In all this work, if we want to reduce demand, we see the importance of investing in solutions that meet communities where they are. Reducing demand and the effectiveness of disinformation increases the cost, thereby making it more difficult for pernicious actors to drive at efforts around destabilization.

So where does this all go? The disinformation problem, while ages old, is made worse by its changing shape and our inability to accurately measure its effect. We’ve seen the problem of scale and velocity of distribution over the last couple of decades. I would say that’s tiny compared to the scale and velocity we are about to witness with production. The cost of AI-supported production, from deep fakes to synthetic content, is quickly approaching zero. Generative AI didn’t author this statement, but if it had, a casual consumer in the very near future couldn’t tell the difference.

In parallel, societal trends are exacerbating our collective vulnerability to disinformation that preys on our fears and grievances. Trust is in decline. Digital and media literacy gaps both in Canada and at home are making our communities more susceptible to content that reinforces social, economic and cultural divisions. As these divisions are reinforced, including by external actors, alienation and polarization can grow, with consequential impacts on our national security.

I’d like to leave you with one approach to consider. We need to match the pace of change with more agility. We need an iterative model for testing policy solutions to the present and emerging challenge of disinformation. It’s an ecosystem problem. To address it, we need to bring together the key players — government, researchers, civil society and companies — into a single conversation where policy prescriptions can be designed, potential harm and mitigation can be tested and the results of implementation can be fed back into a cycle of learning. This will not yield consensus, but it could deliver a more viable approach to actually addressing harms while capitalizing on opportunities that impact individuals, communities and our institutions.

Thank you. I look forward to your questions.

The Chair: Thank you, Mr. Ladhani. I think it’s fair to say you’ve set the table very well for the discussion this afternoon.

We are now going to hear from Nicole Jackson. Ms. Jackson, welcome back to the committee, and please proceed whenever you’re ready.

Nicole Jackson, Associate Professor, School for International Studies, Simon Fraser University, as an individual: Thank you, Mr. Chairman. Good afternoon. It’s an honour and privilege to join these discussions.

I’m an academic. I work in international studies and Russian foreign policy. My comments today are based on my recent study of government responses to foreign disinformation in the context of the war in Ukraine. I urge the Canadian government to continue to advance its efforts to achieve a balanced and responsible approach to address foreign disinformation both domestically and abroad. It is particularly urgent that it prepares to confront the challenges that disinformation may pose in future wars.

First, I will outline two main challenges that the government faces. Second, I will outline two important findings of my study. Third, I will provide three implications from the study for responses in future wars.

First, disinformation, yes, is a complex, rapidly evolving, transnational challenge that poses significant dilemmas for Canadian decision makers and Canadians who rely on accurate information. Looking forward, it is alarming to imagine the consequences of AI’s automation of persuasion that could be wielded by well-resourced state and private actors, both in peacetime and in war.

Second, the government needs societal help to address this issue. The challenge is to clarify the role of government, as well as civilian and private actors, as they address together the causes, demands, processes and consequences of disinformation. There are many unknowns, and it is imperative to continually reassess the advantages, limits and possible unintended consequences of a wide array of options that exist and that can be used to respond. Impacts on freedom of speech and privacy have to be considered, and who is responsible and why have to be clarified.

The results of my study show that, first, during the current war in Ukraine, there has been a dramatic change in how information is produced, manipulated and distributed. This has both amplified and extended the reach of mis- and disinformation, often in real time. The war must be viewed within larger races to control the content and flow of information. In the future, narratives and claims will likely have even more power to influence domestic and external support for wars, to buttress or bring down morale and to deceive on the battlefield.

Second, the study outlines Russia’s attempts to control the content and flow of information on three major fronts: on the ground in Ukraine, more broadly in the larger political war and domestically within Russia.

Third, the study shows that while the Canadian government perceives that Russian disinformation poses an urgent range of threats at multiple levels, it has responded with a scattering of different kinds of actions. These have been aimed, first, at Ukraine, to strengthen its civilian and military resilience; and, second, at Canada, to strengthen its societal, institutional and technical resilience and to impose new costs and punishments on some Russian actors spreading disinformation. These responses rarely have addressed countries outside the West or the Kremlin’s domestic disinformation in Russia, yet both areas are crucial for Russia’s support for its war in Ukraine.

These understandings raise three implications for responses to disinformation in future wars.

First, for maximum effectiveness, foreign mis- and disinformation must be addressed holistically in new national defence and security updates. The dilemmas foreign disinformation pose will become more urgent with growing geopolitical rifts with Russia and other states. I urge the government to consider which kinds of disinformation are of most concern and why and to outline a balanced approach in alignment to the threat landscape. Government rhetoric about threats is exceedingly broad. It is not possible or desirable to respond to everything, so the government can better clarify its role and what options it will take as a responsible actor. However, at the same time, the government and Canadians need to remain flexible and able to function with uncertainty in the face of evolving challenges.

Second, Canada would be well advised to build and act and continue to act within and alongside international coalitions as well as at the provincial and local levels. Foreign disinformation poses a transnational set of challenges that demands holistic responses within these larger international coalitions and alliances and domestically with provincial governments, civil society and private actors across all of Canada. Internationally, Canada should develop coalitions that include states and civil society outside the West, including — being idealistic a bit — Russia’s civil society and diaspora abroad.

Third, an interdisciplinary national conversation is needed, I believe, to address issues posed by disinformation alongside that of other foreign interference — for example, an interdisciplinary whole-of-society hub for discussion, research and policy, including on the effectiveness and unintended consequences of different options. Disinformation is context-specific, and it will help to continue to differentiate between different actors, different types of disinformation and underlying power dynamics. Transparent frameworks are needed, for example, when addressing whom and what to ban or sanction, and who gets to claim truth and declare when a claim is simply a controversial opinion and not a dangerous one. These will be very difficult conversations, but they are necessary to develop the trust between society and government and a culture of national security resilience.

To conclude, it is imperative for the government, alongside domestic and international actors, to articulate a balanced, responsible and transparent approach to address foreign disinformation. Second, it could profitably consider what is coming next to prepare for the dangers that it will likely pose in future wars.

Thank you very much.

The Chair: Thank you, Ms. Jackson.

We now go to questions from our members. Before proceeding to questions, though, I’d like to ask participants in the room to please refrain from leaning in too closely to the microphone or to remove your earpiece when doing so. This will avoid any sound feedback that could negatively impact committee staff in the room.

Mr. Ladhani and Ms. Jackson are here with us for one hour. In order to ensure that each member has time to participate, I will limit each question, including the answer, to four minutes. Please keep your questions succinct and identify the person you are addressing the question to.

I offer the first question, as is our normal course, to our deputy chair, Senator Dagenais.

[Translation]

Senator Dagenais: My first question is for you, Ms. Jackson. I saw that your research interests include paramilitary groups in Russia. I would like to hear what you know about the Wagner Group.

We know they are mercenaries whose recruitment efforts exploded last year and who were deployed to fight in Ukraine. Who are they really? Should they be considered criminals on Vladimir Putin’s payroll? Who controls them? Are they paid by Russia? Above all, does this organization have any ramifications in Canada or the United States?

Ms. Jackson: Thank you very much, Senator Dagenais.

[English]

I understand you’d like me to speak about the Wagner Group that has been widely discussed in the media in connection to its involvement, for example, in the current war in Ukraine. I’m guessing you’re interested in how much agency it has and how much influence, maybe, President Putin has over this particular group of mercenaries that has worked in the Middle East and other places such as in Africa as well as in Ukraine.

I’m not an expert in this area; that is not my focus. What I can say is that, based on everything that I have read — and this is secondary information — it has, obviously, played a significant role, and there is increasing, as has been widely reported, friction between the government and the leadership, as such, of the group, and, from what I understand, decreasing morale of the troops that work both within this group, the mercenaries, and, more broadly, the Russian troops. I really couldn’t comment further on a special relationship between Putin and his control over this group other than it seems like Putin has a pretty strong hold over these groups at this moment.

[Translation]

Senator Dagenais: Turning back to the war in Ukraine, Canada has made some major commitments, both militarily and economically, to support Ukraine and condemn the regime of Vladimir Putin. Considering all of Canada’s efforts, after one year of war, can we say today that Canada will get anything out of this? Should it not instead try to rebuild relations with Russia, given the importance of the Arctic passage, among other issues?

[English]

Ms. Jackson: That’s an excellent question. I think you’re asking if Canada has done the right thing and is it going to see back on its investment in this war.

I think the Canadian government and, obviously, the majority of Canadian people have decided to support Ukraine, and they’re following Ukraine’s desire to fight and to continue the fight. Ukraine has not yet articulated at what point they will be ready for negotiations. Canada has decided to support Ukraine in all of these multifaceted different ways.

My own fear from the beginning, and it’s been made public because I’ve written about this in newspaper articles from the beginning, was simply a warning. I know a lot of people know this, but it was not going to be an easy war. In terms of the Russian state perception, it is existential for them to continue to hold on to this particular area of Ukraine — at a minimum, Crimea and some territories around Crimea — and it would be extraordinarily difficult to dislodge.

I don’t have a crystal ball into the future. Things can change rapidly in times of war. However, if I had to guess, I don’t think this is going to end soon. It could be a very long time.

Senator Oh: Thank you, witnesses, for being here.

What countries are the most significant sources of disinformation in Canada? How is this disinformation affecting our national security and government? Maybe I can hear Mr. Ladhani first.

Mr. Ladhani: Thank you for the question.

To enumerate the total volume of disinformation from any one country as compared to another is challenging for a couple of reasons. You may have direct disinformation from a particular state, and then you’ll have disinformation that comes through proxies, and then you’ll have disinformation that is multiplied by everyday citizens. If we’re looking at the total effect on any one state actor’s active disinformation in our country, we would have to find a way of enumerating each of those three areas to be able to produce a composite picture, and that’s pretty tough.

I think it’s fair to say that we’ve seen and observed — and your next panel is going to talk about this — the scale of Russian proxy and Russian-supported mis- and disinformation that has been prevalent in Canada, both in the lead-up to and subsequently upon the start of hostilities in this country. It’s fairly significant. They are not alone. There are many other actors, both state and non-state, that are driving that content into the country.

Regarding the second comment, which is how does it actually affect our national security, there are two areas where there are some real significant consequences. One is the destabilizing effect on social cohesion that that content has in society. Again, measuring that is pretty difficult because the actual effect of disinformation on any one individual’s choice is pretty tough to estimate. That said, through the work we’re doing, we do know what happens when you provide good, high-quality information to people. It improves the choices that they might make about the policies they support, their considerations of risk, and how and where they might find alignment to government content or information. So we see the very positive effect. If we look at the other side of that coin, we can consider what the negative implications are of that disinformation in society. While we can’t quantify its specific effect, we can see the inverse through our studies clearly demonstrating the positive effect of providing high-quality information.

The second part, coming back to social cohesion, is that when we have fears and grievances in society, that polarization that exists can be deepened and exacerbated as a result of the disinformation. That leads people to considerations about who they trust, how they work together, what problems they choose to solve and who it is that they believe when they’re sharing information. All of that, collectively, has a significant impact on our democracy and subsequently on our national security.

Senator Oh: Many people go to YouTube for a lot of information. Do you think that a lot of the information on YouTube is unreliable? My wife likes to forward YouTube videos from one person to another. I told her, “Stop forwarding it. That is not correct information.” You have to verify a lot of the information on YouTube and the internet.

Mr. Ladhani: At the start of the pandemic, I used to sit on chat calls with my mom and my family. They would share information about the solutions or the problems associated with COVID-19. These were in the early days. They would say, “You get it through 5G; Bill Gates is responsible.” You’ve heard stories about the impacts of this. These aren’t people in black coats who are trying to destabilize society. They’re just people sharing information that they think is accurate. While we are in the midst of looking at disinformation intended to produce a specific effect, we have to be conscious of the distribution of content that might not be disinformation but could be provoking destabilizing effects as well and raise the bar for literacy and information that people consider to be high quality.

Senator Kutcher: Thank you to the witnesses.

I’ll focus on disinformation rather than misinformation and the spreading of it. One of the elephants in the room — and I’ll call it out here — is that disinformation is being used to achieve political ends by internal political actors. One only has to look at the GOP in the United States. Sometimes it’s aided and abetted by external state malignant actors who wish to destabilize, such as the stolen election and anti-vax messaging. These are designed to decrease trust in both the current government, whatever the government of the day is, and to decrease trust in democratic institutions and democratic processes — that’s the purpose of it — and thereby internal actors. What are your thoughts on the solution when it’s the role of the government that is actually the target of the disinformation in a polarized political environment which is becoming more polarized and more political?

Mr. Ladhani: Thank you for the question.

Let’s step back for one minute and come to the answer. Let’s think about what has to happen in a very short period of time when you have a piece of content, and let’s call it disinformation. I said in my opening statement that the half-life of a piece of social content is somewhere between 24 and 80 minutes. Lots of studies demonstrate the total time it takes for a piece of content to get to about half as effective as it’s going to be. That is only 24 minutes to address this problem — 24 minutes. You need to address a risk problem, a compliance problem, a financial problem, a legal problem, a communication problem, a policy problem, an operational problem and a community engagement problem. Even if you somehow manage to accomplish all that in 24 minutes, half the impact has already been made. When we see domestic actors proliferate disinformation in society, that line between foreign actor and domestic actor is ultimately eviscerated. It’s the same ecosystem. One can simply reinforce the other. As a result, the amount of activity that needs to happen to stop that activity, namely, the content from being highly prolific and in the inboxes and in the feeds of many people, is difficult. It’s why I don’t think you can solve this with a supply side problem.

When it comes to governance, government and their role, Ms. Jackson laid this out, and I agree. We need a better way of actually bringing government into a conversation with platforms and civil society in an active process of reviewing disinformation as it emerges, debunking it from the level of civil society, platforms acting to reduce its speed and velocity, and government being able to communicate effectively on the merits of the accurate information it wishes to disseminate. It can’t own that conversation. However, it can be a party to that conversation. By being a party to the conversation with other actors, I think there’s a possibility of improving trust.

Senator Kutcher: I’d like to hear from Professor Jackson as well.

Ms. Jackson: You’re asking, I believe, how we can respond to all of this type of disinformation that’s out there coming from many different actors.

Senator Kutcher: The role of government. If government is itself the target of disinformation, how does government do it without compounding the problem?

Ms. Jackson: Again, I’m not an expert in this. I can tell you what the government is trying to do in Ukraine, for example. Russia isn’t the only actor that is spreading all types of different information — narratives, outright lies, distortions, et cetera. The way I see it, there are different types of disinformation coming from different actors. Somebody within government, with private actors and civil actors, will have to decide which ones they are going to respond to. In time of war, however, these types of narratives, confusing people and people not knowing what is accurate information, can actually do direct harm to people. Ukrainians need to know where Russians are going to attack next. All this misleading and false information and false images and so on are a direct threat to our ally that we are trying to help — who is being attacked and suffering atrocities from Russia. What I have done is tried to see what we can do to start thinking through the next types of conflicts where there will be even greater tools to mislead and confuse people. I agree that they can come from all sorts of different types of actors, including private actors that can monetize this and further complicate what’s already a very complex issue.

We see a whole variety of tools — at least I do — that I’m not expert at but that we can use to confront this issue. As you’ve just heard, one of them is education. If you want the Canadian public to be ready to understand nonsense and dangerous narratives that come from Russia and other actors and affiliated actors, and which are repeated by everybody, then you need to have good education. You need to actually understand something about the history, culture and so on and Russia’s previous involvement in these countries. The way I see it, it goes from ideas of digital literacy, media literacy and all sorts of other tools in the tool box, right through institutional resilience and different tools of technical resilience, all the way out to more covert action in the offensive cyber realms.

I completely agree that it depends on where we’re talking along the kill chain — as military people talk about it — where we want to act and when we want to act. That would be my short —

The Chair: I’m sorry to interrupt. We must move on, but I’m sure we will come back to this area of discussion.

[Translation]

Senator Boisvenu: Mr. Ladhani, are you comfortable talking about Chinese disinformation?

[English]

Mr. Ladhani: We have not done considerable work on Chinese disinformation in Canada, so I couldn’t speak authoritatively to it. I think it’s worthy of study, along with others that are engaging in disinformation in this country. We’ve looked at this challenge globally in other parts of the world and certainly are able to observe some trends and patterns around the way in which China, as well as other actors, are leveraging these online tools, both for the purposes —

[Translation]

Senator Boisvenu: I have a specific question. I understand that you are not comfortable answering the question.

Ms. Jackson, are you comfortable discussing the topic?

[English]

Ms. Jackson: No, I’m not. I believe more needs to be looked into and transparently revealed from many government actors if they have information about this. As well, more studies need to be conducted about this country and other actors.

[Translation]

Senator Boisvenu: I wanted to ask a question, but I sense that the two witnesses are not comfortable answering it.

I would like to thank the witnesses for being here. They do not appear to be comfortable with the topic of Chinese disinformation in Canada.

[English]

Senator Cardozo: My question is both to Professor Jackson and Mr. Ladhani. If we can just take a couple of steps back, can you give me your thoughts about how we define the difference between misinformation and disinformation and what their objectives are? Perhaps you could answer in terms of the specific relationship to polarization in society. Maybe I’ll start with you, Professor Jackson.

Ms. Jackson: First of all, in terms of public conversation, I think these terms are often bandied around in quite a fuzzy, ill‑defined way, which just increases confusion and everybody’s mistrust about various actors talking about it, so I think it’s a great question.

There are different definitions out there, but academics tend to look at disinformation as having an intent — some kind of strategy behind it and some kind of clear intention — to do some kind of harm, with some people also looking at more intent to cause violence. That definition might work if we’re talking, again, about the war in Ukraine. Obviously, there’s an intent not just to confuse or persuade people in different kinds of ways but to actually do outright harm. They’re trying to win the war, of course. They’re trying to get support and morale. But on the battleground, they’re trying to win, which involves direct harm. There’s real direct harm there.

This happens if Russia, for example, sends out some kind of tweet or something that says they have not done something or they are not going to attack this place, and then they attack somewhere and kill a lot of people, and then they make or don’t make false images of what’s going on. There’s obviously clear intent to win and to cause harm, which is then recycled and picked up extraordinarily quickly by all sorts of platforms and other different ways of disseminating knowledge. It’s not just about digital. It can happen all different ways — through embassies, think tanks and traditional newspapers, et cetera. People see an interesting point of view, and they send it on, and that causes people to either be completely confused or to think that something is happening that isn’t and so on. That’s the way I see the different definitions.

Senator Cardozo: Mr. Ladhani?

Mr. Ladhani: The characterization of intent as being a principal part of disinformation is a good working definition. I think the challenge, of course, is attribution. It can be really challenging at times to attribute a piece of content back to a particular actor that might have an intent to provoke a particular outcome. How do you determine whether or not it’s coming through an intermediary or a proxy and whether that intermediary’s intent was to cause harm? So while I think intent is a critical feature of the definition of disinformation, one of the challenges we have is being able to define this problem clearly so we can design a very specific policy approach to simply tackle it. It’s why I think this is a complex, wicked problem. However, I think this combination of intent and then ultimately of being able to validate that intent with some attribution becomes part of the design of the definition of disinformation.

Senator Cardozo: Is misinformation just something that is more benign, like they just got some of the figures wrong by mistake?

Mr. Ladhani: I wouldn’t characterize it necessarily as benign. It can actually be really dangerous. It also covers a much wider array of content that could be factually inaccurate and misleading. There may or may not be intent, but it is not purposely designed to provoke a particular action. It contains inaccuracies that may, in fact, produce that outcome, but it may not have been purposely intended to produce it. So it covers a much larger grey area, I think, of content.

Senator Boehm: Thank you to our witnesses for being here.

Mr. Ladhani and Professor Jackson, you coincided on the issue of having an agile policy approach or a single policy conversation that would bring together government, civil society, private-sector companies and maybe other actors as well. The question is how to do that and how to do it effectively. Certainly on the international side, there’s been some success in the G7, as I recall, with the establishment of a policy mechanism on threats to democracy, for example, which is related to this when you have a situation where the G7 had worked as an incubator to bring things to the G20. This is not something you could readily bring to the G20, considering its composition with at least two of the large maligned actors present there. You would not necessarily get big tech wanting to come to the table either unless the conditions were very right. We’ve seen that in terms of getting them to provide witnesses to committees, including in the United States and also here.

I’m wondering if you have any prescriptions as to how this could take place. One always talks about convening power in either a positive or negative way, but what is the catalyst for this? Mr. Ladhani, maybe you can start.

Mr. Ladhani: Thank you, sir.

I actually think this challenge of foreign disinformation could be a really good starting point. It strikes a nerve, it creates urgency and it might create the conversation. This conversation is an example of that. It could be a useful case where you could start to design a policy approach that actually brings those actors together.

There’s a good example of that in Taiwan. In the paper that my colleagues will talk about in the next panel, one of the recommendations is to look at that model. In that model, you have civil society monitoring and exposing for information manipulation and interference threats. Social media and government representatives are alerted when the threats take place, and they can actually act accordingly. Social media platforms that are operating in Taiwan might choose to de‑throttle some of the identified narratives, and when required, government can produce responses to directly address in what they call “prebunking” narratives within an hour of being alerted. That’s when it’s working very well. My view on that is that we need to start taking small steps in order to design an approach that might actually be agile enough to deal with the looming challenge. Taiwan is a good example of a model that brings all three parties together. They’re able to identify the content using civil society, researchers and people who are actively observing; they have relationships with social media platforms whose policies allow them to reduce the velocity of that content; and government can be better positioned to come back with the truth as quickly as possible.

That accurate role for government there isn’t to decide what might or might not be a piece of disinformation at any given moment, but civil society can alert the cluster of people to begin the process. Government can choose to act or not act. If we do that for some time, we’ll identify areas that help to determine where communities feel vulnerable to disinformation, coming at it with a community-first lens to identify the boundaries of that content, identifying where social media companies are willing to act and where they won’t, and seeing where government is choosing to proactively message or not message. Out of that, you might be able to develop a more robust policy framework that starts to put rules into place where you see the harms of inaction actually growing and manifesting. I hope it’s a more involved approach.

Senator Boehm: Thank you. Is there a moment for Professor Jackson to answer this, or are we out of time? Perhaps we can pick it up later.

The Chair: There are 20 seconds. Go ahead.

Ms. Jackson: I completely agree with almost everything that was just said. We can’t have government policing speech or having civil societies think that, or all the problems of trust will actually increase and spiral. That returns to my point about needing a conversation. There are conversations happening all over Canada about this. It’s an interdisciplinary problem. There could be a way of getting different groups and representatives together to move forward on what, in some sense, is a classic global governance problem with different actors.

Senator M. Deacon: Thank you both for being here.

I think I’ll direct my first question to Mr. Ladhani. I’ll just jump on something you said earlier. Hopefully, we are really beyond the point of asking if we have a problem. As senators, we’re constantly asked what we are doing about it, so at least we’ve moved to the awareness part of this.

There was an article in The Economist recently about some online games. I think we need to keep our mindsets open, think outside of the box and try to turn every rock on this. With that in mind, there are some online games that can be used as educational tools with regard to misinformation and disinformation. One game came from a PhD student in Concordia in Montreal called Lizard and Lies. It has received funding from Canadian Heritage. Obviously, there is interest in what it means and what the possibility is. More broadly, could such outside-the-box tools like that become effective, more effective or part of the response in getting Canadians to think critically about what they read online? If so, how can we get individuals to use them in the first place?

Mr. Ladhani: Thank you for the question.

My organization develops and leverages games on a regular basis to drive better critical thinking around mis- and disinformation, both in Canada and abroad. I’m a big believer in the approach. There are some useful characteristics that games create for us to reconsider our pre-existing views and opinions about things that may be inaccurate or false. We’ve seen significant evidence through repeated experimentation and trials of this work of the effect of increasing knowledge from individuals playing our games, meaning they’re more accurately able to identify accurate information, and we see greater alignment between people from different perspectives and from more diverse backgrounds, whether political or demographic, agreeing on a basic set of facts.

Through what are called randomized controlled trials — the idea you can test a particular type of approach against an active control; think regular content on a web page or a blog against content in a game — we see significant improvements in people’s intention to improve their preventative behaviours when it comes to positive health outcomes, including vaccination, and better risk assessments as to whether they would be willing to accept rolling closures of large-scale event venues in the context of the pandemic.

Senator M. Deacon: I’d like to stop you there because I’m running out of time and I want to jump on what you’ve said and look at the demographics. This is for both of you. Thank you for that answer.

Again, we’re all trying to figure this out, but another aspect I’m reading that’s really surprising — and maybe it shouldn’t be — was with respect to something in Current Directions in Psychological Science, which found that adults over the age of 65 are more than seven times likely than younger adults to engage, take and believe fake news. My particular concern about that is that the older generation is more likely to vote, and we need to ensure that they have correct information. With that realm and some of the solutions you’re looking at, how do we work with that older demographic with warnings and educational tools that they may have left in their years?

Mr. Ladhani: I’ll be very quick.

We actually see pretty significant uptake of our products among people across the demographic spectrum, including those 65 and up, males and females, across the country. The idea that games are limited to people who are younger is actually inaccurate. We see a significant take-up. The design of them has to consider what is going to be interesting for people in that age category and where they’re from. That is why meeting communities where they are is so important, because if you can design them for those communities, you can get a pretty significant effect in terms of take-up.

The Chair: Do you want to add anything very quickly, Professor Jackson? Thank you. We’ll come back to you.

Senator Richards: Thank you both of you for being here.

In The Master and Margarita, Mikhail Bulgakov’s brilliant anti-Soviet novel, people live in a world of rumour and terror because of state-sponsored disinformation. Now, with the advent of artificial intelligence, this is a problem moving at breakneck speed. Sooner or later, if it’s not quelled, as Bulgakov’s novel states, we will enter a world of fear and frenzy.

Another worry is this: Who decides among us and among our champions of security what is actually disinformation and what is not? How can we be positive that this won’t work against us as much as for us?

Mr. Ladhani: I don’t know that we live in a world where a single person, individual or institution can be the arbiter of truth anymore for communities. Communities are going to decide for themselves what they believe to be accurate or inaccurate on the basis of a number of inputs. I believe “because the government says so” is no longer good enough for those communities to trust the veracity of content. Often, they won’t because it’s the government.

That’s why it’s so critical that we find approaches to working among each element of society to be able to create the conditions for communities to feel like the content is theirs and to feel they’re able to challenge the ideas the state might be presenting, with others, including people they trust, in order to arrive at conclusions as to what they believe to be true or not true on any given issue. A preponderance of the state simply broadcasting content, as if that’s going to change the dynamic, I would argue is making it worse, and we are seeing the blowback effects as a result.

Senator Richards: I agree with you.

Ms. Jackson, could you comment on that, please?

Ms. Jackson: In some sense, I would commend the government for starting over the past decade — I’ve written about this — to raise awareness about multifaceted challenges that are coming from a whole range of mis- and disinformation. In the Ukraine war again, declassifying intelligence at the right time has helped. We need to keep doing this type of thing when possible to keep society’s trust.

I personally think that we need to be very careful when we start talking about bans. [Technical difficulties] if you want. Banning RTV or sanctioning. The government has put targeted sanctions on a significant number of individuals and media entities from Russia. The problem with that isn’t necessarily doing that, and maybe more of it should be done, but it needs to be how that happens, why those particular actors and why at those times needs to be very carefully explained to the Canadian public, because otherwise, I agree, it will continue to increase mistrust of those people who are already skeptical about governments for a whole variety of reasons. Hence, we need to be transparent and really careful as we think through what some of the drawbacks could be. One of the big worries about mis- and disinformation is about this increasing level of distrust in elites and scientific information and governments, et cetera. We don’t want to feed into that by the government doing something that members of society, some of them, just won’t react well to and won’t understand.

Senator Richards: Thank you.

The Chair: Thank you very much to both of you.

Senator Anderson: Thank you to the witnesses.

My question is for you, Mr. Ladhani. You spoke about the need for an informed approach to address mis- and disinformation and to provide high-quality information.

There is an assumption in Canada that there’s equity and equality across Canada. There is not. There is an increasing global interest in the Arctic. Within the Northwest Territories and Nunavut, there is a reliance on more traditional methods of news, including radio and traditional Northern-based newspapers, as well as a diversity of languages. In the Northwest Territories, there are 11 official languages, and 9 of those are Indigenous languages. The cost of access to internet is also prohibitive and in some areas absolutely unreliable. How are the needs of the Arctic and the three territories being addressed when it comes to disinformation or misinformation in Canada?

Mr. Ladhani: I wish I could clearly articulate how that’s being done. The fact that I can’t says something about the level of work that’s being done, not because of the absence of intent. My suspicion is they’re probably not as well served as they otherwise ought to be. That’s true for many communities around the country. That’s true when it comes to languages. I suspect that’s true when it comes to access to available online tools. I think that it’s certainly true when it comes to the cultural considerations of how content ought to be both designed and created to have the appropriate impact amongst those communities. It’s a challenge. I think it’s a challenge for all of us, but the fact is that it’s a challenge that we’ve got the tools to be able to address if we have both the intent and the desire to do so.

Senator Anderson: Thank you for your information.

You also spoke about the need to engage governments. In the Northwest Territories, 50% of the population are Indigenous, as well as there being land claim agreements, self-governing agreements and modern treaties. Also, in Canada, there are 25 modern treaty holders that cover 40% of Canada. Is there any engagement with these Indigenous governments or organizations that you are aware of?

Mr. Ladhani: I can give you a quick example from our work on vaccine mis- and disinformation. We launched a program. Over 100,000 Canadians have engaged. What we learned out of that is that both the approach and the content needed to be much more specifically designed to engage Indigenous communities. We learned that from listening sessions and from engaging with the communities to provide us feedback on the approach. That ultimately was realized in a dedicated approach to engage Indigenous communities to address mis- and disinformation in a very specific, bespoke platform that was co-designed with them. We found the efficacy of that significantly greater than that which may otherwise be available to other Canadians. It’s exactly that kind of tailored approach I think is necessary on these issues. We’ve seen evidence that it can be done, it can be done well and have a positive impact, but it has to be purposeful and designed from the get-go.

Senator Anderson: Thank you.

Senator Dasko: My question is to both of you. First of all, I’d like you to think about examples of disinformation that you have seen that have been especially impactful. Mr. Ladhani, in your work in the health area, and Professor Jackson, in your work on Russia, can you give a couple of examples of what you believe to be the most impactful incidents of disinformation, and what was the impact of that disinformation? I’d like to hear that.

Second, as a general question to both of you, how sophisticated are the purveyors of disinformation in understanding their target audiences? Do they just put it out there or, in fact, are they really sophisticated? When I was in the polling business working on social marketing campaigns, we had to know what the target audience was, what the messages were for them and how to get it out. I wonder about these people who are creating the disinformation. They have proxies, of course. How sophisticated are they? Are they just putting it out to the Canadian public and hoping it sticks, or are they sophisticated and putting it towards the 25%? That’s a different problem if it lands on the 25% as opposed to disinformation affecting other segments of society and thereby having a different kind of impact. Those are a couple of questions that I will ask both of you.

Ms. Jackson: In terms of the question about how sophisticated disinformation is coming from Russia, my personal understanding, again, is there is all types of information coming out from Russia.

If we look at the war in Ukraine, a lot of this is about trying to gain support for their war from certain segments and regions of the world. They’re trying to get their support. I don’t have a secret line into intentions, but it seems pretty obvious that they are trying to convince Canadians that it’s not going to be worth their while to continue to send equipment and so on to Ukraine and that they should stop. Some of it is simply normal state persuasion in that sense, trying to say, “Hey, this is why we have these objectives and interests, and we want you to understand.”

Then there seem to be a whole variety of actors affiliated with the government allegedly, some of them that are inspired by the government, some of them that are the direct links to different types of security and military groups within the government, that have tried in a more sophisticated way to go after particular groups. This isn’t my specialty so this would be secondary work, and I think you will hear people talking about it in your next committee round where they’ve found that this has happened in a systematic way.

For me, the real worry is that this is going to become more sophisticated over time and maybe not dropped off in such a crude way as it has in the war in Ukraine and that it’s going to become more sophisticated and more of a threat in the future. That’s what I see is going to happen with states and other private actors with a lot more money and more sophistication with the new technological advances and so on.

You will hear later in detail about different examples that Russia has been involved with. In terms of the war, a lot of it has been about disparaging, for example, Chrystia Freeland’s position as a Canadian-Ukrainian or attacking Canadian troops and what they are doing in Ukraine alongside NATO allies and so on.

Other specific ones that have caused the most harm are basically when they say, for example, “We have not committed this atrocity. We are not going to go here; we are going to send our troops there. We are not even going to invade.” Then they invade, and we are not ready for the invasion. That seems like a pretty dramatic example.

As for effectiveness, I’ll reiterate that it’s very difficult to measure this in terms of Canadian support for war. A lot of people will say that they are still continuing to support the war in many ways, and they are supporting Ukraine. They’ve held on so far. Other people will say that we have not given as much support as they would like. It can’t have been as effective, this kind of disinformation, as Putin would have liked it to have been, let’s put it that way. I’ll leave it at that.

Mr. Ladhani: On sophistication, I’m worried it doesn’t have to be. It’s cheap to produce. It has low manufacturing costs. It’s cheap to test. It’s inexpensive to distribute. It’s highly disruptive if it works, and you can now produce it at mass scale with a click of a couple of buttons. Just like every marketing and advertising campaign that marketers have used to sell things from toilet paper to cars, you can just test and learn with very low input cost and very low risk. As a consequence, you can double down and double click on the ones that are working and reinforce those with more resources. So the cost is fairly low, and the risk of not needing to be sophisticated has diminished rapidly, particularly with new tools.

When it comes to the actual content itself on health, I would say that one of the areas where we saw significant impact was on information that was targeted to parents around the safety of vaccines. There will be fear and real concerns about whether or not vaccines are actually going to be harmful or impactful to pregnant women. Those are areas where fears and grievances can be preyed upon very easily. As a result, it’s those areas where we see the greatest effect of misinformation as well as disinformation. It’s that relationship to fears and grievances that exist in society, those fissures and cleavages that are really ripe for manipulation. It’s the areas that should be the locus of our work as a result.

The Chair: On behalf of my colleagues, I want to thank you, Mr. Ladhani and Professor Jackson, for terrific responses to some very tough and insightful questions from my colleagues in this room. We are very grateful for your contributions and your expertise. We wish you all the best and hope that you will continue in these endeavours, because they’re hugely important for us in this room and, indeed, for people across the country. Thank you very much.

We now move to our second panel for today. We are pleased to welcome Marcus Kolga, Director, DisinfoWatch, and Senior Fellow with the Macdonald-Laurier Institute; Brian McQuinn, Assistant Professor and Co-Director, Centre for Artificial Intelligence, Data, and Conflict at the University of Regina; and Cody Buntain, Assistant Professor, College of Information Studies at the University of Maryland.

Thank you for joining us today. We will begin by inviting you to provide your opening remarks, to be followed by questions from our members. I remind you that you each have five minutes for opening statements. I believe we’re starting with Mr. Kolga. The floor is yours.

Marcus Kolga, Director, DisinfoWatch, and Senior Fellow, Macdonald-Laurier Institute, as an individual: Thank you, Mr. Chairman and members of the committee, for inviting me to testify here with you today.

I’ve been monitoring and exposing Russian information and influence operations for the past 15 years. Over that time, we’ve witnessed their tactics and narratives evolve. However, their objectives have remained consistent, even since the end of the Second World War, that is, to distract, distort and divide democratic societies.

The Kremlin’s primary objective today is to erode Western support for Ukraine, including right here in Canada. While Canadians have demonstrated strong resilience against recent Russian information operations, cracks are appearing. Kremlin propagandists are exploiting existing political divisions by tailoring anti-Ukrainian narratives that connect with the far left and the far right. Among them are narratives that seek to distract us by shifting blame for the war onto NATO. The Kremlin aims to erode Canadian public support for sending weapons and aid to Ukraine by advancing narratives that manipulate Ukraine’s past problems with corruption and casting false doubts about the government’s trustworthiness. The Zelenskyy government is regularly accused of neo-Nazism by the Kremlin, which uses this narrative to justify the war and to attack the credibility of Ukraine’s government. That neo-Nazi narrative is also part of a broader, ongoing campaign that’s been deployed by Moscow since the Cold War to discredit regime critics.

Parenthetically, the Ukrainian and Central and Eastern European communities in Canada have long been targets of this hate-based disinformation campaign, and this has recently led to a rise in incidents of violence towards them. Over the past 15 months, cars and homes belonging to Ukrainian Canadians have been vandalized and community members have been threatened and intimidated. Last year, a letter was sent to the Estonian Honorary Consul in Toronto threatening to spread anthrax in the community if the Estonian government continued to support Ukraine.

When the Kremlin’s anti-Ukrainian and other narratives are adopted by the far left and far right, they’re scrubbed of their Kremlin origins and, once amplified by them, they’re exposed to audiences that can exceed hundreds of thousands and even millions of viewers. On the far-right, former FOX News host Tucker Carlson frequently advocated for positions that aligned with the Kremlin to his 3 million nightly viewers. Carlson was so effective that his recent firing was lamented by ultranationalist Kremlin propagandists like Vladimir Solovyev, who actually offered Tucker Carlson a job on Russian state media. Carlson has also hosted far-left Kremlin aligned activists on his show, like Aaron Maté, a Canadian contributor to a far-left media platform called The Grayzone. The Grayzone contributors regularly appear on Russian and Chinese state media and speak in support of the Assad regime in Syria and Venezuela’s authoritarian leader Nicolás Maduro.

What appears to be emerging is an alignment of the far right and far left based on their common anti-Ukrainian, anti-NATO, anti-establishment and anti-democratic views. There’s evidence that some of this is being coordinated by the Kremlin. British journalist Catherine Belton recently uncovered a Kremlin operation to create a German anti-Ukrainian coalition between the far-right AfD and the far-left Die Linke. Members of both parties recently participated in an “anti-war rally” in Berlin where protesters demanded an end to EU and German support for Ukraine. According to leaked Kremlin documents, this alliance of the German far left and far right was an explicit goal proposed inside the Kremlin last summer. The Kremlin also ordered the development of a campaign to dampen support for Ukraine through anti-war narratives. The documents also detail meetings between Russian officials and members of both those German extremist parties.

There’s evidence that Russian state media and diplomats in Canada have also been coordinating efforts to inject pro-Kremlin narratives into our information environment. Canadian far-right and far-left activists have frequently appeared on Russian state media channel RT over the past five years. A recent report by journalist Justin Ling outlined efforts by Russian diplomats to promote disinformation about Canadian elected officials to Canadian journalists and columnists. Last month, a Canadian far-left activist boasted on social media about meeting with Russian foreign ministry officials in Moscow.

Our recent study, The Enemy of My Enemy, examines these Russian narratives and how they’re amplified by the far left and far right. I’ll let my colleagues take you through those findings.

I look forward to your questions. Thank you.

The Chair: Thank you, Mr. Kolga. We will now hear from Mr. McQuinn and Mr. Buntain, together or in whatever order you wish. Over to you both.

Brian McQuinn, Assistant Professor, Co-Director, Centre for Artificial Intelligence, Data, and Conflict, University of Regina, as an individual: Thank you very much, Mr. Chair and honourable senators.

To begin, one of the things we want to emphasize is that part of the research we are going to communicate with you was funded by the Canadian government. When we talk about what can be done — and this is a partnership with the previous speakers as well — I think there are things being done that are quite important for us. A bit of what I’m going to talk about today is the challenge we see, much of which has been touched on so far in this session, why we go about tackling it the way we do and some of the findings of the report that we were just speaking about.

To begin with — and one of the previous senators touched on this before — it’s always important to realize that polarization isn’t some natural outcome of society. It is actually something that is the goal of many foreign influence operations, so it’s always something that needs to be emphasized and remembered. It is not just a sort of natural byproduct. There obviously are some elephants in the room that were touched on. I think we can get into why that benefits certain people more than others, but it’s important that that always be front and centre in how the social media ecosystem can actually enable that and sometimes incentivize that.

It’s also important to emphasize that we cannot expect social media companies to help much in this. I use Facebook as an example. We are partially funded by Facebook so it’s important to say that, but 87% of Facebook’s moderation is dedicated only to the U.S. That represents 10% of the actual traffic, which means 90% of the world, which includes Canada, receives a fraction. The idea that social media companies will do a great deal when it comes to Canada is an expectation that I think is ill‑founded. I think this is where we, as society and as leaders, really need to take ownership over that problem.

I think everyone is in agreement, from what I’ve heard so far, that this is an unprecedented threat. We believe, therefore, that we need unprecedented collaborations. To some extent, the individual speakers today, including previous speakers, are part of this broader team that we’ve brought together, partly funded by Canadian government. It’s always dangerous to ask researchers what you need more of, because invariably it’s more research, but at the same time, I think it cuts across multiple organizations with multiple different approaches.

For us, the key aspect that is often lost in this is that we have to track the networks that are actually doing the disinformation. A lot of the processes and research that is done often looks at the outcome of those, namely, what are the narratives and the hashtags. We actually believe you need to track the actual networks in real time because it will allow you in theory to be able to respond almost instantaneously to what’s emerging from those networks instead of always being reactive and, in many ways, behind the response because these timelines move so quickly.

In order to be able to do this in unprecedented ways, we need conflict experts, like Marcus, who are able to bring the nuance that is necessary, because this can’t be done by AI. Even though we have AI specialists — and that’s part of the expertise that our centre brings — that is not enough. You need conflict experts. We also need people like Cody who are the crisis informatics who understand the networks and the social media platforms. We also need the AI experts, but a specific kind, namely, human-in-the-loop. These processes are moving so quickly that, despite what is often said, AI machines will never be able to respond quickly enough without the human expertise at the core of those systems. The last part is computer vision experts. Because so much is now visual, we also need that expertise that brings everything together.

The purpose of this report was to begin a sense of the relative impact of these networks and these efforts. We often hear that foreign influence operations have minimal impact on Canada. We wanted to test that empirically. What we found speaks to the opposite of that. We wanted to look at the size and reach of the ecosystem engaged. The ecosystem in Canada, which is made up of both the far left and far right, as Marcus was talking about, are some of the most active online communities in Canada. We were able to measure and judge that by looking at two comparable networks. One was the 338 members of Parliament. We looked at that entire ecosystem and considered how active that ecosystem is. We also looked at the 20 most influential accounts on Twitter in all of Canada to be able to measure against that. Against the members of Parliament, the pro-Kremlin networks produced 27 times more content, follow three times more accounts and are followed by as many followers as all the members of Parliament. When we look at the most influential accounts, the Russian program networks produce four and a half times more content, follow twice as many people and are followed by only a quarter as many followers. However, those are the 20 most influential accounts in the entire country. They shouldn’t even be on the same scale, but they are.

We are often asked how we measure influence. It’s something we will be able to look at more closely in the coming years. The fact is the Russians seem to think it has a lot of influence because they keep investing in it. That’s an important point that’s missed. They obviously think it’s having an impact; otherwise, they wouldn’t be doing it at the scale they are.

One of the last points is the role of average Canadians. Eighty-four per cent of the people involved in that 200,000-account network are average Canadians — ones who don’t tweet a lot and who aren’t necessarily incredibly active on social media. That is both an opportunity and a threat. The opportunity is that these are average Canadians so they can be reached and engaged, but that also means that these strategies and influence operations require average Canadians and are amplified by those average Canadians. Therefore, they leverage their engagement. That is, therefore, a challenge because that’s not something that can be controlled in that sense, if we were even to try to go about doing that.

I have two final points. One is that in the three months before the Russian invasion, we saw a fourfold increase in the content related to Russian disinformation directed at Canadians specifically, either specific Canadians or Canadian-related content. It was premeditated before the invasion to shape the ideas and values about what was about to happen. Again, obviously Russia thought that was important or they wouldn’t have done it, but it shows a sophisticated, coordinated effort to be able to ramp that up in advance of the actual invasion. That has continued to be invested, basically increasing 8% every month since.

The Chair: Thank you, Mr. McQuinn. Will you add something, Mr. Buntain?

Cody Buntain, Assistant Professor, College of Information Studies, College Park, University of Maryland, as an individual: Thank you. It’s a pleasure and honour to appear to talk about disinformation and online manipulation, and their implications for national security.

I’ve organized my testimony into two main points: first, what we know about disinformation and technology, especially in Canada; and second, the paths forward for improving these spaces from the perspective of national security.

To begin, issues of information integrity, disinformation and manipulation are socio-technical in nature, often brought about by technological amplification of existing social problems. While online spaces have certainly exposed new vulnerabilities, at the same time they’ve been massively useful for audiences in terms of their value to the public. What we find during crises or moments of unrest and disaster is that the public relies upon those platforms. But at the same time — and this is how I started to work on these issues back in 2013 during the Boston Marathon bombing — people rely upon those platforms but are also much more vulnerable to online manipulation, especially the kinds of manipulation that target or direct blame for these issues onto out-groups, which is anybody who’s not part of your tribe, such as people who aren’t of your political orientation, people from outside your country, those who are not of the same ethnicity, et cetera.

This vulnerability to animosity is particularly concerning for the modern information ecosystem, because we have good evidence that this sort of animosity drives additional engagement on social media. In other words, these kinds of anti-social negative messages benefit from what’s called algorithmic amplification, where an algorithm is actually pushing particular kinds of content to those audiences.

If you couple those results with concerns and research about increases in incidents of anxiety that are self-reported in Ontario, Canada and the United States, we have online populations that are increasingly under stress, increasingly turning to social media and online information spaces for their information and knowing that these spaces are pushing negative and anti-social kinds of content.

It’s in that context that disinformation is particularly effective in exploiting fractious topics like Black Lives Matter, civil rights or COVID lockdowns. Such campaigns are especially adept at exploiting politically opposed sides, because these organizations or groups already have targeted groups they’re familiar with toward which to target their hostilities: the opposing political side.

If we bring this back to Russian-specific disinformation in Canada around Ukraine, it’s unsurprising that we find disinformation efforts that target both sides here.

When we started with an initial set of Russian-aligned accounts that were pushing pro-Kremlin messaging in Canada, we used tools from network and data science to expand our set of accounts to look at who the most influential movers or elites in these topics were. We find that these accounts are highly partisan and highly influential. We refer to them as “partisan elites.” They exist on both sides of the political spectrum. As Dr. McQuinn has already mentioned, these elites have substantial audiences on par with some of the most active and popular accounts online, from the Montreal Canadiens to celebrities and local journalists. Unfortunately, we cannot effectively combat disinformation by focussing on elites alone, because their audiences play a substantial role. As Dr. McQuinn mentioned, 80% of audiences that are spreading these messages are average Canadians with relatively few followers.

While Canadian-specific narratives around the Ukrainian diaspora and Canadian sanctions are popular among these audiences, these narratives are not the only ones that gain traction. They exist alongside general anti-NATO, anti-Western and pro-Kremlin topics where these international topics and elites from other countries also have substantial traction within these Canadian audiences. That means we have to look broader to a more global theatre of information conflict, and we have to take steps to address these evolving threats.

This need is evident in the overlap between Canadian, U.S., U.K. and international information ecosystems. Even the most censored information spaces have some degree of global integration. These porous digital borders allow disinformation to spread easily across national borders, as we see with QAnon and its proliferation, and with the advancement of anti-U.S. or anti-Western vaccines.

We also must not relegate ourselves to studying these disinformation efforts retrospectively. Given platforms’ capricious constraints on external researchers like ourselves, characterizing the ecosystems around these topics retrospectively tends to be prohibitively expensive. Answering questions that have been brought up in this room about how big the audience is and what the most influential piece of disinformation is ends up being hard and costly to answer retrospectively.

We can’t rely on these technology companies to provide for our national security either, and while we have good models for how these companies can engage with local partners and civic society, these partnerships exist on the goodwill of these companies. That goodwill can be rescinded at any moment, as we have seen with Twitter’s changes to how it allows academics to research and access the platform.

Our needs for public-private partnerships, prospective programs and continual assessment of potential influence efforts will only grow as our competitors and our strategic adversaries become more sophisticated, and we have things to say about levels of sophistication, to the question earlier.

We have little guarantee that the methods that we use to track Russian disinformation today will work tomorrow, and we have good evidence that disinformation playbooks from other strategic competitors like China are substantially different from the playbooks of Russia. What we use on Russia is unlikely to be successful when we look at disinformation from China, leaving substantial open questions for our national security and the integrity of information worldwide.

Thank you again, and I look forward to your questions.

The Chair: Thank you very much, Mr. Buntain.

We’ll now proceed to questions. I remind members that we have until 6:10 for this panel. I ask you to please keep your questions succinct and identify the person you are addressing the question to. Again, four minutes will be allotted for questions. We’ll begin with our deputy chair, Senator Dagenais.

[Translation]

Senator Dagenais: My question is for Mr. Buntain. Politicians and government institutions make significant and strategic use — or so they think — of social media. The last American election campaign showed us how much traditional media can be sidelined by Twitter, Facebook and other social media.

In your research, have you been able to determine the percentage of users who used those modes of communication to provide accurate information as compared to those whose objective was manipulation? To what extent is the credibility of democratic institutions threatened by groups seeking to destabilize those in power?

[English]

Mr. Buntain: Thank you very much for the question. As I understand it, the question is, what’s the degree to which the public and politicians are using social media to share accurate versus inaccurate information?

There’s a philosophical question about this. Certainly, social media is incredibly useful for sharing information. We engage in this idea of collective sense-making, that truth is like a collective thing that we agree on, the thing we come to agree as being correct. Actually, social media can be very useful for that. Professor Amy Bruckman at Georgia Tech has great work on Wikipedia, where some of the front pages on Wikipedia are the highest quality content the human race has ever produced because you have so many people engaged with this kind of content.

We look at social media, specifically things like Facebook and Twitter. Politicians certainly use these platforms strategically. There’s good evidence we have from political operatives and surveys of them that they’re targeting specific audiences on one platform differently than they target audiences on another platform. Often, they’re pushing particular agendas to different audiences, but these platforms are relatively open to telling people that they’re wrong. The best way to get good information on the internet is to post the wrong thing and get people to yell at you.

In terms of how much good versus low-quality information is being shared is a difficult question. On average, we see that the vast majority of information is either accurate or is unknowable. You can’t verify it. We do see a sizeable, though small, proportion of information that is clearly inaccurate. This question about what’s accurate versus inaccurate tends to be a red herring in this case, because the vast majority is stuff that we can’t verify directly: people sharing their personal experiences and what has happened with them through COVID, lockdowns or around Ukraine.

It’s a great question.

[Translation]

Senator Dagenais: Social media provide important information as well as trivial information, although the trivial information can in some cases be troubling to certain people who are named or targeted. Should we try to correct everything, even at the risk of giving importance to those seeking to spread disinformation, or should governments always engage publicly to establish the truth? How successful are efforts to reestablish the truth after false information is spread on social media? Can you shed any light on that for us?

[English]

Mr. Buntain: To the first question about whether we should be trying to fact-check everything or make sure people are posting correct things, generally the answer is no. We have good evidence from recent studies that fact checks by themselves are not particularly useful at changing people’s beliefs. They do suppress some amount of incorrect sharing, but they don’t change the underlying mindset that leads to that sharing.

David Rand and a number of other professors have looked at this from a psychology perspective. We don’t engage in social media as an audience. The public doesn’t generally engage in social media from a standpoint of accuracy; they engage from a standpoint of emotion. They’re not primed to answer questions about whether the information correct or not. When you prime them to do so, they actually do much better, but generally, they’re not there for that.

As to whether disinformation has been effective, in some sense, the fact that we exist in this room demonstrates that it is effective. At the very least, the narrative that Russia can do whatever or that China can reach out and touch your average Canadian or American or whomever, that has been profoundly effective even if that’s not entirely true, even if our evidence suggests Russia has not actually been able to move the needle in a lot of these different things, but they can amplify a particular kind of message.

Senator Oh: Thank you, gentlemen, for being here.

According to a 2019 American Intelligence Journal article on disinformation, Russia has carried out disinformation campaigns in a number of North Atlantic Treaty Organization countries. To what extent do disinformation campaigns led by foreign actors or countries affect national security and sovereignty? This question is for anyone.

Mr. Kolga: I can begin. I’m the front-line activist and civil society actor, so I’ll try.

The first modern instance, at least since the end of the Cold War, where we saw Russia trying to directly meddle in the affairs of a foreign country — in this case, one of our NATO allies — was Estonia in 2007. In 2007, the Russian embassy tried to foment unrest and destabilize Estonia’s democracy using historical disinformation, and they tried to provoke the Russian minority into riots. This was combined with a series of cyberattacks that were conducted, we know now, by the Russia government. Estonia, thanks to its experience of being occupied by the Soviet Union and with its long-term experience with Russian disinformation, proved to be resilient against these efforts. This was really the first testing ground for Russian information operations. Of course, the Estonian government was able to push back against those riots, and things turned out quite well for Estonia.

We saw future attempts at using information operations to destabilize various democracies. We saw it in Georgia in 2008 with the attacks in South Ossetia and Caucasia. That was accompanied by information operations. The Russian government tried using information to interfere in various elections in Europe. We saw, of course, what happened in the United States in 2016, which was, I would say, sort of the nuclear bomb of information operations. The effects of that operation are still reverberating today.

I don’t think there is any sign that Russia will curb its operations. We’ve heard several times during the past hour, hour and a half, that these operations are inexpensive but are able to have a significant impact on our democracies, especially in terms of Russia’s objectives, which is to distract, distort and destabilize democracy. It has been happening for quite some time. Unfortunately, for much of the Western world, including Canada, it took a war and tens of thousands of civilian lives and billions of dollars in damage in order for us to fully acknowledge it, recognize it and start thinking about what we’re going to do about it.

Senator Cardozo: I want to get a sense from you about what happens with the stuff that’s online coming from the far right or far left. At what point does it seep into the mainstream of our political system? I’m thinking of things like COVID and the convoy. What we were hearing out in the fringes was seeping into our political parties.

My second question, which I’d like to throw out at the same time, is if there is a role for government in this. The federal government is planning to have an online harms bill. What are your thoughts about that? Maybe I can start with Professor Buntain, Professor McQuinn and Mr. Kolga.

Mr. Buntain: Thank you for your question.

To your first question about this process where information starts in the partisan community and then spreads, we have some evidence that this starts often in small communities, on relatively niche platforms or relatively niche online spaces. This might be private Facebook groups, private Reddit communities, 4chan — these alternative information spaces. In particular, it seems like the far right is very adept at getting their information from these small groups into progressively more mainstream spaces until now they’re on the front page of foxnews.com or something along those lines. There is definitely some process or pipeline for that information. At the same time, we have good research that shows that often these messages get refined and circulated in small groups before they break out even on more mainstream platforms.

To your second question about the role of government here, it is incredible to me that we have a whole industry around technology platforms that we know propagate negative emotion that can have profound negative impacts on their users — and we have many different examples of this — and it has escaped regulation in a way that nothing else has. If I build a new car and create a new airbag, there’s certainly a role for regulation there. Therefore, I absolutely think there’s opportunity for government to step in and tell platforms that they need to understand the harm they are doing, have initiatives where they’re actively doing that and be transparent and communicate to the public what those harms are now. Right now, they have no incentive to do that, and the only time that information comes out to the public tends to be through whistleblowing or leaks as we saw with Frances Haugen when she came to the U.S. Congress and told them about all these documents Facebook had internally that showed that Facebook causes harm. We didn’t know that publicly.

Mr. McQuinn: Some of the previous studies done in Canada show that 51% of Canadians have at some point been exposed to some sort of Russian disinformation. That is an extraordinary number. What we have seen in our research at the centre and otherwise is that during the pandemic, a lot of these very disparate communities on the far left and far right — extremist communities — metastasized in ways we are only now beginning to understand. Very little work is done. A lot of the work is done in the U.S. A lot of this research focuses on the far right in the U.S. and there only. It doesn’t come across the border.

Therefore, I think that when it comes to the role of government, there’s a huge opportunity for a lot of the centres — not just our centre — to do a lot more in understanding the specific nature of how these networks work in Canada. They are different than how they work in the U.S. because we don’t have a Fox News. We have very different elements. There is a lot to understand about how they overlap in unique ways that are quite different than in other countries.

[Translation]

Senator Boisvenu: My question is for Mr. Kolga. You worked on a particular disinformation case, the case of the MP Kenny Chiu who, after introducing a bill in the House of Commons on the creation of a foreign agent registry, was apparently the target of disinformation which he claims lost him the election.

I heard on Radio-Canada that you investigated this case. Can you send us or provide us with some of your conclusions about that disinformation campaign? Can you also tell the committee which sources of information or which social media, for instance, were used by the communist government to spread that disinformation?

[English]

Mr. Kolga: Thank you very much for that question.

DisinfoWatch was the first Canadian civil society organization to detect and expose Chinese government-aligned information operations that were deployed during the 2021 federal election. We initially monitored Russian state media — English-language state media — Global Times. They had published a narrative that was extremely critical of the Conservative Party foreign policy platform and its leader and threatened retaliation against Canadians if the Conservatives were to win that election. That’s the first narrative we detected.

Then, because of our close work with Chinese civil society and community organizations, they alerted us to various narratives that were moving around on Chinese state-controlled social media platforms — WeChat, for example. These were narratives that had migrated from WeChat onto various different local Chinese community online platforms that were targeting a member of Parliament, Kenny Chiu, as you noted. Just before the election, Kenny Chiu had introduced a private member’s bill that was introducing a foreign influence registry act in the House of Commons. That act — his private member’s bill — was targeted on those platforms, suggesting that the member of Parliament was trying to introduce legislation that would curb the voices of various different minority groups inside Canada. We exposed those narratives and tried to explain them to the Canadian public. Whether they had any specific impact on the outcome of that election in that riding is unknown. We don’t know what that impact was. But the fact is that these narratives were out there, they do align with the Chinese government, and that should be a concern.

[Translation]

Senator Boisvenu: We also learned that police stations were created by the Chinese Communist Party in three Canadian cities. These police stations still exist, in Montreal, for instance. They have not been closed. Did they play a role in that disinformation campaign at the local level?

[English]

Mr. Kolga: We don’t have any specific evidence of that. What we did was that when we were alerted to these narratives that emerged on various different platforms, we analyzed and exposed those. We didn’t look into the role of these police stations.

Mr. McQuinn: This is something the centre is actually beginning to do this summer. This is the next step. A natural progression from the work we published on Russian influence operations is to look at other state influences and make those links to understand — this is the work of Cody and others — how online disinformation relates to actual events on the ground in the kinetic world, the real world. In this case, you have a much more sophisticated operation because you have online disinformation with actual operations on the ground. That will be a matter of surveillance, as well, of online spaces and how those translate into specific strategies to target certain groups and communities. This is all what we’ll be exploring in the next few months.

Senator Richards: I’ll direct this question to Mr. Kolga. How unequal do you think the Russia and China relationship is, and what does China gain besides oil and a pathway to the North? I think it gets an entire destabilization of the democratic West with Chairman Xi pulling Putin’s strings. Could you comment on that?

Mr. Kolga: Thank you.

I’ll be very brief on this one because I’m sure we want to get back — I’m on record as saying that President Xi regards Russia as its discount gas station at the moment. It’s true. China is benefiting from cheap gas prices and cheap resources overall thanks to this war. I think it’s in China’s interest to see a freezing of this conflict and not a full resolution of it — perhaps some sort of an armistice that would ensure that Western sanctions are maintained on Russia. Like I said, President Xi is benefiting from these low prices, and he wants to see this situation continue. President Xi is in control of this relationship. Vladimir Putin desperately needs China’s money — the revenue from the gas and the resources that he’s selling. He also needs China’s weapons, and hopefully China won’t be sending those or any ammunition along. It’s indeed a lopsided relationship.

Senator Richards: Do you think that Xi could have any influence in stopping the war?

Mr. Kolga: Xi might have influence in stopping this war if we allowed him to have that influence. President Macron went to China, and apparently he and President Xi are working out some sort of a proposal. Yes, I’d imagine that if Xi wanted to tell Vladimir Putin to pull back his forces back to where they were in February 2022, I’m sure that would help pave the way towards an actual real peace.

Senator Kutcher: This question is for Mr. Kolga, and then the others if we have time. Russia promoted health disinformation, especially anti-vaccine, leading to social discord and distrust in public health. It happened well before Russia’s war in Ukraine. It linked the left wellness mamas with the libertarian right. It was a fascinating phenomenon to watch. More recently, Frank Graves and EKOS have created a disinformation index, and one of the interesting things there is high rates of health disinformation are strongly allied with Russian propaganda and distrust in democratic institutions. Your thoughts on this relationship between health disinformation and Russian-directed propaganda? Any thoughts on how this linkage might be used to help us better understand how to counter malignant state-produced disinformation?

Mr. Kolga: That’s an excellent question.

We first detected that Russia was targeting vaccine hesitancy and health to try and destabilize various nations and societies already back in 2019, well before COVID. They were targeting vaccine hesitancy in the Western United States amongst the far left and elements of the far right specifically on childhood vaccinations. The WHO actually came out with a report in 2019 identifying vaccine hesitancy and disinformation as being the number one threat to global health at that point, in 2019.

When COVID emerged, a number of us who had been tracking Russian information operations anticipated that Russia would target COVID and various aspects of it and exploit the coming pandemic to advance their interests. Russia has become very good at identifying the most polarizing issues in our society and basically sinking its teeth into both the left and right and tearing. COVID was fertile ground for those sorts of operations.

In March 2020, the EU’s East StratCom warned that Russia would be using COVID to polarize our society, but they also warned that they would try to intensify the effects of COVID and the pandemic. Indeed, they did that. We saw Russia promoting vaccine hesitancy throughout the pandemic and tailoring messages to groups that were promoting anti-vaccination and anti-lockdown narratives. That continued throughout the pandemic until the anti-lockdown trucker protests here in Ottawa. We saw Russian state media providing a platform for many of those extremist voices within that movement who would otherwise not be speaking to groups of more than 20 or 30. Suddenly, RT was providing these individuals with a global platform for their views. This contributed to the legitimizing and amplification of them.

As I said, Russia is very good at identifying those polarizing issues, the most divisive issues in society, and exploiting them. By understanding this, we can also anticipate which issues Russia will try to exploit in the future. What we should be doing as a society is making sure that we get ahead of the curve and that we explain to Canadians, to our media and to our elected officials as well why they’re going to focus on specific narratives and what their expected outcome is and, by doing that, raising awareness and hopefully inoculating Canadians against them in the future.

The Chair: Great comments. Very helpful.

Senator Boehm: Mr. Kolga, something you said in your statement resonated with me. You talked about Russian disinformation focused on Die Linke and on the AfD in Germany. Of course, in that country where those two parties are strongest is the former East German states that were part of the former East Germany.

That took me to Africa and a question that Senator Dagenais asked in the last panel. That is about the Wagner Group and generally how disinformation seems to be falling on fertile ground with respect to the war in Ukraine in Africa as evidenced by the number of abstentions on the various UN General Assembly resolutions and the like. There is no counter-narrative there, which makes it problematic. I’m wondering if you have any thoughts on that, or indeed our other panellists.

Mr. Kolga: Again, it is an excellent question. Thank you for that.

Indeed, Wagner has been extremely active throughout sub-Saharan Africa over the past number of years — in Libya, South Africa, Mozambique and Madagascar as well. They have been actively engaged in disinformation operations, often to support strongmen to help keep them in power, along with their mercenaries. This is clearly having an effect on Africa. As you mentioned, 17 countries abstained on the resolution. We need to be paying much closer attention to Africa. I published a report on Chinese and Russian government influence in Africa about a year and a half ago. What we found was that we have basically ignored Africa for the past decade or so. Because we have not been taking an active role in Africa, meaning the Western world, Canada included, China and Russia have identified this as an opportunity. China has been extremely active since the end of the Cold War. Russia is becoming active again. Unless we take steps to start engaging in Africa again, I fear that Russia — it’s primarily Wagner — and the Chinese government will dominate in the region and have an unobstructed path to advancing their interests there.

Mr. Buntain: I think this question about the Wagner Group is a really interesting one because it touches on the capabilities for disinformation that are not under the purview of states. Nothing says that only state actors have to use these tools. Some original work that Dr. McQuinn and I started on was understanding how the Taliban used social media for influence in the takeover in Afghanistan years ago. These non-state actors are quite adept at the use of this technology for influence. We know in Africa in particular that Russia beta tests a lot of their influence efforts in African countries because the social media platforms don’t have the will really right now or the resources in terms of people on the ground to address them. It’s just free ground for them to test out what they’re doing.

Senator M. Deacon: Thank you for being here today.

As a comment made earlier, Mr. McQuinn, we would love to know more about how the networks differ. You made a comment that it differs in the U.S. and Canada. If there’s something we can gain or learn from you beyond this meeting, that would be great. Based on the work you have over the summer, you might be meeting with this group again. It sounds like it’s a welcome back already.

I have three areas and will do the best I can. I’m going to leave our social platforms for a moment and go to the war zones and the actual war fields. I’m wondering about the effect and impact of social media on conflict. In at least one instance, which means probably many, Ukraine has used Russian troops social media posed to target Russian positions. We’ve seen that cellphone use has been prevalent on both sides of this conflict. Are there any lessons learned so far that Canadian Armed Forces can take away from the war when it comes to troops and social media? I know and I trust that the Canadian Forces are far more disciplined in this area in terms of giving away their positions. However, given that we know and we have seen that disinformation can radicalize anybody, is there adequate training for our troops on how to spot mis- and disinformation?

Mr. McQuinn: If they’re both looking at me, then I’ll start with two things, and then I’ll let my colleagues jump in.

Recently, the President of the National Red Cross visited Ukraine. I raise this because the disinformation campaign that followed around International Committee of the Red Cross’s role in taking children and this sort of QAnon was so effective that the local Red Cross stopped working with the ICRC. It was able to create that kind of division within the movement. I would argue that it is the cutting edge of what’s going on right now in Ukraine as far as social media and how it’s being used.

The piece you’re touching on is kind of the role of state militaries and how they use social media. Historically, as you’ve pointed out, soldiers are not allowed to use it, for obvious reasons; they’re giving away their positions. What has also been shown is that Ukraine has been able to dominate the social media world by having their soldiers basically tell their stories online. This is something the U.S. military is looking at. Look at our previous research on how the Taliban dominated the media landscape of Afghanistan and was able to dislodge Western powers easily by the end and to take over years, if not months, before anyone anticipated.

I think everyone is watching what’s happening there and trying to understand what this means practically for state militaries. It’s not without a cost because you can target people as a consequence. Everyone is watching as to how this is going to unfold.

Mr. Buntain: The way social media is used in Ukraine right now is very interesting, even from a non-state-actor perspective. We have some studies looking at how Ukrainian militia is using places like Twitter and Facebook to essentially crowd source donations internationally. They post on Twitter or on a platform called QuickNote.io saying, “Here’s my bank account number. Send me money.” Or, “Here’s this location where you can send us drones or materiel.” They then post pictures of the materiel that they’ve been given. It’s an interesting question about how much of that is correct and true, or are they using it to present some view that people are donating all this materiel to them as a way to influence what’s going on? We do see an interesting division between Western platforms and how the Ukrainian military and militias are using them and how Russians, who don’t have access to those platforms, are being dominated in this space.

Mr. Kolga: Our forces have been targeted by Russian disinformation operations, the ones that are stationed in Latvia. Most recently, during COVID-19, Russian state media in Latvia tried to promote a story about our forces spreading COVID in Latvia to try to erode Latvian support for the mission. That’s only one case, but it is happening.

The problem with our forces is that they don’t at the moment have the capabilities to push back on psychological operations largely because of bad press that they have received on the back of bad decisions about training operations in the past. Right now, the forces are completely vulnerable to these sorts of operations, and it’s something that I think this committee and certainly Parliament needs to be looking into.

The Chair: Thank you very much.

We have to go, I’m afraid, to a final question, which comes from Senator Dasko. That was a rich discussion.

Senator Dasko: I have read that although the efforts at disinformation coming from Russia have increased — and you have elaborated on this through the networks left and right and through all of the efforts that the Russians have made — they have not been particularly successful since the invasion of Ukraine, partly because there is so much actual real information about Ukraine that it’s hard to spread disinformation about Ukraine. Mr. Kolga, you talked about themes of neo-Nazism and so on. If we look at it from a macro level, two new countries have joined NATO. If you look at these kinds of developments, and if you look at Europe and the support for Ukraine and so on, clearly the Russians are not winning the PR war — that is, just by looking at these kinds of developments. Can you comment on that? Again, it’s something I have read, so I’m not making this up. Maybe it’s disinformation that I’m getting. Anyhow, I have read that these efforts really have not been all that successful, even though they are huge, as you’ve said, very significant.

Secondly, what do you think about the strategy, among the strategies, to combat disinformation by revealing the facts about disinformation? Is this a good strategy? I ask because we’re hearing a lot about what the Chinese government has done in Canadian elections. It’s caused a lot of problems, too. A lot of people are really upset about it. From my point of view, it’s always important to have this information, but then it’s causing a lot of disruption too because we’re hearing about it. “What are we supposed to do? What about the Chinese?” I offer that as an example. Is it a good strategy to deal with disinformation by revealing, via our security agencies, the kinds of efforts that are being made?

Mr. Buntain: I’m happy to answer some of this.

The question about effect is an interesting one because it hinges heavily on how you define what the appropriate impact is. There have been some studies from my old alma mater, the Center for Social Media and Politics at NYU, that said during the 2016 election, Russian disinformation was not super successful in moving the needle for exactly the reason you described, namely, there’s so much information about the U.S. election. The vast majority of exposure was concentrated. A wide variety of people saw it, but those who saw the most of it were a small and specific minority who were going to vote their particular way anyway. We’re having these conversations. People are angry about potential disinformation, and we have this notion that these countries can reach out and touch us. If that’s the goal, then I think they have been quite effective with that. It depends on how we define what their primary goal is.

To your second question about combating disinformation, there’s definitely been a lot of work on inoculation and how useful that is. There is value to that. Inoculation around elections is really tricky because it’s not clear how to do this. We talk a lot about how companies should be partnering with civil society. I think governments should be doing that too and giving this information to civil society. We saw at the beginning of the Ukraine war where Western powers were declassifying information about bioweapons and this claim that the Russians were going to put out about bioweapons in Ukraine. That was an effective strategy for countering that particular narrative. I think there’s room for it, but it has to be applied in particular ways.

Mr. McQuinn: Watch Zelenskyy and other leaders within Ukraine and their expertise at social media, and then to give free rein to their military to do the same thing, was a game changer. If they were operating like most leaders and state militaries would, that question would be different. It’s not that Russia is not winning. I think Ukraine is winning in that case and Russia is losing that online battle. It can be effective, but you’re combating against other strategies that are also being used in that space.

Mr. Kolga: I’m not so optimistic. We’ve seen some of these Russian disinformation narratives migrate into the far right and the far left, as we’ve outlined over the past hour, hour‑and‑a‑half. They’ve appeared on channels like Fox News. Polling out of the U.S. has demonstrated that support for Ukraine is softening, and considerably so among voters who identify as Republicans. I’m not saying that it’s those viewers who were watching Tucker Carlson’s show — some 3 million people — but it is having an impact. If we don’t take care of it and address that, that softening will expand.

The Chair: Given the nature of the subject matter today, it’s good to finish with a cautionary note. Much work still remains.

Colleagues, this brings us to the end of our meeting. It’s a privilege to thank Mr. Kolga, Mr. McQuinn, Mr. Buntain and all of our witnesses today. This is a hugely important topic. We’ve been waiting patiently for this panel, and it has been very rich. I thank my colleagues around the committee table, as I normally do, for drawing the very best from a wonderful panel. We heard in the first panel about policy and regulation and about the broad range of potential regulatory responses and options, as well as the need for agility. All of that sets the stage for this panel, which has taken us into the practice of disinformation — where it’s happening, how it happens, the variations in it, the fields in which it is being practised — in war zones, in the vaccine context, lockdowns, extending to the truckers’ dispute that occurred outside this building and into the field of elections. We’ve also learned that it’s not just government; it’s the Wagner Group, the Taliban and other agents who are connected to or funded by governments, in many cases. This has all been very rich. We’ve learned a lot. We will be looking for follow up from this discussion. This is a very rich start of an inquiry that we’d like to have, and we’re very grateful to you. Thank you.

Colleagues, before we adjourn, there is another item of business. I’d like to just mention the correspondence shared with us by Senator Dagenais about the upcoming CANSEC defence and security trade show in Ottawa. Are there any objections to proceeding in camera to discuss this item briefly? Seeing none, we will suspend briefly to thank our guests, and then we will proceed in camera. Thank you.

(The committee adjourned.)

Back to top