THE STANDING SENATE COMMITTEE ON BANKING, TRADE AND COMMERCE

EVIDENCE


OTTAWA, Thursday, March 22, 2018

The Standing Senate Committee on Banking, Trade and Commerce met this day at 11:30 a.m. to study and report on issues and concerns pertaining to cyber security and cyber fraud.

Senator Douglas Black (Chair) in the chair.

[English]

The Chair: Good morning and welcome, colleagues and members of the general public who are following today’s proceedings on the Standing Senate Committee on Banking, Trade and Commerce. My name is Doug Black. I’m a senator from Alberta, and I chair this committee.

Mr. Carlin, I’m going to ask the senators to introduce themselves so you know who is in the room with us in Ottawa.

Senator Wetston: Howard Wetston, Ontario.

Senator Marshall: Elizabeth Marshall, Newfoundland and Labrador.

Senator Unger: Betty Unger, Alberta.

Senator Marwah: Sabi Marwah, Ontario.

Senator Wallin: Pamela Wallin, Saskatchewan.

[Translation]

Senator Ringuette: Pierrette Ringuette from New Brunswick.

[English]

Senator Stewart Olsen: Carolyn Stewart Olsen, New Brunswick.

[Translation]

Senator Dagenais: Jean-Guy Dagenais from Quebec.

[English]

The Chair: This morning is our eighth meeting on our study on issues and concerns pertaining to cyber security and cyber fraud, including cyber threats to Canada’s financial and commercial sectors, the current state of cyber security technologies, and cyber security measures and regulations in Canada and abroad.

I am pleased to welcome, from San Francisco, John P. Carlin, Chair, Global Risk and Crisis Management, Morrison & Foerster LLP, one of the leading law firms in the world. We welcome you and thank you for joining us, Mr. Carlin. If you would be kind enough to begin with your opening statement, the senators will then have questions.

John P. Carlin, Chair, Global Risk and Crisis Management, Morrison & Foerster LLP: Thank you, senator. It’s a pleasure to be with you this morning from San Francisco. I can’t think of a topic more important to the security of Canada, the United States and the world. It’s gratifying to see the interest of this committee in the subject.

I would like to give a brief overview of the threat and some changes that were made in terms of how the United States responds to that threat and then turn it over to questions.

I headed the National Security Division, the first new litigating division at the United States Department of Justice in about 50 years. It was created after the tragedy of September 11 to ensure that, in the future, we effectively shared what we learned on the intelligence side of the house about new and evolving threats with the law enforcement side of the house, so that, as lawyers, we could work creatively, based on what the intelligence showed the threat to be, to come up with ways to disrupt it. We also changed the metric of success: success was no longer holding someone accountable after the terrorist act had occurred, when families were grieving or had lost loved ones. Success was seeing where the threat was going and preventing the attack from occurring in the first place.

I give you that background because, when I was there, one area where we saw the threat changing was cyber-enabled attacks. I will give one example. We were watching, both in the United States and Canada, as the Islamic State of Iraq and the Levant became more sophisticated at using and exploiting social media. Just as al Qaeda had taken Western technology, in that case aviation, and turned it against us into a weapon, we were watching the Islamic State adopt a strategy of crowdsourcing terrorism. They tried to turn human beings into weapons by reaching out to them from their safe havens abroad and encouraging them to kill where they live.

That caused an immense increase in the number of cases we were handling at the Department of Justice linked to international terrorism.

One of those cases shows us, I think, where the cyber threat is going. Imagine a Canadian company with a trusted brand throughout the world that discovers a relatively unsophisticated hack in which a small amount of personally identifiable information has been stolen. This is happening to the vast majority of companies throughout the world, day in and day out.

In this case, after that information was stolen, and it looked like they had handled the incident, they got a note through Gmail, with spelling errors, that said, essentially, “Hey, we’re mad you threw us off your system, and we want you to let us back on to your system, and if you don’t we are going to embarrass you by releasing the fact that we have gotten into your system and stolen this information. And, by the way, we want about $500 worth of bitcoin.”

Again, the vast majority of companies would not view this as a high-risk threat. It looked as if they had successfully kept the individual off the system.

What makes cyber different, and I know this committee has been struggling with it, is the problem of attribution. If that company had not worked with law enforcement that was fed by intelligence, including information from our Five Eyes partners, we would never have found out that it wasn’t just the criminal it looked like. He was a criminal: a guy named Ferizi who had moved from Kosovo to Malaysia, from where he launched this attack with a co-conspirator. He was a criminal and he really wanted that $500. But he had also become friends, through Twitter, with one of the most notorious terrorists in the world at the time, a man named Junaid Hussain, who came from around London, where he had been convicted as a computer hacker and then released. He had become an extremist and moved to the very heart of the Islamic State of Iraq and the Levant, in Raqqa, Syria.

He never met Ferizi, the Kosovo extremist who moved to Malaysia, in the real world. They became friends entirely online, through Twitter. What Junaid Hussain convinced Ferizi to do was take this personally identifiable information that had been stolen from a trusted retail brand inside the United States and give it to him.

Junaid Hussain was not interested in $500. What he was interested in, consistent with the Islamic State’s crowdsourcing-of-terrorism ideology, was using that information to try to kill. He culled through that list looking for names that appeared to belong to American law enforcement or state officials and then used the information stolen from the retailer to create a kill list. Using Twitter, they pushed that list back out to the United States with specific instructions to kill these people, by name, where they lived. This is what we call the “blended threat.”

This is one version of it: an attack that looked like, and really was, criminal in purpose, hitting the private sector, turns into something else through the way the information is distributed, where it can be used for another purpose entirely, to kill, for terrorism.

In this case, the two parties, the government and the private sector, worked very well together, and effective action was taken. Ferizi was arrested in Malaysia pursuant to the United States law enforcement process and was ultimately extradited and sentenced to 20 years of incarceration in the state of Virginia.

Junaid Hussain, who was in ungoverned space, outside the rule of law, in Raqqa, Syria, was killed in a publicly acknowledged military strike by Central Command.

As you can see from that case, dealing with this blended threat is extraordinarily complex. You have to be able to share information across national boundaries; that case alone involved five or six different countries. You need to be able to do it at the speed and scale of the threat, which in cyber moves very quickly. Then you need a blended approach: just as the threat is blended, the response needs to be blended and cut across different tools of state power. In this case, military tools were used; State Department diplomatic efforts were used to do the extradition; the criminal justice system was used; and intelligence authorities, including sharing arrangements with Five Eyes partners, were used to try to figure out who did it.

As complicated as those intergovernmental moves to share information are, what makes this different from the response and the changes after September 11, and where we in the United States are still figuring out how to make this part of the equation more efficient, is something even more complicated: how do we incentivize private sector companies, who are now, as this case demonstrates, on the front lines of national security risks, to share information with government at speed and scale so it can be acted upon?

On the other side of the equation, how do we in government get better at sharing information that is often collected in the first instance through sensitive sources and methods? For years, the default has been that it’s classified and stays only within government or is shared with close partners like Canada. How do we get better, at speed and scale, at sharing that information back to the private sector so they know what the threat is?

One change we made was to start using the criminal justice system, in conjunction with the Treasury Department’s ability to authorize sanctions, to make the information public. That started with the first case of its kind, in 2014, when we indicted five members of People’s Liberation Army Unit 61398 for economic espionage. It continued with a case involving an individual named Su Bin, who travelled to Canada and for whom we needed law enforcement cooperation from Canada; he was ultimately brought to the United States and charged with economic espionage. It went on to making public, through a statement from the FBI, the North Korean attacks on Sony Pictures, and to the indictment of individuals linked to the Iranian Revolutionary Guard Corps for their attacks on the financial sector. We went from never having brought a case against national security actors through the criminal justice system to bringing such cases more regularly.

Most recent is the indictment, through Mueller’s special counsel investigation, of the 13 Russians for their role in undermining confidence in free and fair democratic elections inside the United States.

The criminal justice system alone can’t solve the problem, but we found it’s a good vehicle for making public what we are seeing, for sharing with the private sector and partners around the world, and for using that as a basis to take other actions, be they defensive measures or work with partners on things like sanctions.

Thank you again, and I’m happy to take your questions.

Senator Stewart Olsen: Thank you for being with us today. It’s actually pretty terrifying what you’re talking about.

I notice there seem to be two components, a reactive one and a proactive one. I know reactive and quick responses are where we absolutely have to work, but proactively, what should we be doing?

Mr. Carlin: I put that in two buckets of potential proactive activity. One, linked to the reactive side, would be deterrence. Right now, in terms of capabilities, the ability of those on offence to attack outstrips the technical ability to defend. To phrase it another way, there is no Internet-connected system right now that is safe from a dedicated adversary who wants to penetrate it. That’s true of nation states, be it North Korea, Iran, Russia or China, but it is also increasingly true of sophisticated organized criminal groups.

One element of the proactive strategy needs to be defining in advance what the red lines are, lines that will hopefully engender a response that cuts across nation-state lines and says we have certain principles, such as that one should not target a private company with state means, be they military or intelligence services, for the benefit of that company’s competitor. That’s the agreement President Obama and President Xi reached after the PLA indictment, which the G20 then adopted. That’s an example of a clear red line. Another example now would be that if you take steps to undermine free and fair democratic elections, there will be a response.

My own view is that part of setting the red line should include, essentially, a dead man’s switch. If we reach the conclusion that you meddled with a free and fair election or violated another red line, we won’t tell you exactly what the response will be, but in advance we will say there will be a response, and we can list the elements of that response, to include multilateral sanctions against the behaviour, similar to regimes we have had in other areas like the use of chemical weapons or terrorism. That’s bucket one.

Bucket two would be improving defensive practices. Part of that is educating and making clear that, since no Internet-connected system is safe from a dedicated adversary’s attack, we need to encourage, through regulation and through education, the idea of risk management and resilience, so there is planning in both the private sector and government as to what you do the day after the breach to ensure you can continue to deliver core services and confine the damage.

Senator Wetston: Thank you for being with us today. I would like to congratulate you on your exceptional work in the public service in the United States. It is obviously important work, and it becomes even more important as you bring that skill set to the private sector, so congratulations.

I would like to pursue two areas with you, if I may. One is in the area of cooperation that you’re discussing. I’m not sure if you’re aware of this or not, but in the most recent budget of our federal government, they have outlined a national cybersecurity strategy. It was just last week. I won’t get into it with you, but I’m sure you would agree with the notion of having a national cybersecurity strategy.

One of the things I wanted to pursue with you is that there was a noticeable absence of what I consider to be a very important part of this, which is the role of the Department of Justice. Why do I say that? Obviously our Department of Justice would be very involved with all of the entities that have come before us, but I have not seen any Justice officials appear before us that I recall. I wanted to ask your advice on this, given your experience in the U.S. I know you’ve dealt with Canadians a great deal.

You were in a position at the National Security Division, which you said was created as a result of 9/11. Looking at the situation today, would it be your advice that this is a necessary type of activity and role that a Justice department should take on and develop?

I know you’ve spoken in the past about the risk mitigation associated with the various types of entities we deal with today: crooks, nation states, terrorists. That variety poses special problems. I’m asking an institutional question, if you can share your thoughts with us.

Mr. Carlin: Thank you, sir. I do think it is something to consider.

One of the reasons we made the shift was linked originally to terrorism, an area that cuts across national lines and across the traditional divide between the collection and use of intelligence, on the one hand, and the use of the criminal justice system on the other. I know, from administering that system post-reform, at least in the United States, that the ability to make use of that intelligence information was vital to confronting the threat in a way that ultimately saved lives. It’s almost more important in some respects in this new arena, when it comes to the cyber threat, for exactly the reason you outlined: the multiplicity of threat actors, crooks, spies and nation states. The Ferizi case is just one example of this increasing trend of the blended threat.

I’ll use Russia as one example. We’ve seen case after case now where there’s a huge cybercrime problem. One estimate, from the nonprofit CSIS, puts global theft at approximately $650 billion last year alone. That, however, is not totally amorphous. Behind that threat is, in part, one country: Russia.

They are doing another study now, but some estimates would say nearly 50 per cent of that crime emanates from Russian organized criminal groups, and requests for law enforcement cooperation to hold those groups responsible go unanswered.

Recently there was a case where the criminal group was literally called Infraud and its motto was “In fraud we trust.” It was a group where criminals got together and combined the tools they had to hack into companies, and information stolen in one place was sold to another. They had reviews that would say whether or not a seller was a trustworthy crook, just like Amazon. A review would say, “Five stars. I’ve bought from this crook before. His information is fantastic.”

Or, which always amuses me: “Two stars. This crook is not trustworthy. I don’t recommend buying from them.”

So that’s on the one hand, the criminal side. On the other side we’re seeing links with the intelligence service of Russia. There was the Yahoo breach where, according to the indictment that’s been made public, when the United States asked for law enforcement cooperation against one of the top 10 thieves of credit card information in the world, instead of providing cooperation, the Russian intelligence services signed that individual up as a source and started making use of the information he was stealing.

So when you see how blended that threat is, it becomes difficult, I think, for a criminal justice system to respond effectively unless it can make use of, and has the legal right to make use of, some of the information collected on the intelligence side of the house. And vice versa: it’s very hard to have an effective response nationally and with partners unless you can talk publicly about what you’re seeing.

The criminal justice system is a great tool for making things public so that one can have the conversation and then work together to ask, “How do we raise the costs to our adversaries so they stop this behaviour?” on the one hand, and, to the prior senator’s question, “How do we make clear what we’re seeing tactically so that private sector companies can invest proactively in their own defence?”

Senator Wetston: Thank you very much.

Senator Wallin: Thank you. I appreciate your comments so far today. Most interesting.

We were just hearing testimony yesterday from our own government officials on their state of readiness and on putting together systems and coordination between departments, et cetera.

Two comments on related issues. First, how many countries do you believe have dead man’s switch capability now, that could actually react and are reacting without, as you say, making a lot of public noise?

Secondly, I was just reading a piece this morning in the wake of all of the Facebook stuff. You talked earlier about weaponizing people. It was interesting. We have Julian Assange, who is considered, of course, a villain in the piece, and Mark Zuckerberg has always been wonder boy and man of the year. But essentially both of them have either knowingly or unknowingly weaponized data and people.

I would just like your thoughts on that. Thank you.

Mr. Carlin: On the first, right now I don’t think there is a mature deterrence system in cyber that is consistently applied across countries. It’s a place we need to get to. There has been an increase, which is a positive trend.

Think of the approach as three prongs. We used to keep it secret. When it comes to cyber, we are actually much better at attribution than the average member of the public would think, and the reason they didn’t think we were any good at it is that we kept it all secret for a while.

One, we need to figure out who did it. Two, make it public more often as a matter of course. And three, after making it public, impose consequences.

Recently you have seen attribution to the North Koreans of the ransomworm WannaCry, which did an enormous amount of indiscriminate damage, and subsequently of NotPetya, the Russian-related version of ransomware that started in Ukraine and caused nearly $1 billion worth of damage, again indiscriminately, throughout the world as it locked up different systems ranging from hospitals to corporations.

You’ve now seen action to do the attribution publicly, to say WannaCry was North Korea and NotPetya was Russia. It was a little slow, but it’s better that it was done.

What you have not seen afterwards, in terms of your deterrence and retribution point, is really effective collective action on the scale of what occurred. I’m concerned that if we go public without effective collective action, we’re actually sending the opposite message and encouraging more and more irresponsible and provocative behaviour through cyber means.

On your second issue, the weaponizing of information, it’s something we’ve learned about and are starting to adjust to, but we are still working on it. When you look, for instance, at what happened with the North Korean attack on Sony, I can tell you we war-gamed for years what it would look like if a rogue, nuclear-armed state attacked the United States through cyber means, and we never figured that the big national security incident, the first of its kind, was going to be about a movie about a bunch of pot smokers.

One thing we learned from that case is that it was a destructive attack; it turned computers, essentially, into bricks. Number two, it involved large-scale theft of intellectual property. Both are threats we had thought a lot about before.

That’s not really why anyone remembers the attack, though. What they remember about the North Korean attack on Sony is the weaponizing of information: the theft of salacious emails from inside the company, which were distributed by untraditional media and then picked up by the mainstream media, and which became, essentially, part of the North Korean plan to try to stop Sony from exercising free speech.

That’s a very difficult problem. First we need to identify that this is the new tactic. We saw something similar in what the Russians then did in the elections. It’s a harder problem to stop, but I think you’re right; it starts with awareness of how it occurs. Then the next step, I think, is for those who are masters of that technology in the private sphere to start directing their resources to see how they can stop their platforms from being abused to weaponize information.

Senator Wallin: Just a quick follow-up. Other than Mr. Zuckerberg’s apology and “Gosh, we’re going to look into this,” I mean, this is a big issue. What can and should he do in the next 15 minutes or few days?

Mr. Carlin: Let me make it broader than one company, because there are a variety of different social media companies now, and with the new technology have come new, creative ways for terrorist groups or nation states to exploit it.

Early on, the big issue we were confronting with the first social media, when I was on the criminal side, was that those who exploit children started using the platforms. You saw the companies get more and more sophisticated at applying new technology that would identify child pornography content, and get better at working with non-profit, non-governmental groups, such as NCMEC, the National Center for Missing and Exploited Children, that stored that information so that any company could tap their expertise. That combined effort helped to thwart that issue.

We applied a similar model when it came to terrorist exploitation of social media, which has really improved the response of social media companies from where they were three or four years ago.

What you’re raising is the use by nation states of fake news or other means to influence elections. That’s the newer threat. I think the resources, expertise and changes we saw when it came to child exploitation and terrorism are a good model.

What makes this version particularly hard, and I don’t pretend this one is easy, is how it interacts with our First Amendment in the United States and, in Canada, with the general recognition of a right to free speech: how do you administer the system in a way that’s consistent with that fundamental democratic and civil society virtue while confronting this threat?

There, I think it’s figuring out ways to confront anonymity and to have transparency as to who the speaker is. That is, admittedly, easier said than done.

Senator Unger: Thank you very much, Mr. Carlin, for your presentation.

You talked a lot about cooperation being a very essential component of trying to deal with these threats. How does Canada compare in terms of, I guess, cooperation but also our own readiness, and are there any weaknesses you would care to comment on?

Mr. Carlin: I wouldn’t presume to judge the Canadian expertise, because I’m more an expert on U.S. law and government.

One area to look at, which a senator raised before, and which I know we have been moving towards and which has been helpful, is what your mechanism is for taking what is collected on the national security and intelligence side of the house and turning that into product, whether through the criminal justice system or by other means, that can help inform both critical infrastructure and the public at large as to what the threat is and what defensive measures they can take to effectively confront it.

That’s an area where more work, I think, needs to be done. A second issue would be to continue to improve mechanisms for speedy cooperation across borders so that law enforcement action can be taken in one country on behalf of the other, while being respectful of that country’s laws and institutions. Right now, the cyber adversaries shoot the gap and they’ll commit activity in one country that affects many other countries.

The more we can move towards speed-of-cyber information sharing, and then the ability to apply that information regardless of where the individual lives, the better.

Senator Unger: You created a threat analysis team to study potential national security challenges posed by the Internet of Things. Would you explain that, please?

Mr. Carlin: I’m glad you asked. If you think about it conceptually, one reason we’re here studying this issue is that, over a 25- to 30-year period, we moved almost everything we value in the West from analogue form, books and papers, to digital form, and then connected it through an Internet protocol, TCP/IP, that was never designed for security. We did so without properly calculating the risk of making that move as terrorists, crooks, spies and nation states moved to exploit this now digitally stored information connected through the Internet.

That’s where we are now, playing a game of catch-up in terms of the laws, regulations and practices of both the private and public sectors. The Internet of Things would be an exponential increase in the surface area of devices that are online, sharing information digitally and connected through that same insecure protocol, the Internet. That ranges from your refrigerator, to your lights, to the cars on the road.

If you think about it, what we shouldn’t do is make the same mistake we made when it was just data we were moving. These are real objects. In the United States, we have already had a proof-of-concept hack showing that, through the entertainment system, one could access the core braking and steering systems of a car. That caused a recall of 1.4 million Jeep vehicles that had already been sold to the United States public.

We also had a proof-of-concept hack with pacemakers, medical devices that were actually in people’s bodies, in their hearts, where a 12-year-old could essentially hack and kill. When they were originally rolled out, they were well tested and they worked, but what wasn’t tested was whether they would still work if someone deliberately tried to make them not work as designed.

This is a critical moment for our respective governments not to make the same mistake again of failing to think through risk, and to ensure and incentivize security by design, so that before a device is rolled out and reaches the general public, it is as secure as it can be against this type of threat.

That can range from having devices encrypted and password protected by default, instead of unencrypted by default, to, when it comes to something like an automobile, ensuring that the entertainment system and the core braking and steering systems are segregated by design, so a bad guy who gets into one can’t jump over to the other.

The reason we had the group studying it is that it’s such an important moment to get this right, before we have already made the move and made ourselves vulnerable and people’s lives are actually lost, at which point I’m sure we will have the incentive and drive to fix it. Let’s fix it before someone is hurt or dies.

Senator Marshall: Thank you very much. As Senator Wetston said, our federal budget came down last month, and there’s a very small, obscure section in the budget that proposes the sharing of confidential tax information on Canadian taxpayers with other countries around the world. Based on the information I had, it would probably be 20 or 30 countries and would include the U.S., China and Russia, as well as a number of other countries.

Do you have any knowledge of what the U.S. is doing with regard to the sharing of tax information with other countries? What do you see as the risks associated with the sharing of confidential tax information with other countries?

Mr. Carlin: I’m really not an expert on the sharing of confidential tax information. I know that to access it for U.S. law enforcement purposes there is a special regime and authority, 26 U.S.C. 6103, that governs when you can use it to investigate potential crimes, but I don’t know the rules and procedures for sharing it at the request of a partner.

I’ve seen some requests that came in through mutual legal assistance treaty requests, but I apologize, senator, I’m not aware of the provision.

Generally, it’s a type of information that I know we invested resources in to try to improve security around, given the theft of that information through cyber-enabled means. It was one of the areas of focus after we learned the hard way through the OPM, the Office of Personnel Management, breach, where we lost so much information about government employees.

Across the board in government, we did something called a cyber sprint to improve the basic blocking and tackling, if you will, of cybersecurity and to start identifying crown jewel information that needs to be protected. That tax information would fall into the category of sensitive information that needs extra protection.

I also know, on the other hand, that sharing that information to track down criminals, terrorists, nation states and tax evaders is important, and so is having secure means and authorities for sharing it.

Senator Marshall: You talked about the blending of information, so that people who are interested in gaining confidential information can get it from one source, match it up with other information and combine it to use for other illicit purposes. Do you see that as a possibility?

Mr. Carlin: Sure. Go on the so-called dark web today, that portion of the Internet that’s not indexed, where you have to know exactly where you’re going, and you see criminals essentially creating bazaars of trade in illicit information. Sensitive information about individuals is for sale, and it commands a premium, essentially, the more sensitive it is. Along with health and general credit information, tax information would be of value.

What you don’t know, when it reaches that secondary criminal market, is who is going to obtain it. We’ve seen nation states use that method to obtain information we worry about. We haven’t seen terrorist groups do the same thing at scale. And criminals regularly sell to each other for different criminal purposes.

Senator Marwah: Thank you, Mr. Carlin, for coming and presenting to us today.

In several articles in which you have been interviewed, you mentioned that in the world of cyber our data rules and regulations are stuck in the 19th century. If we had to go after certain rules and regulations, and one is the rules surrounding data storage outside the country, and now you have talked about the Internet of Things, what other regulations need to be addressed?

I suspect your knowledge would apply mainly to the U.S., though I suspect that if you have an issue in the U.S., we will have it in Canada as well. So which ones would you go after?

Mr. Carlin: That is a great question. There are a couple of different areas. One, in the United States now, is the so-called CLOUD Act, and we’ll see what happens because it’s attached to the current budget bill, on which a vote is taking place as we speak. One of the problems it addresses is this: let us say there is a Canadian law enforcement matter, a crime of local jurisdiction, say a murder that takes place in Canada. State and local police are constantly confronting the issue that information critical to that purely local, citizen-on-citizen crime is often now held by a foreign company overseas. So there is great frustration from state and local law enforcement that it is not practical to routinely use a Mutual Legal Assistance Treaty, or MLAT, request for these traditional local crimes; that request can take from a year to 18 months to process. They want to be able to use the laws of Canada, serve process in Canada and get a response for a local crime.

The CLOUD Act would present a mechanism that says, essentially, if the two regimes have equivalent privacy and civil liberty protections, one can use those equivalent legal processes, court orders, to obtain information for local crimes. It sets up a way for two court systems to be able to do that.

On the one hand, I think that’s a great model because it encourages countries to raise the bar in terms of the protection of civil rights and civil liberties to get access to that ability to serve process; on the other hand, it modernizes a system that increasingly relies on digital data so you can perform routine law enforcement functions.

If we don’t come up with solutions like that, the worry is a continued trend towards mandatory data localization, which is not good for civil rights and civil liberties. It is also not good for our economic growth, development and productivity. Similarly, it is not good for national security purposes, where you want to be able to share information across like-minded nations.

So that’s one area: modernize our ability to serve law enforcement process consistent with our respective values to obtain information across borders.

Another area would be consistency in data breach notification requirements. I see this even more now that I’m in the private sector advising clients. It is a dizzying array of different rules and regulations. In the United States alone, we have around 48 different state data breach notification laws, many of which are slightly different. Many companies are doing business in multiple countries and regions throughout the world simultaneously, and a breach usually affects those multiple countries.

So we should work together to try to have data breach notification that’s consistent, or to explore safe harbour ideas where, if the bulk of the breach occurs in one country and you meet that country’s laws, the other country accepts that as a mechanism for doing breach notification.

Ultimately, what we want to do is incentivize companies to come forward voluntarily with the information.

The third area we’ve talked about a bit already today would be thinking through the information governments collect and what the mechanisms are to share that with the private sector so they can take action. That’s a new problem with this technology. We’ve never had the private sector so much on the front lines of what used to be core governmental responsibilities and threats like national security threats. We need to rethink our rules and regulations accordingly.

Senator Marwah: What about our privacy laws? Do you believe our privacy laws have kept pace with where technology has gone?

Mr. Carlin: I think privacy law needs to be iterative. The technology is shifting so quickly. In some areas there has been a lot of focus and attention on personally identifiable information and how to make a notification if it is taken.

But the threat to privacy may not come through a breach of that information. It might come through the ability to put information together and apply new tools like machine learning or, as we develop it, artificial intelligence. Those tools create great opportunities, so on the one hand you want to incentivize that investment, but they also create new threats to privacy.

Our respective regimes are just at the beginning of thinking through the implications of applying artificial intelligence to bulk data sets and what the rules should be in that area. The technology is changing so quickly. It’s new. So that’s an area for continued study and, hopefully, a common approach in the future.

[Translation]

Senator Dagenais: Following the events that took place recently on the Facebook platform, I would like to know whether people can take concrete, individual actions to better protect their identity.

Last year, in an interview on American television, you said that a social insurance number is the primary identifier in both the United States and Canada, but that this method is now outdated. What could replace the social insurance number as a personal identifier, and have actions already been taken or planned, say, by governments, including the American government? I would like to hear your point of view on this issue.

[English]

Mr. Carlin: I’m glad you raised that issue.

In the first instance, the old social security cards used to say right on the card, “Don’t use this for identification purposes.” Over time, partly encouraged by well-meaning regulation, particularly in the financial sector, we started to require companies to use it for exactly that purpose. If that was ever a good idea, it is not now.

The current top official in the Trump administration for cybersecurity, Rob Joyce, has stated publicly that, like many, his personally identifiable information has been stolen at least six times. And it’s not particularly helpful as a consumer to get a notice that your information, such as your social security number, has been stolen, because what can you, as a consumer, do with that information?

I think there is a study group that Rob Joyce has alluded to inside the United States to start examining what it might look like to move from the social security number to other forms of multi-factor identification. It’s an area where technology has caused the problem, in part, because so much of what identifies us is stored online and has been stolen, accumulated and is now sold criminally.

But the technology may also be the solution insofar as, going back to the machine learning part, we may have unique identifiers that identify you in the aggregate by examining many different facets of what you’re doing and where you’re communicating online. There are multi-factor forms of identification.

It seems we need to start moving in that direction, and quickly. When you look at some of the largest data breaches that have already occurred, let alone at how much people are making available about themselves online without any breach at all, if we don’t move to fix this, it’s a problem that’s going to cause an increasing amount of loss through fraud and other means.

[Translation]

Senator Dagenais: When the perpetrator of a cyber attack is identified, can you tell us how you proceed with charges? Have there been trials in the United States? If so, I would like to know what kind of evidence must be submitted to get convictions.

[English]

Mr. Carlin: In each instance we followed our normal protocol before bringing a criminal case, even when we thought it was unlikely that we’d be able to obtain the defendant: we had enough evidence that would be admissible in a criminal court, and we believed it more likely than not that a jury of the defendant’s peers would convict them beyond a reasonable doubt. And we have brought cases to trial, had people confront the evidence in the courts and been successful with those theories.

Often, actually, once they see the evidence that’s admissible against them, many defendants have pled guilty when they’ve reached the U.S. court system, including in the case I referenced earlier, that of Su Bin, which we couldn’t have done without cooperation from Canada. I know Canada was under great pressure at the time from China, and we greatly appreciated the cooperation and the use of the normal procedures of the Canadian criminal justice system.

Doing it means having court processes through which you can obtain logging, often in real time, using appropriate court orders, so you can essentially examine the bits and bytes and be able to introduce them. What happens often, if it’s in the national security sphere, is that the information may also have been collected through intelligence means. That is very hard to introduce in court, so instead we try to collect the information using normal criminal processes, on the one hand.

The second change we made recognizes that it is bits and bytes and electronic evidence, but at the end of the day, it’s human beings behind the keyboard. We shouldn’t discard the normal investigative techniques we use, including flipping and gaining the cooperation of witnesses through plea bargains; using undercover agents to penetrate some of these organized criminal groups; and, third, applying techniques like the use of the behavioural analysis unit, the so-called profilers, who for years helped when I used to prosecute serial homicide or sex offence cases. You would get FBI experts who knew how to examine the forensic evidence and use it to say whether it was or wasn’t consistent with a particular behavioural profile.

More recently, the FBI has had people cross-trained as cyber experts and trained in using the forensic evidence you get in cyber to build a profile of who the adversary might be. That was a technique we used when trying to figure out who did the Sony attack.

Senator Ringuette: Thank you so much for being with us today. I’m going to ask a few simple questions in regard to the example you gave us at the beginning of your comments about corporate awareness and how corporations are cooperating with police authorities.

For instance, I’m amazed that a major U.S. corporation, having received a blackmail email demanding about $500 worth of bitcoin, had the awareness to alert your policing authorities. Corporations are kind of in a bind, “Give us the bitcoin and your information will not be distributed worldwide,” taking into consideration the impact on customers and on the corporation’s value of giving notice of the breach.

I guess two things: Statistically, how often would such a situation occur? How often would a corporation alert your policing authorities? Should there be regulations in regard to the corporation alerting the police, notifying the police authorities of every little blackmail issue they encounter?

Mr. Carlin: Statistically it’s difficult to answer, and this has been part of the problem in coming up with the full scope and scale of criminal activity in this space, because we’re only aware of the incidents that are reported. We’ve tried to do surveys; I think we did one recently. Some have placed it at about a third of companies having received ransomware. Some have placed the number higher based on anonymous surveys.

I know from when I was in government, and I see it now in the private sector, that an enormous number of companies, small, mid-sized and large, are being hit on a weekly basis with ransomware-type requests. The increase in these extortion ransomware attacks will continue as long as a decent number of companies are paying off those who are extorting them from overseas. Right now, from the perspective of the crooks, it’s a ripe market.

What they usually do is hit for smaller amounts, so that it’s easier to pay than to try to report. And in fairness, from the point of view of the company or the private sector, I think they often ask what the point is of going to law enforcement over a $300 or $500 ransom, or even a couple of thousand dollars. They’re not going to do anything. It’s overseas, and it just adds the nuisance of trying to be responsive, or the embarrassment, on top of having to make the payment, without providing something helpful in return.

It’s a difficult issue, like kidnapping, where there may be circumstances. It’s easy to say in the abstract, and the general policy from the FBI is to discourage people from paying ransomware, in part because you’re not guaranteed to get the data back anyway, and because it increases the likelihood that someone else is going to get hit or, if someone finds out you’re a company that’s willing to pay, that you’ll get hit again.

At the same time, right now, if you don’t have a focus on resilience, it could literally be a life-or-death situation in some circumstances. We’ve seen hospitals and local police departments hit with these requests. A company has to weigh potential loss of life or injury, or a situation so material to its ability to do business that it would put it out of business, so it is weighing its fiduciary responsibility against the payment.

I think it’s an area to explore. I’ll use the phrase “carrots and sticks”: what are the incentives that would encourage a company to come forward and tell the police? Right now it’s a mixed message, at least in the U.S., and I think it’s similar in Canada, where some regulators may use the fact that you came forward against you, to say you had insufficient defences, while others encourage you to come forward as a victim.

I think we should review the carrots and sticks and decide, as a policy matter, that if we want companies coming forward routinely, they should get something in return for coming forward, even if that means some immunity from liability. You have to find the right balance so that you don’t encourage people to have lax procedures. You need procedures that encourage resilience and preparation, so that if they get hit with ransomware, they don’t need to pay and they can get back to doing business.

Senator Tkachuk: Just a couple of questions. On the question of cybersecurity, there’s the criminal element of people trying to steal stuff, but there’s also, as Senator Marwah talked about, the question of privacy. You mentioned the Internet of Things and fridges all being tied into the Internet and all the rest of it.

I’m wondering about the invasion of your own home by cybercrime, people being able to see into your home. We use wireless to access Netflix. I’m always amazed that actually works. We use wireless to access YouTube. Obviously, televisions are becoming much more sophisticated than they used to be, rather than cable. When I use Netflix, if I watch a duster on TV, the next time it comes on it’s got a whole bunch of other westerns. So they know what I’m doing. If I watch a show on cable, they don’t know what I’m watching unless they phone me and they say, “Are you watching ’CTV News’?” I think they have to phone me; I’m not sure.

How long will it be before wireless can access your home? We have computers in cars. We’re talking about self-driving cars. All of those personal things controlled by the Internet, I think they are an even greater threat to our way of life, actually. I wonder if you can comment on any of that nonsense I’m trying to espouse here, but I think it’s serious.

Mr. Carlin: I think you’re right that privacy and cybersecurity are inextricably linked ideas. It’s already occurring. Devices are generating more and more telemetric data, information about how the devices are working. There’s an enormous upside. We have focused a lot in this cybersecurity hearing, and rightly so, on the downside, but that information could save lives when it comes to preventing accidents in the future, or provide critical information to doctors that helps save lives in health care.

What’s key is, again, thinking around security by design. How can we incentivize and encourage those rolling out the devices to make sure bad people are not going to use them for unintended purposes? There’s work to be done there. Certain regulated industries are a little ahead, but by and large that’s not happening, and the devices are rolling out at scale without thinking through the security equation.

Second, we’re in a bit of a race. There are countries that don’t share our values that are trying to win the race for artificial intelligence. Whoever wins that race will have a huge advantage in terms of committing future cyber-enabled crime or national security activity. It’s going to fuel the way those intrusions and defences work. And then, later, breakthroughs in quantum mathematics might render irrelevant much of what we do right now to defend on the encryption side. So there’s a race, too, to see who does quantum first.

In order to win that race, I think you need to incentivize the private sector to be able to collect and analyze that data efficiently. However, the rules and regulations, to your point, should be on use. As we think through it, we should encourage the ability to apply and learn how to analyze that data for productive purposes but set limits on use. That’s an area that is newly being explored with the change in technology.

What I’m finding now in the private sector is that a lot of companies are starting to collect data; that’s just the way the devices work. They’ve never been in this sphere or sector before. They’ve never thought of themselves as a tech company or an information-collecting company, and they’re just starting to discuss in their boardrooms and C-suites, “What does this mean, and what do we do with this data we’re collecting?”

So it’s a good time for bodies like yours to help set the rules of the road for them as they think about what to do in the future.

The Chair: Thank you very much, Mr. Carlin. You have been extraordinarily helpful to us. Your experience in both the public service and the private sector has been a very helpful combination, and we’re indebted to you. Thank you very much for helping us.

With that, we will adjourn the hearing, with our thanks to Mr. Carlin in San Francisco. Thank you very much.

(The committee adjourned.)
