
Proceedings of the Standing Senate Committee on
Social Affairs, Science and Technology

Issue No. 19 - Evidence - March 30, 2017


OTTAWA, Thursday, March 30, 2017

The Standing Senate Committee on Social Affairs, Science and Technology met this day at 10:33 a.m. to continue its study on the role of robotics, 3-D printing and artificial intelligence in the health care system.

Senator Kelvin Kenneth Ogilvie (Chair) in the chair.

The Chair: Colleagues, we have a quorum, and I'm calling the meeting to order.

[Translation]

Welcome to the Standing Senate Committee on Social Affairs, Science and Technology.

[English]

I'm Kelvin Ogilvie from Nova Scotia, chair of the committee. I'm going to invite my colleagues to introduce themselves, starting on my right.

[Translation]

Senator Seidman: Judith Seidman from Montreal, Quebec.

[English]

Senator Stewart Olsen: Carolyn Stewart Olsen from New Brunswick.

Senator Raine: Nancy Greene Raine from B.C.

[Translation]

Senator Petitclerc: Chantal Petitclerc from Quebec.

[English]

Senator Hartling: Nancy Hartling, New Brunswick.

Senator Frum: Linda Frum, Ontario.

Senator Dean: Tony Dean, Ontario.

[Translation]

Senator Cormier: René Cormier from New Brunswick.

[English]

The Chair: Thank you, colleagues. Just before I go to our distinguished witnesses for today, I want to get your agreement that we will stop 10 or 15 minutes early. We have a budgetary item for the committee that we need to get committee approval for.

Is that agreed, colleagues?

Hon. Senators: Agreed.

The Chair: I remind you that we are continuing our study on the role of robotics, 3-D printing and artificial intelligence in the health care system. I'm going to identify both our witnesses by video conference, and then I will invite one to present initially.

We have appearing via video conference from Belgium, Dr. Reinhard Lafrenz, Secretary General of euRobotics, SPARC, Partnership for Robotics in Europe.

Appearing via video conference from Arizona — and we appreciate his efforts to get to a site that we could connect to for today — from the Association for the Advancement of Artificial Intelligence, Dr. Subbarao Kambhampati, who is a professor at Arizona State University.

We welcome both of you to our meeting. I'm going to invite Dr. Lafrenz to present first.

Reinhard Lafrenz, Secretary General, euRobotics, SPARC, Partnership for Robotics in Europe: Thank you very much for the invitation. It's an honour for me to speak in front of this committee.

Let me briefly introduce how we are structured. We are euRobotics, an association, and we have a public-private partnership called SPARC. One party to the public-private partnership is our association, where I am the Secretary General and where the head of KUKA, one of the large robotics manufacturers, is the president. The other party is the European Union, represented by the European Commission.

In this partnership we want to foster the whole innovation chain, starting from the very low, basic TRL research levels. I don't know if you are familiar with the TRL concept, technology readiness levels. The chain starts basically with a white sheet of paper and the first ideas and ends up with products on the market, which is the goal of that value chain. We try to integrate all of the stakeholders into one big community.

The idea is to create individual topic groups, which are driven by our association. They focus on various dimensions. The dimensions can be technical ones, such as perception, 3-D vision and manipulation, or application domains, such as underwater robotics, health care and rehabilitation.

These topic groups are structured so that they provide input to us, which is normally extracted in a moderated process. We create input for the next work program and for the funding calls of the European Commission, which finally decides what will be funded but takes our input significantly into consideration. This is the basic setting.

For the value chain we try to integrate the start-ups and the spin-outs from universities as a valuable source of future products that might be on the market in 5 to 15 years, depending on the complexity of the new systems and the amount of hardware involved. If it's a pure software product, it's much faster than any case where we have to deal with mechatronics or even materials; there we have a time frame of between 5 and 15 years from the first idea to the market. This technology transfer can be supported at various levels.

The European Union, through the European Commission, issues calls. Some are for more basic research at the lower technology readiness levels, and others are closer to applicable prototypes in the relevant domain, where they can be applied and tested in the relevant end-user environment together with a potential end user of the technology.

Relating to health care, this could include logistical robotic systems for transportation, which are already on the market but still have room for improvement. There can also be rehabilitation equipment, either installed in an institution where people come or loaned, through an insurance plan or on a private basis, to customers at home. And there is the technology used by surgeons, which is a very specific and narrow but very important market.

Regarding statistics, I will not give too much statistical data because the basic data are available through the International Federation of Robotics, which issues an annual yearbook of robotics statistics. It now comes in two volumes, one for industrial robots and the other for so-called service robots, which are more important in this context. Service robots are divided into home or domestic robots and professional service robots. Most robotics technology related to health care falls in the professional service robotics category.

We have specific topic groups working on the social, economic and legal aspects. As you may know, there are currently initiatives, driven by the European Parliament, to regulate robotics in general. We are involved in the discussion. We have close links to several of the parliamentarians; we invite them to specific meetings and try to provide input. However, this is not easy because our stakeholders, from basic research to large industries, hold a broad range of opinions.

The relationship to AI and other, more data-driven technologies will become more and more important in the near future. The close links that come from applying AI to physical systems will shape our future over the next 5 to 20 years. I cannot be more precise because no one knows how fast these technologies will develop.

Regarding our strategy as the association, we want to provide brokerage services that ultimately create leverage. Normally, our public-private partnership foresees funding from the European Commission. Depending on the funding mechanism, it's either 100 per cent or 70 per cent: 70 per cent when it's closer to the market and a more mature prototype, and 100 per cent when it's more related to basic research.

However, the industrial or private side also wants to triple the amount of 700 million coming from the EU, so that we end up with 2.1 billion euro in the end. This amount of money is not directly tied to individual projects granted by the European Commission. It includes everything related to our activities, including research in labs to which there is no complete public access.

This will contribute to the growth and development of the market and, we hope, create new jobs or a shift to new job profiles. One of our concerns is that further automation has to create jobs, or at least change job profiles, in a way that is sustainable for the overall economic situation in Europe.

That is basically my introductory statement. Thank you.

The Chair: Thank you, Dr. Lafrenz. I will now turn to Dr. Subbarao Kambhampati and invite him to make his presentation.

Then I will open the floor to questions from my colleagues. They will identify in the first instance which of you they are addressing. Following that answer, the other witness will have an opportunity to respond as well.

Dr. Kambhampati, please go ahead.

Subbarao Kambhampati, Professor, Arizona State University, Association for the Advancement of Artificial Intelligence (AAAI): Thank you all for the invitation. It is quite an honour to get a chance to speak with you today.

I am a professor of Computer Science at Arizona State University. I am also the current President of the Association for the Advancement of Artificial Intelligence, AAAI for short, the premier scientific organization devoted to the study of artificial intelligence. AAAI has a very significant membership and partnership from Canada. Professor Alan Mackworth of the University of British Columbia is a former president of our organization. We have come to Canada multiple times for our annual conferences.

I'd also like to mention that I am an inaugural trustee of the Partnership on Artificial Intelligence, a consortium that was formed last year to focus on the responsible development of AI technologies. The partnership includes industry members, mainly Google, Apple, Amazon, IBM, Facebook and Microsoft, as well as non-profit organizations, including AAAI and the American Civil Liberties Union. It is expanding.

I thought this may be of interest to you, if you have any questions on the initiatives in the U.S. and worldwide regarding the social impacts of artificial intelligence.

I have been involved in AI research for over 30 years now. My current interests are on planning and decision making in human-in-the-loop and human-aware AI systems. I will say a little more about that later.

While I have a good understanding of the current state of AI and its applications, as I mentioned to your staff, I am not an expert on the applications of AI to health care specifically. I did take the time to read the transcripts of your committee's meetings since February and thus have some understanding of what you have already discussed.

As you have already heard, artificial intelligence as a discipline aims to get computers to show behaviour that we would consider intelligent. Intelligence is multifaceted. In my discussions about progress and impact of AI with people outside our field, I often find it useful to contrast the progress of AI to the way human babies acquire their intelligence.

Informally, babies start by showing signs of perceptual intelligence: how to see, hear and touch the world around them. Then they get the physical and manipulative intelligence: how to walk, roll, manipulate objects, et cetera. They then acquire some forms of emotional and social intelligence: how to model other agents and their mental state. Only then do they get into the realms of cognitive intelligence: doing well in standardized tests, playing games like chess, et cetera.

It is interesting to note that the progress in artificial intelligence happened almost in the reverse direction. We had systems showing high levels of cognitive intelligence quite early on. There was the boom of medical expert systems in the early 1980s. In the 1990s we had Deep Blue defeating Kasparov. However, it was only recently that AI systems reached adequate, and maybe impressive, levels of perceptual intelligence: how to see and hear the world around them.

Surprising as it might seem at first glance, this reverse direction of progress is quite understandable. To a first approximation, AI researchers started by teaching computers things they consciously know how to do. Perception and manipulation are things that we do quite unconsciously, so the only way to get computers to do them well was to make them learn. For this, we had to wait for the availability of large-scale data. The so-called deep-learning systems that you have heard about have existed since the 1980s but blossomed only after the Internet made data, be they images, speech signals or text, easily accessible.

This perspective also helps us put in context the recent flurry of interest and commercial applications of artificial intelligence. Advances in perceptual intelligence made it possible for AI to reach a much wider audience. It becomes much easier to experience the fruits of AI technology when your cellphone recognizes your voice and the images around you.

The next wave of developments in AI is expected to come from harnessing the strides already made in cognitive intelligence and connecting them to perception, reasoning, planning and action.

Turning to the health care applications of AI, by the late 1980s AI systems were already being used for clinical assistance. They were the big drivers of the expert systems boom. However, they had to be hand-fed inputs. Just as Deep Blue, the chess system, couldn't quite recognize a chess piece but could beat Kasparov, medical expert systems could not see the patient they were diagnosing.

There is a funny anecdote about Mycin, one of the first expert systems, happily diagnosing the hand-fed symptoms of a faulty car engine as internal medical problems.

The new wave of AI applications is able to combine perception with reasoning and diagnosis, for example, by reading X-rays, cardiograms and photographs. These new technologies can help us tackle some of the most intractable problems in health care, including human error in hospitals, which according to some studies is the third leading cause of death in the United States, where I live.

One long-standing tension in our field has been between artificial intelligence and intelligence augmentation, AI versus IA for short. Most AI research outside health care applications has focused on getting systems to operate autonomously, on their own, rather than with humans in the loop. When you have humans in the loop, as would especially be the case with applications in health care, the AI systems need aspects of emotional and social intelligence. In particular, they need to be able to model the mental states and intentions of their human teammates, behave in explicable ways, show appropriate emotional responses and provide adequate explanations of their recommendations. This is the only way they can earn the trust of the humans in the loop.

Human-aware AI systems are crucial for applications of AI in health care, especially when you start looking at systems that interact directly with patients, whether to encourage healthy behaviours or to provide direct home care for the elderly or injured.

The advances in AI research and the widening use of AI technologies have also brought to the fore concerns about responsible and ethical use of these technologies. While voluminous amounts of health-related data are increasingly available and can be leveraged to provide better health care, it is critical to put in place best practices that preserve the privacy and confidentiality of the patients. This requires both technical developments, such as the use of blockchain technology, and policy decisions.

Another issue is data bias. In an inclusive multiracial and multicultural democracy such as yours, it is crucial that the predictive models guiding health care decisions are learned from data that is truly representative of the entire population.

Initiatives and organizations, including my own organization, AAAI, and the Partnership on AI, which I mentioned earlier, are looking into some of these challenges, but much remains to be done.

I want to end, if I may, with a personal recommendation to the committee. The so-called deep-learning revolution owes a considerable debt to the farsightedness of the Canadian funding agencies in continuing to support research into neural networks when much of the rest of the world had moved on to other things. Canada richly deserves to reap the benefits of its investments into this basic research. I am heartened by the astounding entrepreneurial activity in the Montreal and Toronto corridors.

As impressive as its recent successes have been, deep learning is only a part of the broader AI enterprise. The good news is that the Canadian AI research community itself is a whole lot broader and multifaceted. I hope that your government continues to support basic research across the full breadth of artificial intelligence.

Paraphrasing one of Canada's famous sayings, broad-based support for basic research is the only way to ensure that you skate to where the puck is going to be rather than where it has been.

I thank you for your attention, and I am happy to take any questions.

The Chair: Thank you both very much. I'm now going to open the floor to my colleagues.

I want to again remind you to please identify who you are directing your question to, in the first instance.

Senator Stewart Olsen: Thank you for your presentations and for taking the time to do this for us.

Dr. Kambhampati, you make an interesting point, one that I tried to get at yesterday with a witness who didn't explain it as well as you do: that predictive models are learned from data that's truly representative of the population.

My concerns are where the data comes from. How are we moving forward with ensuring that our data are sound? There were mentions of Google yesterday, but everyone knows that you can't always trust what you're finding on the Internet. How are you looking at ensuring that it's accurate and representative?

Mr. Kambhampati: There are two different issues here. General learning systems, including the image recognition systems, have basically learned from publicly available data. Much of the deep-learning-based medical imaging work started off with a particular benchmark set of data called ImageNet, which was not representative of the part of the world we live in. It was somewhat different in that it had a rather unhealthy fixation with dogs. Of the 1,000 categories of things ImageNet has, 200 of them are dog pictures. Just as we dream in terms of human faces, the current deep-learning systems tend to dream in terms of dog faces. I bring this up only to point out that just because data are available doesn't necessarily mean they are representative of the world we live in.

When it comes to things like health care data, thankfully it's not the case that health care data are directly available on the Internet; access to those data should be very carefully controlled. My point is that it's very important for governmental agencies to ensure that the data they are collecting are collected from representative populations.

Even though I've been working in AI forever, Alexa still doesn't recognize my voice very well because most of these voice-recognition systems are apparently not trained on my kind of voice. That's fine if it's just a voice-recognition system, but if I were to have a heart attack and go to a hospital where they are trying to decide my fate based on data that are not representative of my body type, it would be a very bad situation.

This is more a recommendation about how governments and the public agencies should be making sure the data they're collecting is truly representative of their entire population.
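
[Illustration: a minimal Python sketch of the representativeness check described above, comparing the demographic mix of a training set against population shares. The group labels, population shares and record counts are hypothetical and serve only to show the calculation.]

    from collections import Counter

    def representation_gap(records, population_shares):
        """Return each group's share among the training records minus its share
        in the population; positive values mean the group is over-represented."""
        counts = Counter(r["group"] for r in records)
        total = sum(counts.values())
        return {g: counts.get(g, 0) / total - share
                for g, share in population_shares.items()}

    # Hypothetical population shares and training records.
    population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
    training = ([{"group": "group_a"}] * 800
                + [{"group": "group_b"}] * 150
                + [{"group": "group_c"}] * 50)

    print(representation_gap(training, population))
    # Roughly {'group_a': +0.20, 'group_b': -0.10, 'group_c': -0.10}:
    # group_a is over-represented, the other two are under-represented.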

Senator Stewart Olsen: Dr. Lafrenz, could you tell me in your exciting world, for the benefit of our audience, what are you finding in Europe that is one of the most exciting new things coming out in medicine that you would say is just amazing?

Mr. Lafrenz: When we speak to surgeons, for example, there are several areas. One is surgical robotics. They say we have high-level simulation technology and many analytic measurement technologies, but the actual surgery, especially when it comes to bones or cutting out some parts, is still pretty traditional and not a modern thing. There is tremendous room for improvement to enable surgeons to get robotic technology to support them in actual surgery.

A second big area of possible health care improvement is rehabilitation, where you would normally get treatment only once a week. With a home device that is rented to you, you could exercise even a couple of times a day, which could make a huge improvement.

I'm referring to technology such as exoskeletons and other specific devices. We had some in my previous projects when I worked as a researcher at the Technical University of Munich in Germany; we had projects related to arm and hand rehabilitation. AI technology was also used to a certain extent to improve the movement, to learn from the patient's capabilities and to adapt to individual needs and individual motion patterns. In the health care environment we can also think about the logistics side and the automation of some service departments in hospitals. It's not one focus; it's really a broad area.

Senator Seidman: Perhaps I will start with you, Dr. Lafrenz.

You described at the very outset a structure for integrating the many aspects of development and the many stakeholders. You started out by talking about technology readiness levels, integrating the stakeholders and individual topic groups; I mention that just to remind you of what I'd like to ask you about. Then your input is sent to a funding body, and they make choices based on your input and other things, presumably.

I'm trying to understand the connection between euRobotics and SPARC and how this whole enterprise functions to maximize priority setting.

Mr. Lafrenz: First, euRobotics is the private association. We have around 270 members, from small research labs up to big corporations such as ABB, and end users such as Procter & Gamble and others that define the needs of their specific areas. Therefore we are able to bring together in a systematic way the end users, the researchers and the industries that create the components, products or software, the actual technology that will end up in the market. All this information is collected through structured workshops. SPARC is more or less the contract between us, as the association representing large parts of the robotics community in Europe, and the European Commission.

We provide input in the sense that the commission asks us for suggestions, more or less, for the next work program. It's up to the European Commission to finally decide, but they listen carefully to what we extract from individual workshops at various levels within the separate topic groups, including on health aspects. Then a prioritization is done, which is discussed with all the topic group leaders. That recommendation is forwarded to the board of directors of our association, and they come up with a final suggestion to prioritize focus areas, or whatever they are called in the specific context. Then the European Commission defines the budgets for the individual funding mechanisms and topics.

Senator Seidman: The reason I'm asking is that clearly in our study we are trying to understand the way forward for Canada. My question concerns the best way to structure any kind of integrated system for the development of AI and robotics in a country so that it prioritizes and effectively develops some kind of actionable plan to move forward in this area.

Do you have recommendations to make about the kind of body or structure that would maximize an ability to jump forward in this area?

Mr. Lafrenz: Based on our experience, it takes a long while to get all stakeholders on board and able to communicate, so that even the same words and sentences are understood in the same way, from the research and technical level in large industry up to the management level.

We started with community building about 10 years ago, and now we have big fora where people meet. In the beginning it was more separated, with one industrial day, one academic day and one mixed day. Last week in Edinburgh it was different: we had more than 800 people from European robotics covering all the different dimensions reflected in the topic groups.

On the process to come to action, my personal suggestion would be to have a structured dialogue between all the stakeholders to extract the needs of the users and to see whether valuable potential from the research side flows into activities; then have a look at where the funding is already available or where there is no need for funding because of the industry's own interest to proceed; and then identify the gaps.

Health care is a typical example where we need public funding because health care is one of the areas like transport where everyone, every single citizen, is involved and needs to benefit from it. This is typically a requirement where public money is well spent.

On the other hand, to proceed not only within the country but also internationally, we need to come up with common standards. We need to define international standards to be able to create an ecosystem where a product or component produced in Canada is easily interoperable with other systems or components created somewhere else in the world.

Standardization is not only at a technical level; it also requires harmonization of regulation and legislation. In Europe specifically, but also among several provinces within Canada, traffic regulations are individual, so you might have a problem: you would need to provide several different software versions that are switched when you cross the border. Harmonizing this would not only spare you the personal frustration of having to reload at the border but would also make international trade and development easier. It is the same for health care, for the data protection rules and so on.

Senator Seidman: Thank you very much, Dr. Lafrenz. I appreciate that.

Dr. Kambhampati, I have a similar question for you, building on the points you made at the end of your presentation to us, wherein you said that Canada richly deserves to reap the rewards of its investments in basic research and that it had done a lot of groundwork that led to the development of deep thinking.

Canada, it has been said, hasn't been so good at going from basic research to market. We might do a lot of basic research, but somehow we don't seem to manage to see it through to the final stages. Why do you think that is and what recommendations would you have to make in the area, specifically to a government, so that they might be able to improve this?

Mr. Kambhampati: First of all, I am in an academic setting, so I am probably not the best person to recommend how to start industries. I want to make a small clarification. What I meant about Canada's investments was on deep learning, not on deep thinking. Deep learning is a specific technical term, which is a particular type of neural network based learning.

It is interesting that research is ultimately a human endeavour. People find certain things to be hot at certain times, and they get caught up in the mentality of working in a certain area. To my mind, funding agencies should take a more overarching attitude and ensure that multiple important areas are supported. When I look back at the early 2000s, pretty much everywhere else neural network research had more or less come to a standstill.

In all these different places people were strategizing about where exactly the field would go, but apparently they were completely wrong, because there is no way of predicting exactly what sorts of research will work. Otherwise, it wouldn't be research.

To me, it's great that the Canadian funding agencies continued to support neural network research, which is now obviously one of the most impressive technologies and has made AI broadly applicable.

The only point I was making was that, just as we did not know exactly where we were going to be back in the 2000s, we still don't know right now which technology will take us to the next level. From the research point of view, I believe it is very important to make sure that you have a broad-based portfolio. Of course, exploitation of the research, not in a bad sense but in the sense of using the research that you have done and bringing it to the marketplace, is a very important aspect. I can clearly see how the industries, all the Googles and Facebooks, have started setting up labs in Canada. Every day I hear news about more money and more start-ups in the Montreal corridor in particular and more recently in Toronto too.

I don't have any specific advice on how to bring research products to market, but as a researcher, and as somebody representing the AI community as a whole, I am grateful that your funding agencies had the foresight not to follow whatever was currently working back in 2006. Just as they didn't know then, they can't know now. It's very important not to assume that whatever is currently working is the only thing we will ever need.

Technically speaking, neural network deep learning is a particular solution to doing perceptual learning. That's a small part of artificial intelligence. There are many other aspects. Even when we come to health care applications, for example, as I pointed out, things like the ability to model other people, other people's minds and mental states, wind up being extremely important.

The human factor aspects have sort of been the Rodney Dangerfield of AI research for a long time. They didn't get enough respect, but I think they have to get a lot more respect going forward because an extremely useful system is of no use if people don't want to use it. People decide to use it or not use it based on how comfortable they feel with the technology.

Another interesting point is that as medical technology and support systems become advanced, human doctors in the loop will need to keep up with the technology. That would involve essentially these technologies being able to explain themselves, providing explanations and showing explicable behaviour. These are things that in my view will be as important as making appropriate applications of AI to health care scenarios.

Mr. Lafrenz: I fully agree. As for my background, just to explain why I'm able to address this point, I did my diploma thesis on neural networks a long time ago. One of the big issues here is an explanation component that can extract from neural networks and other AI technologies a clear statement of why and how a decision was made. This is absolutely important. We still lack acceptance, especially when it comes to levels where a system needs certification in a safety-critical area, be it running a nuclear power plant, working in the medical environment or controlling an airplane.

Without this explanation, it's hard to get the certification for it. We need to work toward bringing AI to a level where it's certifiable.

Senator Raine: Thank you both very much. It's most interesting.

This question would be for Dr. Kambhampati. You are doing artificial intelligence and trying to gather information from as broad a base as possible. Yet, in reality as humans, we are all a bit different. Your voice, for instance, is different from my voice.

Does it not make sense, when you gather all the information, to run it through a screen that more accurately pinpoints the individual at whom the artificial intelligence is being targeted?

Mr. Kambhampati: You are talking about customization. There are two aspects to this. One of the issues of the current technology, especially the learning technology, is that it is quite data intensive. Humans tend to learn at least at the cognitive level from very few examples.

Most of the technology that's working right now requires large numbers of examples. That matters if you want a system to be personalized to a single person. For example, a voice-recognition system should of course train itself to my voice, but it takes time for it to get enough samples of my voice to be trained on. It is even more so in the case of health care scenarios: unless I fall sick often enough, it will not learn exactly what kinds of things happen to me.

You ultimately have to put me in some kind of an equivalence class of perhaps Asian-American immigrants and then make sure there is enough data gathered about that particular subgroup.

Once you have the system fielded and if I have a personalized health care assistant just for me, over a period of time it will learn mostly from the data on my own health. I guess that will happen after it has been fielded. Actually, before it is fielded it still has to be trained on data that is representative of the group that I might belong to.

The bigger issue is to make sure that you don't just take any 10,000 people's health records. You have to make sure that the records you are training on correspond to the groups, to the population distribution, in an appropriate way.

I don't know whether that answers your question.

Senator Raine: It does and it doesn't because obviously there is a possibility of getting into racial profiling in terms of data-gathering. I don't know whether that would be helpful. Even further, you could do some screening in terms of genetic screening so you know how the data works from a genetic point of view. I guess we are still learning all this.

Mr. Kambhampati: As I mentioned toward the end of my statement, there are many important policy issues to be thought through carefully about data gathering. Clearly, things like racial profiling are probably less worrisome in the case of health care because in some sense my health depends on the race I come from. It's not just a science.

It's also the case that there are certain kinds of health information, especially the personal health information that needs very strong privacy and confidentiality protections. It has been seen sociologically that people do not divulge their data if they do not feel confident that it won't be misused against them. That can lead to an unfortunate vicious cycle of people not divulging data which then does not help the health care that they will eventually get.

It's important upfront to make sure people know their confidentiality will be protected. As I said, it should not only be done but should be seen to be done. Confidentiality and privacy should not only be there, but they should be seen to be provided.

There are some activities beginning. I am thinking of a couple of things; for example, the Google DeepMind folks have started talking about the danger that patient data can be misused and about how to ensure that they are not.

Notice that blockchain technology came from Bitcoin and electronic currency, but it's now actually being used to ensure that both transparency and privacy are afforded for health care data. As we go forward, this sort of thing will get to some of the very important points you are raising: How do we make sure that people provide the data at the same time as we make sure that we don't misuse it and they know we will not misuse it?

Senator Frum: My question was exactly on the subject you have been discussing, professor. Let me ask both of our witnesses to expand a little more broadly.

Studies seem to show there is resistance on the part of health care professionals and, I also think, patients. Anecdotally, I was recently asked by my doctor to log all of my information. Maybe it's a generational thing, and a younger generation would be more receptive to these kinds of requests than my generation.

We know about the privacy and confidentiality challenges. Are there other barriers when it comes to adopting digital records in the health care field? Other than privacy and confidentiality can you think of other barriers that exist that prevent the widespread acceptance of the technologies?

Mr. Kambhampati: First, you pointed out that there may be resistance among people to share information with this advanced technology. There has also been research showing that in certain cases and for certain demographics, people much prefer talking to computers rather than to people.

There has been published research showing that in the case of psychiatric disorders, for example, people are much more willing to be upfront about what is ailing them when they think that they are talking to a computerized medical assistant rather than to a human. I think we are trained to worry about people judging us, whereas we still believe that computers don't judge us.

With this particular technology they had people on the other side of the computer, just to make sure that there was a proper process in place, and they found that people were much more forthcoming. So things are more complicated: in some cases people are more worried about dealing with the technology, but in other cases they prefer to deal with the technology.

Going back to my earlier point in terms of the barriers, especially if you have systems that are interacting with humans, it's very important that people feel the systems have people skills. At the risk of repeating something I said earlier, the idea of developing interfaces that people will feel comfortable with has had a checkered history in computer science in general and AI in particular.

Some of you probably remember the Microsoft Office paperclip assistant that was supposed to help people. Everybody would shut it off because it was really annoying. It did not understand your emotional state. As a researcher once pointed out, it came up because you were frustrated, yet it would have this silly grin on its face, and you just hated it because of its reaction to your problem. You clearly want these assistants to have emotional and social intelligence. That's just something we took for granted.

With AI in general these problems weren't considered as important before, because researchers assumed that since those things come easily to us, it should not be that hard to get computers to do them for us.

In reality, dealing with humans is infinitely harder than playing chess. That is where the new research on AI is going, and it will be extremely important, but it will be a big barrier to people feeling comfortable about using the technology. If you think the system is just a savant asking the same question multiple times without realizing your mental state, you will shut it off.

The Chair: Dr. Lafrenz, do you wish to come in on this?

Mr. Lafrenz: I have just one short remark. In several cases it's not about privacy; it's more about the acceptance issue especially in the health care environment.

In several studies, people felt more comfortable calling a machine during the night instead of waking up a human being, especially when it comes to more intimate things like going to the toilet. A supporting machine is much easier to accept and can even prevent infections and other problems. There is a high degree of acceptance there. However, in other areas it's more about public communication and trying to educate the broader population to show the benefits of such technology.

The Chair: Dr. Kambhampati, I see you want to come back in on this.

Mr. Kambhampati: I have just one other thing which I was trying to talk about earlier and is relevant to this question. One of the aspects of the systems being usable by people is that they need to explain themselves. We talked about this earlier.

Oftentimes an AI explanation is seen as talking to yourself and saying, "I did this because I wanted to do it this way." We know that explanations are really all about the other person's mind. For example, a doctor explaining the diagnosis to another physician gives a very different explanation than to the patient they are treating. That is because these explanations have to depend on the models of the person you are giving the explanation to.

This is yet another place where the systems have to learn to track the mental states of the humans they are interacting with, so that they are not just talking to themselves and saying, "I did this because my model said so." The other person's model might be shallower or deeper than the one the system is actually using.

Senator Cormier: Thank you, gentlemen, for your very good presentations. We had occasion previously to talk about training. I wonder what you consider will be the main challenges in terms of training for the users of these technologies. Feel free, both of you, to answer this question.

Mr. Lafrenz: That's a very tricky question, mainly related to core scientific considerations. Maybe it's better if Professor Kambhampati would answer first, and then I could maybe respond.

Mr. Kambhampati: I'm assuming, just to make sure I got your question right, you're asking how people using these technologies should be trained so that they can use them in the right way. Is that a correct interpretation of your question?

Senator Cormier: Yes, but I'm thinking about the medical user, the doctors and the patients, of course.

Mr. Kambhampati: Indeed. I think that's a good question. I don't have a complete answer, but one of the issues is to make sure that the doctors in particular have an understanding of the limitations of the technologies they are using. It's not as simple as one might think.

For example, right now we have learning systems that are quite impressive at recognizing one out of a thousand categories of objects. If you show them a school bus, they will say it's a school bus. If you show them a temple in South India, they will say it's a temple in South India. It's very impressive.

The problem is that when we see a human doing these kinds of tasks, we subconsciously put a closure around their capabilities and say, "If you can see this school bus, then if I show you a school bus that is a slightly different colour you will still say it's a school bus. If I show you a slightly different temple, you'd still say it's a temple."

Our own visual systems have their fallacies and problems, but we have a sense of them, whereas the failure modes of automatic technologies are oftentimes quite different from what we are used to. In particular, coming back to something like the school bus, it has been shown that the way state-of-the-art neural networks recognize these things is not robust: sometimes, if we just add imperceptible noise to the school bus, the system will suddenly say with 99 per cent confidence that it's an ostrich. You and I will still see it completely as a school bus, but it will say it's an ostrich.

Sometimes it will come to decisions for which the explanations are actually quite silly. For example, one of the things the system might be able to predict with very high confidence is whether something is a Siberian husky. It is really impressive, but, on the other hand, you'll realize that what it's doing is seeing that there is ice in the picture, and if there's ice in the picture it must be a Siberian husky. That's not the way we recognize Siberian huskies.

It's important for doctors to know when the system is right and to have a sense of its failure modes. On the technology side, AI researchers are trying their best to make the systems report their confidence levels. It's also important for the people in the loop to have a sense of the limitations so that they will not just be mesmerized by the magic of the technology.

In the case of a school bus being seen as an ostrich, it turns out that anything can be made into an ostrich by adding some arbitrary, imperceptible noise. It's important to train doctors on these very surprising failure modes, so that they know to take the decisions recommended by the system with the appropriate grain of salt as we go forward.
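
[Illustration: a minimal Python sketch of the "imperceptible noise" failure described above, using a fast-gradient-sign perturbation against a pretrained torchvision classifier. The model choice, tensor shapes and the use of the model's own prediction as the label are assumptions for demonstration only.]

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained ImageNet classifier (assumed available through torchvision).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def adversarial_example(image, label, epsilon=0.007):
        """Nudge `image` with imperceptible noise chosen to increase the loss,
        which can flip the classifier's prediction."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # One small step in the direction that most increases the loss.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    # Hypothetical usage: x is a (1, 3, 224, 224) image tensor scaled to [0, 1].
    x = torch.rand(1, 3, 224, 224)
    y = model(x).argmax(dim=1)          # treat the current prediction as the label
    x_adv = adversarial_example(x, y)
    print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
    # The two predictions may differ even though x and x_adv look identical to a person.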

Mr. Lafrenz: The exact way to train people depends on the user group. A medical doctor is expected to go through a training course and learn directly how to use the technology and what its limitations are. A completely different story would be training a patient, even one with dementia, to use a technical device.

It's really impossible to give a single clear answer. It also depends on the age of the person and how familiar they are with modern technology. What should be done, as a more societal measure for the future, is to structure general guidelines so that getting used to modern technology is more or less included in schools or even preschools.

Senator Cormier: Dr. Lafrenz, do you consider there are cultural challenges to the training? What do you have to say on that? I'm curious.

Mr. Lafrenz: On cultural differences, if you look at Japan, for example, they are much quicker to adopt any kind of robotics technology. They appreciate having a robotized dog or any other robotic pet at home. Whereas here in Europe, in certain communities, we are discussing whether it is ethical to deceive a person with dementia by giving them a robotized dog, which can only run out of battery power, whereas a real pet would starve.

There are real differences that need to be taken into consideration. There is no easy answer on how to do it.

Mr. Kambhampati: I'm aware of some work being done at the Air Force Office of Scientific Research on how different cultures trust technology. The exact point came up that certain cultures are indeed much more accepting of technology than others. It's a very interesting thing to keep in mind. You can't change the cultures, but it is probably easier for certain cultures to consider robotic caregivers than for others. Personally, I am not too sure that I am a big fan of robotic caregivers for when I become old.

There is also a different point. There is already a generational gap in terms of technology acceptance. I'm pretty sure we're all aware that our children and grandchildren are way more accepting of technology, and probably sometimes, from my point of view, more willing than some of us. We have to consider that big aspect.

Mr. Lafrenz: As a side remark, we all know about the digital-native generation, and some people already talk about the robotic-native generation, which is the next one and, of course, includes AI.

The Chair: Dr. Lafrenz, I'd like to ask you a couple of questions and then I'll go to Dr. Kambhampati.

One of the questions that I have is with regard to deep learning. In your discussion of your organization you focused largely on robotics. Is the research and application side of deep learning, as an entity in itself, similarly organized in the EU in terms of setting priorities, financial support and looking for potential applications?

Mr. Lafrenz: We don't have a public-private partnership on deep learning at the moment. However, we are considering AI, including deep learning and other AI technologies. Ultimately, all of these technologies ending up in the real world to control something in the physical world is a valuable contribution to robotics.

A very easy example: last week in Edinburgh we had our European Robotics Forum with a large number of participants. There was a special presentation on AI and robotics, and one of the keynotes came from Google DeepMind. We asked them to show us where they were.

AI technologies for robotics are currently more at a research stage than in real applications. There are real applications in only a very few cases, but everyone in the community sees that AI is a field that not only contributes to robotics but will definitely change our future lives, even within the next five years.

The Chair: I want to follow up. You mentioned in passing during your presentation or in answer to one of the questions the idea of robotic assistance in the home. I want to give a little background to my question.

We have just completed a study on dementia. We have looked at the European situation, and it's very clear that parts of the European Union are very far advanced relative to other countries in terms of recognizing how to help people diagnosed with dementia continue to function in society as long as possible and in terms of providing appropriate settings. The Netherlands is particularly advanced in this regard.

One of the issues that arises is the use of robotics to provide assistance in home care or even in so-called housing settings for people with dementia. Robots have been trained to some degree to recognize the actions of the individual, to keep track of medications and things of this nature.

I'm assuming that because the interest in helping people with dementia is further ahead in the EU than in many other countries, there is extra interest in moving robotics further into home care, perhaps with a specific focus on patients with dementia.

Could you give us a sense of advances that you are aware of in that field?

Mr. Lafrenz: I know that there are several projects addressing this part but I'm not into the technical details. Maybe I could follow up with our stakeholders who are actually working hands on in this area.

However, we have to distinguish two things. One is pure information-technology-related assistance: providing information and services without physical support. The other would be real physical support of patients, such as handing over medication or other things where there is direct interaction.

Both areas are addressed through various projects. You mentioned the Netherlands; they are also advanced in the context of smart homes and assisted living. These kinds of projects are at a phase where they interact with real citizens, with real patients.

I cannot tell you the latest progress of specific technology in the last year or so.

The Chair: I'll go to you, Dr. Kambhampati, to see if you have anything to add to this specific question.

Mr. Kambhampati: Actually, the use of robotic assistants to help people with cognitive disabilities, including dementia, has been looked at for some time in the AI literature.

One very important aspect that we need to realize is that there is a significant cognitive element to it. It's not just being able to help but figuring out what the intentions of the patient are and figuring out the best time to offer help or the best time to offer a reminder. This becomes extremely important and again requires these assistants to develop a mental model of the patient.

Work done as far back as 10 years ago showed it was very important to give the Alzheimer's patients who were being helped reminders that connect to the routine they are already following. For example, if you're already in the kitchen, that would be a good time to remind the patient to take their medicine. Basically, it's about making sure that the reminders and help are given at the right time rather than just thrown out into the open. That is the important aspect.

As I said, ultimately the robots have to be able to move and help physically when they need to do that, but there's this very significant aspect of actually knowing when to do what, which involves building a model of the patient and keeping track of their intentions and what they're trying to do. That becomes very important.

Also, even for robots in close proximity to humans, it's very important for them to project their own intentions.

The Chair: Right, I understand that. We are aware of robots that have been trained to follow the dispensing of medications, keep track of the timing and notice the position of the individual in the room, whether they have fallen, and things of that nature. I was looking to see whether there are any real breakthroughs beyond that. I think we'll stop that line there.

I would like to follow up with you, Dr. Kambhampati, on the issue of tracking errors in the medical system, which you mentioned during your answers. We are aware, as you noted, that it's generally reported that errors in the medical system are the third leading cause of death in a number of countries. We're talking now about human errors overall, whether improper medications or other accidents throughout the system.

I wanted to follow up with you on that subject. Are you aware of the studies that are trying to follow robotic errors where robotic assistance has been used in medicine, particularly in surgeries and so on? Are you aware of studies indicating the comparative error on the part of the robotic application versus the straight surgeon?

Mr. Kambhampati: Obviously, the robotic assistance has been for particular narrow areas. Anecdotal evidence seems to suggest that they are mostly very good in those narrow areas.

In some sense the comparison may not be fair. With human errors, you don't give the doctor any benefit when they say, "I did make sure that I was correctly treating the specific symptoms I was supposed to be treating," when the patient died because of other complications that the treatment caused. Whereas we tend to give that sort of benefit to the robotic assistants because we know from the beginning that they are very narrowly scoped.

The only anecdotal studies that I've heard of were not particularly well-researched, scholarly studies, including an article in The New Yorker about the way the diagnostic systems have actually become much more dependable than human doctors in the cardiology department.

The Chair: We got testimony yesterday with regard to diagnostics. I think you've answered as far as you can go with regard to the question.

Dr. Lafrenz, perhaps you have something to add on the robotics part.

Mr. Lafrenz: One of the most widely used robotic systems is the da Vinci. However, it is manually controlled and not automated.

I'm not aware of any really good study that would show the difference. The more technical equipment is included in surgeries, the further you push the boundary of what surgery is even possible, so it's very hard to compare.

I don't know of any recent good study which would give a clear statistical basis.

The Chair: That's what I was looking for. I was simply asking the question to have the latest hard information in that regard.

Senator Raine: I've always been interested in the transition from the introduction of all this technology to kids and the impact on children's lives. They are no longer playing outside in unstructured play. They're going on to devices to play. I think there is some impact on their physical conditioning and health.

When we talk about rehabilitation and the use of devices to help in rehabilitation, is there anything happening in the robotic world about using some kind of electronic device that would be attractive to children? I'm thinking of taking the Wii exercise idea to the next step, where children can interact with it and become inspired to be more physically active in a manner that is monitored and controlled, so that children whose health is suffering from obesity can be gradually reconditioned to be physically healthy.

Is there anything going on in the field that you know about?

Mr. Kambhampati: I'm presuming that you're aware of Pokémon Go, a cellphone game that actually had a pretty big impact on people's mobility.

It was essentially doing augmented reality, where adults as well as kids would be shown hints about where different kinds of Pokémon might be found in the real world. There were places, for example in New York's Central Park, where people would all come.

The Chair: Doctor, we're going to go beyond that particular example and try to come back.

Senator Greene wants to refocus her question.

Senator Raine: I'm aware of Pokémon Go. I have grandchildren, and I could see what it was doing. Very quickly, it became amazing how many Pokémon were hiding near McDonald's. It's a two-edged sword, but I am more interested in something that would be prescribed by a doctor and followed for real therapy.

Mr. Lafrenz: I'm not aware of real therapy or anything used on a wide scale. However, there are many individual activities in Europe. For example, we have many small companies that try to provide toys for children that also involve physical interaction. In general, rehabilitation where a robot is involved seems to be very attractive to patients. Sometimes it mobilizes their resources, and people are more willing to do the exercises when they have a kind of robotized trainer.

There is some evidence, but I'm not sure whether there is any particular system around for the sole purpose of serving children.

Senator Raine: I just have one comment, then. When you gather your stakeholders together to talk about priorities for research, you should try to correlate that with the rising rates of obesity that are a problem all over the world now. It would be good to focus some attention on that.

The Chair: That was the suggestion back to you.

We're going to stop that one there, and I'll go to Senator Dean.

Senator Dean: My colleague Senator Frum has spoken to privacy issues, and Secretary General Lafrenz mentioned ethics very briefly. I wonder, just top of mind or at a high level, whether anyone is talking about and reviewing the ethical issues associated with this. I'm sure the European Parliament is.

Could you summarize the two, three or four big ethical questions that have surfaced in relation to these issues? Perhaps your colleague might want to add as well.

Mr. Lafrenz: I would need to read through the most recent discussions, which we had just last week at the European Robotics Forum. There were a lot of workshops related to ethics, but privacy and data protection are definitely two of the big issues. Also, questions about when robotic systems should be able to interact physically with a person, and human-robot interaction cases more generally, raise ethical questions.

A third big point concerns decisions where an autonomous system, such as an autonomous car, has no way to stop and has to hit either the grandmother or the child. These kinds of regulatory questions influence decision making.

These are my big three points for ethics.

The Chair: On that latter one, we'll make a recommendation in our report as to which one the autonomous vehicle should choose. Very clearly, your last example is a dramatic illustration of the issues being faced in this particular situation.

On behalf of our committee I want to thank you for taking the time and the effort to get to video-conferencing facilities to be able to meet with us in a livestreaming event today. You have been very thorough in your answers, and you've given us additional information to help us with our report.

I would like to take you up on the offer you made, Dr. Lafrenz, and hope that Dr. Kambhampati would consider it as well. If something occurs to you in the next few weeks relative to our discussions and our events, particularly as you see things moving in the future that would be of interest to us, please communicate that through our clerk. We would greatly appreciate any further insights that you might give us.

Shifting gears now, we are at that point where the issue that I want to put before you, honourable senators, is a draft budget for this committee.

Those who have been in the Senate for a while will know this is the most frugal committee that has been around for some time. In fact, the budget you see before us is the total budget for a 12-month period for this committee. The largest part involves money for preparing a report. The kinds of reports we put out generally look like this one. It is a modest budget in that sense, but the largest part is for that.

We are going to take a trip. This is only the second time since I've been chair of the committee or have been on the steering committee that we have actually done a trip. You'll be sorry to learn that both of those trips have been in the great city of Ottawa; this one is as well. The budget includes the costs for the trip that the steering committee approved us to look into. The University of Ottawa and the hospital have been enormously cooperative. They are setting up what I hope will be a really good hands-on situation. Like the 3-D printing we had right here, it will help the committee get a sense of where this kind of research occurs, what they're doing, the environment and so on.

Colleagues, I'm putting this budget before you. If no one has any further questions, I will put a question to you.

There being no further questions, let me put it formally so that we are official under Senate rules.

Is it agreed that in relation to the committee study on robotics, artificial intelligence and 3-D printing in the health care system, a budget application for $8,300 for the fiscal year ending on March 31, 2018, be approved for submission to the Standing Senate Committee on Internal Economy, Budgets and Administration, following a final review by the Senate administration?

Hon. Senators: Agreed.

The Chair: Thank you, colleagues. That is required before I can take this forward. That's very good of you. I don't think there is any other business.

I'll just indicate to you roughly how we are proceeding now. Next week, we will begin a two-week study of Bill S-5. In the normal fashion we will give the sponsor and the critic of the bill the first questions in each round. We will limit each round to one question per person. We believe we have a very balanced witness list. You've all been receiving a tremendous amount of material, but I think most of you, even if you've not been here very long, are able to detect when there is an organized approach behind it.

We believe we have those issues covered in the witness list. We think we have a very good witness list.

As you know, in these areas there are people who can gain personal advantage from appearing on television before a Senate committee. I know you're aware of human nature in these kinds of matters, so factor all of that into consideration.

We have a steering committee that represents all four groups in the Senate now, and in this case it includes the sponsor of the bill. We believe we have fairly represented the issues that have been identified, and then it will be up to you to interpret the evidence that you hear before the committee.

That will be in the next two weeks. Then we will go back to our study for two weeks, which may be enough time to finish it off in one stretch. Then we have a series of private members' bills, most of which will take a maximum of one day, and some even half a meeting, to deal with. We will try to get to them as quickly as possible.

The kicker in the system, the unknown, is which divisions of the budget bill we will get. This committee normally gets two to three divisions of the budget bill. If that is the case this time, depending upon the extent of those divisions, we may even have to schedule extra meetings. We try to avoid that if at all possible, but that may be the case.

I should have said, and thank you, Shaila, that the site visit for which you've just approved a budget submission is scheduled for Monday afternoon, May 15, so it will not interfere with normal Senate business.

I'm giving you a heads-up. The clerk will send out the normal notice, but this gives you extra time in terms of advance notice.

Senator Cormier: We will have time to prepare our luggage for this trip. Since it's my first trip, I will be happy to prepare.

The Chair: As a senator you are entitled to an aide-de-camp and other logistical support. Security has already been arranged.

Are there any other questions that any of you might have?

Senator Hartling: Did I see somewhere that we are going to meet on April 10?

The Chair: Yes, we have three hours scheduled now for the afternoon of April 10. We will begin at 1:30. For your information, representatives of the U.K.'s public health system are appearing in the first hour. They have implemented measures on a number of the issues we're dealing with and are in a position to talk to us about their experience. I think that will be extremely helpful to you. Indeed, in addition to our normal witnesses, they will cover, from their experience with their legislation, some of the issues on which you are being heavily lobbied.

We are very pleased to have been able to set that up for you.

Senator Raine: In my review so far of this bill, it seems like there are two parts to it. Is there any possibility of splitting the bill?

The Chair: No, Senator Raine; that is possible only if you bring in a motion in committee to recommend splitting the bill and it gets full support.

Senator Raine: Am I the only one who thinks that it is two distinct subject areas?

The Chair: It would be inappropriate for us to discuss that as a committee at this point. We are on camera, and that is not an item of business for this meeting. I have explained to you the appropriate mechanism under Senate Rules for dealing with that. It will come up at consideration of the legislation.

Any further questions?

I declare the meeting adjourned.

(The committee adjourned.)
