
Proceedings of the Standing Senate Committee on
Social Affairs, Science and Technology

Issue No. 22 - Evidence - May 4, 2017


OTTAWA, Thursday, May 4, 2017

The Standing Senate Committee on Social Affairs, Science and Technology met this day at 10:31 a.m. to study the role of robotics, 3D printing and artificial intelligence in the healthcare system.

Senator Kelvin Kenneth Ogilvie (Chair) in the chair.

[English]

The Chair: Colleagues, we have quorum and I'm calling the meeting to order.

[Translation]

Welcome to the Standing Senate Committee on Social Affairs, Science and Technology.

[English]

I'm Senator Kelvin Ogilvie from Nova Scotia, chair of the committee. I'm going to ask my colleagues to introduce themselves, starting on my right.

[Translation]

Senator Mégie: Marie-Françoise Mégie from Montreal, Quebec.

Senator Seidman: Judith Seidman from Montreal, Quebec.

[English]

Senator Stewart Olsen: Carolyn Stewart Olsen, New Brunswick.

Senator Unger: Betty Unger, Edmonton, Alberta.

Senator McIntyre: Paul McIntyre, New Brunswick.

[Translation]

Senator Petitclerc: Chantal Petitclerc, senator from Quebec.

[English]

Senator Hartling: Nancy Hartling, New Brunswick.

Senator Dean: Tony Dean, Ontario.

[Translation]

Senator Cormier: René Cormier from New Brunswick.

[English]

Senator Eggleton: Art Eggleton, senator from Toronto and deputy chair of the committee.

The Chair: Thank you very much, colleagues. I remind us all that we are continuing our study on the role of robotics, 3-D printing and artificial intelligence in the health care system.

We are very pleased to have with us this morning Ms. AJung Moon, who is the founder of the Open Roboethics Institute. We are delighted to have you here. We are looking forward to your presentation, and immediately following that, I will open the floor up to questions. Ms. Moon, please.

AJung Moon, Founder, Open Roboethics Institute: Mr. Chair, honourable senators, thank you for giving me the opportunity to speak to you today. I'm the founder and director of ORI, the Open Roboethics Institute, which is a think tank that specializes in stakeholder-inclusive approaches to studying ethical, legal and societal implications of robotics technologies. Given my background in human-robot interaction, HRI, and roboethics, I would like to focus on ethical issues pertaining to interactive robotics.

As was expressed earlier this year by a number of the witnesses who appeared before you, I too believe that robotics and AI technologies in the health care domain can provide promising solutions to the issues of global aging, shortage of health care professionals and growing demands of patients. However, there is a unique set of challenges that must be recognized and addressed as we advance our innovation economy in Canada.

Numerous studies have demonstrated that people often interact with and treat robots not quite as they would another person, but also not quite as they would a vending machine or other automated machines. Rather, we treat them as a new type of species that lies somewhere on the spectrum between a living person and an automated machine, except that, unlike other biological species, we get to design how it responds to our physical world, what kind of decisions it makes and how it should affect us.

Then, in designing robotic systems, we must ask ourselves the following questions: What decisions are we as a society comfortable delegating to robots? And what should a robot be designed to do?

In our study at ORI, we found that there are surprising factors that can influence the answers to these questions. For example, what should a robot do in a hypothetical scenario when an alcoholic patient asks it to fetch an alcoholic drink against the doctor's orders?

When we posed the question with the condition that the patient is the owner of the robot, the majority of the participants agreed that the robot should fetch the patient the drink. When someone else owns the robot — for example, a hospital that dispatches a robot as a service — then the majority of the participants agreed that the robot should not fetch the drink for the alcoholic. This result was accompanied by the fact that half of the same participants did not think ownership should affect a robot's decision making, which is directly linked to the patient's autonomy, while the other half did.

So the question of what a robot should be designed to do requires a lot from those making the design decisions, whether the designers realize it or not.

What would an engineer have to do to make these decisions? An ideal set of steps would involve a process of identifying values that are important to the different stakeholders of the technological product. Following that, a priority order of the values may have to be determined if there are conflicts between one stakeholder value and another, such as respecting patient autonomy and protecting owners from liability issues.

Afterwards, the engineer should know how to implement these value decisions into the technology and evaluate whether the designed robotic device behaves in the world in ways that are aligned with our identified set of values.

What I just presented to you as an ideal set of steps are some of the goals that 30 international experts, including myself, have identified as part of the IEEE initiative's work on how to design systems that behave in a manner that is aligned with our values, because without guidelines to help these processes, it is an undertaking that no individual engineer can be expected to perform alone.

Given these challenges, how can we as a society continue to innovate at a competitive pace at which intelligent technologies are developing while making sure that we are moving in a positive direction?

Today there are a number of initiatives that aim to develop guidelines and standards relevant to developing these technologies. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, whose committee work I just mentioned, is one such initiative supported by a standards organization. Perhaps due to the importance of such work, this initiative grew from a handful of academics in early 2016 to a community of over 100 experts from across the world volunteering their time.

In the case of my own organization, ORI, we have been spearheading the discussion of roboethics issues for the past five years, with an attempt to make our content accessible to the public as well as to academics across disciplines. I believe our efforts have both increased awareness of the complex landscape of roboethics discussions and helped start a healthy discussion on ways in which roboethics issues could be addressed with practical solutions.

One such solution is to address some of the roboethics challenges using technological means. Researchers in AI and machine learning are currently exploring ways in which algorithms can be designed to be transparent and fair.

Such efforts from the research community will need to be coupled with ways in which we can make the designers of today better aware of, and trained on, the ethical issues that pertain to robotics and AI and how to address them in their work. For example, developers can decide to use one type of algorithm over another in order to create a system that is more explainable and interpretable.
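For illustration only, here is a minimal sketch of the kind of design choice described above, assuming a Python environment with scikit-learn and entirely synthetic, hypothetical data; every name in it is invented for the example. A shallow decision tree is chosen because its learned rules can be printed and audited, unlike many opaque models of comparable accuracy.

# Illustrative sketch: preferring an interpretable model for a hypothetical task.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical, synthetic data standing in for patient features and outcomes.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A shallow tree whose decision rules can be read and reviewed by a human,
# which is one way a designer can favour explainability over raw accuracy.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules can be printed for clinicians, auditors or regulators.
print(export_text(model, feature_names=[f"feature_{i}" for i in range(5)]))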

In addition, organizations can make explicit the values that they consider to be a priority. For example, given the lack of regulation governing the amount of time a senior can be left alone with a therapy robot without a caregiver, manufacturers of such robots can decide to make explicit their stance on the value of human-to-human interaction for senior end users. This can take the form of corporate philosophy or policy.

Encouraging such discussions in organizations can not only help guide the strategic decisions of the company but also enable design teams to make their design decisions more easily.

Likewise, if we want to prioritize reflecting Canadian values in the autonomous products we design in Canada, we must actively seek ways to make it easier for companies, especially start-ups advancing these technologies, to make guided and informed decisions.

There are many more issues I would like to discuss with you today. I believe Canada has many qualities to take leadership in innovating robotics with ethics in mind. In your questions, I hope you will ask me about how ORI has been supported to date and what challenges remain in many countries where roboethics research is taking place. I hope you'll also be interested to talk to me about how we can make the new wave of technology ethics cheaper and more accessible to designers. Thank you.

The Chair: Thank you very much. I'll turn to my colleagues now.

Senator Eggleton: Thank you very much. That was an excellent presentation and focus on the ethics issues.

I'll ask one of the questions that you asked us to ask you. You said, "I hope you will ask me about how ORI has been supported to date and what challenges remain in many countries where roboethics research is taking place."

You might also answer that in the context of the government's recent decision, which is expressed in Bill C-43, to provide $125 million to the Canadian Institute for Advanced Research for a pan-Canadian AI strategy. What should this strategy include? And how should ethics and regulatory oversight fit into that strategy? Perhaps you could answer both of those together because they both deal with money, don't they?

Ms. Moon: I guess they do.

I wanted to discuss the topic of how ORI has been supported because I think it provides a useful context for discussing ethics in this technological world.

ORI was started five years ago and has been supported mostly as a volunteer-based organization. That means my Vanier scholarship from NSERC is the majority of the funding that went into developing ORI. That is an award that I received not for the ethics-related projects that I have been able to do with my colleagues at ORI but for human-robot interaction research that really didn't have much to do with ethics.

Part of the reason we have relied on volunteer support for our work is that there is a gap between what traditional science granting organizations such as NSERC have supported and what CIHR funds. There has been a gap in supporting researchers who address ethics as part of science or engineering. It may be more of a cultural issue: when trying to write a successful proposal, ethics is not seen as a strategic dimension to put forward; putting forward the technological innovation itself may be the more promising route to a successful grant.

That being said, I believe the government's decision to invest in AI technology is in itself very healthy. But as far as I am aware, I have not seen strategic decisions on how that investment would be divided into subcategories of AI, for example, or on how small organizations like ORI would be able to address some of the ethical dimensions that are not focused on developing the technology itself but on the ethical domain that is highly relevant and important for pushing the direction of AI forward.

Senator Eggleton: You mentioned the IEEE initiative, the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. I recently became aware of this organization through another person who was involved in that process, but I don't know much about it.

You say you're one of 30 that are involved in this or one of 30 scientists, I think. Can you explain more about what this organization is doing? It's international, I believe.

Ms. Moon: Yes, it is. IEEE is a professional standards organization that represents many engineers internationally. Within that organization, they have started an initiative that is specifically focused on addressing these technological ethics issues. It has over 100 members today, experts from across the world.

Within that initiative are many subcommittees, one specifically focused on privacy, for example, and my own committee on how to embed values into technological systems. Within that committee, we have about 30 international experts working on this.

Senator Eggleton: Perhaps you could supply us with more information about that because it has an international perspective that could be quite valuable.

The Chair: And anything you submit, submit it through the clerk as you communicate with her.

Ms. Moon: Sure.

The Chair: Thank you.

Senator Stewart Olsen: Ms. Moon, I'm particularly interested in the application of robotics in rural areas. My province is quite rural, and they are going to be coming out shortly with initiatives on health care issues for seniors in homes.

Has anyone thought of doing a system of pods, which would be centres in a small mileage radius of home care persons? Maybe they would be looking after, say, five people in their homes, with robots or robotics in their homes, monitoring them in this way. Is that feasible? What would be the ethics of privacy of developing something like that? Has it been done?

Ms. Moon: Just to clarify, by "pods," you mean —

Senator Stewart Olsen: Just a location where, say, one caregiver can be, with access to all of the patients in homes via whichever way the robots communicate.

Ms. Moon: I do believe that idea has been discussed in the research community. To what extent it has actually been exercised is a question mark for me, but I do know that there have been studies that pilot-tested the idea of having a robot at home to monitor and assist a particular person. It was much more of a controlled study dedicated to one person instead of having multiple robots that are actually monitored on the other side of the line by one person, for example, which is the example I believe you're giving.

We definitely need to think about privacy implications. One of the projects I know is taking place right now involves the use of a robot to monitor clients in a care facility for cognitively or mentally disabled adults or children; due to short-staffing issues, they are exploring the use of robots.

When I was exploring the privacy issues in that domain, I realized that these care facilities typically have some sort of privacy arrangement with legal guardians. But that arrangement should not be assumed to extend to technology that was not available when it was entered into, because robots are essentially cameras on wheels; they can be very proactive about gathering information.

The trade-off, or perhaps the dilemma, is this: we know from HRI research that creating a connection with a person, being able to recognize a client or patient and call them by name, is very important to creating the natural interaction you would have with a caregiver. But you could not do that without a camera in the first place, and even with a camera, you would need an identification function built into the system, which inherently requires gathering data and linking it to identifiable information, such as names.

So there is a big dilemma that is still open. As far as I'm aware, we do not yet have comprehensive work to guide roboticists through these design processes: to say here are the trade-offs, and here are the guidelines or the things you need to think about to navigate them.

Senator Stewart Olsen: Thank you.

Senator Dean: Thanks for the presentation, and congratulations on your award. More so, congratulations on what you're doing with the award.

I'm not too worried about the pace of the development of robotics and artificial intelligence. I think we've learned at this committee that it's probably further advanced than many of us might have thought. I do worry about the ability of ethicists to maintain pace with it, the ability of policy-makers to maintain pace with it, and the ability of governments and political actors to maintain pace with it. It's good to know that, as in other areas, we have Canadian leadership as part of an international approach to this.

Are we in danger of ending up down the road with a patchwork of ethical codes that are private sector, public sector, commercial, international, national, even subnational, in which we have competition or conflicts between approaches to ethics in this area? Is there a way to get ahead of that by thinking about a governance framework that would somehow simplify this and give those who are working in the field an ability to move as close to real time as they can and at least see the tail lights of technology, as opposed to losing sight of it completely and being too far behind? Is that a reasonable question?

Ms. Moon: Yes. I do understand the concern that ethicists or policy-makers would be always playing a catch-up game, if you will. I think that's what you're trying to express.

Senator Dean: Yes.

Ms. Moon: Because the technology is moving at such a fast pace, there may always be the impression that we're playing catch-up. But there's also something to be said for educating the designers themselves, so that even if there's a gap in regulation or a governing structure, if you will, the designers will still be able to make design decisions that move us in a more positive direction. That in itself is very important, but I also feel that having a proactive approach to a governance structure that is able to move at a fast pace is a healthy discussion to have.

One of the things that I'm finding from our work with ORI, especially more recently with industries, is realizing that each and every one of the products that we look at will be very different because their stakeholders are different and their value sets are different. Even talking about the same product in one city versus another will be different as well. Do we want to have a very customized way to talk about ethics in every single scenario? Is that even practical? That's still a question mark.

One thing we are learning is that by looking at one specific case we're able to draw out these ethical issues that may end up being common throughout many different case studies and be able to regulate that or have a governance structure based on those.

Senator Dean: Thank you.

Senator Unger: Thank you very much for your presentation. My questions sort of go around concerns. In this brave new world, will people in your field be able to control these robots? I'm wondering about the possibility of robots being intoxicated. The examples you provided were interesting. Do the experts in your field have any concerns about this, about rogue robots, which I'm sure many have seen in certain movies?

The Chair: Just to clarify, I don't think the robot was intoxicated. It was serving the alcohol.

Senator Unger: Could it be?

Ms. Moon: One of the committees within the IEEE initiative that I mentioned actually talks about general AI, the higher-level AI that is not quite there today but that the research community is interested in. That may be what you're trying to get at.

Because I come from more of a designer's perspective, as long as we have good control or have made conscious design decisions, I don't think that should be part of our worry.

There is discussion of the ethics of lethal autonomous weapons systems. I do know that is a concern because, given the nature of the technology, we're talking about decisions that are made without humans for an extended period of time. We could hypothetically, and even practically, design systems, not necessarily smart ones, that are able to make these decisions without us.

But would we make those decisions as intelligent human beings? Would we train our engineers to make those decisions? I think that's a better question to ask about the practicality of the issues we should address.

Hypothetically, I do believe it is possible. Practically, if we are talking about "dumb" machines, then we can always make robots that make the wrong decisions all the time. That would be the easier thing to do.

Senator McIntyre: Thank you, Ms. Moon, for your fine presentation. Obviously you're very knowledgeable on the role of new technologies in our health care system.

My question has to do with ethical oversight. In your opinion, can our health professional associations, such as the Canadian Medical Association and the Canadian Nurses Association, effectively conduct the ethical oversight of robotics and artificial intelligence? If not, what framework would you propose for ethical oversight?

Ms. Moon: If I can start with the last question: robotics and AI are tools that can cross many different applications. A robot that I design today could be applied to or modified for health care applications, or modified for other applications entirely. Many of the same elements may go into this technology and be used in many different ways.

Because of that, I think it's perhaps healthier to look at robotics and AI, the technology itself, and place the ethical oversight at that layer rather than specifically at health care. I do believe that looking at ethical oversight in specific application domains will be necessary, but part of the bigger picture is looking at the technology as a whole rather than just at robotics within health care, in part because we would be doing a lot of repetitive work across different application domains if we looked at only one application and could not take the lessons learned from another.

Senator McIntyre: Is there an overlap between robotics and artificial intelligence? How interconnected are the two?

Ms. Moon: One way to look at a robot is as a physical embodiment of AI. A self-driving vehicle is a great example because it is a physical vehicle that you can enter, but it also has a lot of artificial intelligence on board, specific machine learning techniques, for example. But we look at the self-driving vehicle itself as a robot.

If we imagine self-driving vehicles being able to communicate with each other and learn from each other's data, that would be a layer up: artificial intelligence about the world that is done not by any one physical robot but by interconnected sets of robots through a software framework, more of a cloud system. They are very interconnected in that sense.
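As a purely illustrative aside, the "layer up" described above might be sketched as follows in Python; the robots, readings and aggregation rule are all hypothetical and invented for the example, not drawn from the testimony.

# Illustrative sketch: several robots pool what they learn through a cloud-style aggregator.
import numpy as np

class VehicleRobot:
    def __init__(self, observations):
        # Each robot learns only from its own locally gathered observations.
        self.local_estimate = float(np.mean(observations))

def cloud_aggregate(robots):
    # The shared, fleet-level estimate: knowledge no single robot computed on its own.
    return float(np.mean([r.local_estimate for r in robots]))

fleet = [
    VehicleRobot([1.0, 1.2, 0.9]),  # hypothetical sensor readings, vehicle A
    VehicleRobot([1.1, 1.3]),       # vehicle B
    VehicleRobot([0.8, 1.0, 1.1]),  # vehicle C
]
print(f"Shared fleet-level estimate: {cloud_aggregate(fleet):.2f}")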

[Translation]

Senator Cormier: I am going to ask my first question in French. Thank you for your very eloquent and enlightening presentation.

I was captivated by the notion of integrating Canadian values into robotics. In fact, I have many questions that come to mind concerning, for instance, the training of designers, manufacturers and users. How are Canadian values integrated into that process? What does that mean exactly for these different groups?

[English]

Ms. Moon: Perhaps I can give an example. Gender bias or racial bias would be something that I believe goes against Canadian values. These are the types of biases that could seep into our technologies if we're not careful to look for them specifically. There have been discussions within the research community about cases where much of the data set came from male participants or is more representative of the male population, and hence is not representative of both the female and male populations.

In terms of racial bias, there is the example of a soap dispenser with specialized sensors: when a fair-skinned person puts a hand below it, the dispenser works really well, but when a dark-skinned person does, it does not work well, because it has not been tested rigorously for all the different races.

Does that give you a good sense of the type of dilemma?

[Translation]

Senator Cormier: Yes. How can these different concepts be included in the documentation?

[English]

Ms. Moon: One way to address it, which the research community is starting to look at, is to ask: can we look for these biases in a technological way? Can we quantify them so we know how to fix them, and are there technological solutions to address them in the data sets? It is ongoing work. It is a fairly recent development to think about machine learning systems that way, but it's a very healthy way forward.
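For illustration only, one simple way to quantify this kind of bias is to compare a system's favourable-outcome rate across groups; the sketch below assumes Python with NumPy, and the predictions and group labels are hypothetical values invented for the example.

# Illustrative sketch: quantifying bias as a gap in favourable-outcome rates.
import numpy as np

def demographic_parity_gap(predictions, group):
    # Difference in favourable-outcome rates between group 0 and group 1.
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # hypothetical model outputs
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # hypothetical group labels

# A large gap signals that the system treats the groups very differently and
# that the training data or the model should be re-examined.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")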

Apart from the technological solutions, one way to manage it is to make these kinds of values explicit. We don't necessarily train our engineers to look for or be conscious of gender bias in the systems they design, because we are much more familiar with the notion of professional ethics. Within that context, we talk about values, but not as part of what you design. It's really important to frame it as part of the design decisions and to be aware of it, because otherwise we will design systems that systematically impact us and are systematically biased toward one group or another in an uncontrolled or undesired way.

The Chair: On the specific example of the recognition of the hand, that's based on a learning algorithm. It would seem to me that that's a simple issue to deal with in terms of eliminating bias, because the knowledge required is not enormous. It is simply the recognition of a hand in a certain circumstance. I have been amazed that these kinds of issues, the very example you used, are in the public domain as discussion points. Yet it's all based on hand recognition as a deep learning issue, and the number of hands that can be learned is a relatively simple matter once you get to the point of learning that structure.

I would ask you: Aren't some of these issues relatively easy to solve and eliminate bias in this case of an ethnic bias?

Ms. Moon: The hand example I gave is relatively simple, an easy example to look at.

The Chair: That's why it's dangerous as well, using examples that can easily be dealt with but making them look like serious problems.

Ms. Moon: Right, but I think they point at very serious problems, such as the kind of user experiences that we will have with these robots. For example, facial recognition systems have the same problem: there is documented racial bias in which faces of certain races are not recognized. If we are systematically building these types of devices, then my experience of working with a robot will be very different from yours, for example. Do we want to not even have a guideline that explicitly states that if your data set does not have the diversity it is supposed to cover, then we will have a problem?

The Chair: I agree entirely that we should have that, but isn't this particular issue a relatively simple one to deal with?

I totally agree. You have to eliminate the bias, but it seems to me in terms of a deep learning example that this is one in which instructing the robot to recognize the diversity is relatively easy. Is that the case?

Ms. Moon: Addressing it could be very simple in the example of the hand, but actually recognizing that that in itself is an issue is what we should be talking about. If we don't even recognize that there is a problem, then we won't be addressing it.

The Chair: I agree entirely, but in the examples that have come before us, it seems to me the issue is the very point you made: that we should agree to eliminate the bias wherever it occurs, that we should use examples that illustrate how we can do that, and that recognition, being a learning issue, is one area in which we should be able to do so.

I admit that when you start getting into actions, you get into a more complex issue. I'm not going to pursue this any further. I want to stick with where we are today and to agree we have to eliminate it, and it seems to me the instruction to the programmer is you eliminate it.

[Translation]

Senator Mégie: I have one comment, and then a question. Thinking of all of the witnesses who appeared before our committee, at a certain point I was worried. I saw that innovation was progressing rapidly and that there was no ethical framework at all to govern it. So I am happy today to learn about the ORI institute, and I thank you very much for your work. I know that not many investments are made to fund and support your activities in this area, because of the ethical issues involved.

Senator McIntyre spoke about professional associations, but I would like to branch out a bit. In each centre, there is a research ethics committee that is not entirely focused on health issues. Since there is interaction between the robots and the person, sometimes this is invasive, as the robot administers the medication and may even administer physical care. For that reason, there needs to be a research ethics committee. Does your institute consult these committees when it is developing new products?

[English]

Ms. Moon: Sorry, I was trying to catch up with the interpreter. Can you repeat the last part?

[Translation]

Senator Mégie: Does the ORI deal with a research ethics committee?

[English]

Ms. Moon: ORI itself does not act as a research ethics committee for any other entity, per se. We do have a subsidiary that is actually looking at doing ethics assessment for specific companies. But I am aware of research ethics boards from my own studies, because I had to conduct human-robot interaction studies where humans had to come into experiment rooms, for example.

I'm not sure if it directly answers your question, but I think research ethics boards are very useful in making sure that researchers know what to look for and how to conduct research ethically. But we don't necessarily have ethics boards that can serve, for example, startups that are very excited to deliver these health care-related robotics and AI technologies. Having an ethics board in itself would be very expensive for a startup that just wants to make things that actually work and to prioritize that within its limited funding.

One of the enabling solutions, perhaps, would be to make that particular type of process or ethical assessment of these robotics and AI systems more easily accessible for these startups.

Does that answer your question? I was trying to catch up.

[Translation]

Senator Mégie: Even if they can't pay for it because they don't have the means, when they want to test the product in a given environment, I think they can turn to the research ethics committee in that environment. I don't think you have to pay to obtain that service, because the research ethics committee exists precisely to avoid blunders and improper uses of the product when it begins to be used. In short, they don't need money, and I believe research ethics committees exist, at least in large cities, in large hospitals in large regions. They could call on those committees without spending any money.

[English]

Ms. Moon: I don't know of research ethics boards that function for that specific purpose, but I do know from my experience with the research ethics board at the University of British Columbia that their ethical assessment of, for example, my control study is very limited, and it's limited to that particular experimental setting as well. But we're talking about robotic systems that will be deployed to somebody's home or care facilities, so the type of issues that a board member will have to think about are very different from the structured ways in which they evaluate the ethics of a particular research study. There are challenges there in terms of the content of how we actually do that.

[Translation]

Senator Mégie: I am less reassured now, but I thank you nevertheless for your answer.

[English]

Senator Hartling: Thank you very much for being here. This has been a really interesting conversation.

Thanks, Senator Cormier, for bringing up the question of values. I was thinking about that, and I thank you for the good work.

I am happy to see a young woman like you involved in talking about gender biases, diversity and things like that. The more young women are involved in this, the more those things will be kept on the front burner.

My question is about going into the future and looking at social issues that might be involved in AI and robotics. Can you talk about what you see might be some of those issues that we could be expecting to learn about and be involved with?

Ms. Moon: There are so many.

Senator Hartling: Even a few.

Ms. Moon: I was here for the opening caucus discussion on a similar topic a few weeks ago, and one of the key issues discussed was jobs: How do we address the idea of robots replacing certain jobs? Are we developing robots that replace people, versus robots that assist people?

It is definitely a big challenge. For example, an ethics assessment should be able to actually bring that forward with a developer so we have a solution for it.

Another would be the guidelines documents that I have mentioned. There are many different aspects of guidelines documents that we should be talking about. The traditional idea of privacy does not seem adequate for robotics technologies. Looking at that particular aspect and developing guidelines would be another.

We're not talking only about physical safety; we're talking about physical safety and then some when we talk about interactive robots. The way a particular robot can move and affect you and your behaviour is something we're still figuring out as a research community, and also still developing as products that we deploy into the world.

There are also challenges in looking at how we balance the impact we have on individual users against the social norms that emerge, as a consequence, in how we treat robots.

Senator Hartling: Lots of things to think about.

In the field, is there a gender balance in terms of people studying robotics?

Ms. Moon: I would not say there is a gender balance in terms of the number of roboticists who are female or male. There are definitely fewer female roboticists out there. It would help to have a more equal balance.

Senator Petitclerc: Can you give me a concrete, almost step-by-step — you have the end product, the device. You have the designer and then you have the think-tank ethics people. I'm trying to understand. If a business produces a device, do they have to go through ethics, or is it only if they want to? Do they have to choose you? Do they have many options?

I'm thinking about a device that's going to be in the health system. Who regulates all of that? Even the ethical decisions may differ from one group to another. I'm trying to find out a bit about the story behind that. How is it happening now?

Also in your opinion, how do you think it should be organized, or who should oversee the ethics of the ethics?

Ms. Moon: To answer the first part of the question, we do not have a governing body that requires a company to go through an ethics process, which is a problem. We're trying to provide a solution for that. Since we don't have that regulation in place, can we make ethics assessments accessible enough that companies want them? Right now it's more the case that they can choose to do so.

But even today, if a particular company chooses to have an ethics assessment, it's very hard for them to find a group of experts who are able to provide that service. We have a handful of individual experts with specific training in, for example, philosophy or engineering. That's why, as part of ORI, we're trying out a particular project in which we are developing a specific framework that is systematic enough that we can provide the ethics assessment for one particular company, and then we should be able to do that with another and another. If that is successful, then perhaps that's a framework we can actually make available and suggest to others to take on as well.

Right now, we do not have an assessment framework that everyone agrees on. Therefore, it is hard to require anybody to use one.

Senator Petitclerc: So is it the case that some devices being used right now by Canadians have not had ethical assessments? You are saying that could happen?

Ms. Moon: It is a possibility, yes.

Senator Petitclerc: That's interesting. Is that a concern?

Ms. Moon: I would say it is, yes. The most dangerous scenario would be a particular company delivering a product for a positive goal but not knowing the ethical consequences related to that product.

Senator Petitclerc: Thank you.

Senator Eggleton: We've touched on the regulatory framework possibilities, ethics committees, et cetera, and some of the social issues. It's also arguable that there should be a ban on the development of certain systems and technologies. Some artificial intelligence and autonomous system technologies are potentially so harmful to humans and humanity as a whole that perhaps they should never be developed, manufactured or put into use. It's perhaps similar to bans on things like human genetic manipulation, which some jurisdictions have considered, or even the ban on land mines that came about as a result of the Ottawa Treaty. It's an old technology, obviously, but I mention it because it's a case where a technology advanced so far in its manufacture and use before governments finally caught up and said this is not the thing to do. We're still trying to get rid of these land mines. Once they are in the ground, they are a danger to human life.

Should there be a ban, either in Canada or a universal ban, on certain systems and technologies? Maybe we should be leading the way like we did on land mines.

The Chair: As it relates to the health care system.

Senator Eggleton: Well, okay. I put it either way.

The Chair: If you can use a general example to give illustration, that's fine, but we're not delving into the entire world of issues.

Ms. Moon: I'm not sure about a complete ban. I'm thinking through it as I give you the answer.

Having a system that can manipulate your spending habits, for example, or your specific behaviours, especially for vulnerable populations, is a very dangerous route to go down. Imagine a care robot that is supposed to assist somebody at home for that person's independent living, but that robot also has marketing strategies to get this particular person to buy certain products. That would be an example of technologies that I would say we should definitely have a regulation for.

Senator Eggleton: If a machine was coercive and psychologically manipulative of a person, you would see that as something that should be banned or controlled?

Ms. Moon: With qualifications. Having a robot in a physical room can itself affect you in many different ways. It's hard for us to have an absence of psychological impact from a robot, but a specific type that aims to manipulate a particular end user, especially a vulnerable population, would be an example of a dangerous technology.

Senator Eggleton: You don't know of anything offhand that you think is something we need to pay attention to, particularly in the health care field?

The Chair: I think she gave the example of a device with an intended purpose that has an additional feature attempting to manipulate people in a certain direction.

Senator Eggleton: I'm just looking for a little more detail.

The Chair: You can go to a doctor's office these days, and as you're sitting there waiting interminably for your chance to get in, you are besieged with a tremendous amount of electronic and visual marketing to get you to buy certain eye products, glasses and a whole range of things. I would assume that you will differentiate between a public place and normal advertising and something that is in a situation where it actually has some control over an individual.

Ms. Moon: If I can help with a more extreme example, there are robotics technologies that are being explored with brain-computer interfaces, or technologies that can read your brain waves. A very invasive type of that technology will interface with your brain in a very physical way. A system that would also be dangerous, to give a little more detail, would actually write to your brain instead of just reading from it. Coupled with a robotic system, that would be very dangerous.

Senator Eggleton: That's a good example. Thank you.

Senator Unger: To your last point, can a device have an agenda whereby it can influence the brain? If a senior has a robot in their home, could the robot be programmed for an end-of-life period? That would be a dangerous or bad technology, obviously.

As well, where does Canada rank globally on these technologies?

Ms. Moon: Your first question was whether using robotic systems as part of end-of-life support would be a dangerous technology?

Senator Unger: Yes.

Ms. Moon: Potentially. A stark example of that was given by an art piece from MIT. It's a robot that keeps you company as you're dying. It was again more of a scenario-based artwork, but it would actually pet your arm as you're dying and say, "You've lived a good life." Are we as a society comfortable with leaving that very special moment of your life with a robot? That's a value-based question we should discuss as a society rather than a design decision of whether this would be a more effective way to help a person die.

Sorry, I missed the second question.

Senator Unger: Where does Canada rank globally in these studies?

Ms. Moon: I'm not quite sure of the quantitative ranking of robotics in general. I do believe that in terms of machine learning or AI technology, especially deep learning, Canada has done very well in leading that particular space because of the specific researchers we have here.

Senator Dean: On the example of the robot stroking my arm, it probably depends on how long I've known the robot, but it's a very good point.

I want to return to ethics and governance because a number of us have come at this in different ways, at different levels, and I think we're having trouble pinning it down because of its nebulous nature.

You helped us earlier in talking about a values-driven approach and that if we look at our values as a society and use a values-driven approach as they apply to particular sectors or technologies, we may be able to find common denominators, common questions that arise across the piece and start there. I think that's a very sensible approach. It scales it. It makes it look possible, so I think that's terrific.

I still think that's probably one element in something that is likely multifaceted. I'm concluding from our discussion today and previous discussions that we are probably going to end up with a patchwork. Hopefully, though, it's a patchwork by design rather than by default.

In my mind, my map of the patchwork is that there may be some things, as Senator Eggleton says, that we just take off the table, even in the context of the medical world, the medical profession. You're thinking about elevating core values, and those issues are important. ORI is working internationally with IEEE. That's obviously important.

As I expected, there are company-based approaches to this, some of which are likely virtuous, proactively seeking the advice of ethicists, and some of which are likely proactive avoiders looking for the bottom line and the pure economic value proposition. Government will have a role here in some respects, in legislation and regulation, as will medical associations and professional associations in the medical world.

Is there anything else missing from that emerging map that we could put on the radar screen? First of all, do you think I have gotten those things right? Secondly, are there other pieces on the margins that will appear on the radar screen as part of that constellation of approaches to ethical questions?

Ms. Moon: Thank you for the summary of many different approaches.

There are already certain applications that have a lot of momentum in developing regulations, so they take much more of a top-down approach. Self-driving vehicles, for example, are one such industry, where the licensing of test vehicles has already happened. I think we will continue to see these application-specific regulations, which will have to be governed at a larger scale if we can manage to do so, having a layer of oversight.

But I think a combination of the top-down and a little bit of the bottom-up is absolutely essential, because a start-up that is actually trying to get a working robot may, as part of its design exercises, come across the issue of privacy, and it may surprise them: "Oh, I need to care about privacy now. What do I do about that?" There's not much out there to address those concerns for them.

If we can talk about ways to produce content that is easily accessible and if that type of work is something that the Canadian government takes a leadership role on as we shape the innovation agenda in AI and robotics, I think that will be very helpful going forward.

Senator Dean: Thank you.

Senator McIntyre: As Senator Dean has indicated, we've all approached this issue from a different angle. I'm looking at it from a different angle as well. My question has to do with the use of a different ethics lens. Let me explain.

As I understand the new technologies — robotics, AI, 3-D printing — they're not only used in health care but in other sectors such as transportation, security, manufacturing and many others. In your opinion, should the use of these new technologies in health and health care be viewed through a different ethics lens than other industries? If not, why not?

Ms. Moon: I think a merger of the different approaches will be necessary. For example, a self-driving vehicle could actually be one of the most frequently used assistive devices for the senior population. Should we be looking at the ethics of self-driving vehicles through the health care ethics lens? I think there will inherently be a merger between how we think about self-driving vehicles being used by the senior population and the Transport Canada perspective on the ethical issues of self-driving vehicles. I think both elements have to come into play.

The Chair: Thank you very much. I think this has been very interesting. Obviously we're looking down the road some distance, and one of the things we know from past experience is it's important to learn from the reality that we deal with in looking at the evolution of these issues.

As Senator Eggleton used a biotechnology example, I think we can learn a great deal from how the biotechnology research and application communities dealt with the issue. We faced the very same issues back then, and it was the scientific community itself that came together and developed how the technologies should be used. Indeed, governments around the world adopted the recommendations that came primarily out of that initial consultation, and they are still in play today in terms of the manipulation of genetics, particularly on living systems, as well as the containment of research on it. Even though robotics is a slightly different containable entity, the concepts, in my mind, are not all that different. I think you've caused us to think in a larger dynamic today with regard to those issues.

I was tempted to come back and use your example of the robot supplying alcohol to a patient for whom it's forbidden, but the reality is we can't do that now. They may get to the point where independent robots are able to take things off the shelves by their own thinking and do that sort of thing. We can control those kinds of things very well at this point, but there will come a time when even those kinds of issues become part of the decision about what should be allowed to roam freely in society.

This has been a very healthy discussion. I thank you very much for it.

(The committee adjourned.)
