Proceedings of the Standing Senate Committee on
Social Affairs, Science and Technology
Issue No. 15 - Evidence - February 9, 2017
OTTAWA, Thursday, February 9, 2017
The Standing Senate Committee on Social Affairs, Science and Technology met
this day, at 10:31 a.m., to continue its study on the role of robotics, 3D
printing and artificial intelligence in the healthcare system.
Senator Kelvin Kenneth Ogilvie (Chair) in the chair.
The Chair: Welcome to the Standing Senate Committee on Social Affairs,
Science and Technology.
I'm Kelvin Ogilvie from Nova Scotia, chair of the committee. I will invite my
colleagues to introduce themselves, starting on my right.
Senator Seidman: Judith Seidman from Montreal, Quebec.
Senator Stewart Olsen: Carolyn Stewart Olsen, New Brunswick.
Senator Raine: Nancy Greene Raine from British Columbia.
Senator Unger: Betty Unger from Alberta, subbing in today.
Senator Cormier: Senator René Cormier from New Brunswick.
Senator Merchant: Welcome. Pana Merchant, Saskatchewan.
Senator Eggleton: Art Eggleton, senator from Toronto and deputy chair
of the committee.
The Chair: Thank you, colleagues, and welcome to our guests.
We are continuing our study on the role of robotics, 3-D printing and
artificial intelligence in the health care system. This is our fourth meeting.
Today, we are going to hear about some of the research being carried out on
artificial intelligence at Acadia University and McGill University.
I'm delighted to have our witnesses with us today. I'm going to invite them
in the order they appear on my list, since I understand they didn't arm wrestle
for first dibs on this. In that case, I'm going to invite Dr. Joelle Pineau,
Associate Professor, Centre for Intelligent Machines, McGill University, to make
a presentation to us.
Joelle Pineau, Associate Professor, Centre for Intelligent Machines,
McGill University, as an individual: Thank you all for the invitation. It's
a pleasure to be here this morning.
My name is Joelle Pineau. I am a professor in the School of Computer Science
at McGill University and I am here representing the university's Centre for
Intelligent Machines. For 15 years, my research team and I have been designing,
building and testing intelligent robots. Some of those robots are designed to
help people with reduced mobility, who, instead of using a typical motorized
wheelchair, will soon be able to use an intelligent chair that is able to
respond to voice commands and navigate autonomously in a variety of locations by
avoiding obstacles and moving efficiently through crowds.
Other robots have been designed to allow the elderly to stay in their homes
longer. They provide an alternative presence, discreet monitoring that can
reassure loved ones and provide basic care such as reminders to take medication.
Recent advances in robotics and artificial intelligence are also leveraged to
develop new advanced treatment methods for several diseases, including cancer,
diabetes, epilepsy, mental health and many others. A robot, in those contexts,
does not have arms or legs. It doesn't look like C-3PO or R2-D2, but it has the
ability to perceive and interpret complex information and to carry out actions.
An example of such a robot is the artificial pancreas, currently being developed
at the Institut de recherches cliniques de Montréal. This robot learns to calibrate
the dosage of insulin that must be delivered based on real time readings of
blood-sugar levels. All this is done automatically, and the insulin doses are
adapted to the patient's physiology, food intake and activity level.
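To make the closed-loop idea concrete, here is a toy sketch in Python of a proportional dosing rule that adapts to real-time glucose readings. This is not the IRCM system; the target, gain and readings are invented for illustration, and the real system learns far richer, patient-specific models.

```python
# Toy closed-loop dosing sketch: a proportional rule that adapts the
# insulin dose to the latest blood-glucose reading.
# All constants are illustrative, not clinical values.

TARGET_MMOL_L = 6.0   # desired blood-glucose level (illustrative)
GAIN = 0.5            # units of insulin per mmol/L above target (illustrative)

def insulin_dose(glucose_mmol_l):
    """Dose proportional to how far glucose is above target, never negative."""
    return max(0.0, GAIN * (glucose_mmol_l - TARGET_MMOL_L))

readings = [5.2, 7.4, 9.0, 6.1]                        # simulated real-time readings
doses = [round(insulin_dose(g), 2) for g in readings]
print(doses)  # [0.0, 0.7, 1.5, 0.05]
```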
Another promising robotic technology is for cancer patients. This device
operates by delivering radiation therapy directly into a tumour while actively
controlling a shield that protects nearby healthy tissues.
The most-used medical robot these days is probably the well-known da Vinci
robot, which assists surgeons in hundreds of thousands of surgeries every year.
While robots can carry out physical actions, artificial intelligence, or AI,
is the brains behind the machine. AI is a computer program that can understand
and manipulate complex data, both small and large data sets.
In the context of health care, it's a computer program that can read a
patient's EEG in real time to determine if and when a patient is likely to have
a seizure. It's a computer program that can analyze, read an ultrasound and
identify the location of a brain tumour. It can be a computer program that
analyzes patterns of consumption in over-the-counter medicine to predict when
and where the next flu outbreak is going to hit.
Here in Canada, we are very fortunate. We have a public health care system
that is the envy of many people around the world. We also have an excellent
network of public universities. In those universities, we are carrying out
research in robotics and AI that is recognized internationally. We are also
training the next generation of engineers, scientists and researchers that will
build robots and AI for the coming decades.
Over the last three months, both Google and Microsoft announced that they
were opening new research labs in Montreal. Obviously, we all suspect they
didn't choose this location for the weather. They chose this location because it
is quite possibly the one city in the world that produces the greatest number of
PhD graduates in a subdiscipline of AI called "deep learning.''
For those of you who have not yet heard about deep learning, let me try to
demystify it briefly. Deep learning is essentially a way to teach machines how
to learn on their own. It is inspired by how our own brains work, with a
computer program that simulates a large collection of neurons all working
together to analyze data and make decisions. When our computer program simulates
hundreds of thousands of these artificial neurons using high-performance
computing infrastructure from Compute Canada and exposes these neurons to
thousands of data examples, our program can learn to recognize complex
patterns in data, and distinguish between thousands of different symptoms and
diseases.
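To give a concrete, if radically scaled-down, sense of what learning from examples means, here is a toy sketch: a single artificial neuron adjusting its weights over repeated exposure to four examples until it recognizes the AND pattern. Real deep-learning systems simulate vastly more neurons and data; everything here is illustrative.

```python
import math

# A single artificial neuron (logistic unit) learning the AND pattern
# from examples: a radically scaled-down sketch of the idea that
# networks of such neurons, exposed to many examples, learn patterns.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# training examples: inputs (x1, x2) -> target output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # the neuron's adjustable parameters
lr = 0.5                    # learning rate
for _ in range(5000):       # repeated exposure to the examples
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        err = y - t          # gradient of the cross-entropy loss
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1] -- the neuron has learned AND
```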
There is a reason why Canada is recognized as a world leader in deep
learning. I already mentioned the pipeline of PhD graduates and the excellent
research labs. These are the products of years of funding to basic research by
the Canadian government. By creating and supporting organizations such as CIFAR,
the Canadian Institute for Advanced Research, which funded deep learning for
several years of very obscure development before it shot up to the world stage,
we have created knowledge, wealth and expertise in Canada.
By funding the NSERC Canadian Field Robotics Network, which brings together
11 research labs from across the country with key robotics companies, large and
small, we have created an ecosystem for robotics research and economic growth.
By creating a federally funded network centre of excellence called Age-Well,
which is home to over 100 industry, government and non-profit partners, we have
created a national network aimed at addressing complex problems in technology
and health care.
Most recently, through the Canada First Research Excellence Fund, with awards
including one at McGill on Healthy Brains for Healthy Lives, one at Université
de Montréal on machine learning and one at Polytechnique Montréal called
TransMedTech, we now have the resources to make foundational discoveries at the
intersection of AI and health care.
Let me close by saying that while I'm excited about the possibilities in
Canada for robotics and AI impact on health care, I do see several challenges
ahead. One, we must think about how to ensure that our most talented experts
stay in Canada. Two, we must think about how we will benefit economically from
this new technology. Three, and most important, we must think about how we will
use this technology in a way that benefits all of society. That means developing
the social codes and ethical principles that will guide the deployment of this
technology. It means ensuring that security and privacy concerns are addressed,
especially when it comes to dealing with people's medical records, and it means
anticipating the profound social changes that will come from this technology.
Make no mistake: We are in the beginning of an AI revolution, and the reason
I'm here today is to ensure that we work together to enable a positive impact on
every member of our society. Thank you.
The Chair: Thank you, Dr. Pineau.
We will now hear from Dr. Daniel L. Silver, Professor, Director, Acadia
Institute for Data Analytics, Acadia University. I am delighted to have you
here. Please present to us.
Daniel L. Silver, Professor, Director, Acadia Institute for Data
Analytics, Acadia University: Thank you, chair, honourable senators and
other members of the committee. I hope to complement the fine remarks of
Dr. Pineau.
I'm a computer scientist. I have been such for almost 40 years now. Time has
flown by. I have worked for the first part of that in industry, and the latter
part as a professor, all of which has been largely in the area of
artificial intelligence, so that's predominantly what I'll speak on.
AI is important not only in the large urban areas, and large universities,
but it's important to rural areas and to smaller universities as well, both in
terms of health care and other areas that we're investigating.
Picking up on Dr. Pineau's comments, artificial intelligence is a technology
where we shouldn't fool ourselves; it's at the same level as nuclear energy and
genomics. It has the power to do incredible things in the near future, and it
can also do some terrible things. I don't want to make that the high point, but
I think there is reason for some concern. One of your greatest issues may be
coming up with methods to appropriately roll out AI.
AI is a powerful tool that can be used in health care to improve medical
decision making, diagnosis, prognosis, certainly in the selection of best
treatment methods, and filtering through the deluge of new findings in health
care, being able to focus on an individual patient and bring the best treatments
to bear. It allows us to better use our human and material resources in clinics
and hospitals, and to be more effective and efficient in home care, which is an
area where I am doing a fair amount of work. It also allows us to have healthier
employees and ultimately healthier citizens.
At the same time, it can equally create negative outcomes, some of which I
suspect you've already imagined in your prior meetings. It can be used to
profile employees, citizens, for benefits in terms of jobs or health care. We
have to be careful that it does not degrade health care by viewing AI as a
panacea in this area. It's not. It will be a tool. It brings to mind the image
of a medical ATM that you might walk up to.
Currently in business, AI is being used as a technology for scaling up
knowledge work. It does this in two ways. First of all, it's able to replicate
certain aspects of human intelligence very quickly. It can make decisions in the
same way that a human would in constrained areas and do it repetitively. It
allows business to scale up in terms of interacting with customers.
Second, it can go beyond the capabilities of humans. It is there now. Deep
learning, for example, is recognized as being able to create models that can
categorize images better than humans. It can recognize voice better than humans,
at least in certain constrained areas. So we're there.
We certainly are capable of using this technology for better decision making,
pattern recognition, voice recognition, using natural language processing and
machine learning and analyzing text, etcetera, "next move'' forecasting, and
also recommending what your next move might be, in any domain, as long as we can
provide the data to the systems.
In direct patient care, I would suggest this is our greatest opportunity to
use artificial intelligence but also has its greatest risks. Augmenting the
knowledge of experienced, trained clinicians really should be the goal. The
ability to provide a focused, unbiased portal into the growing and changing
collection of health care information is something AI can help us with.
Augmented human intelligence through use of our near-term AI is probably the
best way to go, and then grow from there.
It has the potential to improve outcomes probably by 30 to 40 percent, I would
suggest. Many people will say that it will save us up to 50 percent in terms of
dollars. I don't believe that. I've been in ICT for a long time. Very rarely has
ICT saved us money. It does increase productivity. It means that more people can
be helped in better ways. Perhaps we can talk more about that.
Google DeepMind Health and IBM WatsonPaths are two of the leaders in
extracting knowledge from electronic health records and making better decisions.
Bay Labs out
of San Francisco is using cheap, noisy, ultrasonic-type sensors, along with
machine learning techniques, to create better methods of detecting symptoms of
rheumatic heart disease in Kenya. There are similar techniques, using
traditionally noisy sensors along with machine learning and AI, to do similar
work.
Indirect patient care is probably our best first choice in terms of deploying
AI. There is tremendous ability to do some early adoption here, with many
improvements to the best uses of human, material and monetary resources, with
less concern over privacy or direct impact on patients. For example, there is
automated planning of improved hospital workflows and logistics, constraint
satisfaction solutions for improved use of shared resources, equipment and
spaces, and better inventory management for keeping costs down and minimizing
waste through forecasting use and need for materials.
One of the areas to think about that is really important to health care, and
has been missing for a long time, is that AI may be the first step to closing
the loop in health care. What I'm referring to is being able to infer whether a
patient actually took the treatment prescribed by a physician. Did he take the
medicine through its course? It's hard to get that directly. Perhaps there are
ways of doing it indirectly through open social media.
Home health care is an area I've been working on a bit over the last couple
of years. It is a great area for reducing health care costs. There are some
risks. AI could reduce or eliminate hospitalization through ongoing
biosensor-type monitoring back to central sites. AI can effectively act like a
dedicated team of clinicians, monitoring individual patients in their homes.
Sentrian.com is a company emerging now and doing this. Data and AI can be used
to fill the gaps between different home care workers.
For those of you who have had relatives who have had home care workers,
you'll find sometimes that there is a loss of information from worker to worker.
A great example of this in Nova Scotia is Health Outcomes Worldwide for Cape
Breton. The principal there is a lady by the name of Corrine McIsaac. She has
been able to successfully reduce wound care time by as much as 70 percent just by
communicating, sharing information between home care workers and providing best
practices to the point where now they are thinking of being able to take an
image of the wound, send it up to a server, and to provide information just from
the image as to what to do.
A portal that we're developing at Acadia, and that Senator Ogilvie likely
knows something about, is seniorscentre.info. It is meant to provide a portal
for seniors, but also a back-end effort is to track what those seniors and their
family members are looking for in terms of assistance, thus understanding the
greatest area of interest and need. We're using analytic methods in that spirit.
I will end with one last thought that maybe some of you have not considered
before. I encountered it in the last year and a half. It's sports analytics. I'm
increasingly becoming aware of the fact that sports analytics may be the
important tip of the spear of the best way to do medical informatics and
analytics in the space of health care. Sports teams want winning teams, but they
also want sustainable, happy and healthy players over time. That's traditionally
what coaches have done. AI is starting to participate in that. Similarly,
companies want to have healthy, happy and productive employees. They want to win
every game. Presumably, we want to do the same thing with our citizens.
I'll leave it there as I'm sure you're ready for some questions and comments.
Thank you very much.
The Chair: Thank you both very much. I will open the floor to
questions from my colleagues.
Senator Eggleton: Picking up on the last comment, maybe sports
analytics would help the Toronto Maple Leafs. I don't know; maybe not.
The question I would like to ask is from your presentation, Dr. Pineau, but
both you can respond if you like.
In your closing comments, you said:
...I do see several challenges ahead. One, we must think about how to
ensure that our most talented experts stay in Canada. Two, we must think
about how we will benefit economically from the new technology. Third, and
most important, we must think about how we will use this technology in a way
that benefits all of society. That means ...developing social codes and
ethical principles ....
Yes, I think we do need to think about that. I am sure you have already
thought about it. Can you give us some ideas? What do you think are some of the
ways that we might be able to recommend that we meet these challenges?
Ms. Pineau: We have been pursuing two different directions. One
direction is to start to have many more conversations with our colleagues who
are not computer scientists. That means having conversations with people who are
experts in law, ethics and economics.
It's about having the spaces and the opportunities to have these
conversations. I have attended a few events in the last year or two on the
future of AI. They brought together people from these very diverse communities.
That was very fruitful, but I would say these discussions are just beginning.
We're still developing a common language and a way to think about these issues.
I'm often asked to comment about the future prospects of AI and its impacts on
economics, security and so on. I'm not an expert in these areas; I'm an expert
in programming computers, quite honestly. I can discuss these points as an
educated citizen but not really as an expert. So we need to have these
conversations.
Similarly, many of my colleagues in the Faculty of Law at McGill, for
example, are interested in technology but are not experts in the underlying
technology. We need to develop some expertise at the intersection of these
fields to start asking these questions.
Another thing we're doing is to think about embedding certain mechanisms in
our computer programs. A good one is the notion of fairness. There have been
several reports of AI systems making predictions that are not fair. They have
been shown to display gender biases, racial biases and so on. These are the
kinds of things we are quite concerned about. Now we are developing new
algorithms that have these properties, so that we can actually give some
guarantees about fairness and particular characteristics. If you have an AI
system, for example, that is
determining which members of society should be getting loans, parole and so on,
it's important that these systems are properly vetted for fairness. We are
developing algorithmic solutions for these problems.
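As an illustration of the kind of check such fairness work builds on, one common measure is demographic parity: comparing the rate of favourable decisions across groups. This sketch (the groups, decisions and thresholds are invented) flags a disparity using a "four-fifths rule" style ratio.

```python
from collections import defaultdict

# Demographic-parity check: compare the rate of favourable decisions
# (e.g. loan approvals) across groups. All data here is invented.

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])        # group -> [favourable, total]
for group, approved in decisions:
    counts[group][0] += approved
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}

# Disparity ratio in the style of the "four-fifths rule": min rate / max rate.
ratio = min(rates.values()) / max(rates.values())
print(round(ratio, 2))                       # 0.33 -- well below 0.8, flag for review
```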
Senator Eggleton: Before we go to Dr. Silver, can you drill down a
little bit in terms of how we keep talented experts here in Canada and benefit
economically from this new technology? Are we putting enough money into basic
research to be able to keep these people here?
In terms of the economic benefit, are we able to take it beyond those stages
and into production so that we can benefit from it in this country? We heard
from one witness yesterday about Project neuroArm, out of Calgary. After they
developed it so far in Calgary, it ended up going to the United States to be
commercialized.
How do we get the benefit economically out of this new technology, and how do
we get the experts to stay in Canada?
Ms. Pineau: There have been a lot of promising signs in the last year
or so. A number of start-ups are actually growing to be mid-sized companies.
When a company gets large enough, it becomes harder to move it. It's easy to
take a start-up of four or five people and move it to California. When you have
50 people, it's a lot harder to do.
There is a start-up in Montreal called Element AI. It has grown in a few
months from a few people to 40 or 50 people. Now, they are at the size where
they are going to stay here. There is another start-up in Toronto called NextAI
that
is keeping talent in Canada.
You really need a healthy ecosystem. You need a lot of graduates coming out
of the universities. We're doing quite well, but I think we could do better with
more faculty members in targeted areas.
You need some start-ups, because out of these start-ups, a few of them will
make it to mid-size companies. Then I think it's important to have some of the
structures in place for the mid-size companies to be successful and not be
bought out too early. I'm not an expert in this, but I've talked to enough
people in industry to know that there are certain financial conditions that have
to be in place to allow them to grow to that stage.
Mr. Silver: I'll give two answers. One, there are challenges, not just
in artificial intelligence and robotics, around keeping great resources within
Canada. There are issues around capitalization for companies and more lucrative
money south of the border, Google in particular. Many of our great professors
are actually working for those companies, at least on contract, and are
attracted in that direction. Ultimately, it is a mix of money and opportunity
that would make and
has made the difference for some.
In the short term, what we can do is try to better those circumstances. I'm
certainly not the expert to provide the answers in that regard.
However, in the long term, two things can be done. One is that we need to be
teaching young people more about these opportunities in areas across Canada and
early on with education in computer science and artificial intelligence — or the
possibility — so that we have a greater pool of young people coming into the
space. Ultimately, that will help a great deal.
In terms of funding, some of you may be aware there was a change in the
system over the last eight to nine years with regard to the same amount of money
being divided up and placed in different locations across the country. Large
quantities of money are going to larger universities and smaller universities
are getting very little. In terms of the food chain of bright young Canadians
moving through the system from high school to university to graduate school,
it's really important to see an even seeding of that funding so that we have
those people coming up. The ones who do desire to stay in Canada will create a
larger pool of those people.
Senator Seidman: Dr. Pineau, I will ask you a little more about that
exciting deep learning incubator that started in Montreal and has grown, as you
say, in a very short period of time. I believe you are a fellow of the start-up
institute incubator, so could you give us some indication of how the work there
could be or will be integrated into use for medicine and the health system?
Ms. Pineau: The company is called Element AI. It was launched in the
fall. It has quickly grown, with a mix of people who have a lot of good business
skills and very good research skills.
One of their missions is to actually facilitate the transfer of knowledge
from university labs to business and industry, which is likely to make it to
market.
There are really difficult translation steps in many cases. We see many
companies approach us in our university position to do that translation step,
and in many cases we don't have the resources or the time to do it. That company
is really trying to play the role of facilitating that step. They will have
several clients among small and large companies, some of them in the health care
sector and some in communications, transportation, aerospace and so on.
In doing so, we as faculty fellows — and there are about 10 of us faculty
members in university who give them a certain number of hours — are essentially
helping them select the projects, look at feasibility and help set the research
agenda. But then they have the staff on site to do the research development, the
more "D'' side of the R&D, and do that in collaboration with the clients and
the different companies.
This is especially useful for companies. There are big companies — we hear
about Google, Facebook, Microsoft — who have AI research groups on the inside,
but there are many companies that don't have that. They don't have the expertise
to build that team, and they can't necessarily attract the best experts. So
instead of doing that, they would work with a company such as Element AI to do
that work.
Senator Seidman: Google, and I believe Microsoft as well, is now
involved in this incubator.
Ms. Pineau: I don't know the term specifically. I do know that both
Google and Microsoft have announced that they are opening research labs in
Montreal. They are going to either hire or move some of their research teams to
Montreal and build a research presence in Montreal. In terms of growing that
capacity, whereas a few years ago we saw that the graduates from my lab would
quickly move to California, Seattle and New York, now they're staying in town,
and Google and Microsoft are coming to town to hire them.
Senator Seidman: In developing an enterprise like this, is there a
proportionate distribution in particular fields? Does this group say, okay,
30 percent of what we do here is going to be in the health field and go along the
line to different disciplines? How is that done?
Ms. Pineau: I don't know yet how the big companies such as Google and
Microsoft are going to do it. I certainly know that Element AI has a number of
projects they are looking at in the field of health care. One of the founders
has another start-up in medical imaging and so they are definitely interested
in this area.
For the other companies it is too early to tell. I think they are doing more
foundational research, very basic research in AI and machine learning.
Senator Seidman: It was in fact Element AI that I was referring to and
you say there is a plan for health care?
Ms. Pineau: They definitely have an interest in health care, yes.
Senator Seidman: Professor Silver, under the list of major concerns
you presented to us — and of course it's only normal that there would be some
concerns — you talk about appropriate testing and certification before
deployment. One of the issues that came up yesterday in our discussions has to
do with acceptance on the part of patients and families.
What is your sense about testing, certification and the role of the user in
development? Is that what you are referring to when you talk about appropriate
testing and certification?
Mr. Silver: There are a couple of different levels.
As you might imagine, it's possible to select a population of people from
whom you might develop a predictive model for testing. If, for example, that
population isn't truly representative of the entire population of a province or
the country, then the model may actually do fairly well within the set of data
that you have built and tested it on but actually doesn't do well in reality.
That can happen.
Typically in the business world, in targeted marketing, they will build a
model and test it from the data they have, and then they will do that thing
where they call you at six o'clock, at suppertime, and ask if you are interested
in the product. They are actually verifying, to some extent, that base model to
see if it's working out the way they thought it would. They have already
predicted if you will say yes or no. They're just checking to see if in fact you
do say yes or no, and then they'll go forward with a larger marketing campaign.
How does that relate to health care? It means building good models from large
patient data sets, for example, for doing diagnostic, predictive type of work,
and then you want to test it on an additional, independent population and do it
in a protected way such that there is the appropriate oversight to make sure
that it's not completely the machine making the decision in those cases.
Depending on what the prediction is, it may or may not be of huge impact.
Preferably a clinician is always involved and ultimately making the decision,
but that would be it.
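The evaluation discipline Mr. Silver describes, building a model on one patient sample and then testing it on an additional, independent population, can be sketched in a few lines. The data here is simulated and the single-biomarker threshold classifier is invented purely for illustration.

```python
import random

# Sketch of the evaluation step described above: fit a simple threshold
# classifier on one patient sample, then measure it on an independent,
# held-out population. Data is simulated; the "biomarker" is invented.

random.seed(1)

def sample(mean_healthy, mean_sick, n):
    """Simulated biomarker readings with labels (0 healthy, 1 sick)."""
    xs = [(random.gauss(mean_healthy, 1.0), 0) for _ in range(n)]
    xs += [(random.gauss(mean_sick, 1.0), 1) for _ in range(n)]
    return xs

def fit_threshold(data):
    """Midpoint between the two class means of the training sample."""
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

def accuracy(threshold, data):
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

train = sample(5.0, 8.0, 200)          # the population the model was built on
independent = sample(5.0, 8.0, 200)    # an additional, independent population
t = fit_threshold(train)
print(round(accuracy(t, train), 2), round(accuracy(t, independent), 2))
```

If the training sample were not representative of the wider population, the second number would drop, which is exactly the failure mode described above.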
The other thing I would suggest in this area is already the case: a
couple of physicians have told me that under the law they always must be in a
position to be able to explain the steps that they took, the outcome, the
treatment plan. Therefore, one nice test would be to ensure that the AI involved
always is capable of explaining its decision or that it can assist the physician
in explaining how the decision was made.
I'm very much into deep learning. I love the stuff, but when it does some of
the incredible things, you don't know how it's doing it. We are beginning to
figure it out, but that's one of the challenges here.
Senator Stewart Olsen: Professor Pineau, you were discussing that we
have to develop ethics in how we use all of these things. Is that in progress
now? Because this field is galloping so far ahead, and I think it might be quite
important in our report to mention something about that. Do you know if that's
in the works or who is doing it or what's happening?
Ms. Pineau: I have to say there is space to do a lot better on that
front. Right now we are very much in the preliminary discussions. The will is
there and people are aware of it, but I can't point to a specific set of
standards or best practices that we would have. I think this is where we need to
do more work.
On the medical side, there are some standards in place. The FDA is an example,
and even advanced medical devices still undergo that procedure in terms
of evaluation. So evaluations in terms of safety and efficacy are being
carried out.
More broadly for AI and machine learning, I don't think we have the standards
that we need quite yet. I don't think we have a framework yet to develop these,
but there is a lot of goodwill from several parties who are relevant to that
discussion.
Senator Stewart Olsen: Who would those be? Who is doing this?
Ms. Pineau: Within university communities, there have been a number of
conferences on this topic that brought together several players both from
industry and from academia. Some of the largest companies — Google, Microsoft,
Amazon, IBM, Facebook and most recently Apple — have announced the formation of
a joint consortium specifically to discuss the safe deployment of AI and the
impact on society. Whether we leave it to these big companies to make all these
decisions is a good question.
A few non-profit organizations have been set up. The ones I'm aware of are on
the U.S. side of the border, but they are set up with the mandate specifically
to push research that is responsible for the benefit of all and to ask the hard,
ethical questions without necessarily having a business interest in the
outcome.
Senator Stewart Olsen: Dr. Silver, you mentioned you did quite a bit of
work in the last couple of years on home care applications. Could you give us a
couple of specifics where you think it's just brilliant? It would be a big help
if you could focus on rural and remote areas.
Mr. Silver: Perhaps before moving on I will mention that there is an
open AI movement as well that's growing worldwide to open up the methodologies
and practices. There is some concern about that question, too, at least at the
moment.
There are a couple of things. First, much of the information needed by people
who require home care is on the web, but it's difficult to make your way through
it, thus the reason for the seniorscentre.info project that we have under way at
the moment. Part of it works in that Google way, with people moving around on
the website and then analysis of that data after the fact to determine their
interests and things like that. That's one aspect of where AI can be applied in
that kind of network base.
I think the more exciting areas involve putting technology in situ — in
homes, in seniors' complexes — that can actually monitor the health of a
person. It can be done in simple ways. There are now products on the market,
such as flooring that actually registers pressure. You can use this to
recognize the patterns of a person falling on the floor, versus lying on the
floor or walking around, which is a good thing; we keep moving.
I met Dr. Sumi Helal from Florida, who is coming to a Smart Homes for
Seniors session we're setting up for the early summer at Acadia. He was asked
this question: "What are the simplest sensors you would put in a home?'' His
response was, "A sensor on the toilet, because if mom and dad haven't flushed
the toilet in a while, then you know something is not quite right.''
That shows both sides of this. That's a very simple, inexpensive sensor, but
it's also very invasive. There is that side as well.
People have explored radio technology, essentially your body disrupting the
movement of radio waves within an environment as a method of detecting motion
and perhaps falling, hurting oneself, that type of thing.
People are really getting away from anything you might carry. My mom suffered
a heart attack. She had her alert button around her neck, but she went for the
phone; she forgot all about it.
Sensors that can be used in that way are becoming the Internet of Things in
the home for those purposes, reporting back to centralized sites where you
integrate that fusion of information and then use it to predict potentially
life-threatening things; or, more proactively, a physician might begin to think
that you have been on the couch a bit much. Maybe we could all use that one.
The fact is that that technology can exist. With the AI to bring in that
somewhat noisy sensory information, you can clean it up to be able to detect
patterns. That is exciting and, to the point where people don't find it too
intrusive, it really could help a lot. That's one of these areas where AI can
scale things up. You can have thousands of homes with this information coming in.
I would like to offer a bit of a warning. I'll take you back to the days
before the local area network, when you used to move your little diskettes
around from computer to computer. They were going to bring in a local area
network and we were going to save all kinds of money in that space. It didn't
work like that. All these systems we'll put in place will have costs associated
with them. So, again, don't confuse productivity with cost savings here.
Senator Galvez: Thank you very much. It's incredible what you are
doing; really, it is fantastic. It's progressing so fast.
You have given examples of how robotics and AI can improve functioning in
hospitals, and I see it as a tool for improving efficiency, resource management,
and taking care of patients in hospitals or in remote areas. You have also
mentioned that we have a public health system that is the envy of many places in the world.
I want to say that these health services cost a lot of money, and when a
person is in the hospital, it's already late. What can we do to use these
technologies, which seem so powerful, in terms of prevention?
I wonder if you have a link with genetics — there is so much information —
such that we can prevent illnesses and conditions so that we can save money
upstream and not wait until the end and have to have a surgeon or all kinds of
interventions.
Mr. Silver: I'll point out two items I had in the notes, which I
believe you have in front of you, at page 2, in the middle of the page, just
above "indirect patient health care.'' One is taking place at the University of
Toronto, through Brendan Frey, who studied under Geoff Hinton. He started up a
company called Deep Genomics. The intent is to use machine learning systems to
predict the molecular effects of genetic variation. They are doing this largely
so that they can use that in the pharmaceutical space at the moment, but the
intention is that, from hundreds of thousands of known relationships between
differences in the genome and how they then become actual molecular structures,
they're predicting in advance: If I change the genome in this way, what will the
impact be? That has effects initially in terms of pharmaceuticals, but it could
also be used to find predispositions to diseases or changes in those genes.
The other company I mention here — and there are a couple in this space — is
iCarbonX, which is in the U.S. This sounds unbelievable, but they are working on
a digital DNA avatar, essentially growing the properties based on genetic
information, and then they use that to simulate, under different environments —
in a smoking environment or not — stress on the animal's life. I think they are
working at the animal level initially, but they want to predict an organ's
future health by doing this — in particular, what would be the stress upon the
kidneys or liver under certain environmental conditions, given a certain base
program which is given to you by the DNA?
This is all being done by computers and using AI concepts. For me, that's the
far end of where people are looking right now.
The other one I saw recently was the ability to predict a person's face, what
they look like, from their DNA. There has been work done on that recently. I saw
it in a TED Talk a couple of weeks ago. It's incredible stuff.
We will get there, but there are as many concerns with that as there are
potential good outcomes.
Senator Galvez: When the technologies of automatic diagnostics are
ready, what will be the position of the actual doctors, the specialists? Will
they be in competition? Whose diagnosis will prevail: the one set out by the AI
or the human's? Will it be a committee that finally decides what to do with the
patient, or are we going to trust the machines?
Ms. Pineau: I think the answer to that depends a lot on the type of
disease we are looking at, in particular in what context the decisions are made.
In the case of the artificial pancreas, there is a control system. A sensor
looks at the blood sugar level in real time, and then the decision whether to
adjust the insulin is made on a very short time scale — the same decisions that
patients are now making on their own. With these kinds of decisions, I think we
will see a quick translation to shifting it to machines and we won't need the
doctor in the loop to check the decision every time. The doctor will be there
earlier on to adjust the parameters of the machine, but most of the decisions
will be made in real time.
There are many other cases where the actions are much more substantial and
have long-term impact. If we think about brain surgery, we have imaging
technology. The computer assists in figuring out the exact location of the
tumour, doing what we call segmentation of the tumour. In some cases, surgery is
the best option. We are not necessarily going to send in a robot and do the
surgery without consulting a doctor on the surgical plan, and probably a team of doctors.
The situation ranges from one end to the other. Particularly in the period of
transition while we are acquiring confidence in the system's ability to do the
job well, doctors are going to be in the loop very tightly. As we gather more
and more confidence that the system is performing well, and in some cases
exceeding the performance of the human doctors, then we can give more control to
the machine in terms of carrying out the action. There are many cases where the
machine will be primarily there in terms of advising on the course of treatment,
for many years.
The Chair: Dr. Frey, to whom Professor Silver referred, is on our list
of potential witnesses.
Senator Meredith: Thank you both for being here today and for your
continued commitment and research in terms of the advancement of Canadian lives
and the impact it will have on the world. Congratulations, Dr. Silver, on your
My question is specifically to you, doctor. You mentioned that the embracing
of technology is not reducing costs in the health care system. That's what every
province should be concerned about. On the other hand, you said it would also
keep costs down and minimize waste through forecasted use and need. Could you
elaborate on that? Then I have a specific question for Dr. Pineau as well.
Mr. Silver: I think there is actually an opportunity to reduce costs in
terms of material and human resources within the health care system through this
type of thing. We can embed those systems within hospitals, which are
well-defined and controlled types of areas, and there will be cost savings in
terms of scheduling of operations and of human resources such as nursing. A lot
of this could be done.
In health care, too, there are a lot of challenges. In Nova Scotia we have
cases where a care worker lives right beside an individual requiring home care
on a Monday morning, yet is sent 50 kilometres away to treat someone else. That
sort of thing happens.
I think we can save costs in that respect. I'm not sure how much of the
equation that makes up. My caution was more in the area of invoking new methods
of monitoring within homes and bringing that back to centralized sites. There
are costs similar to what we saw with the move from sneakernet to a local area
network, where you had to have administrators and people to install and maintain
these things. They are different workers, but that cost is a real cost as well.
There are great benefits in that. I don't know how we measure this. I think
we can reduce costs in some areas with some of these technologies, as I
mentioned. Hopefully, others we can keep stable but at the same time increase
the quality of health care. That's the trade-off.
I think I would feel incorrect in saying — particularly in the area of
putting a lot of new technology for monitoring health care or bringing
physicians and their staff up to speed with technology — that it will, over a
long time, save us a lot of money in and of itself. What it will do is make the
quality of health care a lot better. We can, though, certainly deploy things in
other areas, managing resources, which will reduce costs.
Senator Meredith: Dr. Pineau, you spoke about biases in coding and the
fact that these AI technologies are evolving. How do we prevent that in terms of the
human biases that are fed into these systems? Obviously they are going to spit
out those kinds of results that we don't want from a safety perspective. I want
both of you to comment on that. Yes, embracing these technologies is essential
and for the benefit of Canadians. Again, in terms of the overall safety to
Canadians, you are developing the SmartWheeler. Could you talk to us a bit about
that as well in terms of what stage you are at with that? What kinds of
mechanisms are built in from a safety perspective? I'm very concerned about safety.
Ms. Pineau: There are a few different notions of safety. One of them is
the biases that are showing up in some of our algorithms. In many cases, as you
pointed out very astutely, the biases come from the data used to train our
algorithms. If we gather data and text from the Web and then we train a
conversational agent to speak, we are going to get an agent that speaks like
some of the websites out there. Depending on which websites you've chosen to
train your agent, you will get very different conversation styles. There is a
big responsibility to gather data that is as diverse as possible and as
representative as possible of the behaviour we want from our agent.
We do the same when we conduct clinical trials for testing of new medical
procedures. If we gather data from a very small number of members of our
population, we are going to get results that only apply to that small segment of
the population. So it is on us, researchers, to ensure we have wide
representation in terms of where we are gathering data.
In terms of our SmartWheeler, this is a smart wheelchair that we have been
developing for several years at McGill University. The safety concerns are a bit
different and maybe a bit similar to the concerns we are anticipating from
autonomous driving technology. This is a wheelchair that can drive on its own.
It has the ability, with the intelligence system, to control the motors: to go,
to stop, to turn around.
For several years now we have been in partnership with one of the shopping
malls in the Montreal area so that we do our testing not only in a university
lab but in a shopping mall, so that we expose the system to the diversity of
conditions it will encounter in the real world. It's a shopping mall where a lot
of the regular clients are wheelchair users. So in one sense, the mall is quite
well adapted to our wheelchair. On the other hand, our wheelchair has to face
the same kinds of challenges that these people face every day.
This has been a very interesting project. We are developing new technology
such that the robot can navigate more fluidly in the shopping centre. For those
of you who are from the Montreal area, this is the Alexis Nihon mall. It's
connected through the subway system. Every four minutes or so hundreds of people
come out, so the wheelchair has to be perceptive in order not to have
collisions. So far I'm happy to report that we have had no collisions. We still
operate with a human in the loop. That means there's usually a PhD student
standing right behind with a hand on the stop button, who can guarantee the
safety of it. We have done trials over the summer with a number of people who
are regular wheelchair users, and we are now preparing to analyze all of that data.
Senator Meredith: With respect to a regulatory framework, what needs
to happen as these new technologies come on stream, in terms of Health Canada
and FDA? Could I hear comments from both of you on that?
Mr. Silver: I did think about that a bit, and I put a few comments on
the last page. I'll just expand on them.
Baby steps are certainly an important piece here, from the PhD student
holding the button, to the method by which the individual can hit the stop
button in creative ways.
In terms of regulation, it will be important that AI developers disclose
potential risks as well as benefits, similar to perhaps the testing that drugs
go through at the moment, and that the person is fully aware of potential
problems so they can make informed decisions in that regard and so that health
care workers would know about them. In developing these technologies, it is
important that they adhere to good software engineering practices and
methodologies. Typically in that space there is a quality assurance role that
has oversight, following standards such as ISO 9000, the quality management
standard adopted by most engineering groups. TQM, total quality management,
would be important.
The other thing I will introduce is that one of the things that will clearly
become the case is that clinicians — and physicians in particular — will have to
know a bit more about this sort of thing. Before coming here, I spoke about this
with two of the more progressive physicians in our area. They emphasized the
fact that computer science and AI need to be part of medical training — not that
they have to know how to build these things, but that they have an appreciation
for some of the things that could go wrong. They will increasingly be required
to understand the impact of these things so they can participate as we move
along the journey of having AI play a bigger role in health care.
Senator Merchant: Thanks very much to both of you. I think you
answered a bit of my question in terms of how physicians are going to be
prepared to deal with all these advances and how quickly these things will
change. There has been so much change with computers alone in the last few
years. You are all working hard, you're excited, and you have people who are
very inspired about this.
I'm wondering about the doctors. Maybe you've partially answered this
already. How many times in a lifetime will they have to be retrained? Is it
serious retraining that they will need?
My second question is this: You said, doctor, that we are fortunate to have a
good health system in Canada. I'm wondering about the evenness of delivery.
Given demographics and the provincial delivery of health care, do you foresee
that some areas in the country will be better equipped than others? Are patients
then going to be saying, "I could go to Montreal''? You must be thinking about
this, because you're doing all this to improve outcomes and quality of life. So
how do you deal with all of this?
Mr. Silver: I will speak to a couple more thoughts about the training.
Again, I think it needs to start early. It starts before they become physicians.
Increasingly around the world we understand that computational thinking and
computer science, these technical things, need to be embedded in the core of
our educational system early on.
That's not easy. We are wrestling with this in Nova Scotia. A colleague of
mine is now engaged with the province's education department in placing
computing more in the core, and of course the issue is where you fit it in with
everything else. Ultimately, that generates a new generation of physicians who
will already understand some of these issues.
Then there is in-service training for the practising person, and there is
embedding it within pre-service educational programs. Again, it means a lot of
change, but it would put us a step up in the world in terms of having people
who use AI as a tool and participate in its evolution, compared to many other
countries that may be doing it in more of an ad hoc fashion.
Ms. Pineau: I will address your second question as to how we ensure
evenness of delivery. Honestly, this is a major challenge. We have seen this for
several years in the development of biomedical research. When you carry out
multi-centre studies, often you see that one of the most predictive variables
in terms of the outcome is which centre the treatment was administered in. For
those studies that make it past the phase of just being a study, and the
biomedical technology gets deployed, we see huge differences in terms of
outcome measurements between some of the larger university hospitals and
smaller centres.
I think the transfer of AI and robotics technology is not that different in
this respect from other biomedical research. Challenges already exist for much
of biomedical research around who has access to what type of treatment. It is
not the same if you're in a big centre with a university hospital versus a small
centre. I think we'll continue to face that challenge, but I don't think the
challenge is particular to AI and robotics in that sense.
I think there needs to be structure in place to allow all the hospitals, as
well as primary care clinics and so on, to have access to that technology and to
have people who are competent in the use of this technology.
Senator Merchant: Right now it's difficult to get doctors to go out to
rural areas. Why would a doctor want to work in a small hospital when there are
places where he or she could do so much more? Right now you say to people: If
you work out in the country for two years, we will reduce your —
The Chair: Those issues are not unique to AI. I think you answered the
issue extremely well, and this will fit into the overall system. We are actually
already hearing examples of how you can facilitate distance access to medicine
here. I think we will hear some very specific ones as we go further along. Thank you.
Senator Unger: Thank you both very much. Your presentations were excellent.
Dr. Pineau, you talked about deep learning. I'm wondering how this will apply
to people with spinal cord injuries, for example. Is this technology being
applied now or will it be?
Ms. Pineau: I saw a fascinating talk last summer. A researcher in
California is looking at individuals with spinal cord injuries. He is
specifically looking at regeneration of some of the nerves using electrical
stimulation. Through this work, he has actually shown that individuals that were
in a wheelchair for many years, after a particular pattern of stimulation, could
gain enough mobility to start to walk again. They are not running a marathon
quite yet, but for specific individuals they have been able to take a number of
steps and keep on improving.
Now, there is a lot of work to be done to figure out the appropriate pattern
of electrical stimulation for an individual based on their physiology, their
injury and so on. Deep learning is a technology that can help in that sense.
Identifying patterns from complex readings of the physiology and trying to
optimize the particular pattern of electrical stimulation might help. I don't
think this is done yet, but I think at some point these two things are going to
come together.
On another side, within our lab we're doing research in terms of smart
wheelchairs, which also helps people with spinal cord injuries. In that case,
deep learning is used a lot more within the intelligent wheelchair to process
the information from the environment.
Senator Unger: Dr. Silver, I have information that a vast amount of
medical data has been collected on most of the individuals in developed
countries. It's referring to the AI evolution that started in the 1980s. At
least some of this data is available in electronic form. However, analysis of
the data is limited by legal and ethical considerations. To what extent is that
a problem now?
Mr. Silver: I am afraid you have me at a disadvantage. I don't have that in front of me.
Senator Unger: It's an overview of artificial intelligence that was
supplied to us.
The Chair: I think we better be clear here.
Dr. Silver, this is in a document that has been provided to senators as
background material. It's not taken from your presentation.
Senator Unger: Thank you, chair.
Mr. Silver: Please ask me the question again.
Senator Unger: It is about medical data that had been collected on
individuals, and I believe it's referring to earlier times. Some of this data is
available in electronic form. However, analysis of the data is limited by legal
and ethical considerations.
Mr. Silver: That's true. For example, while doing my PhD I worked with
a radiologist at Victoria Hospital in London, Ontario. He had his own personal
data set of 500 patients, and that was the reason he could do the work. He had
acquired it over 17 or 18 years at that point. It consisted of people with
occlusion of the arteries of the heart, and we were using tomographic images.
tomographic images. I was using machine learning to actually predict from the
images whether the person had a particular type of arterial occlusion, a
narrowing of the artery. But it was specifically because of his data that we
were able to do those things.
So groups of physicians will get together, with appropriate permission.
This is the issue for people now through PIPEDA. If they have signed off on
things appropriately, they can pool the information together and use it. But there are
certainly challenges because it is private information.
Ultimately, all of these systems that we are talking about largely nowadays
work on the basis of learning by examples. That's the power of them, that they
no longer are algorithms crafted by expert physicians or radiologists. They
actually are examples of people who do or do not have particular types of
disease. The algorithms build the programs, if you like, the models directly
from those examples. So the more we have of it, the better.
Conceivably, if we had data on every patient in, let's say, Comox or Cape
Breton for particular disease categories, whether or not they have heart
disease, for example, we probably could find some interesting things about that
pathology, but you have to be able to get that data together in order to do it.
And it has to be accurate. That's another issue. There is a lot of data out
there in the medical space that is not very accurate.
Senator Unger: How close is AI to real human intelligence?
Mr. Silver: The challenging problem, first of all, is defining what
intelligence is. I do not say that facetiously. It is true that that is probably
the biggest one we have come up against: What does that mean? I think AI has
probably helped us and pushed the boundaries on trying to have a sense as to
what it means to be intelligent.
Ms. Pineau: I would add that AI is excellent at a few very narrow
tasks. AI can play chess better than I can. It can translate from Farsi to
Japanese much better than I can. But in terms of the wealth of things that I can
do, AI is pathetic. It's on the order of what a very small rodent can do.
The Chair: I'll clarify the reference again that Senator Unger
referred to. I was pointing out to our senators in a briefing document that with
regard to the use of medical data and other information, there are already
well-established rules around ethics, what can be done and permissions and so
on. So the idea that we have to start from zero isn't there. We already have a
base for using data. That was the issue there.
Senator Mégie: Thank you for your presentation. When we are
looking for a young physician in a clinic, the first thing he or she asks is
whether we have electronic records or whether we have new artificial
intelligence technologies. That is how young physicians decide which clinic they
are going to practice in. However, equipment like that puts a distance between
them and the patients. There is very little human contact. I am pleased to see
that you have ethical concerns about the security and privacy of data.
My question deals with the concerns. In my opinion, they are much greater
than the ones you raised here. Some patients will search Google to find a
diagnosis for their headache, for example. A doctor asks questions: "Do you have
numbness in the hands?'' The patient replies that he had a numb arm yesterday.
Basically, after the online research, the patient has decided that he has a
brain tumour. The patient gets to the doctor's office and says, "Wikipedia
tells me that I have a brain tumour and I need an MRI right away.'' Imagine the
doctor's reaction when he hears that. The patient is extremely anxious, and that
is like poison in his life. But when the physician talks to the patient, he can
see the body language. I do not know whether artificial intelligence is able to
do that. Non-verbal behaviour can tell a physician whether his patient is
looking for a note so that he can take sick leave, or whether he is really ill.
Here is my question: Will artificial intelligence be able to sufficiently
distinguish symptoms and groups of symptoms in order to come to a diagnosis that
is as close to the real one as possible?
Mr. Silver: I did speak with, as I mentioned, a couple of physicians.
They brought this up and I was thinking of it as well. Already, as you indicate,
patients present with their own solution to their problem, potentially, in the
future, being able to use AI Google, if you like, to say essentially, "What is
wrong with me?''
So their response, because they are really the experts in this, more so than
I, is that physicians will increasingly have to be trained to deal with that
situation. Of course, as good science would tell us, the thing for them to do
is to go back to the symptoms: What would be the appropriate first bit of
treatment or testing to be done?
However, it certainly is possible that information that could be extracted in
a home setting or in the doctor's office now and in the near future could be
data that comes in as a fusion of things, along with their demographic
information, age, blood pressure — the clinical parameters you usually get — and
could more readily point towards the most likely first step in determining the diagnosis.
I think encouraging people to understand what their body is telling them is
always a good thing in health care. Arbitrarily allowing AI to make that
determination at the moment is probably not the best idea.
As Dr. Pineau indicated, there are technologies now that can measure blood
sugar levels, and do it immediately, things that we need to be using. Aspects of
those, only a few years ago, would have been considered AI.
One of the things you might notice on the first page of my notes is that my
PhD supervisor told me a long time ago that artificial intelligence is only
artificial intelligence until some critical mass understands how it works. Then
it's just a computer program. It's nothing more.
Again, education is the key here both for physicians who have to deal with
patients presenting with a sense of what their disease is and certainly for the
future of those presenting with, "Well, the AI told me.''
Ms. Pineau: Let me add some nuance to what you have just said. That
person with a headache who goes online and concludes that they have a brain
tumour is a case where human intelligence has failed. The set of symptoms
studied was not complete enough to come to a diagnosis. If a problem of that
kind were examined with well-designed artificial intelligence software, the
symptoms would be studied better. The artificial intelligence system would ask
a sufficient number of questions to come to a sound diagnosis. We have to look
at systems that are able to evaluate a diagnosis precisely when they have
sufficient information and to communicate when information is missing.
In parallel, our current artificial intelligence systems do not have the same
perception of complex situations. You were talking about non-verbal physical
communication. A lot of work is being done on that, but AI is not very
effective with non-verbal communication, much less so than human beings, of
course. For a good number of years yet, I imagine that doctors will be much
more skilled at analyzing complex situations. However, artificial intelligence
systems can very precisely diagnose certain illnesses from blood samples.
Senator Raine: Thank you very much. This is fascinating. We really
appreciate your being here.
I have a question on the interface between AI and natural intelligence, and
how we, owning our body and our health, if you like, use artificial intelligence
about our body to motivate us to take personal responsibility for being
proactive in preventative health care. We know that there are Fitbits out there
and all kinds of pedometers, but somehow, even knowing that we're out of shape,
many people aren't motivated to get in shape. Yet, when we look at it, if you
take all the different pharmaceuticals or placebos, exercise is right up there
at the top of making you well. Would you like to comment on how artificial
intelligence can play a role in motivating individuals to take personal
responsibility for their health?
Ms. Pineau: I would say at this stage that we don't know the magic
strategy for motivating people to exercise, eat well, stop smoking or sleep
enough. But what AI and machine learning in particular gives us is the ability
to learn from a large number of people. As more and more of us carry these
Fitbits and more and more of us carry smartphones that can be programmed to give
us prompts, we can learn what kind of prompting or incentives work at motivating
people. I don't know what these incentives are, but I certainly have a good idea
of how to design a program to learn how to do that. We have colleagues that are
doing that right now, in particular, in the case of managing people who have
alcohol or drug addiction. There are a few different intervention strategies,
but in many cases they require the person to take some steps, whether it's
calling a doctor, a family member or a friend.
In that case, they are developing apps for phones to try to do the
intervention. Because it runs on a phone, they can distribute it across many
different people and learn which types of interventions are more effective,
pairing the intervention applied with the context in terms of the environment,
the time of day and so on.
Mr. Silver: That is an excellent question. Most physicians will tell
you that the starting point is determining the problem, and having people take
action is the challenge.
I would agree with Dr. Pineau. We do have methods by which we can start
thinking about how to find those hot buttons in terms of motivation. The emails,
these sorts of things you might get from Fitbit and that type of thing, are
experiments at this point. In some cases they do work. They get you off the
couch and out walking, encouraging you and providing rewards to take action.
It's probably the most important thing, in many ways, that this technology
can do, because ultimately that leads to not having to have an operation.
The collaborative information from many different people, those who have
similar body types and who have taken action, helps them to do better. I guess
providing information statistically would be a good thing to do. We can do that.
First of all, the AI could determine your body type and the things you're doing
well at and perhaps not so well, and actually show you statistically how
others who took such steps came to live longer and be happier.
There is every reason to believe this would be a great area of investigation
and research for AI.
Senator Raine: If you look at all the fitness magazines and things
like that, it says, "See your doctor before starting any fitness program.''
Yet, when you go to your doctor and ask them what you should be doing, they say,
"Well, yes, you should be doing something.'' They don't have a prescription for
exercise. I know the Canadian Society of Exercise Physiology is working on this.
Is that an interface where artificial intelligence could come in, not so
much the robots but AI, so that we can increase the knowledge of medical
professionals as to how to prescribe exercise? I think most people, when they go
to the doctor know they have to do something. They really trust the doctor, but
I'm not sure they are getting the information they need.
Mr. Silver: It certainly could help direct the physician as to what to
tell the patient in terms of the proactive management of their health. I like to
think a lot of physicians do that already, but to sum up the person's
demographic and recent medical history and give them good ideas and suggestions would
be helpful. Having the physicians and clinicians take that action to actually do
that is another thing. I'm not familiar with how they are trained in that
regard, if that happens a lot.
Senator Raine: I know they get ongoing training from pharmaceutical
salesmen, but I'm not sure they get enough training from the other side.
I have one piece of information to share. I know of a young man in Ottawa who
is very much involved in robotics. He is a school student. He plays a computer
game called Minecraft. Through that, he is learning these skills at a
very young age.
Could you comment on the use of the Internet by young people to start to
learn these skills? Is that being tracked and followed by the academic world in
terms of developing future robot specialists?
Mr. Silver: Over the last two to three years, a couple of large global
organizations, such as Code.org and the Computer Science Teachers Association,
have been encouraging children to learn how to code and develop
computational thinking over the Web using fun programs like Scratch, which
are kind of iconic, graphical methods of programming things to move and change
on the screen, which is really great.
In fact, we are using such tools even at the university level to introduce
people who haven't had any computer science in high school to what computing is
about. There is a lot of outreach happening now across all of our institutions
to youth groups and schools in this space. Youth robotics is huge across the
country.
Ms. Pineau: I agree with my colleague.
I started programming when I was 18, which is incredibly late compared to
today's children. Today, with these tools, they can start when they are four or five.
I'm particularly excited about exposure to programming at a younger age
because many studies show that, in terms of girls' involvement in the study
of mathematics and science, we lose them around the age of 11 or 12. If they
don't start programming until they are 18, we have lost many of them. If they
start programming when they are four, five or six years old, we have a better
chance that they stay in science and take on computer science as their discipline.
The Chair: Before we start the second round, I would like to ask a few
questions or make some observations.
First of all, with regard to the issue of the ethical use of artificial
intelligence and its products, I think we need to be reminded of the
biotechnology era, the late 1970s and early 1980s. In actual fact, it was the
scientists who came together from around the world at the famous Asilomar
conference that developed recommendations on how experimentation with DNA, the
genetic material, could be dealt with. They came up with very detailed
recommendations to governments. These recommendations were adopted by countries
around the world.
We're now 40 years later, and there is only one example of a country that has
strayed, and only in recent times, from the most rigid of those
recommendations. The parallels here are striking.
Is there a movement among the academic leaders, the researchers in this
field, to pull together an international conference to make recommendations on
what they are experiencing, what they can foresee and to perhaps use the
Asilomar outcome as a guide for an agenda? Is there any movement that you're
aware of in that regard?
Ms. Pineau: Things are not quite as developed as you describe them.
We're now in the preliminary phase of investigation. Before the community's
ready to come up with a sound recommendation that will last for 40 years,
preliminary groundwork has to happen. In particular, that relates to what I was
mentioning a little earlier, where the scientists and specialists in law and
ethics have to get together and develop a common vocabulary. That is happening at
several events these days, and there have been a few of them over the last three
or four years.
I expect this will eventually lead to something that takes the form of formal
recommendations to governments, but we're not quite there yet. There is a good
opportunity for academics to take leadership in that respect.
Mr. Silver: There has been a movement over the last couple of years, in
particular led by Stuart Russell, co-author with Peter Norvig of the foremost
introductory AI textbook. Peter Norvig is at Google and Stuart Russell is at
Berkeley. This is mostly driven by concerns about AI in the military.
Recently you probably heard Stephen Hawking speaking up in this respect
about his concerns, in particular with machine learning, over the possibility of an
agent exponentially increasing its own capability and thus outrunning our
ability to control it effectively.
There is a fledgling movement. At this point it has been mostly workshops as
part of conferences as opposed to dedicated conferences, I believe. I may be
wrong about that, but I'm not aware of any targeted conferences.
The Chair: You wouldn't have missed an Asilomar kind of conference.
Mr. Silver: No.
There are definitely concerns shared across the community.
The Chair: I think the point you made, Dr. Pineau, about the language
evolution is obviously essential. At Asilomar, the biotechnologists had
experienced years of development, from microbiology, of the language that would
be involved. That would be critical to frame any discussion in that area. Thank
you both very much.
Dr. Silver, in your handout there is one acronym that we haven't quite
deciphered.
Mr. Silver: Natural language processing.
The Chair: Thank you very much.
Your leadership within the area we're discussing today is recognized, which
is why you are here. You have colleagues in kinesiology who are recognized for
programs they have developed dealing with senior health and with specific focus
on the heart and people who have had heart attacks. They have developed exercise
programs in these different categories that are so appealing to participants
that they hate it when Christmas comes and they have to take a break in their
program.
Based on what you and Dr. Pineau have been talking about with regard to
diagnostics, and so on, it seems to me you have an ideal captive audience next
to you with regard to testing some of the diagnostics. We heard, for example,
that there is a diagnostic available at the moment that can be placed on an
individual who has been previously diagnosed as having a heart condition. It can
actually predict in advance when the individual may be subject to a heart
attack. The point is that even a day or two of advance knowledge makes an
enormous difference in actually dealing with that individual, the potential for
protection of the individual, and a dramatic reduction in costs to the health care
system.
I'm wondering if you have thought about some collaboration with your
colleagues in that area.
Mr. Silver: Well, your question is bang on. It fits together with a
couple of things we are talking about; one, sports analytics and, second, the
idea of motivating people to take better care of their lives.
I have been working with people in the kinesiology space along with Kinduct,
a company based out of Nova Scotia. We are putting together a sports analytics
symposium intended for the July time frame. One of the elements we have talked
about is how that overlays with general health care. The kinesiology department
has had a program for people who suffered from heart attacks so that they can
learn about exercise. Indeed, they get to the point where it becomes a big part
of their life. It would be a good idea to monitor those people and provide
predictive elements of their progress and how they might do better, or maybe at
the onset of particular problems.
Sports analytics have the same concern, but they are not so much about heart
attacks. They are more concerned with injury. That's the big one. They are
looking for cases of overwork or underwork or patterns that indicate the person
will injure themselves in some way.
The Chair: My final question is to both of you. One of the issues
raised continuously in this area is bias in recognition. Now, there is an area
in society that is not so much a question of bias but false recognition. Very
recent studies have shown that eyewitness accounts are very fallible. They
might be reliable at a surface level, but not reliable in detail.
One of the clearly identified potential applications of artificial
intelligence coupled with electronics is identification, recognition of a
situation. We'll take the same example.
The language of artificial intelligence with regard to a particular problem
is being developed by humans. It is only as good as the data, parameters and the
algorithms that are specifically designed to interpret that data in a certain
way.
How close do you think we are to the use of data and the creation of
algorithms in eliminating error in recognition? It would also, of course, filter
out bias in the data. Can you comment on that using the example I've given you?
Mr. Silver: Dr. Pineau has pointed out a couple of times that there is
bias recognized in some of the predictive models, and that has a lot to do with
the collection of data that has been used to develop those models, the
population base or the knowledge base from which that data has been collected. I
think that still exists. Obviously by increasing the amount of data you are
using to engineer these systems, one would think that would help to drive that
bias out of the system, but the source is always the issue. There is the case
that perhaps patients of a particular class or type might receive more medical
care and be more subject to testing. That, then, is the data you are feeding
into your system. The issue then becomes not so much about how much is AI doing,
but the source of the data that's feeding the generation of the new knowledge
built by the AI.
Ms. Pineau: There is an aspect we have not discussed yet, and it's the
notion of uncertainty in the machines. When humans analyze information, we have
uncertainty. If a doctor looks at an image of a tumor, in some cases they may
not be exactly sure and there is some uncertainty. In many cases, some of the
algorithms we are developing have a way to calculate uncertainty. Often we don't
communicate this; we just make a prediction that the image corresponds to this
disease or to this person. There are algorithms that are able to calculate that
uncertainty. We need to develop the right way to communicate the decision of
machines in the way that it reflects the measure of uncertainty. At that point,
the person receiving that information can take different decisions based on the
measure of uncertainty. If we are not sure we are predicting a particular tumor,
we will not operate right away but order more tests. That notion is crucial in
medicine.
It's not feasible in all systems. If a car is on automatic pilot, it can't
stop and ask the driver, "Do you really think there is a truck out there?'' In
some cases, the uncertainty is not feasible because the time loop in which the
decision has to be taken is too short. But in cases where we have the time, it's
very useful information.
The Chair: To your knowledge, has there been an example of a device
being shown a figure in the crowd and then asked to pick that figure out of a
lineup, so to speak, to see whether the machine learning is more accurate than
a human?
Ms. Pineau: We haven't done that with a lineup as far as I know, but
we've certainly done it in terms of data set of people, trying to pick out a
face from a whole collection of faces and find the one that matches.
The machine is more accurate when the collection of candidates is very large.
But in some cases, if I put two faces side by side, the human is more accurate.
If the matching face has to be picked from a collection of 20,000, the machine
tends to be more accurate because the human gets tired and their eyes glaze over.
Senator Galvez: I'm an engineer, so I know that the quality of data is
fundamental to the accuracy and the precision of the output.
You just talked about uncertainty. Two years ago, I broke my ankle. I had to
see three doctors because they couldn't agree whether I should have surgery and
put in a pin or if it would heal naturally. The factors were that I am over 50
years old, and I'm a small person if I compare myself to Senator Meredith.
I understand that this data will be very homogeneous for a homogeneous
society or population and that the diagnostics will be precise, but what about a
population that is heterogeneous and multi-ethnic? How close are we to getting an
accurate diagnostic for such a population?
Ms. Pineau: It really depends on the availability of the data. In the
case where we can get data from a wide segment of the population, we are quite
good at doing prediction for a wide segment of the population. In the cases
where we don't, the AI systems can't do very much.
An area I'll give as an example is speech recognition, where a machine hears
spoken languages and translates them into written text. The performance in
English was much better than in several other languages just because we had been
training with mostly English systems. Now the performance in other languages is
slowly catching up because we are feeding our machines with data from other
languages. The same is happening in terms of medical diagnostics and the
analysis of symptoms.
When we start deploying the technology only in large, big city hospitals, we
are getting data from a portion of the population. As we start gathering data
from a much larger segment of the population, we make decisions that are better
Senator Mégie: Mr. Silver, in your document, we see legal measures
being taken in order to properly deter malicious use. Were you thinking about
insurance companies using the data or did you have other things in mind?
Mr. Silver: The fact is that in the same way one could use an AI
algorithm to determine those individuals most threatened by heart disease for
the purposes of recommending that they consider taking time off, exercising and
things like this, it could also be something that factors into a decision
regarding promotion. Perhaps not so much in Canada but in other parts of the
world, employers have a lot to do with health care, and those are legitimate
concerns.
On a wider basis, one could think of the same thing happening in terms of the
population of a jurisdiction. To some extent it does happen now. Physicians have
to make a choice as to whether a person receives a new hip, depending on their
age and health ability. Hopefully those decisions are made — and I'm sure they
are — in the most ethical ways possible given the constraints on the system, but
other factors could potentially make their way into the decision-making
processes, I guess, if it were AI and we all came to accept that it does a great
job and that's its decision.
Senator Mégie: I had thought that insurance companies might use the
data to refuse to insure people. That's actually what I had in mind.
My second question dealt with your concerns about a reduction in quality of
health care when artificial intelligence is used for economic reasons. I don't
know if I missed your answer but I am having difficulty understanding. We are
told that Canada has the best health care system. However, you write that
artificial intelligence could reduce its quality. Did I understand incorrectly?
Mr. Silver: No, you didn't. You understood perfectly. In fact, this
comes from the physicians that I spoke with about AI over the last couple of
weeks, the thought being that one could reduce the cost of health care by
putting a paramedic or an RN in place in some situations, along with AI to assist them.
That can be a very good thing. That is a way of reducing costs or, should I say,
potentially extending productivity to a greater number of patients. But it does
come with the potential threat that poor decisions are being made.
All that has to happen in that case is that the person has to be trained to
recognize when what the system is pointing to calls for a physician in the
loop to help make the appropriate choice, diagnosis, treatment, that sort of
thing. That's fine, but if we justify the use of technologies based on reducing
cost, we may not be getting the same quality of care. That was my point.
Senator Mégie: So we can get along without artificial intelligence. That
is why I asked the question.
Senator Meredith: Dr. Pineau, thank you again for being here. You are
the co-director of the Reasoning and Learning Lab. The phrase "probabilistic
system'' has been floated around. Could you explain what that means, to deepen
our learning here about this emerging technology?
Ms. Pineau: Reasoning and learning are two of the tasks necessary for
intelligence. The distinction I make is in learning, you acquire information
from data and distill that into predictions; in reasoning, you use multiple
complex facts, different pieces of information, and manipulate them to make
complex decisions. When you are talking about chess, you are doing more
reasoning than learning, because we can write out all the rules in the book.
When you are talking about recognizing faces, objects and diseases, it's more on
the learning side. We deal with AI systems that require both of these things.
We look in particular at probabilistic systems because probability gives us
the language to talk about uncertainty. I was mentioning earlier how important
it is to characterize the uncertainty we have in our predictions and decisions,
and probability is the mathematical language through which we can think about
uncertainty.
Senator Meredith: Can you comment on collaboration outside of
Canada, on what's happening over there, in terms of not reinventing the wheel?
Ms. Pineau: Within machine learning, there are sub-fields. One area I
mentioned is deep learning. This is the one everyone is excited about.
The next up-and-coming area is called reinforcement learning. At McGill
University, as well as the University of Alberta in Edmonton, we have two of the
leading centres on reinforcement learning. It is used, in particular, for making
sequences of decisions, not just one shot — here is an image; who is this? — but
how do I make a sequence of decisions?
The other leading groups in this area are all around the world. One of the
biggest is a company called Google DeepMind in London. You may have heard about
them. They have produced a Go player that beat the world champion last March,
and people around the world noticed because the game of Go is very difficult.
There is a lot of activity going on in this area, and we are collaborating
with people in the U.S., the U.K. and throughout Europe on pursuing this agenda.
The Chair: I want to thank both of you for being here today. I think
this has been a very interesting session. You have been able to combine the
potential of it, not quite the theoretical aspect, but where it may go with
practical examples to base your responses on.
Dr. Silver, I will pick up one of the comments you made with regard to the
funding of research. That has been a real concern of mine for some time —
recognition. We need to motivate young people to go on to graduate programs,
postgraduate programs and professional programs. We know the small universities
have been disproportionately successful in that historically, but today,
knowledge development and access, research facilities and so on, are critical.
Here today we have in this meeting a representative of one of Canada's
historic, largest and most powerful research institutions, recognized around the
world at that level. We have a representative from one of the small
universities, one of the outstanding ones over a long period of time.
On the futuristic issue of artificial intelligence, robotics and 3-D
printing, nothing could better illustrate the breadth of research, and our
need to recognize it, than ensuring we can always motivate young people to
eventually go on to advanced degrees at McGill and other great universities in
this country. I'll add that pitch to our meeting today.
Once more, I want to thank my colleagues for their questions that continue to
help us move forward in this study.
(The committee adjourned.)