
Proceedings of the Standing Senate Committee on
Social Affairs, Science and Technology

Issue No. 19 - Evidence - March 29, 2017


OTTAWA, Wednesday, March 29, 2017

The Standing Senate Committee on Social Affairs, Science and Technology met this day at 4:21 p.m. to continue its study on the role of robotics, 3D printing and artificial intelligence in the health care system.

Senator Kelvin Kenneth Ogilvie (Chair) in the chair.

[Translation]

The Chair: Welcome to the Standing Senate Committee on Social Affairs, Science and Technology.

[English]

I'm Kelvin Ogilvie, from Nova Scotia, chair of the committee. I am going to ask my colleagues to introduce themselves, starting on my left.

[Translation]

Senator Cormier: René Cormier, from New Brunswick.

[English]

Senator Dean: Tony Dean, Ontario.

Senator Hartling: Nancy Hartling, New Brunswick.

Senator Stewart Olsen: Carolyn Stewart Olsen, New Brunswick.

The Chair: I want to remind us all that today we are continuing our study on the role of robotics, 3-D printing and artificial intelligence in the health care system.

We are absolutely delighted with our two witnesses today, and I will welcome them in the order that I will call them. We didn't arm wrestle over who would go first, so unless you have an objection, I'll go with the order they're listed on my agenda.

Dr. Bernstein, I welcome you as president and CEO of the Canadian Institute for Advanced Research. We are delighted to have you here. We welcome your presentation.

Alan Bernstein, President and CEO, Canadian Institute for Advanced Research (CIFAR): Thank you very much, senator. I am very pleased to be here. I think senators will be aware this is a timely topic, certainly in the last few weeks since the budget was brought down.

I'm going to talk about artificial intelligence and the future of health care. Perhaps I should start with Artificial Intelligence 101, or I will try to. I've included a slide deck. As I looked at it, I'm not sure it really explains what I want to talk about, but I'll try a few words and then give some examples, and we'll go from there.

CIFAR actually started a program a number of years ago led by Geoff Hinton. In the last few days, you may have seen a picture and stories about Dr. Hinton. He is really the founder of deep learning, or deep neural networks, which has transformed artificial intelligence. Geoff was really the guy who brought together a group of scientists in this country and around the world — which is what CIFAR does — to draw inspiration from how we think the brain learns and then apply that to computers.

How we think the brain learns is that the cells in your brain, the neurons, form connections that are called synapses. As you learn more and more as a child, for example — let's say two plus two is four — and keep learning, the strength of those connections gets reinforced, gets stronger, and that gets hard-wired. I'm sure you have all heard that expression. It gets hard-wired in your brain. What Dr. Hinton and his colleagues did was to try to model that mathematically in computers using algorithms, or instructions for the computer.

Any one neuron may have three or four neurons connecting with it, so there are reinforcements, positive and negative, going on in our brains all the time as we learn. The layers of those neurons go up and up until you finally say, "Ah, two plus two is four.'' That's why this was called deep learning, because of the layers, the hierarchy, of the neurons.
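To make the layered picture concrete, here is a minimal sketch in Python of the idea just described: each artificial "neuron'' computes a weighted combination of its inputs, and learning amounts to adjusting those connection weights. Everything in it is illustrative and not drawn from the slide deck.

```python
# A minimal sketch of "layers of neurons": each neuron takes a weighted sum
# of its inputs, and the weights play the role of synaptic strengths.
import numpy as np

def layer(inputs, weights, biases):
    """One layer of neurons: weighted sum of inputs, then a nonlinearity."""
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # an input (e.g. pixel features)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # first layer's "connections"
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # a second, deeper layer

hidden = layer(x, w1, b1)    # first layer of neurons fires
output = hidden @ w2 + b2    # the deeper layer combines them into a decision
print(output)
```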

One of the slides I've shown is this slide here, the deep learning basics. It's meant to be a graphic representation of the inputs from one neuron to the other. That may have confused you more than anything, but let me give you an example of something we all know, just to sort of think about it.

If you're driving along in a car and you see a ball come onto the road, if the ball is coming from your right, you're probably going to look to the right for a child running after the ball, not because you have seen the child, but because you know from previous experience that balls don't roll onto the road by themselves. There's usually a child playing with a ball, so you'll apply your brakes just to be sure. Nine times out of ten there will be a child. That's why you look to your right.

The early computers, such as Deep Blue, that beat people at chess 20 years ago were not using learning. They were using the speed and the large memory banks of a computer to fish out possible moves and to try them out, but the computer never learned. With deep learning now, what is actually going on is the computer is learning to play chess just as a human would, at some level.

That's my 101 of artificial intelligence. Let me give you an example applied to medicine. You'll see a slide starting with "Nature'' in the slide deck.

This paper came out a few weeks ago. What the scientists and clinicians did in that paper was present to a computer about 130,000 digital images of skin lesions. Some lesions were rashes, insect bites, allergic reactions, acne and cancer. They fed the computer the bona fide diagnosis of each of those lesions, based on what they knew from clinicians looking at them. So the computer was given what is called a learning set.

In a sense, that's how we learn. We learn by experience. If you're a medical student, you will see a lot of pictures, and your instructor will say, "That's a rash. That's a mosquito bite. That's melanoma.'' The same thing happens with this computer, in a sense.

They then gave the computer new pictures where there had not yet been a definitive diagnosis, and they also gave the same pictures to dermatologists. They compared the ability of the computer to diagnose correctly what those lesions were with that of the dermatologists.
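The workflow described here is standard supervised learning: train on a labelled learning set, then measure performance on cases the model has never seen. Below is a minimal sketch using synthetic data and a simple classifier, not the deep network and roughly 130,000 clinician-labelled images used in the actual study.

```python
# Train a classifier on labelled examples (the "learning set"), then test it
# on held-out cases it has never seen (the "exam"). Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen cases:", model.score(X_test, y_test))
```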

If you go ahead two slides, you will see the slide called "Deep learning outperforms the average dermatologist . . . .'' I think that headline says it all. A computer program with deep learning is at least as good as, if not a little bit better than, a skilled dermatologist every time in correctly diagnosing whether a lesion is skin cancer, a melanoma or a nevus. It looks for certain patterns of the abnormality.

This is a very powerful example — this came out a few weeks ago — of how deep learning will be applied to medicine. Basically, when you learn something, what you have really learned how to do is predict with a high degree of accuracy something in the future. So when you see a ball, you can predict there will be a child. If you see a certain lesion, you can predict, even before the pathologist gets his or her hands on it, whether it's cancer or not. Deep learning allows the computer to predict just as we predict. We could not go about our daily life if we couldn't predict what's happening around us to some level. It would just be chaos all the time. We have a fair degree of prediction that when you walk into this building you will have to go through the X-ray machine. At least we have to. A computer with deep learning can predict, and that's basically what it does.

If you can apply that kind of thinking, then — I've given you an example of skin cancer, and the next is breast cancer. The error rate, you'll see here, is 3.5 per cent for the pathologists. The AI error rate for the computer is 2.9 per cent, so the computer beats the pathologist. If you actually combine the two, the error rate drops to half a per cent. The definitive measure of an error is the pathologist actually doing a biopsy and saying yes, it's breast cancer or not.
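As a back-of-envelope check on those figures: if the pathologist's and the computer's errors were fully independent, and the combination failed only when both were wrong, the combined error would be about 0.1 per cent; the reported half a per cent suggests their errors overlap only partly. The independence assumption below is ours, for illustration only.

```python
# Why combining two imperfect readers can help. The 3.5% and 2.9% figures
# are from the testimony; the independent-errors assumption is hypothetical.
pathologist_error = 0.035
ai_error = 0.029

both_wrong = pathologist_error * ai_error  # if errors were independent
print(f"combined error under independence: {both_wrong:.4%}")  # ~0.10%
# The reported combined error of ~0.5% sits between this ideal and either
# reader alone, consistent with errors that only partly overlap.
```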

The next example is on diabetic eye disease, and it's the same kind of thing, and so on.

I think the take-home lesson from this, senators, is that artificial intelligence, and particularly deep learning, which is a made-in-Canada discovery or invention, whatever one wants to call it, is going to transform medicine.

Geoff Hinton, who is a sort of guru in this field, was quoted in The New Yorker this week. There's an article by Siddhartha Mukherjee, who wrote The Emperor of All Maladies: A Biography of Cancer, on AI applied to medicine. Dr. Hinton is quoted as saying, "They should stop training radiologists right now.''

I think that's probably an overstatement. I think there probably will be a need for radiologists into the foreseeable future, but I think it's also fair to say that, increasingly, we will take this device, and if we think we have a lesion, we will take a picture of it, send it to the email address of the computer, and the computer will send back to us and/or to our doctor the answer of what the lesion is within seconds, in this case sticking with skin cancer, and on it goes.

There was a young man, Brendan Frey, who is a CIFAR fellow in two of our programs — genetics and artificial intelligence — who started a company called Deep Genomics. This will be my last example. What Dr. Frey is doing is looking at the sequence of our DNA, of our genes, trying to identify the differences between two people that actually matter for health.

I look around this room, and all of us have about 3 billion bases of DNA. That's a lot of information. For sure, about 0.01 to 0.02 per cent of that DNA differs amongst us. Not all those differences matter. Some of us have black hair and some of us have brown hair, red hair and no hair. Some of us are males and some of us are females. But some of us have a gene that will predispose us to heart disease.
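A quick arithmetic check on the figures quoted (3 billion bases, of which about 0.01 to 0.02 per cent differ between two people) shows why this cannot be triaged by hand:

```python
# Arithmetic on the figures quoted in the testimony.
genome_size = 3_000_000_000
low, high = 0.0001, 0.0002   # 0.01% and 0.02% expressed as fractions
print(f"{genome_size * low:,.0f} to {genome_size * high:,.0f} differing bases")
# 300,000 to 600,000 differences per pair of people: far too many to sort
# by hand, which is why picking out the ones that matter is framed as a
# learning problem.
```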

What you would like to know is which differences matter for health and which differences don't. That's a learning challenge. If you have enough information and enough background on the population and feed it to a computer with deep learning, that computer can sift through all the noise and identify the signals. So Brendan Frey has applied that in test cases for children with spinal muscular atrophy and cystic fibrosis, and for colon cancer in adults, with a high degree of accuracy. You will see a fusion, or coming together, in this case of two very powerful technologies: genomics and artificial intelligence.
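Here is a hedged sketch of that "sifting signal from noise'' idea: train a model on variants labelled as mattering or not, then ask which inputs carry signal. The data is synthetic and the model is a simple stand-in; Deep Genomics' actual systems learn from DNA sequence directly, not from toy features like these.

```python
# Train on labelled examples, then rank which input features carry signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                 # 10 hypothetical variant features
y = (X[:, 2] + 0.5 * X[:, 7] > 1).astype(int)   # only features 2 and 7 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
ranked = np.argsort(model.feature_importances_)[::-1]
print("features ranked by learned importance:", ranked[:4])  # 2 and 7 lead
```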

Again, artificial intelligence depends on data. We have a lot of data stored in us as a result of our experiences going through life. A computer program with AI needs the same thing. It needs to be fed data to learn from. That's the learning set. From that will come pretty accurate predictions as to what the situation is. We're going to see that applied to diagnostics, I think, initially. I think you'll hear a little bit from my colleague about it being applied to surgery, and perhaps prevention and other things in health care.

I'll stop there, senator, and yield the floor.

The Chair: Thank you. We will be having a witness on deep genomics. We're trying to cover the field as best we can. That was an excellent summary and introduction, and we thank you for it.

We will now turn to Dr. Christopher Schlachta, Medical Director of Canadian Surgical Technologies & Advanced Robotics, but he is here today as an individual. Dr. Schlachta, please.

Dr. Christopher Schlachta, Medical Director, Canadian Surgical Technologies & Advanced Robotics (CSTAR), as an individual: Thank you very much. I appreciate the privilege of appearing before the committee today. I will preface my comments by saying I'm also a surgeon, so this is a context that I can follow. I have provided a written statement in advance and I will read through that, but I will try to embellish and highlight my comments a little bit more.

Just to begin, one of the most recent revolutions in surgery is the concept of minimally invasive surgery. For 150 years of modern surgery, we have never made apologies to our patients for the trauma and suffering that is caused by making an access incision to perform a procedure in a body cavity. That access incision is technically an unnecessary part of the operation. When you think about what we're doing in surgery inside your body, what is important is what we do inside the belly, not the incision that we make to get there, except that up until recently, we have never had another means of getting our eyes and our hands inside your body to do that surgery. That incision causes pain, delayed recovery and complications. It's always been needed simply because we needed to get our eyes and hands inside.

In the late 1980s, a paradigm shift occurred, which was facilitated by the introduction of computer-assisted surgery, and that's the reason for this background. Diagnostic laparoscopy is something that had been around for decades but was restricted because it was quite awkward. A scope was a device that the surgeon had to hold in one hand, and they could look in the end and see through a small hole into a joint, your pelvis or what have you. The restrictions were that you had to be careful how you held the scope and not contaminate the sterile scope with your eye as you were looking in there. Only you could see what you were looking at, and no one else in the room. If you were lucky, you had one hand free, for example, to do a laparoscopic tubal ligation, but that's all you could do; it was quite limited.

Once a modern digital camera was attached to the end of that scope, that really opened up the world for minimally invasive endoscopic surgery. It freed up the surgeon's second hand and it also allowed everybody in the room to see what the surgeon was seeing, including surgical assistants who could have their own instruments. So although laparoscopy had been around for a long time, it wasn't until the late 1980s that we saw this explosion of therapeutic keyhole surgery.

That started with gallbladder surgery. Many of you might know someone who has had their gallbladder out; that used to be a week to 10 days in hospital. Within a period of seven years, that changed, and nowadays we do about 80 per cent of our gallbladder surgery as an outpatient procedure because we do it through small holes. In 2017, virtually any operation that is performed in a body cavity can now be performed in some kind of minimally invasive, endoscopically guided fashion, all with the goal of eliminating that unnecessary incision and making smaller and smaller holes.

Now, a little bit about CSTAR. CSTAR was founded in 2003 as a research centre focusing on developing next-generation surgical robotics. We have a number of engineers affiliated with CSTAR who have particular expertise in haptics, which is adding the sense of touch to robots, and in teleoperation.

As an example, at CSTAR, we have developed a laparoscopic instrument for the da Vinci robot, which performs intraoperative tissue palpation and ultrasound imaging. This work received the Best Innovation Prize at the 2015 Surgical Robot Challenge at the Hamlyn Symposium at Imperial College, and there were a large number of competitors. I mention that example because our own laboratory studies have demonstrated that that robotic finger, if you will, is more sensitive than the human finger, and that provides some context for our discussion.

CSTAR has benefited from peer-reviewed funding through federal programs such as CFI, NCE, NSERC and CIHR. Our engineers would argue that we're the best-equipped medical robotics research centre in the country. CSTAR engineers have a wide collaborative network, including the AGE-WELL Network, which you have heard about previously. Our engineering graduates are in great demand by the robotics industry and by Canadian companies developing computer-assisted technologies.

In addition to developing technologies, let me talk about the word innovation for a minute, because it's a highly overused word right now. I would argue that true innovation is not just invention but also the translation of promising therapies into front-line care.

We've taken on the responsibility of medical training as well. You've heard lots about medical error being a significant cause of morbidity and mortality. This is magnified in an environment where the pace of change and the introduction of new technology is not just rapid but accelerating. There was a time when, if you wanted to be a surgeon, you finished medical school and went into four or five years of training, and that would carry you through a 30- or 35-year career. But the technology now being introduced to surgery and medicine in general is coming at such a pace that we have to consider retraining the entire workforce at regular intervals. We are not equipped to handle that now, and a lot of thought needs to be given to that.

Patients will be harmed and new, promising technologies will fail if they are not safely introduced into practice through responsible training programs. At CSTAR, we opened the Kelman Centre for Advanced Learning as a surgical skills laboratory, and we currently run a high volume of simulation training programs to help health professionals, not just surgeons.

One advantage of computer-assisted surgery is that the technology is digital, so you don't need a patient for training if you can create a virtual simulation at the other end. Think about how we have traditionally trained our medical students and surgeons over the years. Our patients are the people on whom they train. It's a program of graduated responsibility — a highly supervised environment that has worked well until now. But it would be a relief to most patients if we said we were no longer going to practice on patients and would instead train on virtual reality models and mannequins, and through simulation get trainees up to a certain standard of competence.

One of our largest research grants to date has been an Ontario Research Fund grant on developing computer-based simulation technologies. You have previously heard from Dr. DiRaddo from the NRC. We collaborated with him many years ago on their NeuroTouch program.

Consistent with the training program, we've also developed an interest in telepresence. We collaborated with Intuitive Surgical, the makers of the da Vinci robot, currently the only commercially available — or the standard commercially available — multi-functional surgical robot. We collaborated with them on their first test of the telesurgery prototype. Remote telesurgery currently faces a number of feasibility challenges, and I'd be happy to discuss those later.

We've also developed a telementoring program, so if you can't operate from a distance, at least you can help a surgeon from a distance. I'm currently the chair of a telementoring task force for the Society of American Gastrointestinal and Endoscopic Surgeons.

Given our long-standing focus on robotics surgery, combined with our simulation and training programs, Intuitive Surgical, the company that makes the da Vinci, designated CSTAR to be their Canadian training centre for da Vinci surgery. We train surgeons through the da Vinci-specified criteria to bring them up to standard before they return to their hospitals. Several surgeons have trained with us.

We also deliver nursing coordinator training programs, because the nurses are really the ones who do all the work behind the robotic systems.

I'm at CSTAR now because I firmly believe that the future of surgery and medical care in general lies in the interposition of a computer between the patient and the health care providers. Just as artificial intelligence can augment diagnosis and decision-making, image-augmented surgery and mechatronics give the surgeon superhuman capabilities to advance minimally invasive therapies and offer many potential mechanisms to reduce harm due to error.

We face a number of challenges in realizing that vision, but with the experience and expertise we have acquired over the last few decades, I think that goal is achievable. Canadian health care providers, in particular, and the Canadian research centres are actively contributing to this research goal.

I'm happy to answer questions.

The Chair: Thank you. I will open up the floor to my colleagues.

Senator Stewart Olsen: That was a very informative presentation from you both, understanding that we're starting at ground zero with a lot of this.

Dr. Bernstein, I would like to take you back to the beginning. We'll use dermatology. Who picks the slides and the actual case studies and feeds that into the computer? Then who says what the confirmed diagnosis is? How do you teach that computer? How do you find the information?

Mr. Bernstein: In the study that I referred to — and I will generalize, as I think this will be true generally — there were slides at various places around the United States. This was a group associated with Google that did this work. They collected slides, and they knew the definitive diagnosis. If it was cancer, they knew that from the pathology, and if it was a rash or other things, they knew that from the local clinician. They collected a lot of samples around the United States at various teaching hospitals that had collected bona fide skin lesions over the years. They "fed'' that into the computer. So it was a lot of samples — 194,000 samples.

This is not your question, but it allows me to make another point. If you're a medical student, you don't look at 194,000 samples to learn to become a doctor. That tells us that deep learning is probably at a very early stage in the science. We can expect to see much more powerful forms of deep learning, or whatever it's going to be called, over the next five to ten years as this research continues.

Senator Stewart Olsen: Dr. Schlachta, kind of the same thing. If you're programming a computer to do surgery, who actually feeds in the best way to do the surgery, or is that the same and you take a vast group? How do you individualize it?

Dr. Schlachta: Programming a computer to do surgery would be suggesting a level of autonomy in surgical robotics that doesn't exist right now. The main robotic systems we use are called master-slave systems. The robot does not do anything I don't tell it to do, specifically by moving my hands and fingers. It reproduces my hand's actions precisely.
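As an aside, here is a minimal sketch of the master-slave idea: the instrument tip simply follows the surgeon's hand. Motion scaling and tremor filtering are documented features of such systems, but the numbers and the smoothing method below are illustrative only.

```python
# The instrument tip follows the surgeon's hand: smooth out high-frequency
# tremor, then scale the motion down. Values are hypothetical.
import numpy as np

def follow(hand_path_cm, scale=0.2, window=5):
    """Moving-average tremor filter, then motion scaling."""
    smoothed = np.convolve(hand_path_cm, np.ones(window) / window, mode="valid")
    return smoothed * scale  # 5 cm of hand travel becomes 1 cm of tip travel

rng = np.random.default_rng(1)
hand = np.cumsum(rng.normal(0.0, 0.5, size=100))  # a jittery hand trajectory
tip = follow(hand)
print(f"hand range {hand.max() - hand.min():.1f} cm "
      f"-> tip range {tip.max() - tip.min():.1f} cm")
```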

There are some robotic systems that have the potential to operate semi-autonomously. There's a Mako robot for knee and hip arthroplasty. The first time I met with the makers of that system, they presented it as a semi-autonomous robot: they would do CT scanning preoperatively, be able to sit down and work with three-dimensional images to design the perfect knee implant, let's say, and then the surgeon would provide exposure of the knee joint to the robot. You could basically push a button, and it would carry on doing the reaming of the joint and providing a perfect interface for the prosthesis. It is currently being marketed as a system where the surgeon holds the device and guides it himself, and the robot provides boundaries where the surgeon can feel pushback from the robot, signalling, for example, that you've gotten to the border of where you want to be reaming that bone. But it doesn't act autonomously right now. There is some resistance to that.

Senator Stewart Olsen: I can see, as this technology develops, how extremely important it's going to be to monitor how the data is input in both instances. Ethically speaking, is there anything set up that we would look for, or are you beginning to think about that as you move forward — the expertise needed to provide this? It will essentially take over in a lot of cases, especially in remote areas, which I think is wonderful. But at the front end, we're going to have to really be careful.

Mr. Bernstein: I'll take a stab at it. This is not a medical example, but if you go on the Amazon website and are looking for things you want to buy, Amazon has an artificial intelligence program that is already scanning your input, as well as millions of other people's input from around the world. Using that data, it decides how much of another product to order or what to drop from the website. It's doing that in real time, and you're inputting the data via your purchase decisions.

Dr. Schlachta: If I might, there was an article this month in a journal called Science Robotics. I believe it's an editorial with many authors; one of them is our director of engineering. It provides a classification system for surgical robots, from class 0 to class 5, and grades them according to their level of autonomy, with class 0 being the mechatronic master-slave system and class 5 being the fully autonomous thinking robot that does the surgery itself. Even in their article, they call class 5 science fiction at this point, but certainly that's the goal we're all looking for.
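That scale can be sketched as a small Python enum. The testimony names only class 0 and class 5; the labels for the middle tiers here are our own illustrative placeholders, not quotations from the editorial.

```python
# The class 0-5 autonomy scale described above, as an ordered enum.
from enum import IntEnum

class SurgicalAutonomy(IntEnum):
    NO_AUTONOMY = 0           # mechatronic master-slave: robot mirrors the surgeon
    ROBOT_ASSISTANCE = 1      # illustrative label
    TASK_AUTONOMY = 2         # illustrative label
    CONDITIONAL_AUTONOMY = 3  # illustrative label
    HIGH_AUTONOMY = 4         # illustrative label
    FULL_AUTONOMY = 5         # the "science fiction" fully autonomous surgeon

print(SurgicalAutonomy.NO_AUTONOMY < SurgicalAutonomy.FULL_AUTONOMY)  # True
```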

Just like the invention of the calculator and the computer didn't put accountants out of work, I don't think autonomous robots will put surgeons out of work. There needs to be that guidance and advice. There is a lot about surgery that still involves judgment. Even with significant artificial intelligence, I think the public will have a lot of concerns about who's making the decisions. Where are you going to cut? Who will you blame if the robot makes a mistake? A surgeon can apologize and say, "I'm sorry, I made a mistake,'' but if the robot makes a mistake, how well will that sit with the patient who suffered that complication?

Senator Stewart Olsen: Thank you very much.

The Chair: Dr. Schlachta, I wonder if you can provide our clerk with the identity of the reference that you just referred to, the article.

Dr. Schlachta: I will do so.

The Chair: I wanted to get more clarification on what I think Senator Stewart Olsen was looking for. In the surgical examples that you gave, you gave one of more conventional surgery, where the surgeon is in charge and carrying out the surgery, and then the last one, of which we're beginning to see more examples, where the intelligence is guiding the limits to which the surgical tool can move and helping the surgeon in the real inner space of the surgical cavity, and you made that one very clear.

Going back to the one where the surgeon is guiding the technology in the surgical process, the question I think that is left is the degree to which the surgical device contributes to the ability of the surgeon to carry out an almost perfect surgery. One of the elements, of course, is stability. If you have, in general, a robotic tool, at least some articles indicate that the stability of the device is one of the aspects that help a surgeon.

Could you help us understand a little bit more about what the device does in terms of making the surgery much more accurate and effective under the control of the surgeon?

Dr. Schlachta: There are many ways in which that occurs, so I will try to be as succinct as possible.

The simplest and most straightforward one is, as you've already mentioned, the stability of the system. You also factor in ergonomics and the ability to have image-augmented surgery. These things are all important from a surgeon's perspective because, when you think about long, complex operations and the fatigue of the surgeon, the ability to use a system that causes less strain makes it easier overall for the surgeon to maintain their focus and their concentration. I know that's not specifically what you were asking me about.

As we move forward into systems where the computer exerts some control over the operation, consider, for example, what we know from robots that have been involved in arthroplasty surgeries: even the best surgeons can achieve contact rates with their implants in the 70 to 80 per cent range. Use a robot, and you're well over 90 per cent in terms of your contact rate. If you can plan these things preoperatively and do a perfectly precise operation, then presumably you will get a better outcome. There are a bunch of caveats associated with that.

There are challenges right now, though. It's a system that could be semiautonomous, but we still have surgeons preferring to actually guide the instrument themselves and have the computer warn them when they're reaching the edges and so on. There are many other safeguards that can be built into the system, though. You can program the system to have no-go zones. If there is a potentially catastrophic complication that could be suffered during the course of surgery if a robot arm gets moved into the wrong part of the body, you can program the system to say "don't allow the arm to go over there,'' and it prevents the surgeon from being able to do that. Those are very basic things.
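A minimal sketch of that no-go zone check: before executing a commanded move, the system tests whether the instrument tip would enter a forbidden region and refuses the motion if so. The geometry and thresholds below are hypothetical.

```python
# Refuse any commanded tip position inside a forbidden sphere around a
# critical structure. Coordinates and radius are illustrative.
import numpy as np

NO_GO_CENTER = np.array([0.0, 0.0, 5.0])  # hypothetical critical structure
NO_GO_RADIUS = 1.5                         # centimetres, hypothetical

def permit_move(target):
    """Return False for any commanded tip position inside the no-go zone."""
    if np.linalg.norm(target - NO_GO_CENTER) < NO_GO_RADIUS:
        return False  # push back: "don't allow the arm to go over there"
    return True

print(permit_move(np.array([0.0, 0.2, 5.5])))  # False: inside the zone
print(permit_move(np.array([3.0, 2.0, 1.0])))  # True: safe to proceed
```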

Then there is image recognition. For important critical structures the surgeon needs to be able to see or identify as they are performing the operation, we're on the verge of having image recognition capability that will allow the system to say to the surgeon, "Here's the ureter. In case you haven't noticed it already, here it is. You don't have to go looking for it; I found it for you already.'' That's the very basic level of what we're getting now.

As we get into more autonomous things, the idea would be that presumably the surgeon would assist with the preoperative plan and then the robot would carry out that plan under some kind of surgical supervision.

The Chair: Thank you. That was very helpful.

Senator Seidman: Thank you both very much for your presentations.

Dr. Bernstein, if I could approach with you the basis of the deep learning that you tried to describe for us, because obviously it is the essence of artificial intelligence. One of your slides, by Lukas Masuch, talks about deep learning and describes it as exceptionally effective at learning patterns, utilizing algorithms and lots of information. Basically, you try to simulate the brain's pattern of iterations and data experience and then produce these algorithmic responses.

I'm thinking now about health care specifically. Using this as a basis for diagnosis and decision-making in medicine, for example, some would say that there is a certain art of diagnosis and decision-making in medicine that goes beyond the sort of basic iterations of data. Is that true? In developing artificial intelligence, I don't know how you could build that in, but is there an attempt to build that in? The other component would be, of course, trying to build sufficient moral and ethical safeguards into the algorithmic decision-making process.

Mr. Bernstein: Two great questions or points. I'm not a medical doctor; I'm a Ph.D. My sister is a medical doctor. She's a family doctor. When the paper in Nature came out that I described to this committee, I talked to her about it, and she said she would never trust a computer to diagnose her patients. So we had a very good discussion, I thought, about it. She didn't budge, but I made the point that a responsible physician would only come to trust the computer after a while — it's a machine, after all — but once the clinician gained confidence that the computer was making accurate diagnoses, then the clinician could actually spend more time interacting with the patient.

I think we've all had this experience. For some reason, actually knowing what's wrong with you, even if there is not a good treatment, makes you feel better. "What's wrong with my skin, doctor? I don't understand what's going on. I have this horrible rash. Please tell me what it is.'' The doctor will say, "You have. . .'' some long name. You have no idea what that means, but he or she does, so it makes you feel better. And I think that has to come from a human. I think this has to do with human-human versus human-machine interaction, so it doesn't speak to the capabilities of the machine; it deals with our own psychology, at least at this point, and our ability to relate to another human being versus to a machine.

But I do think it will allow physicians to spend more time with their patients and actually talk to them about the implications of that diagnosis, whether it's serious or not, and what the options are for the patient, as opposed to, "You have melanoma. I'll see you next week. I have to go to the next patient.'' So I think it's a positive way of looking at it.

The Chair: Dr. Schlachta would like to be included on this too, senator.

Senator Seidman: Absolutely, because you have the robotics with the artificial intelligence, so how will you handle this?

Dr. Schlachta: If I might just address the issue of the art of medicine, I teach medical students and residents, and a lot of this discussion about art comes up, and it's really a question of experience and intuition. When you have enough experience seeing something, you start to get a sense. What is intuition? It's not a supernatural thing. It's complex pattern recognition, and it's something that we haven't yet learned how to articulate.

I would argue that any computer system that's intelligent enough will be able to develop the same intuitive capacity that we have. We just can't explain how we figured it out. I have many medical students ask me, "How did you know that?'' I'll say, "I don't know; I just got a feeling.'' But that feeling is a rational process that takes place; I just can't explain it yet. I'm sure a computer can do that just as well as we can, if it's smart enough.

Mr. Bernstein: If I can jump in there, that is really deep learning. I'm not advocating for machines here over humans, but the whole process of deep learning in a machine is feeding the computer data; the computer then, using that data, makes a diagnosis and gets immediate feedback on whether that was right or wrong. In the example I gave you of 130,000 data sets, or slides in this case, it gets that feedback, and then the computer "learns'' and acquires that intuitive feeling, just as a medical student does — or at least outwardly it does — and then in this example beats the dermatologist almost every time.

That's because the computer has the speed and memory retrieval and the ability to do things we don't have. There is a learning aspect to it. What we call intuition is actually learning.

Senator Seidman: Experience, to a certain extent.

Mr. Bernstein: Exactly. I can give one more example: This is in The New Yorker article, so I'm stealing it from there. If you show a computer a picture of a bicycle, it's easy for the computer to say that a bicycle has two wheels, handlebars and a seat. It is straightforward. It is the same for a child, but when you teach a child to ride a bicycle, there's a learning that goes on. When you ride a bike and you want to turn left, you know you don't lean to the left. The bike will fall over. You lean to the right. You've learned. That's an experiential thing that's actually hard to teach, but every child who learns how to ride a bike learns it, and you don't forget that. That becomes almost hard-wired. That's what we're talking about.

Senator Seidman: That's really helpful in terms of the deep learning piece, the intuition and the art of it. What about the question that always comes up that has to do with the ethics? How do you build that into a system of artificial intelligence and robotics, or do you have to? Do you leave that to outsiders to deal with in some way? Can you build in ethical safeguards so a machine or a robot isn't doing things in ways that —

Dr. Schlachta: It's a very complex issue. It's something that is being discussed quite a bit. A couple of weeks ago, I was participating at Western University in a Rotman panel on robotic ethics. I was the surgeon on the panel, but there were experts on lethal autonomous weapons and various things. It's a big issue.

Ultimately, you can build in basic rules. You can go back to Asimov's three laws and build in a hierarchy of rules. The great thing about those books was what happens when those rules fail and you have to try to debug the software. That's what is so interesting about those stories.

The other thing that gets overlaid on that is compassion, which we've already addressed, but that is a really important consideration, and the greatest fear that everybody has on both sides is an unfeeling machine. Are we going to reach a point where we have spiritual machines like Kurzweil describes in his book? I suppose anything is possible with artificial intelligence.

The Chair: The ethical issue is another deep learning issue, because you've got so many levels of ethics. The simplest one would be, are you confident that the automated technology you're using is highly effective? What are the risk levels and have you properly informed the patient in the normal way, which you do on an ongoing basis with surgery in any event? Then there is the issue of whether the instrument eventually will have the capacity to take over the operating theatre and remove parts at whim. We're not going to get into the depth of the issue here. You gave a good initial answer.

I think I've got you off the hook, Dr. Schlachta. The article you referred to is in Science Robotics, and you mentioned it was an editorial; right?

Dr. Schlachta: Yes.

The Chair: And the first author is Dr. Yang?

Dr. Schlachta: That sounds correct. March 17 or 24, I think.

The Chair: That's correct. We will let you off the hook for getting us further information.

Dr. Bernstein, in your answer to a senator, you gave an interesting example of the attitude of acceptance of diagnosis — that is, you want a real physician to give the diagnosis. We had earlier testimony that once the diagnosis has been made and the surgery is being carried out, the patients are pleased to know that a robotic system is helping to guide the surgery. This is a very interesting complementary aspect in terms of human nature, isn't it? We were informed that there is a very high level of confidence in the machines at that level, but clearly in terms of the diagnostics, your limited example would suggest there's a different approach that the human will take.

Mr. Bernstein: As a society, we are starting with the assumption that humans are more reliable in this space than machines. That's probably understandable. I would not be surprised if we evolve, with experience, to the other view.

I was interviewed on Saturday on TV about artificial intelligence, and the story just before I went on was about an Uber car being hit by a human-driven car, and the Uber car was pushed over. Uber withdrew all their smart cars from the road. The assumption in the story was that something went wrong with the machine-driven car. But if you think about it, the human-driven car hit the Uber car and pushed it over. It's a reasonable assumption, if not more reasonable, that it was human error, not machine error. We know from high insurance rates that humans make a lot of mistakes when they drive cars.

The Chair: I don't want to keep this going, but the further analysis on that showed the argument humans are making: a human might have seen something that the Uber car didn't, and even though the fault was in the colliding vehicle, the human instinct is that perhaps a human driving the Uber car might well have taken evasive action. Again, we could keep this going for some time. Your points are extremely well made, but those are the issues that will come out as we go through this.

Senator Neufeld: Thank you, gentlemen. It's very interesting information we're hearing about.

You talked about having a rash on your hand or something, and you take a picture with your iPhone and a computer will tell you immediately. What happens if it's something inside of you instead of something on your skin, arm or hand? How do you do that? Is that through DNA that you put into your iPhone or computer?

Mr. Bernstein: The example I gave happened to be on skin lesions, but there are now starting to be examples of biopsies looking at breast cancer, where that's inside of a woman, or looking at electrocardiograms for heart disease. There are lots of ways of looking inside, including X-rays.

CIFAR held a workshop in Europe with Siemens, the electronics and diagnostics company, to look at how AI will play out when applied to medical diagnostics and X-ray machines. Companies like Siemens and General Electric and start-up companies are very interested in applying artificial intelligence to look at these.

I will repeat myself a little bit. When you feed the computer a training set of thousands of X-rays with the right answer (for example, this is a problem in your stomach, your pancreas or some internal organ), then with the current science, after about 100,000 of those pictures and being told the right answer, the computer will learn what the right answer is for the next one. For any medical diagnostic one can think of, whether looking at a skin lesion with an iPhone, an X-ray, an NMR or MRI machine, or a biopsy, one could in theory — and I think we're seeing that now increasingly in practice — feed that information into a computer with a deep learning program.

Senator Neufeld: I don't think I'm so afraid of computers making those decisions, because computers make lots of decisions for us now in the medical field, such as blood tests and all those things. That's not done by a doctor picking through it with something to look at. It's done by a machine, and it's been done for a long time. That kind of thing is not an issue that scares me. I suppose it depends how far along you're going to get.

I come from a time — I'm old enough — where we didn't even have cash cards. You had to go to the bank to cash a cheque. You got the money and you actually saw the $20 bills in your hands and everything was fine. When the cards came out, at first I thought I'm not getting one of those because I don't trust a computer, but I love the machine. It didn't take me long to find that out. You pop it in and it's all done automatically and it's correct. You usually don't check your account. It's all there. Those kinds of things are wonderful in how we're going to move forward in dealing with health care.

The other part is that robotics are used in operations. I've had doctors in my heart, and they didn't do that through cutting it open and having their hands there. They did it through some form of robotics. When that was being done, I just wanted it fixed. I wasn't worried about it being crazy or thinking how can I trust that kind of thing? Cameras are going in there. How can I trust that? I still trusted the physician who was actually doing it.

I think there is going to be a bigger acceptance of what is coming in the health care field than what we maybe expect about how AI and robotics are actually better for us. I'm probably old enough that I should be afraid of some of that, but I'm not, actually.

Dr. Schlachta: I'm happy to respond to that comment. I think that's a perfectly insightful comment.

Patients want to make sure that they get safe care. The informed consent discussion that takes place when you propose a therapy and an operation — and you go through the discussion about potential risks and recognized complications — is the scary part of the discussion. No one wants to hear about 1 per cent of this and 5 per cent of that. They want to know they're going to be in the "almost always goes well'' group.

I would contest the comment that you might have heard from a previous witness about how patients would rather have a machine do part of the operation. I have encountered many patients who have been resistant to the notion of having a robot involved in their surgery. Their main fear is the compassion side that we've talked about already, and how do they know the robot isn't going to make a mistake? Hollywood hasn't helped in that regard, but how do they know there isn't going to be a glitch of some kind where the program will go off the rails, whereas they know the human will keep things under control and do it properly? I would say patients will accept it as long as the system is being monitored by a surgeon, whether it's autonomous or not.

We already use devices in the operating room now that are computer-assisted, semiautonomous devices. There was a time when, if you wanted to ligate a blood vessel during an operation, you tied it off with a suture. We still do that mostly. But there is a wide range of devices commonly used every day in the operating room now that use energy forms like electrosurgery and ultrasonic energy for sealing blood vessels. They have a computer chip in them that measures the tissue impedance and sends an audible signal back to the surgeon that says you've sealed that blood vessel. It goes beep, and you cut the vessel and carry on with the operation. We trust the computer device to tell us that we have adequately sealed that vessel, whereas if it wasn't properly sealed, there would be an impressive amount of bleeding. We've already incorporated these technologies into the operating room. I think the patient trusts that we're using them properly.
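A hedged sketch of that sealing logic: tissue impedance rises as the vessel desiccates and seals, and the device signals once a threshold is crossed. Real devices use proprietary algorithms; the cutoff and the readings below are illustrative only.

```python
# Monitor tissue impedance and signal the surgeon when a seal is detected.
# Threshold and readings (in ohms) are hypothetical.
def seal_cycle(impedance_readings, threshold=200.0):
    """Apply energy until measured impedance indicates a completed seal."""
    for t, ohms in enumerate(impedance_readings):
        if ohms >= threshold:
            return f"beep at t={t}: seal complete, safe to cut"
    return "no seal detected: keep applying energy"

# Impedance climbing as the tissue desiccates:
print(seal_cycle([40, 55, 80, 120, 170, 230, 310]))
```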

The Chair: I think, Dr. Schlachta, you clarified and made it consistent with the earlier witness testimony. We were dealing with robotics under the control of the surgeon. The patients felt extremely confident about the robotic surgery going ahead under the supervision of the surgeon. It wasn't a case of a totally autonomous robotic surgery. I failed to put that qualification in, and you clarified it. Saying that the witness expressed a very high degree of confidence in having surgeons use robotic technology would have been a better way for me to put it.

Senator Raine: Thank you both very much. This is really interesting.

I think Dr. Schlachta said at one point, "Don't train to be a radiologist,'' but you didn't say what you should train to be. We're really saying that if I go into the diagnostic field, you're diagnosing what is wrong. We use a lot of different technologies right now — X-ray, MRI, CAT scan, PET scan, biopsies and all these things. Is that not the realm of the radiologist? They send the information to the pathologist, who sends it to whoever makes the decision in the end. How does it work? With this new field, where does the interpretation by artificial intelligence fit into that diagnostic field?

Dr. Schlachta: I might start by saying that it was Dr. Bernstein that disavowed the field of radiology in his comments.

Mr. Bernstein: Well, not quite like that.

Dr. Schlachta: There is a flow to the diagnosis and work-up of any disease process that involves medical imaging, biopsy and so on. As the tools get better and better, I don't think that necessarily eliminates the need for a specialist in that area. I don't see a catastrophe of loss of jobs in the human health resources sector. I think the tools will just enable the caregivers to be better at their jobs.

Senator Raine: Right now, there are very few PET scans in Canada. Yet I understand that's one of the better diagnostic tools. Is what's coming down the line going to make some of that more readily available or not needed?

Dr. Schlachta: As these diagnostic tools come forward, one of the biggest challenges — and this is a whole separate area of discussion which we might get into — is the cost of these things, because the costs are just going up and up. The question is: How much better care are you getting, at what cost are you getting that better care, and what is society's willingness to pay for that incremental improvement in care?

There is a lot of concern about things like PET scans and MRIs. As the number of these devices goes up, we're not necessarily seeing that correlated with better health outcomes in the population. As long as they're being used in the right way, they can be very useful, but they can be also overused and you have a problem with false positives.

Mr. Bernstein: To go back to your first question, my answer will be a bit different than that of my colleague here. I'm not going to disagree with him, but I think it will be a bit different.

With the application of artificial intelligence to medicine, AI is a transformative technology. This is not an incremental improvement in a diagnostic. This is a transformative technology or a disruptive technology — these phrases are used. When disruptive technologies come along, it's quite hard for us to predict the long-range impact of that technology, in this case on medicine. I don't think we quite know yet what the implications are for medicine in the longer term because we're in the early days. These papers are just coming out in journals right now. I think we're at the beginning of a journey, and we have to see where this ends up. This is not just a better X-ray machine. This is a whole different way of analyzing data that we've never had before. So I think it will be interesting.

I think in metaphors, so please excuse me for a moment. Think about when the tractor was introduced in farming as a machine. It replaced humans to a large extent and made farming able to feed the planet, basically. It increased productivity. But when the tractor was first introduced at the beginning of the 20th century, if you read a little bit of that literature, which I have, the predictions of the impact of the tractor on farming were completely wrong.

I think we have to monitor this and see what happens as we go along in terms of what the implications will be for the workforce in medicine in this case.

Senator Dean: I'm a recent beneficiary of laparoscopic surgery so I can attest to the benefits of that. I likely wouldn't be here today if I had the other form.

Those were great presentations. I must recognize Dr. Bernstein's role in the work he does at CIFAR, but also in brokering discussions across the country among a number of high-profile organizations and in bringing coherence to the government's recent announcement.

The question is about government and its role. Earlier in these hearings, we were asking, where is government in all of this? Does it have a strategy? To some extent we have somewhat of an answer to that, a very solid answer in terms of funding, and obviously partnerships with some heavy-hitting, leading organizations. What more can the government do in terms of enabling and maximizing the benefits from this by regulation, other incentives or getting out of the way? As you talk to one another about what other supports might be possible from levels of government, are there things besides funding and asking for your support in developing a strategy?

Mr. Bernstein: That is an excellent question, senator, and I appreciate your remarks as well.

Going back to my earlier comment, this is a disruptive new technology. Certainly, if I look at the strategy that our government has agreed to fund through CIFAR, it has several major components.

One component is to build a foundation for innovation to strengthen the Canadian economy. That's clear. We also asked for funding, and they have agreed to it, to address some of the social, ethical, legal and philosophical issues of artificial intelligence. Government also has a key role to play — it will be obvious to this group — in terms of the implications of artificial intelligence for society writ large, not just medicine but including medicine. It's hard to anticipate exactly how that's going to play out.

One of the philosophical issues that will come about is, for example, a story I like to tell at dinner parties. There's lots of data showing that when someone is living alone or is ill, having a pet, not a human but a pet, has a beneficial effect on their sense of well-being and their recovery. Does that extend to a smart machine? We don't know.

We hear lots of horror stories about how badly seniors in care homes are neglected. Are we going to see smart robots not doing surgery but talking to them? Siri can talk to us on our iPhones now and answer questions. Speech recognition is a form of artificial intelligence.

Will we see robots becoming caregivers in the health care system in a meaningful way? What is the role of government in purchasing this technology? I imagine it will be expensive. How do we regulate it and deal with some of the ethical issues as they arise? These are big questions.

Dr. Schlachta: Maybe I can follow up on that. There are many levels that can be addressed.

There is a paper I am aware of that outlines a study of seniors at risk for dementia. They randomized them into two groups. One group was given a child-like robot that behaved at the level of a two-year-old. They compared the two groups over time and found that the robot group had less progression of their dementia. That data is currently available.

There is big-picture and there's little-picture stuff. I'm happy to talk about the big-picture stuff because I completely believe that someday, many years in the future, we can have a fully autonomous doctor and surgeon.

The pace of development of robotic surgery has not progressed as fast as we all thought it would. I moved to London for CSTAR 12 years ago. If you had asked me then what surgery would look like in 12 years, I would have said we're going to be doing everything with robots, yet the penetration is still very low. There are a lot of barriers to that. A lot of that has to do with cost. It's not just a question of saying the system needs more money, but that is a part of the challenge we face right now. There are other countries with different payer models for how they manage their health care. As a result, they pass those costs on in other ways than we do here.

The big challenge that we have, for example, with the expansion of robotic surgery in Canada has been that every hospital that has a robot right now has acquired that robot through philanthropy in some form. After a donor has bought that robot for them, the hospital is then faced with the challenge of dealing with the service contract and the operating costs, which significantly inflate its operating budget. That's not new. We all know there are health care challenges, and we want to be as cost-effective as we can, but new technologies are always much more expensive when they start out. If robotics fails now because we can't afford the first steps into robotic surgery, then we'll lose out on the promise of that future where more robotic systems are coming to market. There are already many more competitors for the da Vinci system coming to market. We all expect to see costs tumbling down, but we don't want to fall behind at this point by not being able to afford to pursue that technology.

One side of it is finding ways to incentivize responsible use of surgical or medical technologies. The other side that you've addressed is the regulatory side. I'm sure it would be almost insulting for me to say that government has a responsibility to protect citizens, but you don't want that regulation to impede innovation.

Senator Hartling: Thank you very much. Your presentation was very impressive with good information.

It sounds like Canada is excelling in innovations. You talked about marketing and the commercialization of some of these products and services. How are you doing in your organizations? What do you see needs to happen?

Mr. Bernstein: In artificial intelligence, if I can jump in there first, when Minister Morneau presented the budget speech, he announced $125 million in funding to develop CIFAR's pan-Canadian AI strategy. That strategy is meant to attract and retain talent at the senior level and to expand the pipeline of graduate students going into this area, as well as to address some of the social and ethical issues that we've just been talking about. I think we're going to hear more announcements tomorrow on the innovation side, coupling the talent that's needed to do the science and the young people who are being trained with the business side.

In the budget, Minister Bains announced $900 million for superclusters. Which ones will get funded, I don't know, but it's hard to imagine an area of innovation these days that doesn't depend on artificial intelligence. I think we'll start to see that happening.

Young people these days, graduate students and undergraduates, are very entrepreneurial. I'm very aware, right across the country, of a lot of young people who are starting up small companies in all kinds of areas based on artificial intelligence. A couple of weeks ago, The Globe and Mail had a very big article in the ROB about the Creative Destruction Lab at the Rotman School of Management at the University of Toronto. The Creative Destruction Lab is actually taking some of these young people who have ideas for starting up companies and mentoring them, initially in a "Dragons' Den''-style format, and then providing the financial resources to start their companies.

I think we are starting to see more entrepreneurial activity in this space than we saw before. I am very optimistic, actually.

Dr. Schlachta: I think he summarized it very nicely.

[Translation]

Senator Mégie: I want to come back to the question my colleague put to you about the device used to diagnose a skin lesion. I am trying to see what the benefits of that device would be. Currently, a photograph is taken and sent to the physician, who then schedules an appointment for a biopsy, as many skin lesions look alike. From a photograph, the physician cannot know whether or not the lesion is cancerous; all lesions look alike, so a biopsy is absolutely necessary. Does the device change anything in terms of what we are doing right now? Is there an additional benefit?

[English]

Mr. Bernstein: My translation was not working, but I think I understood your question. I think I did. You'll please correct me, or perhaps the chair can, if I don't get it right.

I still think we're going to need, for the time being, the combination of a machine and the clinician working together to give a good diagnosis and also to give that sense of well-being that comes with human-human contact. I think that will, for the foreseeable future, continue. Whether it will continue beyond the foreseeable future, I don't know. I don't know if I've answered your question, senator.

Dr. Schlachta: If you don't mind, I think I also understood your question. When you look at skin lesions, for the vast majority of them the judgment that a dermatologist or skin lesion specialist provides, whatever their specialty is, comes down to looking at the lesion and deciding whether further investigation is required. There would be nothing wrong with the artificial intelligence recommending a biopsy at some point, not necessarily in every case. That could even be an automated biopsy, with an automated pathological diagnosis and a computer-generated result as well.
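To make that division of labour concrete, here is a minimal sketch, in Python, of how a lesion classifier might be wired to recommend biopsy rather than render a final diagnosis. Everything in it is hypothetical: the network is an untrained stand-in for a fine-tuned model, and the threshold is invented to show a deliberately conservative decision rule, not anything the witnesses described.

```python
# Illustrative sketch only: a hypothetical two-class lesion screen that
# recommends biopsy when the model leans toward malignancy at all.
# Assumes a fine-tuned image classifier; the threshold is invented.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

model = resnet18(num_classes=2)  # hypothetical fine-tuned weights would be loaded here
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def recommend_biopsy(image, malignant_threshold=0.2):
    """Return True when the lesion should go to biopsy.

    Deliberately conservative: anything above a low malignancy
    probability is flagged, mirroring the clinical judgment that
    uncertain lesions warrant further investigation.
    """
    with torch.no_grad():
        x = preprocess(image).unsqueeze(0)  # shape: 1 x 3 x 224 x 224
        p_malignant = torch.softmax(model(x), dim=1)[0, 1].item()
    return p_malignant >= malignant_threshold
```

The design choice worth noting is that the system's output is a referral decision, not a diagnosis, which is exactly the combination of machine and clinician the witnesses describe.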

[Translation]

Senator Mégie: We will see. Thank you.

Senator Cormier: I want to thank you for the educational quality of your presentations. We have been following this work for a while, and I must admit that it has been very difficult for me to get interested in it, since I am not involved in that sector, but your presentations are very passionate and enlightening.

We know about all the benefits of this process. That said, what are the main reservations of health care professionals about all those changes and what challenges are involved in training physicians with the arrival of these new technologies? I am asking this question in the context of educational institutions and regions. We can see access to training being available in major centres, but what is your take on those challenges in smaller regions?

[English]

Dr. Schlachta: Thank you very much for that question, also a very insightful question, of course. It depends on what level we're speaking here. If we're talking about future fully autonomous robotic systems, I'm not really sure how to address that right now. That is so far off into the realm of science fiction, although I believe we will probably get there some day.

Practically speaking, as far as robotic systems are concerned, or any new technology, that is a real challenge right now. The question is: Are these systems really making our surgery any better? There has been ongoing debate about minimally invasive surgery for 30 years, since its introduction, with some surgeons still steadfastly rejecting any evidence that it's necessarily better than an open operation with an open incision.

These are all paradigm shifts, and they take time. A generation of surgeons needs to retire, and a new generation comes up thinking in the new frame of mind, whether or not that is necessarily supported by the evidence. But there is always that default, a healthy scientific skepticism, I think: prove this is better for my patients.

From a training perspective, that's a real issue. As far as training residents is concerned, this is a lesson that we learned explicitly with the introduction of minimally invasive gall bladder surgery, if I can use that as an example. It's not an artificial intelligence system.

The most feared complication of a keyhole gall bladder operation is an injury to your bile duct. The bile duct is not part of what is removed in that operation, but it's close by, and you can't live without it. If the surgeon gets lost and accidentally cuts it, then you need a major operation to reconstruct it and probably face a lifetime of misery as a result.

The risk of that complication tripled with the introduction of keyhole gall bladder surgery. We all believe the main reason was the way it was introduced. The technology was very industry-driven. Surgeons got together in groups, travelled around the country and taught weekend courses on an animal model, and then on Monday morning the surgeon would assault his first patient and try to do a keyhole gall bladder surgery. We all recognize in retrospect that that was probably not the right way to introduce new technologies into practice, but how we retrain the workforce remains an ongoing issue.

Residents in training, when they graduate with their brand new surgical specialty, have trained in these new technologies in a controlled environment. We don't let them write their exams or go into practice until we're happy that they're safe. The problem is that when a new technology gets introduced and you have thousands of surgeons already in practice, how do you teach them that new technology? We haven't solved that yet.

The current approach we're taking is to develop mentoring and telementoring programs where we go to the surgeon's operating room and work with them, or virtually go to their operating room and work with them. This is a very hot area of research and development right now in terms of providing that hands-on training. It doesn't address the issue of artificial intelligence or autonomy, but this notion of having to provide opportunities for retraining is an ongoing challenge. I don't know if that answers the question.

Mr. Bernstein: If I could jump in here, I absolutely agree with what Dr. Schlachta just said and will just add one other comment.

You asked about smaller centres as well, senator. With these new sophisticated technologies, whatever they are — whether laparoscopy, robotic surgery, AI or genomics — dissemination to smaller centres becomes a crucial issue if Canadians are going to have equal access to quality health care. That dissemination takes place, but it doesn't reach smaller centres first; that's for sure. It starts at downtown teaching hospitals, the so-called tertiary or quaternary health care centres, and spreads out from there. I think that is an issue, and I don't know what the solution is.

The Chair: Before I turn to the second round, I want to ask a couple of questions.

First of all, Dr. Schlachta, with regard to your comment about how long it's taking to get into robotic surgery, I remember that, 25 years ago or more, those of us who were using computer technology expected we'd all be running around with an electronic health record within a few years, and in Canada we still don't have it and we've spent billions of dollars. So there are lessons to be learned as we go forward.

Dr. Bernstein, I thought your use of the tractor as an illustration that things don't go the way we expect was good. The automobile may be an even more dramatic example, because cars arrived before roads for them existed, and the predictions ranged all the way from "the end of the world is upon us with this motorized monster" to "the sky is the limit." As you correctly said, the early use of the technology was quite different from what people anticipated at the outset. Those are a couple of comments. I want to ask two questions.

So far, most of the examples we've been given, including today's, deal with a specific health care act: a surgery, a diagnosis, or some specific element of health care. Now, the issue in health care that Canada is confronting is not just the highly effective surgery itself; it's getting to the diagnosis. It is the silo system we have in health care delivery: the need to go to the general practitioner and wait a year or more for a first appointment with the specialist, and then from that first appointment to the actual surgery it can often be another year, and so on.

That's what health care delivery is. It's the overall total process here. Can you address, in any way, how our knowledge, our deep learning use of artificial intelligence, will actually help us change the delivery to make it such that health care is a more immediate process from diagnosis to actual delivery of the solution to your problem?

Mr. Bernstein: Excellent question, senator, and an ambitious one. I will go back to your first comment about automobiles. When cars were first introduced, no one worried about traffic accidents or pollution destroying our cities. No one thought about those things. One of the lessons from that example is that we are really bad at understanding or projecting the side effects, positive or negative, of a new technology.

I could give lots of other examples. It's an amateur study of mine. I'll give one more. When the TV was first introduced, my mother wanted me to move back from the TV because of the X-rays. She had no worries about the programming on the TV. I think we're bad at anticipating the negative implications of a new technology.

To go back to your question, it is interesting to think about having a holistic or systemic approach to health care that's assisted by new technologies like artificial intelligence. AI, when you strip everything away, is basically a smart way of sifting through a lot of data. There are 35 million of us in this country, and I appreciate that health care is a provincial responsibility, but that's a lot of data. If we were smart, we would try to capture as much of that data as we can, about lifestyle, genetics and genomics, age, demographics and ethnic groups, and use it to develop not personalized medicine but personalized prevention.

We all talk about personalized medicine and how to customize treatment to your particular disease, but how do we personalize prevention for you? You're different from me; we come from different gene pools, et cetera. I'm not quite answering your question yet; I'm circling it, and I'm aware of it. That is an aspect of a holistic approach to health, as opposed to health care: how do we take a whole snapshot of an individual and customize a prevention strategy for that individual?
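To illustrate what "sifting through data" for personalized prevention could look like in practice, here is a minimal sketch that fits a risk model to synthetic data and compares one individual's risk under a modifiable change. The features, coefficients, and outcome are all invented; the point is only the shape of the idea, ranking preventable risk per person rather than treating disease after the fact.

```python
# Minimal sketch of "personalized prevention" as a data-sifting problem.
# The features, data, and model are synthetic placeholders, not a real
# clinical tool: the point is estimating modifiable risk per individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Hypothetical per-person features: age, activity level, smoking, a genetic score.
X = np.column_stack([
    rng.uniform(20, 80, n),   # age in years
    rng.uniform(0, 10, n),    # weekly activity hours
    rng.integers(0, 2, n),    # smoker (0/1)
    rng.normal(0, 1, n),      # polygenic risk score
])
# Synthetic outcome loosely tied to the features, for illustration only.
logit = 0.04 * X[:, 0] - 0.15 * X[:, 1] + 1.2 * X[:, 2] + 0.8 * X[:, 3] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# "Prevention" view: compare a person's current risk with their risk
# under a modifiable change (here, stopping smoking).
person = np.array([[55, 2.0, 1, 0.5]])
counterfactual = person.copy()
counterfactual[0, 2] = 0  # same person, non-smoking
print("current risk:", model.predict_proba(person)[0, 1])
print("risk if non-smoking:", model.predict_proba(counterfactual)[0, 1])
```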

Of course, that's not going to eliminate disease completely. I understand that.

The Chair: Maybe I will bring you back a little bit. There is an example in Canada where, for one particular health issue, a certain hospital sees a number of people coming in with particular symptoms. Instead of the typical scenario in the emergency ward, where people come in and are triaged in the current kind of situation, they decided to bring in physicians with expertise in the disease symptom, as opposed to the upfront group. They brought together a small team, two or three people. They get a lot of this particular issue in this city hospital, and they found that by bringing in that level of expertise and triaging the patients with it, they were able to make a much more effective determination of which patients need to be treated immediately versus which can take a little more time to get to treatment. The results after six months were roughly a halving of the time involved and a much higher level of satisfaction among the patients and the health care workers involved.

So that was in the back of my mind in asking you the question, because if you have been able to build the deep learning aspect with a tremendous amount of information available on the system, it could inform the actual processes we use to make the diagnosis, to determine who should be dealt with quickly and how they should be dealt with.

The second thing I will add is that in a number of our studies on various aspects of health care, what we have seen — and I used the term silo earlier — is that if a diabetic comes in, they are dealing with more than one morbidity. The issues in treating such a patient play out over many months, as opposed to the patient getting a kind of holistic treatment up front. You used the term holistic as well, Dr. Bernstein.

I'm wondering if you look down the road, can you see a holistic approach to diagnosis in a setting that will lead to a much more rapid and effective movement of patients to more rapid treatment?

Mr. Bernstein: I think it's a great question, so I will give as short an answer as I can. What we need in Canada is a much more innovative approach to health care, and we could have 10 experiments going on to test exactly what you just walked us through. In one of my previous lives, I was president of CIHR, and we funded research on health system delivery. We need to not be afraid of innovations in the health system, as long as they are tested and measured: are the things you were talking about better or worse in terms of cost efficiency, time, et cetera?

Part of that innovation can be the nature of the team that sees a patient in the emergency ward, as you walked through in that example. Part of it is how we use data from all the previous experiences of patients walking into the ER saying they have a severe stomach ache. What happens next? That's where artificial intelligence can play a role in helping the ER physician or nurse make a quick decision about what to do and who should see the patient first. That combination of technology with a willingness to innovate in health system delivery is a powerful one. We need to see a lot more of that in this country.

The Chair: I want to go to Dr. Schlachta, but I just want to mention that we released a report. This committee was authorized by the Parliament of Canada to study the 2004 health care accord, and the report we wrote on innovation in health care was entitled "Time for Transformative Change.''

Mr. Bernstein: Well done.

Dr. Schlachta: I don't want to make my answer too simplistic, but I'm a bit worried that it might be. In the time it takes for a patient to get from first concern or presentation of symptoms to seeing a specialist and definitive treatment, there are many bottlenecks in the system. Everyone wants to see their specialist quickly and to spend a lot of time with that person, and the usual answer is that you can have one or the other, because there are only so many hours in a specialist's day. I say that on behalf of all specialists, not that I'm particularly unique.

The way we have traditionally dealt with time with the specialist has been to recruit other allied health care workers to take on some of the workload. I need to spend less time with my patient because my stoma nurse or wound care nurse or somebody is taking care of that particular problem. So it's a very easy jump to see how any kind of artificial intelligence or automated machine for data collection can save me time. If the history is taken by a computer interview, then when I come in I can review the details that have already been collected and save time.

The same thing goes for patients presenting to an emergency department, either doing a face-to-face interview with a machine system or filling out a survey. Given Watson and everything else we have now, I would expect that we probably have the capability right now to triage patients by history alone as they walk into the emergency department, before they see their first human. The obvious danger is the loss of human contact and compassion and so on, but clearly there is going to be the ability to streamline that process.
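As a rough illustration of triage by structured history alone, here is a minimal sketch in the spirit of what Dr. Schlachta describes. The intake questions, weights, and cut-offs are all invented for the example; a real system would be learned from outcome data and validated clinically.

```python
# Illustrative sketch: assigning an acuity level from structured history
# alone, before any human contact. The questions, weights, and cut-offs
# are invented; a deployed system would be trained on outcome data.
from dataclasses import dataclass

@dataclass
class HistoryIntake:
    chest_pain: bool
    shortness_of_breath: bool
    pain_score: int       # 0-10, self-reported
    symptom_hours: float  # time since onset
    age: int

def acuity_level(h: HistoryIntake) -> int:
    """Return an acuity level from 1 (most urgent) to 5 (non-urgent),
    loosely in the spirit of the Canadian Triage and Acuity Scale."""
    score = 0
    score += 4 if h.chest_pain and h.symptom_hours < 12 else 0
    score += 3 if h.shortness_of_breath else 0
    score += h.pain_score // 3
    score += 1 if h.age >= 65 else 0
    if score >= 7:
        return 1
    if score >= 5:
        return 2
    if score >= 3:
        return 3
    return 4 if score >= 1 else 5

# Recent chest pain in an older patient ranks as highly urgent.
print(acuity_level(HistoryIntake(True, False, 8, 2.0, 70)))
```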

The other issue is how to go from primary care to a specialist, and whether there is the ability to do something online. We live in the misinformation age right now, in my impression. If somebody at home has a problem, can they go online right now and look up their symptoms? There are multiple resources online, most of which give false information. If there were a credible source to go to, which we have the capability of providing right now, a system that can be highly accurate in providing a diagnosis, then that system might be empowered to refer them directly to a specialist and save waiting time.

The Chair: We would agree that is a major issue. In fact, in another of our reports we identified that we believe PHAC should take on a much bigger role in providing windows directly to the best practices and issues. In fact, my experience with network searches is that it's getting far worse than it has been, because now you've got advertising laid on top of your search. For the average individual, determining which site is actually going to be useful is getting harder, not easier.

We're going to have to move into using artificial intelligence in some way that gives us much more clearly identified sources to go to quickly and guides us through them. I'm not going to pursue that any further.

I do want to ask a specific question around surgery, because we're trying not only to know what we can do now but to look a bit down the road. I put a question to an earlier witness because of another situation I was aware of. Prostate cancer is a useful example because it raises issues common to a lot of similar small organ or whole organ surgeries. The example was the use of artificial intelligence, coupled with radiological analysis of the prostate, to give a three-dimensional look at the organ. We know that many of the major side problems in prostate surgery, from the taking of an initial biopsy sample through to further surgical activity, arise from the proximity of the organ to a number of other systems that, if struck or disturbed, cause additional complications. The example being described was the use of various radiological techniques to give a whole organ view of the actual prostate, and then the use of a physician-guided surgical implement to allow a much more focused surgical intervention.

I want to use that as an example to ask you this: Do you see this kind of combination working? You were talking earlier about getting inside the body. Can you see this combination giving surgeons a much more accurate capability in otherwise risky and difficult surgeries around organs that can be viewed from a holistic point of view?

Dr. Schlachta: The short answer to that very good question is yes. I'm trying to get my head around the scope of the question you're asking me. For sure, we already have systems that allow for computer-assisted, image-guided surgery and biopsies and so on. Those technologies are evolving right now. The ability to develop 3-D images from CAT scans and MRIs through image segmentation exists now. The results are a little bit like cartoons, but as computers develop and processor speeds get faster, the resolution gets better and better. For sure, we can do that now.

There are systems available now for prostate biopsy and for interventions in liver disease that allow the surgeon to actually see where the bile ducts and blood vessels are in a 3-D reconstruction of the liver, and to guide a probe to perform a biopsy or tissue ablation right now. I definitely foresee that becoming a regular part of care.
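For a sense of the underlying mechanics, here is a minimal sketch of extracting a 3-D surface from a volumetric scan by thresholding and marching cubes, the kind of image segmentation described above. The synthetic volume stands in for real CT data, and the intensity threshold is arbitrary.

```python
# Minimal sketch of 3-D reconstruction from a volumetric scan by
# thresholding. The synthetic volume and threshold stand in for real
# DICOM data and calibrated tissue densities.
import numpy as np
from skimage import measure

# Stand-in CT volume: a bright sphere (an "organ") in a dark background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)

# Extract a triangle mesh of the organ surface at the chosen intensity,
# the kind of model a surgeon could rotate and use to plan a probe path.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```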

I don't know if that's really addressing the question you're asking me, though.

The Chair: I think I'll leave it at that. You've covered a number of the aspects.

My final comment is about the gall bladder example you gave a little earlier. By coincidence, on the weekend I happened to be talking to two siblings, one of whom had the operation some 20 years ago and the other of whom just had it this past week in rural Nova Scotia. The first spent a week to 10 days in the hospital before being released; the other walked out after a matter of hours. It was a dramatically different kind of experience. If we see that in what we can call the traditional evolution of surgical techniques, the potential benefit to humanity of the deep learning capability we now have bodes well, if we use it appropriately.

Senator Raine: I've been involved over the last few years with physical activity and the rising rates of obesity leading to ill health among Canadians, but I'm very concerned about young Canadians, and I'm thinking of how we have introduced technology to kids at a younger and younger age. These kinds of devices are now ubiquitous.

I had somebody tell me recently that children today are not learning to write and are not developing the fine motor skills of writing. Printing is one thing, but by writing I mean cursive script. Then I start thinking that if we lose that in a generation, is that a skill surgeons need that we might be well advised to protect in our young people?

Dr. Schlachta: I'm presuming you're asking whether, as we transition from traditional hands-on open surgery to more computer-assisted surgery, we are losing the art of the surgeon putting their hands on the patient.

That's a very insightful question, because that is a very common discussion, one that happens almost every day in the operating room. We have transitioned from doing open gall bladder surgery to doing minimally invasive gall bladder surgery; we now do almost all of our gall bladder surgery through little holes.

I was an intern when the first laparoscopic gall bladder operations were taking place. One of the reasons I chose this specialty is that I was blown away when I went into the operating room for the first time and saw a patient having their gall bladder out. The lights were out, there were 20 people in the room, and there were these video monitors; it was so cool, how could you not want to do this for a living? Nowadays, gall bladder surgery through little holes is so routine that if you're planning an open gall bladder operation, where you have to cut the patient open, all the chief residents are climbing into the operating room because they want to see one: "I've heard of this open surgery and I want to see this operation.''

A question that is regularly raised is, if we're teaching everybody how to do it laparoscopically, or by whatever new technology comes down the pipe, are we going to lose the old skill? My argument is always that doing these operations, which are less and less hands-on and more and more technology-dependent, makes you a better surgeon. We know that the very best surgeons are the ones who cause the least tissue trauma, respect tissue planes, understand the anatomy clearly, don't guess about what they're doing, cause less bleeding, and so on.

When you're doing an open operation with your hands on the tissue, you can cheat and cause a lot of trauma. When you're doing an operation laparoscopically or robotically or by whatever other means, you first have to know your anatomy and respect the tissue planes, because if you get into bleeding you can't operate anymore; you lose your ability to use that technology. I think that makes you a much better surgeon. If you do end up doing an open operation, everybody says, "Wow, I've never seen it done that way before,'' because you've developed a much more refined approach.

Of course, the other question is that the actual operations we're doing will probably change over time as well, because the ability to get instruments inside the body and do tissue ablations, biopsies or repairs may not even require the old-fashioned approaches we once needed.

I do respect the concern about the loss of that old-fashioned approach, but I don't think anybody is suffering from an inability to ride horses anymore either. As long as we're confident that those technologies are going to be readily available when we need them, I think we're going to be fine.

Senator Raine: I wasn't thinking so much about the transition of the skill of hands-on to with assistance. I was thinking more about the fine motor skills that are developed as a child. Is there a risk that we could lose those? Are those fine motor skills still required with robotics in terms of surgery?

Dr. Schlachta: I apologize. I perhaps completely misunderstood your question. Let me be clear about what I'm answering. You're saying that the loss of writing skills and so on in our children will perhaps compromise their ability to do surgery later in life.

I'm sure you've heard this already, but here is one of the more interesting things. I don't know the answer to that question, but we do know that when it came to introducing image-guided, minimally invasive surgery, operating off the screen where you're not looking at the patient but looking up at a monitor and moving instruments, there are plenty of studies documenting that video gamers are much better at that kind of surgery than non-gamers. It is possible, as you say, that we could lose those fine motor skills, but by the same token we're trading writing with pencils for moving joysticks and pushing buttons. Those might be the tools you use to do surgery in the future, instead of worrying about the fingertips.

Senator Raine: I've had arthroscopic surgeries twice. The first time I was out for it. The second time I got to watch it on TV as the surgeon was poking around inside my knee. It was very interesting. Absolutely, that's the way of the future. If you look at knee surgery, where it's come from, it's amazing. Certainly, I'm with my partner here, not afraid at all of all of this kind of robotic and guided surgery. It's the way of the future. Thank you.

The Chair: Thank you very much.

In our study, we're trying to look at the impact this is going to have on the health care system in general. Obviously, that includes specific examples of surgical and other health care issues, but it also deals with delivery down the road. We will have many more examples of the delivery of health care in rural circumstances and of access at a distance, and all of those are important to changing health care delivery and giving more access.

Dr. Schlachta, you have particular experience with the complexity of a hospital organization's overall operation. When you go away, if a thought strikes you about how one might feed the information in a hospital organizational system into a deep learning system, with good questions asked by the programmers, which is obviously key, and with the human setting up of the layers of knowledge so it can be probed for answers, would you get in touch with us through the clerk? We would really welcome anything that occurs to you after you leave, whether it is an example that didn't occur to you now or some aspect of the future of this that, based on our questions, you think we might be interested in. In other words, we want you to keep working for us afterwards, since neither of you has very much to do on a daily basis.

Seriously, on behalf of the committee, I want to thank you both for taking the time to come here. You bring enormously important and valuable information to us. You're both involved in this work and its future, which matters not only on a world basis but enormously to us here in Canada.

Dr. Bernstein, we'll be looking for the very wise distribution of the funds that have been added to your system, helping to develop the kinds of centres we need to help Canadian society in the future and to make our economy competitive with other nations' economies, so that we can actually afford to use these technologies ourselves down the road.

I really do want to thank you. The way you've answered the questions has been enormously helpful to us as a committee. Again, we would welcome any thoughts you have after you leave.

(The committee adjourned.)
