
Conversation with Andrew Moore on Artificial Intelligence - Part 2

CHARLIE-ROSE-SHOW-01

at Carnegie Mellon University in Pittsburgh: a conversation with Andrew Moore, former vice president of engineering at Google and now dean of the School of Computer Science at Carnegie Mellon.

ANDREW MOORE: When they're in high school, we need them to have loved math. Math is the center of all of A.I. Once they come to a place like Carnegie Mellon, they learn about algorithms, which is how a computer is organized, how it's going to solve tasks. And then they study things like computer vision and machine learning, all these disciplines which really are about replacing different parts of the human brain.

CHARLIE ROSE: At what age did your daughter learn to code?

ANDREW MOORE: At ten, she was able to write code which I thought was pretty respectable.

CHARLIE ROSE: Did she do that on her own, or did she do it because you encouraged her to?

ANDREW MOORE: That's a very common thing. She is surrounded by friends and family who are all geeks, and so she becomes a geek herself.

CHARLIE ROSE: Like father, like daughter?

ANDREW MOORE: Yes.

CHARLIE ROSE: Are we facing what many might call a fourth industrial revolution?

ANDREW MOORE: I think it is.

CHARLIE ROSE: Or is it here?

ANDREW MOORE: It is happening. I think we're in the first few years of it. It started when, for instance, things like travel agents became irrelevant because it's now easier for all of us to do that with a computer. And we're now getting into full swing. I think by 2020 we will see that, wow, whole areas of what we used to think only people could do can now be done better with computers helping them.

CHARLIE ROSE: Is this part of the sharing economy?

ANDREW MOORE: The sharing economy is an interesting side aspect of A.I. It's where you're getting groups of people together to solve problems by -- I've got to have another go at this. Let me describe it more clearly. The sharing economy is a way in which people can do what they're really good at. So if I'm really good at writing and you're really good at planning, we make sure that you do what you're good at, I do what I'm good at, and then we will win. So it's having the computers help organize us so the right people are doing the right things.

CHARLIE ROSE: That is the sharing economy.

ANDREW MOORE: Yes.

CHARLIE ROSE: In the same way that Uber connects somebody who needs to move somewhere with somebody who wants to take them somewhere.

ANDREW MOORE: Yes, that's right.

CHARLIE ROSE: That's two separate talents.

ANDREW MOORE: Yes.

CHARLIE ROSE: What could be the companies, the industries, and the sectors that are disrupted most by artificial intelligence?

ANDREW MOORE: The ones that really come up in my mind are what we might have originally called white-collar work, like the legal profession and some parts of the medical profession, with extremely high training and expertise. So these are ones where computers are actually able to make some of these judgment calls.

A doctor facing lots of contradictory information and actually figuring out what the problem is, or a lawyer who has to sift through a vast amount of information to find the actual solution to a difficult problem: those are judgments an A.I. can also make.

Interestingly, something like a nurse or a teacher, whose real job is understanding the people that they're interacting with all the time, I find much harder to automate. I don't see those disappearing for a decade.

CHARLIE ROSE: Artificial intelligence is changing life as we know it.

ANDREW MOORE: Yes.

CHARLIE ROSE: Obvious.

ANDREW MOORE: Yes.

CHARLIE ROSE: And when you sit around and you blue-sky this, 10 years from now, 15 years from now, 25 years from now -- just take 25 years from now: we're in 2016, heading toward 2050.

ANDREW MOORE: Yes.

CHARLIE ROSE: Where do you think artificial intelligence will be?

ANDREW MOORE: I hope that the world will be a much safer place.

CHARLIE ROSE: Safer?

ANDREW MOORE: Safer. When disasters happen, there will be fleets of robotic devices coming in to render aid, very first triage to get people to safety. Remote autonomous vehicles coming in to pick up severely injured folks. Large pieces of heavy equipment coming in to move things that are in the way.

You can imagine a world in which, just as we've learned to build houses to protect us from the elements, we're actually using machines to give us far greater protection. Then, for the 50 percent of the planet currently living their lives in fear almost every year of what's going to happen to them, they may have a more secure and pleasant life.

CHARLIE ROSE: You can't stop technology.

ANDREW MOORE: That's right. And if we, the United States, said, well, we're not going to do this, then we could just sit on our hands and let Europe and Asia do it. We're not going to want to do that. That is not what the United States is all about.

CHARLIE ROSE: So who stops us? Our collective will stops us? Or is it a legislative function? Is it some ethics board that decides here but no further?

ANDREW MOORE: The place where we need legislative help is in answering some uncomfortable questions to which we will need answers in order to save lives. For example, when you're programming an autonomous vehicle, a car, to minimize casualties in an accident, who decides whether that car should be protecting the driver or the person that's being crashed into? I don't want the engineers deciding that. I don't need to decide that; we have to.

CHARLIE ROSE: Do you want Congress to decide that?

ANDREW MOORE: I know it sounds impossible, but I want Congress to decide that.

CHARLIE ROSE: Yes. What worries you the most about this forward progress of artificial intelligence?

ANDREW MOORE: I worry that it's very stressful for people to live through times of change, and this is a time of change, and it is going to cause great anxiety, despite all the economic theories and experience we have saying that disruption occurs, people get new jobs, life goes on, and we progress.

The frightening part of that, of course, is, during the disruption, a lot of people are displaced and they will have a harder time than they might have otherwise.

CHARLIE ROSE: That's the kind of impact Amazon had, isn't it? Amazon was the new business model, but it was using technology, all online, and it was disrupting the business of bookstores around the world.

ANDREW MOORE: Absolutely, yes. This really is the story of the technology. It is the story of the United States for the last 200 years. We have constantly found better, more effective ways to do things, but you cannot do that and not think about the consequences for all the people who have been trained to do something which we are now automating.

CHARLIE ROSE: This is what intrigues people, this question. You have people like Elon Musk, you have Stephen Hawking saying it could spell the end of the human race -- Stephen Hawking saying that. Elon Musk said it's the biggest existential threat we face. So here are pretty smart guys saying, watch out. Do we know what we are creating?

ANDREW MOORE: It is worth being extraordinarily careful about all of these things. I will put this up there with genetic modification of foods. I will put this up there with -- I know it sounds crazy, but if we broadcast stuff out into interstellar space, some other alien civilization might spot us.

These very long-term questions are worth thinking about. But I want to make a distinction: at the moment, what we are building here, in places like the robotics institutes around the world, is the equivalent of really smart calculators that solve specific problems.

CHARLIE ROSE: Okay. Is having artificial intelligence that's smarter than you are bad?

ANDREW MOORE: If I were really worried about that, I would already be really unhappy, because I know that there are billions of people smarter than me out there at the moment. We all know that there are many, many smarter people and smarter organizations than us, so I don't think we are affected by smarter people.

(CROSSTALK)

CHARLIE ROSE: Let's not use you as an example.

ANDREW MOORE: All right.

CHARLIE ROSE: Artificial intelligence that could outthink the human population.

ANDREW MOORE: I think.

CHARLIE ROSE: The question that intrigues me is who controls the artificial intelligence, because we're talking about artificial intelligence being created by engineers, scientists.

ANDREW MOORE: Yes.

CHARLIE ROSE: But could it go out of control?

ANDREW MOORE: We have -- no one knows how we would go about building something that frightening. That is not something that our generation of A.I. folks can do. It is quite possible that someone 30 or 80 years from now might start to look at that question. At the moment, though, we have the word "artificial" in artificial intelligence.

CHARLIE ROSE: Yes.

ANDREW MOORE: I am dreadfully worried about releasing software for autonomous car driving that turns out to have a very serious bug, which might mean that, you know, on leap day, all the cars stop on the freeway because of a bug in their code.

That could kill tens of thousands of people. That is a very real question and responsible engineers have got to have responsible ways of validating and proving their systems are safe.

CHARLIE ROSE: What's the difference between artificial intelligence and super intelligence?

ANDREW MOORE: Artificial intelligence is a real technology, just like steel or hydroelectric power, which we are using at the moment to make our lives safer. Super intelligence is a really intriguing science fiction concept, like meeting aliens or having nanobots crawling through our veins.

CHARLIE ROSE: Yes, but I can tell you thing after thing in which science fiction became reality.

ANDREW MOORE: That's absolutely true. Plenty of stories do follow the Frankenstein story, and we're all concerned about this model of an eager scientist producing some compound which they thought would do good but turns out to do bad.

In modern engineering, we tell our students, you do not release something without testing it. It's illegal to release something safety critical -- life safety critical.

CHARLIE ROSE: Right.

ANDREW MOORE: . without having very detailed.

CHARLIE ROSE: Make sure you understand all the consequences.

ANDREW MOORE: That's right.

CHARLIE ROSE: And the collateral damage.

ANDREW MOORE: So if you look at what's happening in a large company or in universities -- usually, in a large company producing a complicated piece of equipment -- much more than half of the effort goes into testing it. In fact, in many projects, 10 percent of the work is inventing new things, and 90 percent of the work is testing to find any scenarios in which it can cause trouble.

What frightens me is if we A.I. people, you know, excited about getting this stuff out there, don't test enough, and some of our robots, instead of saving lives, inadvertently hurt people. That would be a disaster.

CHARLIE ROSE: That's what keeps you up late at night.

ANDREW MOORE: Yes.

CHARLIE ROSE: That fear.

ANDREW MOORE: Yes.

CHARLIE ROSE: We haven't tested it, it goes out of control, and spreads like wildfire.

ANDREW MOORE: That's right. So in the early days of computing -- computing in medicine has saved millions of lives, as we know, but in the early days, there were some computer programs which accidentally made pieces of medical equipment go out of control and actually killed patients.

What happened then is computer scientists realized you have to have very detailed testing procedures. We don't want to make the same mistake with robots. We already are smart enough to know that we have to test this stuff. I frankly think it is not ethical to release software for autonomous vehicles on the road, for example, right now, until we have some governmental standards for safety.

CHARLIE ROSE: But it`s coming?

ANDREW MOORE: Yes.

CHARLIE ROSE: I mean, it`s really coming?

ANDREW MOORE: Yes.

CHARLIE ROSE: What about this. You -- through the power and progress and rapid increase in potential of artificial intelligence, some mad person who happens to have all the smarts in the world takes advantage of all the other learning in the world and programs some robot or some other kind of thing to do destructive acts, perhaps racial, perhaps antisocial, perhaps terrifying communities. Is that a scenario? Could that happen?

ANDREW MOORE: It absolutely could. Every piece of technology which can improve the human condition can also be used to damage the human condition. I absolutely believe, unfortunately, that right now there are people in various parts of the world figuring out how to put explosives onto drones.

Even amateurs, who learned how to do this stuff on the internet. And just as with medical treatments, where there have to be controls on disease agents that people can mess about with, we as a society have to understand that technology will be used by evil people as well as good people.

CHARLIE ROSE: Just think about the horror. We already have people who are terrorists who are willing to die for their cause, and so they're willing to blow themselves up. Think about how much larger that potential would be, you know, if you could multiply that, you know, and somehow get inside of -- you know, and do things on such a large scale?

ANDREW MOORE: So I am deeply worried about all kinds of active terrorism which can even happen now.

CHARLIE ROSE: That's my point.

ANDREW MOORE: And there are many folks who are using tools from artificial intelligence and machine learning to help quickly react or even prevent these kinds of disasters.

So, for example, after the Boston Marathon bombings, there was very limited visual information about possible suspects, but it was possible then to use computer vision -- automatic methods -- to sift through all the information from all the videography around at the time and help quickly determine the potential suspects.

So while I agree that there is a real danger of people using robots for evil, the solution isn`t for us to sit on our hands and say we don`t need to do robot work, it is actually to figure out how to use robots to protect people.

CHARLIE ROSE: How much of artificial intelligence is already being used by the military? For example, just the capacity to use anything that's autonomous to advance into places that we may consider too risky to enter otherwise.

ANDREW MOORE: So I'm not the right person to talk about all aspects of the military. But for the last 30 or 40 years, even cruise missiles have been using artificial intelligence to route themselves efficiently.

CHARLIE ROSE: And drones, too.

ANDREW MOORE: Yes. So for understanding the world, for surveilling and getting a sense of where there are dangers, A.I. has been in use for more than a decade in these kinds of areas. The military is also investing in experimental robotics, not just flying drones but ground-based drones.

CHARLIE ROSE: Right.

ANDREW MOORE: Surface water robots and underwater robots, for being able to do surveillance when it's too dangerous for people.

CHARLIE ROSE: I've got to believe the military is doing this because -- and the point is they need protection. They want to know what the other person is doing, so they've got to figure out how they'd do it if they were doing it.

ANDREW MOORE: Yes.

CHARLIE ROSE: And then you can figure out what the antidote is to it.

ANDREW MOORE: Yes. One thing we noticed during the wars of the last decade was that U.S. soldiers asked friends from home to send them remote-control vehicles, because they actually felt safer piloting a toy vehicle into an area in order to see what was going on.

CHARLIE ROSE: Because of land mines.

ANDREW MOORE: Yes. This was just hobbyist folks in the military. So there are plenty of these kinds of things now happening as government programs, using small robots to protect the lives of troops.

CHARLIE ROSE: The interesting thing -- and I may be wrong about this -- the interesting thing, it seems to me, is that this is not just what nation states are doing. A whole range of people have become very smart at operating supercomputers, or accessing the internet, or creating software, and that's creating a whole range of people who can do a whole range of things.

ANDREW MOORE: Yes. And when you look at the world this way, it's the countries or the organizations which have the trained, smart people that are going to be prospering in this situation. So I would not like to be a country that, for example, had very few mathematicians, because it's the technically trained folks who are eventually going to be able to --

CHARLIE ROSE: Does China have a lot more mathematicians than we do?

ANDREW MOORE: China has a lot more people, so it has a lot more mathematicians.

CHARLIE ROSE: There is an emphasis on math and science, then.

ANDREW MOORE: That's right. And.

CHARLIE ROSE: And computer science, especially.

ANDREW MOORE: Yes. So if I'm looking at what the natural resources are for a successful country in the 21st century, it is the number of math-trained brains. It is not the amount of oil.

CHARLIE ROSE: Could I make this even more precise? As we look at the contest among nations -- it's not a zero-sum game -- those that have the most capable, proficient, and innovative use of artificial intelligence are going to be in a commanding place?

ANDREW MOORE: Yes, absolutely. You see this even now. It is groups of smart people who start the $100 billion companies. I really do profoundly believe that the United States, which has led for the last 60 or 70 years in technology, can still lead here, and I want it to, because I want us to build the automated planet that respects human life and values.

You mentioned earlier the question of where the military goes to find these brilliant people. They go to places like Silicon Valley. The main thing which a lot of us think about now is the care and feeding of young tech geniuses. And the reason folks are going to places like Silicon Valley is the young tech geniuses want to live in cool places.

CHARLIE ROSE: That's right.

ANDREW MOORE: That's why it's so important. Pittsburgh has become cool now. We are getting a massive influx of A.I. experts.

CHARLIE ROSE: And the same way that Palo Alto is, and Austin, Texas, and a few places like that.

ANDREW MOORE: Exactly. And so the -- if you want to build up a great AI workforce, you need to have an environment where people can really explore crazy ideas.

CHARLIE ROSE: Is it in our national interest to share?

ANDREW MOORE: If you want to get your idea out there and used by billions of people, your best bet is to do a startup in the United States with viral marketing which gets the whole world using it. So in that sense, yes, everyone wants the rest of the world to share what they're doing.

There are trade secrets and military secrets at the same time, and one of the things you learn in either of these environments is, when you come across some great technical idea, you're not going to hold on to it forever. You better use it right now, because.

CHARLIE ROSE: Change is so fast.

ANDREW MOORE: . someone else is gonna hop in and come up with the idea, or the technology is gonna change and completely wipe away that advantage.

CHARLIE ROSE: You know, as a former vice president of Google, you know both the business side as well as the scientific side. Is it realistic to expect companies like Google -- the question I asked earlier -- to want to be as secretive as possible about this, because the competition is so intense?

ANDREW MOORE: It does make sense to protect new technology, and usually that is by keeping it secret rather than patenting it these days.

CHARLIE ROSE: That's the model today?

ANDREW MOORE: Yes.

CHARLIE ROSE: Keep it secret, don't patent it?

ANDREW MOORE: Correct. Here's the fascinating thing about the game at the moment. Remember, getting these A.I. experts is the most important thing, so you have to have them be happy and motivated. Telling someone to come work for you, to do something which, like, your parents are never going to see or know about, is not motivating.

So to really get the best people into your companies or your organizations, you do want to give them visibility for what they're doing. So that's why it is not the case that you have these very long-term secrets about technology anywhere.

We -- and when I say "we," I mean we as people who are employing A.I. experts -- part of what we're doing is we tell you: you're saving the world, you're changing the world, and we want you to be part of it.

CHARLIE ROSE: And you become heroic and popular.

ANDREW MOORE: Yes.

CHARLIE ROSE: If you do.

ANDREW MOORE: That's right.

CHARLIE ROSE: And if we know about it.

ANDREW MOORE: Yes.

CHARLIE ROSE: Talk to me about the IBM business model. What -- is it the best one?

ANDREW MOORE: I have real respect for what IBM is doing at the moment, whereas many of the big internet companies are going directly for bringing A.I. to consumers like you and me.

CHARLIE ROSE: Right.

ANDREW MOORE: IBM is really focusing its business at the moment on bringing A.I. to the other Fortune 500 companies, who are going to need it and who do not have that expertise themselves.

CHARLIE ROSE: But why did they choose this cognitive assistance as the route to go rather than the route that Google has chosen? Your former employer.

ANDREW MOORE: Actually, I'm going to say that Google and IBM are both going after forms of cognitive assistance. One of them, Google's, is all about going out directly to help you, the user who is using Google's applications and apps.

IBM's business model is to provide this to empower all the other companies -- car companies, hospitals, and so forth -- and put A.I. into their systems. They're both viable business models, and they're not even in direct competition.

CHARLIE ROSE: There are ethical questions involved here, clearly.

ANDREW MOORE: Yes.

CHARLIE ROSE: Who should be deciding? Is it government?

ANDREW MOORE: One thing is for sure: it's not me, and it's not engineers. We do need to make some difficult decisions. For example, we can program a car to act various ways in a collision to save lives.

Someone has to answer questions like whether the car should try to protect the person inside the car more than the person it's about to hit. That is an ethical question which the country or society, probably through the government, has to actually answer before we can put the safety into vehicles.

CHARLIE ROSE: Speaking of the government, are we at the risk of creating an AI arms race?

ANDREW MOORE: There is a technology race in computer science which has been there for decades, and it is going strong at the moment.

CHARLIE ROSE: The race has gotten more intensive.

ANDREW MOORE: Yes.

CHARLIE ROSE: There is more of a feeding frenzy, so to speak?

ANDREW MOORE: Yes.

CHARLIE ROSE: And is it all behind closed doors?

ANDREW MOORE: No. Interestingly, much of the most exciting stuff going on in A.I. still gets published. There are annual conferences, like that of the American Association for Artificial Intelligence. If you are doing something cool in A.I., the absolute best thing that happens to you is if you get a paper accepted by that conference and you get on the world stage of what's going on.

CHARLIE ROSE: Because everybody in the world that has resources and is a competitor in terms of the big ideas in America, whether private or public, they want the smart people. It's like an NFL draft?

ANDREW MOORE: It`s very much like that. In fact, so much so that at Carnegie Mellon, we are now planning on sending talent scouts out to high schools and even middle schools to find these people.

CHARLIE ROSE: You know what I like about this? We want people to care more about science. We want young people to be as interested in science as they are in becoming a rock star or an NFL star or an NBA star, so that science, because of its consequences, is a place where people know they can do well, do good, and be celebrated.

ANDREW MOORE: When you are programming a robot, it's like magic. The thing I tell middle schoolers is the closest thing to getting to go to Hogwarts is being able to do robotics and A.I.

CHARLIE ROSE: What part of this would you most like to be involved in? I mean, you're here because it's one of the centers for what's happening, both in terms of students and in terms of ideas. What excites you the most? What gets you revved up?

ANDREW MOORE: So part of the reason I moved from industry to Carnegie Mellon is that the whole game over the next few decades is won or lost according to the talents of the people building these systems. If they do it well, the year 2040 could be the best year to be alive in the history of the human race.

But if we screw it up, it could be pretty disastrous. So right now, the thing which motivates me personally is all the 12-year-olds and 15-year-olds around the United States. If they love anything to do with math or programming or computers, they have to take this seriously. They should get involved. So that's why I'm in this business. I have to get them involved.

CHARLIE ROSE: It's one pathway to unlock the future of the world.

ANDREW MOORE: Yes.

(COMMERCIAL BREAK)

END

(Copy: Content and Programming Copyright 2016 Charlie Rose Inc. ALL RIGHTS RESERVED. Copyright 2016 CQ-Roll Call, Inc. All materials herein are protected by United States copyright law and may not be reproduced, distributed, transmitted, displayed, published or broadcast without the prior written permission of CQ-Roll Call. You may not alter or remove any trademark, copyright or other notice from copies of the content.)