The AI Paradox: Bringing Common-sense Understanding to Machines

By Martin Ford

Artificial intelligence is rapidly transitioning from the realm of science fiction to the reality of our daily lives. Our devices understand what we say, speak to us, and translate between languages with ever-increasing fluency. AI-powered visual recognition algorithms are outperforming people on certain tasks and beginning to find applications in everything from self-driving cars to systems that diagnose cancer in medical images. In the conversation that follows, excerpted from Martin Ford’s exchange with Dr. Oren Etzioni (one of 23 in-depth interviews featured in his new book, Architects of Intelligence), we gain some remarkable insights into one of the field’s greatest challenges: bringing common-sense understanding to machines.

***

MARTIN FORD: Project Mosaic sounds very interesting, could you tell me about that and the other projects that you’re working on at the Allen Institute?

OREN ETZIONI: Project Mosaic is focused on endowing computers with common sense. A lot of the AI systems that humans have built, to date, are very good at narrow tasks. For example, humans have built AI systems that can play Go very well, but the room may be on fire, and the AI won’t notice. These AI systems completely lack common sense, and that’s something that we’re trying to address with Mosaic.

Our overarching mission at the Allen Institute for Artificial Intelligence is AI for the common good. We’re investigating how you can use artificial intelligence to make the world a better place. Some of that is through basic research, while the rest has more of an engineering flavor.

A great example of this is a project called Semantic Scholar, where we’re looking at the problem of scientific search and scientific hypothesis generation. Scientists are inundated with more and more publications, and just like all of us when we’re experiencing information overload, they really need help cutting through that clutter; that’s what Semantic Scholar does. It uses machine learning and natural language processing, along with various other AI techniques, to help scientists figure out what they want to read and to locate results within papers.

MARTIN FORD: Does Mosaic involve symbolic logic? I know there was an older project called Cyc that was a very labor-intensive process, where people would try to write down all the logical rules, such as how objects relate to one another, and I think it became kind of unwieldy. Is that the kind of thing you’re doing with Mosaic?

OREN ETZIONI: The problem with the Cyc project is that, over 35 years in, it’s really been a struggle for them, for exactly the reasons you said. But in our case, we’re hoping to leverage more modern AI techniques—crowdsourcing, natural language processing, machine learning, and machine vision—in order to acquire knowledge in a different way.

With Mosaic, we’re also starting with a very different point of view. Cyc started, if you will, inside out, where they said, “OK. We’re going to build this repository of common-sense knowledge and do logical reasoning on top of it.” Now, what we said in response is, “We’re going to start by defining a benchmark, where we assess the common-sense abilities of any program.” That benchmark then allows us to measure how much common sense a program has, and once we’ve defined that benchmark (which is not a trivial undertaking), we’ll then build it and be able to measure our progress empirically and experimentally, which is something that Cyc was not able to do.

MARTIN FORD: So, you’re planning to create some kind of objective test that can be used for common sense.

OREN ETZIONI: Exactly! Just the way the Turing test was meant to be a test for artificial intelligence or IQ, we’re going to have a test for common sense for AI.
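
To make the idea concrete, here is a minimal sketch, in Python, of what a multiple-choice common-sense benchmark harness could look like. The questions, the `model` interface, and every name below are hypothetical illustrations, not AI2’s actual benchmark.

```python
# Hypothetical sketch of a common-sense benchmark harness.
# The questions and the model interface are illustrative only,
# not AI2's actual benchmark.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Question:
    prompt: str
    choices: List[str]
    answer: int  # index of the correct choice

BENCHMARK = [
    Question("Would an elephant fit through a doorway?",
             ["yes", "no"], answer=1),
    Question("A plant is moved nearer the window. Its leaves grow:",
             ["faster", "slower", "at the same rate"], answer=0),
]

def evaluate(model: Callable[[str, List[str]], int]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(model(q.prompt, q.choices) == q.answer for q in BENCHMARK)
    return correct / len(BENCHMARK)

# A trivial baseline that always picks the first choice scores poorly,
# which is exactly the point of an empirical benchmark.
print(evaluate(lambda prompt, choices: 0))  # 0.5 on this tiny set
```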

MARTIN FORD: You’ve also worked on systems that attempt to pass college examinations in biology or other subjects. Is that one of the things you’re continuing to focus on?

OREN ETZIONI: One of Paul Allen’s visionary and motivating examples, which he had investigated in various ways even prior to the Allen Institute for AI, was the idea of a program that could read a chapter in a textbook and then answer the questions in the back of the book. So, we formulated a related problem by saying, “Let’s take standardized tests, and see to what extent we can build programs that score well on them.” That’s been part of our Aristo project in the context of science, and part of our Euclid project in the context of math problems.

For us it is very natural to start working on a problem by defining a benchmark task, and then continually improving performance on it. So, we’ve done that in these different areas.

MARTIN FORD: How is that progressing? Have you had successes there?

OREN ETZIONI: I would say the results have been mixed, to be frank. We’re state of the art on both the science and math tests. In the case of science, we ran a Kaggle competition where we released the questions, and several thousand teams from all over the world joined. With this, we wanted to see whether we were missing anything, and we found that in fact our technology did quite a bit better than anything else out there, at least among those who participated.

In the sense of being state of the art, having that be a focus for research, and publishing a series of papers and datasets, I think it’s been very positive. What’s negative is that our ability on these tests is still quite limited. We find that on the full test we’re getting something like a D, not a very stellar grade. This is because these problems are quite hard, and often they also involve vision and natural language. But we also realized that a key problem blocking us was the lack of common sense. So, that’s one of the things that led us to Project Mosaic.

What’s really interesting here is that there’s something I like to call the AI paradox, where things that are really hard for people, like playing World Championship-level Go, are quite easy for machines. On the other hand, there are things that are easy for a person to do; for example, take a question like, “Would an elephant fit through a doorway?” Most people can answer that question almost instantaneously, while machines will struggle. What’s easy for one is hard for the other, and vice versa. That is what I call the AI paradox.

Now, the standardized test writers want to take a particular concept, like photosynthesis or gravity, and have the student apply that concept in a particular context, so that they demonstrate their understanding. It turns out that representing something like photosynthesis to the machine, at a 6th-grade level, is really quite easy. But where the machine struggles is when it’s time to apply the concept in a particular situation that requires language understanding and common-sense reasoning.

MARTIN FORD: So, you think your work on Mosaic could accelerate progress in other areas, by providing a foundation of common-sense understanding?

OREN ETZIONI: Yes. I mean, a typical question is, “If you have a plant in a dark room and you move it nearer the window, will the plant’s leaves grow faster, slower, or at the same rate?” A person can look at that question and understand that if you move a plant nearer to the window then there’s more light, and that more light means the photosynthesis proceeds faster, and so the leaves are likely to grow faster. But it turns out that the computer really struggles with this—because the AI doesn’t necessarily understand what you mean when you say, “What happens when you move a plant nearer the window?”
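
As a rough illustration of the chain of inference a person performs here, consider this minimal sketch of forward chaining over hand-written qualitative rules. The facts, rules, and strings are invented for illustration; the genuinely hard, unsolved step is getting from the English question to a fact like "more light reaches the plant" in the first place.

```python
# Minimal forward-chaining sketch over hand-written qualitative rules.
# The facts and rules are invented for illustration; the hard part,
# mapping "move the plant nearer the window" to "more light reaches
# the plant", is exactly where machines struggle.
RULES = [
    ({"plant moved nearer window"}, "more light reaches the plant"),
    ({"more light reaches the plant"}, "photosynthesis proceeds faster"),
    ({"photosynthesis proceeds faster"}, "leaves grow faster"),
]

def forward_chain(facts: set) -> set:
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("leaves grow faster" in forward_chain({"plant moved nearer window"}))  # True
```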

These are some examples that indicate what led us to Project Mosaic, and what some of our struggles have been with things like Aristo and Euclid over the years.

MARTIN FORD: There’s enormous attention currently being given to deep learning and to neural networks. How do you feel about that? Do you think it’s overhyped? Is deep learning likely to be the primary path forward in AI, or just one part of the story?

OREN ETZIONI: I guess my answer would be all of the above. There have been some very impressive achievements with deep learning, and we see that in machine translation, speech recognition, object detection, and facial recognition. When you have a lot of labeled data and a lot of computing power, these models are great.

But at the same time, I do think that deep learning is overhyped because some people say that it’s really putting us on a clear path towards artificial intelligence, possibly general artificial intelligence, and maybe even superintelligence. And there’s this sense that that’s all just around the corner. It reminds me of the metaphor of a kid who climbs up to the top of the tree and points at the moon, saying, “I’m on my way to the moon.”

I think that in fact we really have a long way to go, and there are many unsolved problems. In that sense, deep learning is very much overhyped. I think the reality is that deep learning and neural networks are particularly nice tools in our toolbox, but they still leave a number of problems, like reasoning, background knowledge, and common sense, largely unsolved.

MARTIN FORD: I do get the sense from talking to some other people, that they have great faith in machine learning as the way forward. The idea seems to be that if we just have enough data, and we get better at learning—especially in areas like unsupervised learning—then common-sense reasoning will emerge organically. It sounds like you would not agree with that.

OREN ETZIONI: ‘Emergent intelligence’ is actually a term that the cognitive scientist Douglas Hofstadter talked about back in the day. Nowadays people talk about it in various contexts, with consciousness, and with common sense, but that’s really not what we’ve seen. We do find that people, including myself, have all kinds of speculations about the future, but as a scientist, I like to base my conclusions on the specific data that we’ve seen. And what we’ve seen is people using deep learning to build high-capacity statistical models. ‘High capacity’ is just jargon meaning that the model keeps getting better and better the more data you throw at it.

Statistical models at their core are based on matrices of numbers being multiplied, and added, and subtracted, and so on. They are a long way from something where you can see common sense or consciousness emerging. My feeling is that there’s no data to support these claims and if such data appears, I’ll be very excited, but I haven’t seen it yet.
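
To unpack “matrices of numbers being multiplied and added”: the forward pass of a simple feed-forward network really is just repeated matrix arithmetic, as in this minimal NumPy sketch (the layer sizes are arbitrary). Making those matrices bigger is essentially what “high capacity” means: more parameters to fit, and therefore more benefit from more data.

```python
# A feed-forward network's core computation: multiply by a weight
# matrix, add a bias vector, apply a nonlinearity. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # input vector
W1 = rng.normal(size=(8, 4))      # first-layer weights
b1 = np.zeros(8)                  # first-layer bias
W2 = rng.normal(size=(3, 8))      # second-layer weights
b2 = np.zeros(3)                  # second-layer bias

h = np.maximum(0.0, W1 @ x + b1)  # hidden layer: ReLU(W1 x + b1)
y = W2 @ h + b2                   # output layer: W2 h + b2
print(y.shape)                    # (3,)
```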

MARTIN FORD: Any concluding thoughts?

OREN ETZIONI: Well, there’s one other point that I wanted to make that I think people often miss in the AI discussion, and that’s the distinction between intelligence and autonomy. We naturally think that intelligence and autonomy go hand in hand, but you can have a highly intelligent system with essentially no autonomy. A calculator is a trivial example, but something like AlphaGo, which plays brilliant Go but won’t play another game until somebody pushes a button, is high intelligence and low autonomy.

You can also have high autonomy and low intelligence. My favorite kind of tongue-in-cheek example is a bunch of teenagers drinking on a Saturday night: that’s high autonomy but low intelligence. But a real-world example that we’ve all experienced would be a computer virus that can have low intelligence but quite a strong ability to bounce around computer networks. My point is that we should understand that the systems that we’re building have these two dimensions to them, intelligence and autonomy, and that it’s often the autonomy that is the scary part.

I want to emphasize that a lot of our worries about AI are really worries about autonomy, and that autonomy is something we can choose, as a society, to mete out.

I like to think of ‘AI’ as standing for ‘augmented intelligence’, as with systems like Semantic Scholar and self-driving cars. One of the reasons that I am an AI optimist, and feel so passionate about it, and the reason that I’ve dedicated my entire career to AI since high school, is that I see tremendous potential to do good with AI.

This article is based on excerpts from Martin Ford’s interview with Oren Etzioni, featured in Ford’s book, Architects of Intelligence. This ambitious compendium of thought examines compelling questions such as: How will AI evolve, and what major innovations are on the horizon? What will its impact be on the job market, economy, and society? What is the path toward human-level machine intelligence? What should we be concerned about as artificial intelligence advances? To these ends, Architects of Intelligence contains a series of in-depth, one-on-one interviews in which New York Times bestselling author Martin Ford uncovers the truth behind these questions from some of the brightest minds in the artificial intelligence community. The wide-ranging conversations include twenty-three of the world’s foremost researchers and entrepreneurs working in AI and robotics: Demis Hassabis (DeepMind), Ray Kurzweil (Google), Geoffrey Hinton (Univ. of Toronto and Google), Rodney Brooks (Rethink Robotics), Yann LeCun (Facebook), Fei-Fei Li (Stanford and Google), Yoshua Bengio (Univ. of Montreal), Andrew Ng (AI Fund), Daphne Koller (Stanford), Stuart Russell (UC Berkeley), Nick Bostrom (Univ. of Oxford), Barbara Grosz (Harvard), David Ferrucci (Elemental Cognition), James Manyika (McKinsey), Judea Pearl (UCLA), Josh Tenenbaum (MIT), Rana el Kaliouby (Affectiva), Daniela Rus (MIT), Jeff Dean (Google), Cynthia Breazeal (MIT), Oren Etzioni (Allen Institute for AI), Gary Marcus (NYU), and Bryan Johnson (Kernel).

About the Author

Martin Ford is a prominent futurist, and author of Financial Times Business Book of the Year, Rise of the Robots. He speaks at conferences and companies around the world on what AI and automation might mean for the future.

Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence (abbreviated as AI2), an independent, non-profit research organization established by Microsoft co-founder Paul Allen in 2014. AI2, located in Seattle, employs over 80 researchers and engineers with the mission of “conducting high-impact research and engineering in the field of artificial intelligence, all for the common good.” Oren received a bachelor’s degree in computer science from Harvard in 1986 and went on to obtain a PhD from Carnegie Mellon University in 1991. Prior to joining AI2, Oren was a professor at the University of Washington, where he co-authored over 100 technical papers. Oren is a fellow of the Association for the Advancement of Artificial Intelligence, and is also a successful serial entrepreneur, having founded or co-founded a number of technology startups that were acquired by larger firms such as eBay and Microsoft. Oren helped to pioneer meta-search (1994), online comparison shopping (1996), machine reading (2006), Open Information Extraction (2007), and semantic search of the academic literature (2015).