Go, Marvin Minsky, and the Chasm that AI Hasn’t Yet Crossed
By Gary Marcus
[Image: DeepMind vs. the European champion of Go. Courtesy of DeepMind/Google.]
An expert in AI separates fact from hype in the wake of DeepMind’s victory over humans in the most challenging game of all
In the very same week that Artificial Intelligence lost one of its greatest pioneers, Marvin Minsky, it saw major progress on the decades-old challenge of playing Go at a human level. There is much to shout about, but also a lot of hype and confusion about what we just saw. With so much at stake as people try to handicap the future of AI, and what it means for the future of employment and possibly even the human race, it’s important to understand what was, and was not yet, accomplished.
Fact: The paper published yesterday in Nature by DeepMind represents major progress in getting AI to play Go, a game that has been notoriously difficult for machines. (A second paper, published earlier in the week by Facebook, also reported considerable progress.)
Fact: DeepMind’s program beat the European champion of Go.
Confusion: The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe, and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program (a work in progress) would currently be ranked #279.
Beyond the (far from atypical) issue of hype, there is an important technical question: what is the nature of the computer system that won? By way of background, there is a long-running debate between so-called neural net models (which in their most modern form are called “deep learning”) and the classical “Good Old-Fashioned Artificial Intelligence” (GOFAI) systems, of the form that the late Marvin Minsky advocated. Minsky and others, like his fellow AI co-founder John McCarthy, grew up in the logicist tradition of Bertrand Russell and tried to couch artificial intelligence in something like the language of logic. Others, like Frank Rosenblatt in the 1950s, and present-day deep learners like Geoffrey Hinton and Facebook’s AI Director Yann LeCun, have couched their models in terms of simplified neurons that are inspired to some degree by neuroscience.
To read many of the media accounts (and even the Facebook posts of some of my colleagues), you would think that DeepMind’s victory is a resounding win for the neural network approach, and hence another demerit for Minsky, whose approach has very much lost favor.
But not so fast. If you read the fine print (or really just the abstract) of DeepMind’s Nature article, AlphaGo isn’t a pure neural net at all. It’s a hybrid, melding deep reinforcement learning with one of the foundational techniques of classical AI: tree search, invented by Minsky’s colleague Claude Shannon a few years before neural networks themselves (albeit used here in a more modern, Monte Carlo form), and part and parcel of much of his students’ early work.
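To make the shape of such a hybrid concrete, here is a minimal sketch in Python of a Monte Carlo tree search whose leaf evaluations are delegated to a stand-in “value network.” Everything in it, from the toy game interface to the random placeholder network, is my own illustrative assumption rather than DeepMind’s code; the point is only the architecture, a classical search skeleton with a learned evaluator plugged in.

```python
import math
import random

def value_network(state):
    """Stand-in for a trained deep net (hypothetical placeholder).

    In AlphaGo a learned network scores the position; here we return a
    random value in [-1, 1] so the sketch runs on its own."""
    return random.uniform(-1.0, 1.0)

class Node:
    """One node of the search tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.value_sum = 0.0

def ucb(child, parent_visits, c=1.4):
    """Upper-confidence bound: trades off exploitation vs. exploration."""
    if child.visits == 0:
        return float("inf")
    exploit = child.value_sum / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def mcts(root_state, legal_moves, apply_move, n_simulations=200):
    """Monte Carlo tree search with net-based leaf evaluation."""
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children.values(),
                       key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: add one untried move, if the position isn't terminal.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            move = random.choice(untried)
            node.children[move] = Node(apply_move(node.state, move), parent=node)
            node = node.children[move]
        # 3. Evaluation: the "neural" half of the hybrid scores the leaf.
        value = value_network(node.state)
        # 4. Backup: propagate the evaluation back to the root.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # Standard MCTS decision rule: play the most-visited move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy usage: a state is the tuple of moves played; the "game" ends after 3 moves.
best = mcts(root_state=(),
            legal_moves=lambda s: [0, 1, 2] if len(s) < 3 else [],
            apply_move=lambda s, m: s + (m,))
print("move chosen by search:", best)
```

The real system is far more sophisticated (per the Nature paper, a policy network proposes moves and fast rollouts supplement the value network), but the division of labor is the same: a learned evaluator inside an explicitly structured search.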
To anyone who knows their history of cognitive science, two people ought to be really pleased by this result: Steven Pinker, and myself. Pinker and I spent the 1990s lobbying, against enormous hostility from the field, for hybrid systems: modular systems that combined associative networks (forerunners of today’s deep learning) with classical symbolic systems. This was the central thesis of Pinker’s book Words and Rules and the work that was at the core of my 1993 dissertation. Dozens of academics bitterly contested our claims, arguing that single, undifferentiated neural networks would suffice. Two of the leading advocates of neural networks famously argued that the classical symbol-manipulating systems that Pinker and I lobbied for were not “of the essence of human computation.”
What yesterday’s Nature paper shows, if you read carefully, is that the pure deep net approach of DeepMind’s famous Atari game system does not work as well on Go as the hybrid system, exactly as Pinker and I might have anticipated.
Pinker and I were, as it happens, building on Minsky. People in the field of neural networks (nowadays better known as deep learning) often revile Minsky; old-schoolers are, after many decades, still bitter about Marvin’s 1969 book Perceptrons (co-written with Seymour Papert). As they see it, Minsky and Papert threw an unwarranted bucket of cold water on the incipient field of neural networks; the book is widely viewed as having slain the field prematurely. In computer scientist and author Pedro Domingos’ words, “if the history of machine learning were a Hollywood movie, the villain would be Marvin Minsky.”
But people often tell the story wrong. The usual story is that Marvin claimed that you could never learn anything interesting (“nonlinear”) from neural networks. What Minsky and Papert really showed is that you couldn’t use the tools then available to guarantee (that is, to prove) that neural networks with hidden layers would converge on a correct solution; they invited readers to accept or reject their conjecture. In 2016, networks have gotten deeper and deeper, but there are still very few provable guarantees about how they work with real-world data.
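The canonical example from Perceptrons makes the linearity point concrete: a single-layer perceptron provably cannot compute XOR, because no straight line separates XOR’s positive cases from its negative ones. Here is a minimal, self-contained sketch (my own illustration, using the standard perceptron learning rule, not code from the book) that you can run to watch the failure:

```python
# Minsky & Papert's canonical case: a single-layer perceptron cannot learn
# XOR, because XOR's classes are not linearly separable.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]            # the XOR truth table

w = [0.0, 0.0]                    # weights
b = 0.0                           # bias
lr = 0.1                          # learning rate

for epoch in range(1000):         # far more epochs than convergence would need
    errors = 0
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        if y != t:                # classic perceptron update rule
            w[0] += lr * (t - y) * x1
            w[1] += lr * (t - y) * x2
            b    += lr * (t - y)
            errors += 1
    if errors == 0:
        print("converged at epoch", epoch)
        break
else:
    # This branch always runs: no linear decision boundary fits XOR.
    print("never converged; final weights:", w, "bias:", b)
```

Add a hidden layer and XOR becomes learnable; the question Minsky and Papert pressed, as noted above, was what could be proven about such multilayer networks.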
Just yesterday, a few hours before the Go paper was made public, I went to a talk where a graduate student of a deep learning expert acknowledged that (a) people in that field still don’t really understand why their models work as well as they do and (b) they still can’t really guarantee much of anything if you test them in circumstances that differ significantly from those on which they were trained. To many neural network people, Minsky represents the evil empire. But almost half a century later they still haven’t fully faced up to his challenges.
What happens next with DeepMind’s Go program? In the short term, I won’t be at all surprised to see it beat the real world champion soon enough, maybe in March, as they are hoping, or maybe a few years hence. But the long-term consequences are less certain. The real question is whether the technology developed there can be taken out of the game world and into the real world. IBM has struggled to make compelling products out of Deep Blue (the chess champion) and Watson (the Jeopardy champion). Part of the reason is that the real world is fundamentally different from the game world. In chess, there are only about 30 moves you can make at any one moment, and the rules are fixed. In Jeopardy, more than 95% of the answers are titles of Wikipedia pages. In the real world, the answer to any given question can be just about anything, and nobody has yet figured out how to scale AI to open-ended worlds at human levels of sophistication and flexibility.
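A back-of-envelope calculation shows just how different even the two games are from each other, and why brute-force search, which (together with handcrafted evaluation) sufficed for chess, was never going to crack Go on its own. The branching factors and game lengths below are commonly cited approximations, not figures from the Nature paper:

```python
# Rough game-tree sizes from commonly cited approximations (illustrative
# estimates only): ~35 legal moves over ~80 plies for chess, ~250 legal
# moves over ~150 moves for Go.
chess_branching, chess_length = 35, 80
go_branching, go_length = 250, 150

chess_tree = chess_branching ** chess_length
go_tree = go_branching ** go_length

print(f"chess: ~10^{len(str(chess_tree)) - 1} paths through the game tree")
print(f"go:    ~10^{len(str(go_tree)) - 1} paths through the game tree")
# Prints roughly 10^123 for chess and 10^359 for Go: both unenumerable, the
# second absurdly so, which is why AlphaGo prunes its search with learned
# networks rather than searching exhaustively.
```

Yet even Go, for all its vastness, is a closed world with fixed rules; the open-ended real world offers no such boundary.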
As a sanity check, it’s worth peeking at a New York Times evaluation of personal assistants (like Siri and Google Now) that was published earlier this week. Each system had its own unique strengths and weaknesses. But many of them couldn’t even answer the question of what teams are playing in the Super Bowl next week.
AI in the real world is still pretty hard. The money question — which nobody yet knows the answer to — is whether passing Go will get us there sooner.