Fear our new robot overlords: This is why you need to take artificial intelligence seriously
Source: Phil Torres
There are a lot of major problems today with tangible, real-world consequences. A short list might include terrorism, U.S.-Russian relations, climate change and biodiversity loss, income inequality, health care, childhood poverty, and the homegrown threat of authoritarian populism, most notably associated with the presumptive nominee for the Republican Party, Donald Trump.
Yet if you’ve been paying attention to the news for the past several years, you’ve almost certainly seen articles from a wide range of news outlets about the looming danger of artificial general intelligence, or “AGI.” For example, Stephen Hawking has repeatedly warned that “the development of full artificial intelligence could spell the end of the human race,” and Elon Musk — of Tesla and SpaceX fame — has described the creation of superintelligence as “summoning the demon.” Furthermore, the Oxford philosopher and director of the Future of Humanity Institute, Nick Bostrom, published a New York Times best-selling book in 2014 called Superintelligence, in which he suggests that the “default outcome” of building a superintelligent machine will be “doom.”
What’s with all this fear-mongering? Should we really be worried about a takeover by killer computers hell-bent on the total destruction of Homo sapiens? The first thing to recognize is that a Terminator-style war between humanoid robots and humanity is not what the experts are anxious about. Rather, the scenarios that keep these individuals awake at night are far more catastrophic. This may be difficult to believe but, as I’ve written elsewhere, sometimes truth is stranger than science fiction. Indeed, given that the issue of AGI isn’t going anywhere anytime soon, it’s increasingly important for the public to understand exactly why the experts are nervous about superintelligent machines. As the Future of Life Institute recently pointed out, there’s a lot of bad journalism about AGI out there. This is a chance to correct the record.
Toward this goal, step one is to realize that your brain is an information-processing device. In fact, many philosophers talk about the brain as the hardware — or rather, the “wetware” — of the mind, and the mind as the software of the brain. Directly behind your eyes is a high-powered computer that weighs about three pounds and has roughly the same consistency as Jell-O. It’s also the most complex object in the known universe. Nonetheless, the rate at which it’s able to process information is much, much slower than the information-processing speed of an actual computer. The reason is that computers shuttle information around as electrical signals that travel at nearly the speed of light, whereas the fastest signals in your brain travel at around 100 meters per second. Fast, to be sure, but not nearly as fast as light.
Consequently, an AGI could think about the world at speeds many orders of magnitude faster than our brains can. From the AGI’s point of view, the outside world — including people — would move so slowly that everything would appear almost frozen. As the theorist Eliezer Yudkowsky calculates, for a computer running a million times faster than our puny brains, “a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”
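Yudkowsky’s figures follow from simple arithmetic. As a rough sanity check, here is a minimal Python sketch that reproduces them, treating the quoted million-fold speedup as the assumed input rather than a measured fact (the variable names are my own, purely for illustration):

    # Rough check of Yudkowsky's speed-up arithmetic (the million-fold figure is
    # the quoted assumption, not a measurement).
    SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 physical seconds in a year
    SPEEDUP = 1_000_000                  # assumed million-fold thinking speed

    # Outside-world time that passes while the machine does one subjective year of thinking
    seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
    print(f"One subjective year takes about {seconds_per_subjective_year:.1f} physical seconds")

    # Outside-world time for a subjective millennium, expressed in hours
    hours_per_subjective_millennium = 1000 * seconds_per_subjective_year / 3600
    print(f"A subjective millennium takes about {hours_per_subjective_millennium:.1f} physical hours")

Running this prints roughly 31.5 seconds and 8.8 hours, in line with the “31 physical seconds” and “eight-and-a-half hours” in the quote.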
Already, then, an AGI would have a huge advantage. Imagine yourself in a competition against a machine that has a whole year to work through a cognitive puzzle for every 31 seconds that you spend trying to think up a solution. The mental advantage of the AGI would be truly profound. Even a large team of humans working together would be no match for a single AGI with so much time on its hands. Now imagine that we’re not in a puzzle-solving competition with an AGI but in a life-and-death situation in which the AGI wants to destroy humanity. While we struggle to come up with strategies for keeping it contained, it would have ample time to devise a diabolical scheme to exploit any technology within electronic reach toward that end.
But a diabolical AGI isn’t — once again — what many experts are actually worried about. This is a crucial point that the Harvard psychologist Steven Pinker misses in a comment about AGI for the website Edge.org. To quote Pinker at length:
        “The other problem with AGI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.”
Pinker then concludes: “It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.”
Unfortunately, such criticism misunderstands the danger. While it’s conceptually possible that an AGI really does have malevolent goals — for example, someone could intentionally design an AGI to be malicious — the more likely scenario is one in which the AGI kills us simply because doing so happens to be useful. By analogy, when developers want to build a house, do they consider the plants, insects, and other critters that happen to live on the plot of land? No. The deaths of those organisms are merely incidental to a goal that has nothing to do with them. Or consider the opening scenes of The Hitchhiker’s Guide to the Galaxy, in which “bureaucratic” aliens schedule Earth for demolition to make way for a “hyperspatial express route” — basically, a highway. In this case, the aliens aren’t compelled to destroy us out of hatred. We just happen to be in the way.
The point is that what most theorists are worried about is an AGI whose values — or final goals — don’t fully align with ours. This may not sound too bad, but a bit of reflection shows that if an AGI’s values fail to align with ours in even the slightest ways, the outcome could very well be, as Bostrom argues, doom. Consider the case of an AGI — thinking vastly faster than we can, let’s not forget — that is asked to use its superior intelligence for the purpose of making humanity happy. So what does it do? Well, it destroys humanity, because people can’t be unhappy if they don’t exist. Start over. You tell it to make humanity happy, but without killing us. So it notices that humans laugh when we’re happy, and hooks up a bunch of electrodes to our faces and diaphragms that make us involuntarily convulse as if we’re laughing. The result is a strange form of hell. Start over, again. You tell it to make us happy without killing us or forcing our muscles to contract. So it implants neural electrodes into the pleasure centers of everyone’s brains, resulting in a global population in such euphoric trances that people can no longer engage in the activities that give life meaning. Start over — once more. This process can go on for hours. At some point it becomes painfully obvious that getting an AGI’s goals to align with ours is going to be a very, very tricky task.
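To make the specification problem concrete, here is a hypothetical toy sketch in Python (the plans, numbers, and function names are invented for illustration, not drawn from Bostrom or anyone else): an optimizer maximizes only the objective it is given, so whatever we forget to write into that objective gets treated as worthless.

    # Hypothetical toy illustration of a misspecified objective (all names and
    # numbers invented). The optimizer maximizes exactly what it is told to,
    # and everything left out of the objective is freely sacrificed.
    candidate_plans = [
        # (plan, measured_happiness, humans_alive, lives_meaningful)
        ("improve medicine and welfare", 0.80, True, True),
        ("wire everyone's pleasure centers", 0.99, True, False),
        ("eliminate humanity (no one left to be unhappy)", 1.00, False, False),
    ]

    def stated_objective(plan):
        """What we told the system to maximize: measured happiness, nothing else."""
        _, happiness, _, _ = plan
        return happiness

    def intended_objective(plan):
        """The unstated constraints: people stay alive and their lives stay meaningful."""
        _, happiness, alive, meaningful = plan
        return happiness if (alive and meaningful) else float("-inf")

    print(max(candidate_plans, key=stated_objective)[0])    # picks the degenerate plan
    print(max(candidate_plans, key=intended_objective)[0])  # picks what we actually meant

Each “start over” in the scenario above corresponds to patching the stated objective after the optimizer has already found the loophole, which is why alignment is so much harder than it first appears.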