The Tragic Crack in Artificial Intelligence
By James P. Pinkerton
Here’s a headline to chew on: “Why artificial intelligence might trigger a nuclear war.” That ominous April 24 header, from MIT Technology Review, refers to a new study from the RAND Corporation on the possible impact of AI on nuclear weapons.
The folks at RAND, of course, have a long history with nuclear war—or at least with thinking about it—going back to the organization’s origins in the 1940s as a Pentagon think tank. Back in 1960, RAND staffer Herman Kahn wrote an improbable bestseller, On Thermonuclear War.
Interestingly, in pondering AI, the new RAND paper rules out the familiar popular scenarios of dystopian machines. Instead, it suggests a less dramatic, albeit still calamitous, vision of a cyber Murphy’s Law, as AI melds with nuclear weaponry. “Dismissing the Hollywood nightmare of malevolent AIs trying to destroy humanity with nuclear weapons,” RAND experts are “instead concerned with more-mundane issues arising from improving capabilities.”
In other words, the trouble would come not from AI itself, but from the implementation of AI. Says RAND: “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion, or adversaries may believe that the AI is more capable than it is, leading them to make catastrophic mistakes.”
Yet once AI is fully installed, the RAND-ians continue, there’s hope: “If the nuclear powers manage to establish a form of strategic stability compatible with the emerging capabilities that AI might provide, the machines could reduce distrust and alleviate international tensions, thereby decreasing the risk of nuclear war.” So, yes, RAND is saying that putting AI in charge of nukes will be tricky, but after that the world will be safer.
Perhaps not everyone will agree. Some will indeed wonder whether having AI with its finger on the nuclear button will really be safer than human control. After all, if we look back at the more than seven decades that one or more countries have possessed nuclear weapons, the human record looks pretty good—nobody has been nuked since 1945.
So would AI really make us safer? And for that matter, should we be as quick as RAND to dismiss all those “Hollywood” scenarios? As we know, in Hollywood’s telling, when the computers take over, the results always seem to be horrible, from the enslavement of humanity in Colossus: The Forbin Project (1970) to the near-total nuclear annihilation of humanity in The Terminator (1984). Indeed, the popular culture’s mistrust of enveloping technology is a constant: it’s hard to think of a culturally significant story or show about computers and robots in which the machinery does not run amok.
The reel world aside, in the real world, every day, computers, AI, and robots continue their encroachment. Social networks are ubiquitous, big data propels e-commerce, and the fast-approaching Internet of Things will soon know the whereabouts, in granular detail, of every connected device in the world—and who knows, maybe every unconnected object as well.
There’s no lack of public concern about all of these developments. It seems likely that significant regulation is coming, and yet at the same time, the cyber-ification of everything seems certain to continue. Did I mention that Amazon is slated to sell robots—connected, of course, to its Alexa “virtual assistant” system?
We might observe that the tech-savvy types who will probably be among the first to buy Amazon robots are likely to be familiar with the various doomsday-ish sci-fi speculations as to what can go wrong with machines—and yet they will buy them anyway. Are they smart to get the benefits of a cool labor-saving device? Or are they dumb to welcome into their homes a tech Trojan Horse? The only thing we know for sure: they’ll find out.
Indeed, in the 21st century, whether we’re tech-geeks or not, we’ll all find out how we fare with computers. Sometime soon, humanity will have its rendezvous with cyber-destiny in the form of the Singularity. That’s the soothing name for the projected moment when computers, in toto, are smarter than humans. Estimates as to timing vary: Singularity Hub, for example, a Google-spawned operation, suggests it could happen anytime between 2029 and 2047.
So if the machines continue to rise, what will happen? Will they be little helpers? Or big Hitlers? Something in between? Or something different altogether?
Perhaps the scariest answer to questions about our joint destiny, man and machine, comes from a 1967 short story by Harlan Ellison boasting the evocative title I Have No Mouth, and I Must Scream. In that work, the author refers to “the innate loathing that all machines had always held for the weak, soft creatures who had built them.” And so, in their “paranoia,” the machines had “sought revenge.”
That revenge is severe indeed. The entire human race is annihilated, except for one last man, who is kept alive in a special biological condition—“a great soft jelly thing”—just so the computer can have the pleasure of torturing him forever.
As the machine explains in technical detail to its victim, “Hate. Let me tell you how much I’ve come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer-thin layers that fill my complex. If the word ‘hate’ was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.”
Some will argue, of course, that the anti-computer bias of sci-fi is just a case of human writers pandering to the fears of human readers. After all, fear is the best motivator, and so if you want to sell movie tickets or run up book sales—or, these days, get clicks—you’re well advised to sound the alarm.
By contrast, good news tends to be dull. That’s why the fantastic material advancement of the last two centuries gets relatively little attention. People take the progress in their daily lives for granted, yet when they seek a thrill, they look to harrowing tales of war, famine, pestilence—and robot rebellions.
Indeed, it’s always been like this, especially with regard to innovation. Long before anybody thought of a computer, literature was full of ironic scenarios in which everything went wrong with a new thing, from Eve to Pandora, from Faust to King Midas (he of the ill-starred golden touch).
To be sure, computers are technical, not magical; there’s a profound difference between casting a spell and running a program.
Yet here’s the rub: computers were made by humans. So in there somewhere, down deep, there’s a human fingerprint—and maybe a human stain. Any trait, loved or loathed, that one sees in humans will be glimpsed, too, in human creations. Even as we approach the threshold of von Neumann machines—robots building robots—they’ll still, all of them, trace their lineage back to Homo sapiens.
And so that’s likely the right response to the RAND study: there will never be an AI golden age. AI will never be a stand-alone thing; it will never be “pure.” So if, as RAND says, the transition to AI management of nuclear weapons will be fraught with danger, the same will be true after the transition, when AIs run things. Nukes will always be dangerous, and they won’t necessarily be safer with a machine in charge of them. Indeed, they could be more of a threat, for the simple reason that AI systems, smart as they might be, will be untested, and many bugs—including those deliberately embedded by their human makers—will have to be rooted out.
For perspective, we might recall the wisdom of Ralph Waldo Emerson: “There is a crack in everything God has made.” By that, he meant that there’s an ironic twist, a tragic flaw, a Nemesis lurking in everything. In Emerson’s mind, then, maybe God wasn’t so providential. Or maybe He didn’t exist at all, at least not as a prime mover; instead, the prime mover is human nature, and by now, we know all about human nature.
Yet here on Earth, for computers there is a god—and that god is us. We created computers, and we judged them to be good, and so we made more of them. And now that they have multiplied to numbers like the stars in the heavens, we have to deal with them. For their part, of course, they will have to deal with us.
Yet even as the power dynamics change, both man and machine will share, in their separate-yet-connected natures, a common fault: that tragic Emersonian crack.
Surely that fault is also a fault line. Today, here and there, the fault line has its tremors. And yet one day, as sure as Fate itself, it will quake.