Let’s not succumb to a moral panic over artificial intelligence
By Larry Magid
We have a long history of “moral panics”: things we fear, whether or not we should. In most cases these fears aren’t entirely irrational, but they rest on exaggerations, or on predictions that could, but probably won’t, come true, or that simply aren’t nearly as horrific as they may first appear.
Many of us remember the Y2K scare of 1999, when we were told that the power grid, ATMs and our transportation systems could come to a screeching halt at midnight on Jan. 1, 2000, because computers weren’t programmed to recognize the new century. And, yes, there were a handful of problems, but the world didn’t come to an end. There was a panic that our personal privacy was over in 1888, when Kodak introduced the first portable camera. And there are many other examples — from killer bees to reefer madness — of things that could be somewhat dangerous but are hardly as devastating as some feared.
Now there is a panic about artificial intelligence, with the worry that computers will cease being our servants and somehow morph into our overlords. This fear has inspired some great fiction, with movies like “The Terminator” and “I, Robot” as well as the all-powerful HAL from “2001: A Space Odyssey.”
And, while these movies remain in the realm of fiction, they reflect a genuine concern about machines running amok, one shared not only by much of the public but also by some well-known tech experts.
Tesla and SpaceX founder Elon Musk has been one of the most vocal critics. Speaking at last month’s National Governors Association conference, Musk warned that artificial intelligence “is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” referring to AI as “the scariest problem.”
Musk called upon the governors to consider government regulation and oversight of AI.
“I keep sounding the alarm bell,” he said, “but until people see robots going down the street killing people, they don’t know how to react.” He called AI “the rare case where I think we need to be proactive in regulation instead of reactive.”
Facebook CEO Mark Zuckerberg responded in a Facebook Live segment from his backyard in Palo Alto.
“I just think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it.” He added, “It’s really negative and in some ways I actually think it is pretty irresponsible.”
Zuckerberg and, ironically, Musk both have a lot invested in AI. Anyone who’s ever driven a Tesla in Autopilot mode knows about the car’s powerful computers, which are capable of making instant life-and-death decisions while automatically steering and changing lanes on the highway. Musk has said that all Teslas now being built have the hardware for full autonomous driving, which, he promises, will be unleashed when the self-driving software is ready and approved by government regulators.
Zuckerberg’s Facebook runs its own Facebook AI Research (FAIR) lab, which has recently come under public scrutiny over an experiment. That experiment drew attention over the past couple of weeks, with journalists and pundits misreporting what happened and why one part of it was shut down. And all the hoopla about bots gone awry has distracted attention from some of the more interesting findings of the research.
In a nutshell, the purpose of the research was to find out how well computers could negotiate with each other and with people.
The panic was over the fact that the researchers stopped part of the experiment because the bots, or AI agents, wound up creating their own “language” that humans couldn’t understand. But that turned out to be partially fake news.
According to Facebook AI researcher Dhruv Batra, machines creating their own language “is a well-established sub-field of AI, with publications dating back decades.” In a Facebook post, he wrote that “agents in environments attempting to solve a task will often find intuitive ways to maximize reward” and stressed that “analyzing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’. If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine.”
In a published article, “Deal or no deal? Training AI bots to negotiate,” Facebook AI researchers wrote that “building machines that can hold meaningful conversations with people is challenging because it requires a bot to combine its understanding of the conversation with its knowledge of the world, and then produce a new sentence that helps it achieve its goals.”
In other words, the purpose of the experiment was to build bots that could talk with humans as well as with each other. So when the researchers realized the bots were speaking in ways humans couldn’t understand, they simply reprogrammed the bots to stick to English.
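To make that fix concrete, here is a minimal sketch in Python of the kind of reward shaping involved. It is purely illustrative, not FAIR’s actual code, and the names (total_reward, english_likelihood, language_weight) are my own; the point is simply that a negotiation reward by itself doesn’t care whether a message reads as English, so adding a weighted “English-likeness” term keeps the bots human-readable.

```python
# Illustrative only -- not Facebook's actual code. The deal reward by
# itself says nothing about whether a message looks like English, so
# agents can drift into shorthand. Adding a weighted "English-likeness"
# term to the reward pulls them back.

def total_reward(deal_value, message, english_likelihood, language_weight=0.5):
    """Combine the value of the negotiated deal with how English-like
    the bot's message is (all names here are hypothetical)."""
    task_reward = deal_value                        # points won in the deal itself
    language_reward = english_likelihood(message)   # e.g., score from an English language model
    return task_reward + language_weight * language_reward

# With language_weight = 0 nothing discourages a private shorthand;
# raising it nudges the agents back toward human-readable English.
```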
But what’s most interesting about this study is the finding that the bots were actually better than humans at sticking with a negotiation until they reached an agreement.
As the researchers put it, “While people can sometimes walk away with no deal, the model in this experiment negotiates until it achieves a successful outcome.”
That’s because the bots were heavily rewarded for coming to an agreement, even if what they agreed upon was less than ideal. But, of course, that’s also sometimes true with human negotiations.
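Here is a small, hypothetical sketch (mine, not from the paper) of why that happens: when the reward for simply reaching an agreement is large enough, accepting even a weak offer scores better than walking away.

```python
# A toy example (not from the paper) of why heavily rewarding agreement
# can make a negotiator take a mediocre deal instead of walking away.

AGREEMENT_BONUS = 10   # big reward just for closing any deal
WALK_AWAY_VALUE = 0    # ending with no deal earns nothing

def expected_reward(offer_value, accept):
    """Reward for accepting (offer plus bonus) versus walking away."""
    return offer_value + AGREEMENT_BONUS if accept else WALK_AWAY_VALUE

offer = 2  # a fairly poor offer
choice = max(("accept", "walk away"),
             key=lambda c: expected_reward(offer, c == "accept"))
print(choice)  # prints "accept": the agreement bonus outweighs the weak offer
```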
So, if bots are that good at negotiating, maybe they can also help us head off future panics by analyzing risks and coming up with reasonable predictions and appropriate precautions. In the meantime, I’m all for having AI researchers work with ethicists and other experts to make sure that what they create benefits humankind without the risk of devastating unintended consequences.