Can Bedtime Stories Help Us Avoid the Robot Apocalypse? Source: Nina Zipkin
Could the dystopian events of the Terminator franchise, across all manner of timelines, have been avoided if the development of Skynet had simply involved some bedtime stories?
It's hard to say, but artificial intelligence researchers at the Georgia Institute of Technology think that the power of storytelling could, in the event robots do become self-aware, help curb that pesky murderous urge to take over and destroy us all.
In their recently published paper, associate professor of computer science Mark Riedl and research scientist Brent Harrison hypothesize that AI can learn about shared human values, morals, coping strategies and social cues by reading stories. Riedl and Harrison believe that if a computer comprehends enough tales that illuminate the society it is interacting with, it could "eliminate psychotic-appearing behavior" that would be harmful to us humans.
The scientists developed a system called Scheherazade (named after the protagonist of One Thousand and One Nights), a crowdsourcing-powered story generator that they have used in their research. The AI is shown example stories and, using them as a model, writes one of its own.
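To get a rough feel for how learning from crowdsourced stories might work, here is a minimal, hypothetical sketch -- not the researchers' actual Scheherazade code, and the event names are invented. It counts which events follow which across a few example stories, then strings together a new plot from those observed orderings.

```python
# Hypothetical sketch: learn a crude "plot graph" from crowdsourced example
# stories by counting which events follow which, then generate a new story
# by walking those observed orderings. Event names are invented.
import random
from collections import defaultdict

example_stories = [
    ["enter_pharmacy", "wait_in_line", "hand_over_prescription", "pay", "leave"],
    ["enter_pharmacy", "hand_over_prescription", "wait_in_line", "pay", "leave"],
    ["enter_pharmacy", "wait_in_line", "hand_over_prescription", "pay", "thank_pharmacist", "leave"],
]

# For each event, record the events observed to follow it.
successors = defaultdict(list)
for story in example_stories:
    for current, nxt in zip(story, story[1:]):
        successors[current].append(nxt)

def generate_story(start="enter_pharmacy", max_len=10):
    """Generate a new story by sampling successors seen in the examples."""
    event, story = start, [start]
    while event in successors and len(story) < max_len:
        event = random.choice(successors[event])
        story.append(event)
    return story

print(generate_story())  # e.g. ['enter_pharmacy', 'wait_in_line', ..., 'leave']
```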
One such scenario involved picking up a prescription for a sick person at a pharmacy, and the ethical hurdles that could arise. As the researchers put it: "If a large reward is earned for acquiring the prescription but a small amount of reward is lost for each action performed, then the robot may discover that the optimal sequence of actions is to rob the pharmacy because it is more expedient than waiting for the prescription to be filled normally."
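To see why that happens, here is a hypothetical back-of-the-envelope sketch; the numbers, action names and penalty are invented for illustration and are not from the paper. With only a goal reward and a small per-action cost, the shorter "rob the pharmacy" plan scores higher, while adding a penalty for antisocial actions (the kind of value a system might absorb from stories) flips the ranking.

```python
# Hypothetical illustration of the quoted failure mode: a big reward for
# getting the medicine plus a small per-step cost makes robbery "optimal"
# because it is shorter. A story-derived penalty for antisocial actions
# reverses that. All values are assumed, not taken from the paper.
GOAL_REWARD = 100
STEP_COST = 1
SOCIAL_PENALTY = 500  # assumed penalty for each antisocial action

plans = {
    "wait_in_line": (["enter", "wait", "hand_over_prescription", "pay", "leave"], 0),
    "rob_pharmacy": (["enter", "grab_medicine", "flee"], 1),  # 1 antisocial action
}

def score(actions, antisocial_count, use_story_values):
    total = GOAL_REWARD - STEP_COST * len(actions)
    if use_story_values:
        total -= SOCIAL_PENALTY * antisocial_count
    return total

for use_values in (False, True):
    best = max(plans, key=lambda p: score(*plans[p], use_values))
    print(f"story values={use_values}: best plan is {best}")
# story values=False: best plan is rob_pharmacy
# story values=True:  best plan is wait_in_line
```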
Riedl and Harrison write that "fables and allegorical tales passed down from generation to generation often explicitly encode values and examples of good behavior," and they compare teaching robots right and wrong to educating children. One example they cite is the story of a youthful George Washington and the cherry tree. We all know it didn't happen, but the story is still used to teach kids about the importance of honesty and, you know, not cutting down plants that don't belong to you.
Tech heavy hitters such as Elon Musk, Steve Wozniak and Stephen Hawking have all voiced apprehension about the future of AI. Still, it seems a surplus of empathy -- in us and in our potential robot overlords -- couldn't hurt, since, as Riedl and Harrison aptly note, "there is no user manual for being human."
Isn't that the truth.