Super-intelligent machines spawned by AI?
By Zach Miners
It's the premise of many science fiction novels and movies: super-intelligent machines that can outsmart humans, if not terminate them entirely. But the prospect doesn't exactly frighten some of today's experts in the field.
"It's a distraction," said Nigel Duffy, chief technology officer at AI software maker Sentient Technologies.
More pressing issues, he said, revolve around the role algorithms play online in determining whether people are allowed to get a mortgage, or establish a line of credit.
His remarks were echoed by others in the industry during a panel discussion on Wednesday at a Silicon Valley event focused on artificial intelligence.
"I'm more worried about global warming," said Kevin Quennesson, engineering manager and staff engineer at Twitter's AI and machine learning division, called Cortex.
The panelists' comments came in response to two audience questions: one person asked what they feared about AI, and another asked how humans might co-exist, if at all, with a super-intelligence created by AI.
The consensus of the panel seemed to be that while AI might one day give rise to super-intelligent computers or machines, that day is a long way off, and preparing for the scenario shouldn't be a priority.
Adam Cheyer, cofounder and VP of engineering at AI startup Viv, said the question of super-intelligence was worth considering, but only from a theoretical standpoint. He equated it to asking what it would be like to co-exist with aliens.
Viv is building an AI platform that other developers can use in their own software. Cheyer was previously cofounder and VP of engineering at Siri, before its acquisition by Apple.
"We're more likely to be hit by asteroids," Cheyer added.
Duffy, of Sentient Technologies, said his fear was the notion that people even think there's something to be fearful of. The comment drew laughs from the panel and some in the room.
The panelists' comments seem at odds with concerns held by others as companies like Google and Facebook make new advances in AI, whether it be in language processing, image recognition, robotics or self-driving cars.
Last month, AI and robotics researchers, along with prominent scientists including Stephen Hawking, warned of weapons that could, within only a few years, select and destroy targets autonomously, without human intervention.
"The endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow," they wrote in an open letter published at the International Joint Conference on Artificial Intelligence, in Buenos Aires.
But on Wednesday night in Silicon Valley, no such scenario was mentioned.