Robot uprising? Cambridge University team to assess threat
Source: Trevor Mogg
Researchers at the UK's Cambridge University are taking seriously the potential threats posed to the human race by the likes of artificial intelligence and biotechnology with the planned opening of the Centre for the Study of Existential Risk.
If you thought the idea of robots taking over the world and ultimately wiping us out was merely the preserve of far-fetched sci-fi movies, think again.
A team of researchers at Cambridge University is making plans to open the Centre for the Study of Existential Risk, where it will assess the threat to human civilization posed by the likes of artificial intelligence, climate change, nuclear war and rogue biotechnology.
According to a BBC report, the team behind the Project for Existential Risk -- comprising Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn -- claims it would be "dangerous" to scoff at talk of a potential robot uprising.
Speaking to the BBC about the project, Tallinn explained that the purpose of the center is to put more thinking into what it calls existential risks. "Existential risks are potential dangers that we might face as a species, things that might kill us as a species, or at least permanently curtail our potential," he said.
So it's not just robots that might finish us off.
'Extinction-level risks'
"Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole," the team explains on its website. "Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake."
Getting back to AI, Tallinn talked about the Frankenstein scenario of a planet overrun by technology with a mind of its own. "If you're creating something that is potentially smarter than you, you might have the problem of control -- like how do you control something that is smarter than you and potentially, for example, able to design its own technology," he said.
He went on to say he believes the chance of the human race being wiped out by something we're responsible for is higher than people generally acknowledge, although it's hard to properly assess because "the bar of uncertainty is very high," adding, "We really should be careful about these things and be prepared."
The center, which plans to open in 2013, will draw on the intellectual resources of the prestigious university to study the potential threats, and in doing so help "make it a little more certain that we humans will be around to celebrate the University's own millennium" in 2209, the team says on its website.