Artificial intelligence predictions surpass reality
By Trevor Hadley
In a 2015 joint interview, Elon Musk argued that humanity’s greatest concern should be the future of artificial intelligence. Bill Gates adamantly voiced his agreement with Musk’s concerns, making clear that people need to acknowledge how serious an issue this is.
“So I try not to get too exercised about this problem, but when people say it’s not a problem, then I really start to get to a point of disagreement,” Gates said.
The fears surrounding unchecked advances in AI are rooted in the potential threat posed by machine superintelligence — an intelligence that at first matches human-level capabilities, but then quickly and radically surpasses them. Nick Bostrom, in his book “Superintelligence,” warns that once machines possess a level of intelligence that surpasses our own, control of our future may no longer be in our hands.
“Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed,” Bostrom writes.
For Musk, Gates and Bostrom, the arrival of superintelligent machines is not a matter of if, but when. Their arguments seem grounded and cogent, but their scope is too far-sighted. They offer little in the way of what we can expect to see from AI in the next 10 to 20 years, or of how best to prepare for the changes to come.
Dr. Michael Mauk, chairman of the UT neuroscience department, has made a career out of building computer simulations of the brain. His wide exposure to AI has kept him close to the latest developments in the field. And while Mauk agrees in principle with the plausibility of superintelligent AI, he doesn’t see its danger, or the timeline of its arrival, the same way Musk, Gates and Bostrom do.
“I think there’s a lot of fearmongering in this that is potentially, in some watered-down way, touching a reality that could happen in the near future, but they just exaggerate the crap out of it,” Mauk said. “Is (the creation of a machine mind) possible? I believe yes. What’s cool is that it will one day be an empirically answerable question.”
For Mauk, hype of the sort propagated by Musk, Gates and Bostrom is out of balance, and doesn’t reflect what we can realistically expect to see from AI. In fact, Mauk claims that current developments in neuroscience and computer science are not moving toward the development of superintelligence, but rather toward what Mauk calls IA, or Intelligent Automation.
“Most computer scientists are not trying to build a sentient machine,” Mauk said. “They are trying to build increasingly clever and useful machines that do things we think of as intelligent.”
And we see evidence of this all around us. IA has grown rapidly in recent years. From self-driving cars to Watson-like machines with disease-diagnosing capabilities superior to those of even the best doctors, IA is set to massively disrupt the current social and economic landscape.
Students and professionals alike should temper any fears about a future occupied by superintelligent AI, and instead focus on the very real, near-future reality in which IA will profoundly impact their careers. And there’s a beautiful irony to this. As humanity works to adapt to a world with greater levels of Intelligent Automation, along with its many challenges — increased social strife, economic restructuring, the need for improved global cooperation — it will inadvertently be preparing itself to face a potential future occupied by superintelligent AI.
Hadley is a faculty member in biology and a BS ‘15 in neuroscience from Southlake.