UT professor leads first panel for 100-year artificial intelligence study
A lot can happen in 100 years — just ask UT computer science professor Peter Stone.
On Thursday, Stone and fellow researchers released a report that will kick off Stanford’s One Hundred Year Study on Artificial Intelligence, or AI100.   
The report, which was produced by Stone and a panel of 16 other experts from around the world, aims to forecast the future of artificial intelligence by predicting the state of the field in a typical North American city in the year 2030. The report will be followed by updates released every five years, which will continue to track the progress of AI and make predictions about the future.
Barbara Grosz, an AI professor at Harvard and chair of the study’s standing committee, said study members wanted to paint a broad picture of the future of artificial intelligence.
“We felt it was important not to have those single-focus isolated topics, but rather to situate them in the world because that’s where you really see the impact happening,” Grosz said.
The study, which began in 2014, broke down the wide field of AI into eight areas, covering advances including autonomous vehicles, cutting-edge medical equipment and package delivery drones.
Stone, who has been researching autonomous vehicles and traffic control at UT since 2003, said he is most excited about the potential advances in AI technology related to transportation.
“Autonomous cars are getting close to being ready for public consumption, and we made the point in the report that for many people, autonomous cars will be their first experience with AI,” Stone said. “The way that is delivered could have a very strong influence on the way the public perceives AI for years to come.”
While the report thoroughly covered the benefits of artificial intelligence, Stone said the panel also made a point to address associated risks and ethical dilemmas. Stone said one potential dilemma relates to autonomous vehicles.
“It hasn’t come up in practice, but there is always a possibility that the software in the car would have to make a decision that could choose, for instance, between putting the driver at risk or putting pedestrians outside the car at risk,” Stone said. “How could such a decision be made?”
Grosz said she hopes this panel will help to inspire experts in science, policy and ethics to start a dialogue about AI.
“We cannot predict even 50 years in the future, let alone a hundred years, but we can be sure that whatever technology is around, there will be a scientific basis, and there will be social and ethical challenges that it produces,” Grosz said. “What I hope is that this particular study starts a conversation that brings the right players together to grapple with those issues.”
Katie Genter, an eighth-year Ph.D. student in Stone’s Learning Agents Research Group, is developing the concept of a robotic bird that can work as a teammate with real flocks, leading them out of the path of wind turbines and airplane engines. She said that the recently published report is important because it provides a realistic view of AI for the public and opens an ethical conversation for researchers.
“A bunch of the public is terrified that ‘robots are going to take our jobs,’ or ‘robots are going to take over the world’ … [and] that’s not really the case,” Genter said. “As researchers, though, we also need to put some thought into what we’re programming and what we’re working on, making sure that we’re being responsible as we’re creating new technologies.”
The report also addressed some common misconceptions about AI.
“I think the biggest misconception, and the one I hope that the report can get through clearly, is that there is not a single artificial intelligence that can just be sprinkled on any application to make it smarter,” Stone said.
According to Stone, people tend to make assumptions about AI such as “if a car can drive itself, a robot ought to be able to fold my laundry.” He said assumptions like this disregard the fact that creating artificial intelligence requires sustained effort focused on a very specific application; in other words, the development of autonomous cars does not bring society any closer to laundry-folding robots, because the two are very different applications.
Stone said that although this study looked exclusively at a typical North American city 15 years in the future, follow-up studies throughout the course of AI100 might shift the focus to other regions and expand to include applications of AI in other situations such as the military. Each study will be run by a different panel of experts, he said.
This study predicts huge advances in the future of AI. But according to Stone, when AI moves from a foreign concept to technology used every day, people may not see the technology as artificial intelligence anymore. Stone calls this phenomenon the “AI effect.”
“Some people say that that is the curse of the field: If you have a success, then people move on to the next thing and say, ‘Well, AI still can’t do what we really care about.’ But it is also one of the reasons that it is so exciting to be doing research in this area,” Stone said.