Singapore can tap artificial intelligence to solve problems: Norvig
AI HELPS to analyse data and fix problems 'you might not even know you had', says Google research director.
Singapore's tapping of artificial intelligence (AI) as part of its Smart Nation drive will help policymakers put their finger on what makes its citizens tick, Google's director of research Peter Norvig believes.
“You want to think of the city as serving the citizens, and you want to think of more communication and continual improvement,” he told The Straits Times.
"By understanding the problems through AI, you have a chance to fix them. You can understand better the patterns of a city without having to talk to everyone, but by just measuring it."
AI, he said, will come in handy to analyse data and answer problems “you might not even know you had”, such as changing traffic or electricity usage habits.
He was speaking to ST on the sidelines of the Nobel Prize Dialogue in Tokyo on Sunday. Titled “The Future Of Intelligence”, it featured Nobel laureates and tech leaders as panellists.
The speakers discussed the room for AI to grow in fields such as medicine and mobility.
They also noted the progress of AI in self-driving vehicles, image and speech recognition, and language translation.
Machine learning expert Tom Mitchell of Carnegie Mellon University told the conference that when the day comes that AI can read and understand text, it will be a “watershed moment for all of us”.
“With computers they will read it all. They will read every article, every webpage and they will be better-read than you and me by a factor of a million,” he said.
“Search engines can be replaced by a personal reading assistant that can answer questions based on what it has read, and prepare a one-paragraph summary, justifying its answer along with citations.”
A world with sensors built into its infrastructure and replete with self-driving vehicles will, Dr Mitchell foresees, curb traffic congestion and pollution, even as cities tap the gathered data to anticipate crowd sizes. Officials will then be able to “make more intelligent decisions on whether to reroute a bus or to deploy police for crowd control”.
Speakers noted, however, the many dangers of mankind's increasing reliance on technology.
The Japanese Society for Artificial Intelligence, comprising 4,000 scientists, on Tuesday drew up ethical guidelines governing the use of AI.
The society acknowledged the risk of AI being misused or abused, and said scientists “must make every effort” to remove any threats to human safety and stop the use of AI to harm others.
It also stressed AI could eventually lead to greater inequality, as scientists develop more autonomous technologies that could, in time, replace jobs.
This is a point that has recently been raised by other prominent thought leaders, including physicist Stephen Hawking and Microsoft co-founder Bill Gates.
Mr Gates proposed last month that robots should be taxed, and the money used to help retrain people who lose their jobs to the machines.
Dr Mitchell acknowledged these risks in his keynote address on Sunday, stressing the need to improve access to education and job retraining today, while also looking at how to reallocate workers who could be displaced in due time.
Explaining how machines will widen income disparity, he said: “The jobs that are likely to be automated first will be those that don't pay very well, and these are people who are already at the lower end of the income scale.”
Dr Norvig, meanwhile, described a dynamic that has already emerged with technology, but which could worsen.
“Society used to be such that, if I was a farmer and I produced twice as much vegetables as my neighbour, then I made twice as much money.
“But today if I'm an app maker, if my app is a little bit better than the next guy's, I can get 1,000 times more users,” he added.
“So we build this kind of winner-takes-all society and to get away from that pressure, I think we have to change society.”
Speakers also brought up the issue of fake news, which was widely circulated through social media in last year's US presidential election.
Norwegian neuroscientist Edvard Moser, who won the Nobel Prize in Physiology or Medicine in 2014, told the conference he was concerned by the trend.
“As social-media news feeds become more personalised, people are more likely to be trapped in their own bubble.
“This on the one hand serves the user, but on the other hand it could increase his susceptibility to misinformation,” he said.
Dr Norvig stressed that the tech industry as a whole is equipped with tools to fight the perpetuation of false information, although “maybe they weren't applied as well as they could have been”.
But he also pointed out another risk: that some people are prone to cherry-picking data that tell only one side of the story, leading to an incomplete picture.
“With the combination of AI and human effort, I believe we'll be able to help people interpret information better,” said Dr Norvig.
He also sought to put to rest concerns over the privacy issues that come with AI, stressing that protecting its users is Google's bottom line.
“As companies deal more with people's data, they have to take responsibility to be good stewards by understanding the data belongs to the individual,” he said.
“We're serving the user, who has the right to do what he wants to the data. And we have the responsibility to protect the user the best we can from hackers, legal attacks.”
While Dr Norvig offered no fast panacea to the pitfalls of AI, he noted: “Anything that is powerful is going to have a big positive effect and a potential for a negative effect, and having a discussion means it is a chance to get it right from the start.”