After many years, artificial intelligence is finally here
By Vivek Wadhwa
We have heard predictions for decades of a takeover of the world by artificial intelligence.
In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world’s chess champion. That didn’t happen until 1997. And despite Marvin Minsky’s 1970 prediction that “in from three to eight years we will have a machine with the general intelligence of an average human being,” we still consider that a feat of science fiction.
The pioneers of artificial intelligence were off on the timing, but they weren’t wrong; AI is coming. It is going to be in our TVs and driving our cars; it will be our friend and personal assistant; it will take on the role of our doctor. There have been more advances in AI over the past three years than there were in the previous three decades.
Even technology leaders such as Apple have been caught off guard by the rapid evolution of machine learning, the technology that powers AI. Apple opened up its AI systems so that independent developers could help it create technologies that rival what Google and Amazon have already built.
The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe. You would give it specific rules on what move to make, and it would follow them.
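To make that contrast concrete, here is a rough sketch in Python of how such a hand-coded player might look. The board layout, the particular rules, and their ordering are illustrative choices for this sketch, not any specific program’s code; the point is that every decision below was written by a programmer, not learned.

WINNING_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

def rule_based_move(board, me, opponent):
    """Pick a move by walking a fixed list of hand-written rules.
    board is a list of nine cells holding 'X', 'O', or ' '."""
    def winning_square(player):
        for line in WINNING_LINES:
            cells = [board[i] for i in line]
            if cells.count(player) == 2 and cells.count(' ') == 1:
                return line[cells.index(' ')]
        return None

    # Rule 1: complete my own winning line if I can.
    move = winning_square(me)
    if move is not None:
        return move
    # Rule 2: block the opponent's winning line.
    move = winning_square(opponent)
    if move is not None:
        return move
    # Rule 3: prefer the center, then the corners, then whatever is left.
    empties = [i for i, cell in enumerate(board) if cell == ' ']
    for preferred in (4, 0, 2, 6, 8):
        if preferred in empties:
            return preferred
    return empties[0]

board = ['X', 'O', 'X',
         ' ', 'O', ' ',
         ' ', ' ', ' ']
print(rule_based_move(board, 'X', 'O'))   # prints 7, blocking O's column threat

The intelligence here lives entirely in the rules the programmer thought to write down; the machine contributes nothing but speed.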
Today’s AI uses machine learning, in which you give the computer examples of previous games and let it learn from them. The computer is taught what to learn and how to learn, and then it makes its own decisions. What’s more, the new AIs are modeling the human mind itself, using techniques similar to our own learning processes.
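A simplified sketch of that shift, again in Python and again with made-up names and a made-up data format, is a program that keeps no rules at all: it is shown finished games and simply tallies which moves tended to lead to wins, then plays the move with the best record.

from collections import defaultdict

def learn_from_examples(example_games):
    """example_games: list of (steps, winner); each step is (board, move, player),
    where board is a tuple of nine cells and winner is 'X', 'O', or 'draw'."""
    wins, plays = defaultdict(int), defaultdict(int)
    for steps, winner in example_games:
        for board, move, player in steps:
            plays[(board, move)] += 1
            if player == winner:
                wins[(board, move)] += 1
    return wins, plays

def learned_move(board, legal_moves, wins, plays):
    """Choose the move that most often led to a win in the example games."""
    def win_rate(move):
        seen = plays[(board, move)]
        return wins[(board, move)] / seen if seen else 0.0
    return max(legal_moves, key=win_rate)

# One toy example game: X played square 2 from this position and went on to win.
position = ('X', ' ', ' ', ' ', 'O', ' ', ' ', ' ', ' ')
wins, plays = learn_from_examples([([(position, 2, 'X')], 'X')])
print(learned_move(position, [1, 2, 3], wins, plays))   # prints 2

Nothing about tic-tac-toe strategy is written into the program; whatever skill it has comes from the example games it was fed.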
The new programming techniques use neural networks modeled on the human brain, in which information is processed in layers and the connections between those layers are strengthened based on what is learned. This is called deep learning because of the increasing numbers of layers of information processed by increasingly faster computers. These techniques are enabling computers to recognize images, voice, and text — and to do humanlike things.
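As a toy illustration of what “layers” and “strengthened connections” mean in practice, the short Python sketch below trains a two-layer network on a simple pattern (the XOR function). The layer sizes, learning rate, and number of training steps are arbitrary choices for the sketch; what matters is the loop, which nudges the connection weights a little whenever the output is wrong.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # four inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # connections: input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # connections: hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer processes the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: strengthen or weaken connections based on the error.
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(output, 2))   # should end up close to [[0], [1], [1], [0]]

Real deep-learning systems stack many more layers and train on millions of examples, but the principle is the same: the network’s behavior is shaped by adjusting connections, not by writing rules.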
AI has applications in every area in which data are processed and decisions required. Wired founding editor Kevin Kelly likened AI to electricity: a cheap, reliable, industrial-grade digital smartness running behind everything. He said that it “will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now ‘cognitize.’ This new utilitarian AI will also augment us individually as people . . . and collectively as a species.”
AI will soon be everywhere. Businesses are infusing AI into their products and using it to analyze the vast amounts of data they are gathering. Google, Amazon and Apple are working on voice assistants for our homes that manage our lights, order our food, and schedule our meetings. Robotic assistants such as Rosie from “The Jetsons” and R2-D2 of “Star Wars” are about a decade away.
Do we need to worry about the runaway “artificial general intelligence” that goes out of control and takes over the world? Yes — but perhaps not for another 15 or 20 years. There are justified fears that rather than being told what to learn and complementing our capabilities, AIs will start learning everything there is to learn and know far more than we do. Though some people, such as futurist Ray Kurzweil, see us using AI to evolve together, others, such as Elon Musk and Stephen Hawking, fear that AI will usurp us. We really don’t know where all this will go.
What is certain is that AI is here and making amazing things possible.
Vivek Wadhwa is a fellow at the Rock Center for Corporate Governance at Stanford University. He wrote this for The Washington Post.