Google's victories in Go game amaze experts
Source: Ethan Baron


South Korean professional Go player Lee Sedol reviews the match after finishing the second match of the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo in Seoul, South Korea, Thursday, March 10, 2016. The human Go champion said he was left "speechless" after his second straight loss to Google's Go-playing machine on Thursday in a highly-anticipated human versus machine face-off. (AP Photo/Lee Jin-man) ( Lee Jin-man )

Lee Sedol wasn't just playing against a Google computer this week when the 18-time world champion lost two straight matches in Go, the most complex board game in the world.

Lee was, in effect, competing against hundreds of the best players on the planet, whose millions of moves had been fed into Google's AlphaGo during the machine's training. AlphaGo's victories -- a sign of rapid advancements in artificial intelligence -- have amazed some Bay Area computer scientists. Not since IBM's Deep Blue upset world chess champion Garry Kasparov in 1997 has a machine so humbled -- and impressed -- man.

"Everyone has been shocked and surprised with the extent to which AlphaGo plays like a human being," said Stanford University computer science professor Christopher Manning, who specializes in computer language processing and is a high-level amateur Go player -- and was convinced going in that Sedol would best AlphaGo. "It comes up with sharp responses, it plays interesting moves and sequences that seem impressive. It by and large seems to be playing Go just like a top-flight human."

Go, believed to have originated more than 2,500 years ago in China, is considered far more complicated than chess, and the victories of AlphaGo, developed by Google's DeepMind unit, show that artificial intelligence is developing much faster than many experts predicted. AlphaGo could wrap up the best-of-five series with a win Saturday in Seoul, South Korea, and many expect the machine to claim the $1 million victory prize. Google has said it will donate any winnings to charity and Go associations.

"The thing is just relentlessly playing very high-quality moves," said Lukas Biewald, an advanced Go player and CEO of CrowdFlower, an artificial intelligence-driven San Francisco data-services company. "It's a totally different form of intelligence and yet it's coming up with exactly the same pattern as humans come up with after years and years of practicing. It's just amazing."

The machine played like a human because it learned from humans, Biewald suggested. AlphaGo's creators filled its machine brain with 30 million moves from a database of Go competitions, including major international tournaments and national championships. To learn, AlphaGo would try to predict how a professional Go player would respond to a move. "It's actually like taking the collective intelligence of human beings and sort of distilling it," Biewald said. "It's not like it figured this stuff out on its own."
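For readers curious what that kind of training looks like in practice, here is a minimal, purely illustrative Python sketch (using the PyTorch library): a small network is shown a board position and trained to predict the move a professional actually played. The board encoding, network size and random stand-in data are assumptions for illustration, not DeepMind's actual setup.

# Minimal sketch (illustrative only): training a policy network to predict
# expert Go moves, in the spirit of AlphaGo's supervised-learning stage.
# Board encoding, network size, and data are placeholders, not DeepMind's.
import torch
import torch.nn as nn

BOARD = 19                      # 19x19 Go board
N_MOVES = BOARD * BOARD         # one output per intersection (passes ignored here)

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),  # 2 planes: own/opponent stones
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * BOARD * BOARD, N_MOVES),      # logits over all intersections
        )

    def forward(self, boards):
        return self.net(boards)

# Stand-in for the database of recorded professional games the article mentions.
positions = torch.randn(64, 2, BOARD, BOARD)       # batch of board positions
expert_moves = torch.randint(0, N_MOVES, (64,))    # the move a professional actually played

model = PolicyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    logits = model(positions)
    loss = loss_fn(logits, expert_moves)   # "predict how a professional would respond"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()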

The rules of Go, which is played with round, black and white stones on a grid board, are simple -- but the game is anything but, Biewald explained. "You take turns putting stones on a board and you try to surround territory with those stones," Biewald said. "It's kind of the mystery of life; you start with something so simple, and something so deep and complicated evolves from it."
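To make Biewald's point concrete, here is a minimal Python sketch of those simple rules, assuming a small 9x9 board and ignoring scoring, ko and the suicide rule: players alternate placing stones, and a group of stones with no adjacent empty points ("liberties") is captured.

# Minimal sketch of the rules Biewald describes: players take turns placing
# stones, and a group is captured when it has no empty neighbors left.
# Scoring, ko, and the suicide rule are omitted for brevity.
SIZE = 9                        # small board for illustration
EMPTY, BLACK, WHITE = ".", "B", "W"
board = [[EMPTY] * SIZE for _ in range(SIZE)]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield nr, nc

def group_and_liberties(r, c):
    """Flood-fill the group containing (r, c); return its stones and liberty count."""
    color = board[r][c]
    group, liberties, stack = {(r, c)}, set(), [(r, c)]
    while stack:
        cr, cc = stack.pop()
        for nr, nc in neighbors(cr, cc):
            if board[nr][nc] == EMPTY:
                liberties.add((nr, nc))
            elif board[nr][nc] == color and (nr, nc) not in group:
                group.add((nr, nc))
                stack.append((nr, nc))
    return group, len(liberties)

def play(r, c, color):
    """Place a stone and remove any opposing groups left without liberties."""
    board[r][c] = color
    enemy = WHITE if color == BLACK else BLACK
    for nr, nc in neighbors(r, c):
        if board[nr][nc] == enemy:
            group, libs = group_and_liberties(nr, nc)
            if libs == 0:
                for gr, gc in group:
                    board[gr][gc] = EMPTY   # captured

play(4, 4, BLACK)
play(4, 5, WHITE)
print("\n".join(" ".join(row) for row in board))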


While IBM's Deep Blue program employed what's known as "brute force" analysis -- looking at the outcome of every possible move -- AlphaGo "looks ahead by playing out the remainder of the game in its imagination, many times over," DeepMind's Demis Hassabis and David Silver wrote in a Google blog post.
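A hedged Python sketch of that "imagination" idea: evaluate each candidate move by simulating many random continuations and keep the move that wins most often. The toy stick-taking game below is an illustration only; AlphaGo applies this kind of Monte Carlo search to Go and guides it with its neural networks rather than purely random play.

# Minimal sketch of "playing out the remainder of the game in its imagination":
# score each candidate move by many random playouts, keep the best-scoring one.
# The toy stick-taking game is purely illustrative.
import random

def legal_moves(sticks):
    return [n for n in (1, 2, 3) if n <= sticks]

def rollout(sticks):
    """After our move, finish the game with random play; return 1 if we take the last stick."""
    if sticks == 0:
        return 1                     # our move just took the last stick
    our_turn = False                 # the opponent replies next
    while True:
        sticks -= random.choice(legal_moves(sticks))
        if sticks == 0:
            return 1 if our_turn else 0
        our_turn = not our_turn

def choose_move(sticks, playouts=5000):
    """Pick the move whose imagined continuations win most often."""
    best_move, best_score = None, -1.0
    for move in legal_moves(sticks):
        wins = sum(rollout(sticks - move) for _ in range(playouts))
        score = wins / playouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(choose_move(10))   # usually settles on taking 2, leaving the opponent a multiple of 4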

One prominent computer scientist who expected the robot to win was nonetheless struck by AlphaGo's performance. Babak Hodjat, chief scientist at San Francisco artificial intelligence company Sentient Technologies, said he knows DeepMind co-founder Hassabis personally. "I was pretty sure he wouldn't enter the competition if he didn't know he would win," Hodjat said. Still, Hodjat added, "the power of AI on display there is mind-boggling. This is one of the first demonstrations of how powerful machine learning could be."

AlphaGo's machine learning derives from "a level of introspection" as the computer program constantly second-guesses its performance, Hodjat said. "How did I go wrong? Where did I make a mistake? How do I adjust my moves?" Hodjat said, describing the program's playing process.
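As a loose illustration of that adjust-after-the-fact loop, the Python sketch below nudges a program's preference for the moves it played up after a win and down after a loss. The tabular "preferences" store and the made-up positions are assumptions for clarity; AlphaGo adjusts neural-network weights rather than a lookup table.

# Loose sketch of the "introspection" Hodjat describes: after each game,
# nudge the preference for every move played up or down depending on the
# outcome (a reinforcement-learning-style update). Illustrative only.
from collections import defaultdict

preferences = defaultdict(float)     # (position, move) -> learned preference
LEARNING_RATE = 0.1

def update_from_game(moves_played, won):
    """Credit or blame every move made, depending on the game's outcome."""
    reward = 1.0 if won else -1.0
    for position, move in moves_played:
        preferences[(position, move)] += LEARNING_RATE * reward

# One imagined game, encoded as (position, move) pairs; the labels are made up.
game = [("empty board", "D4"), ("after D4, Q16", "Q4")]
update_from_game(game, won=False)    # a loss makes these moves slightly less preferred next time
print(dict(preferences))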

Artificial intelligence, of course, has a multitude of applications beyond game-playing, from self-driving cars to Web searching. Hodjat's company, for instance, has developed an AI-based "visual intelligence" tool that predicts online shoe shoppers' preferences based on details of the shoe images shoppers click on.

Biewald of CrowdFlower said he was hoping Lee would turn things around.

"I tend to root for the underdog," Biewald said. "In the first game I rooted for the machine because I thought it was the underdog. But now I'm rooting for the human."

