Giraffe machine has taught itself to play chess at higher levels
Source: Nancy Owano
"Chess, after all, is special; it requires creativity and advanced reasoning. No computer could match humans at chess." That was a likely argument before IBM surprised the world about computers playing chess. In 1997, Deep Blue's entry won the World Chess Champion, Garry Kasparov.
Matthew Lai records the rest: "In the ensuing two decades, both computer hardware and AI research advanced the state-of-art chess-playing computers to the point where even the best humans today have no realistic chance of defeating a modern chess engine running on a smartphone."
Now Lai has another surprise. His report on how a computer can teach itself chess―and not in the conventional way―is on arXiv. The title of the paper is "Giraffe: Using Deep Reinforcement Learning to Play Chess."
Departing from the conventional method of teaching computers to play chess by giving them hardcoded rules, this project set out to use machine learning to figure out how to play. In his work, Lai applied deep learning to chess: "We use deep networks to evaluate positions, decide which branches to search, and order moves."
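As a rough illustration (not Lai's actual architecture), a neural evaluator of this kind can be thought of as a small network that maps a feature vector describing a position to a single score, which the search can then use to rank branches and order moves. The feature layout, sizes, and names below are hypothetical.

```python
# Minimal sketch of a neural position evaluator: a small multilayer
# perceptron maps a hand-made feature vector for a chess position to a
# single score. Weights here are random and untrained; in practice they
# would be learned, e.g. through self-play.
import numpy as np

rng = np.random.default_rng(0)

class EvalNet:
    def __init__(self, n_features=363, hidden=64):   # sizes are illustrative
        self.w1 = rng.normal(0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def score(self, features):
        # tanh keeps the output in [-1, 1]: +1 ~ winning for the side to move.
        h = np.tanh(features @ self.w1 + self.b1)
        return np.tanh(h @ self.w2 + self.b2).item()

# Hypothetical usage: 'features' would normally be extracted from a real
# position (material, piece placement, mobility, and so on).
net = EvalNet()
features = rng.normal(size=363)   # stand-in for a real feature vector
print("evaluation:", net.score(features))
```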
As for other chess engines, Lai wrote, "almost all chess engines in existence today (and all of the top contenders) implement largely the same algorithms. They are all based on the idea of the fixed-depth minimax algorithm first developed by John von Neumann in 1928, and adapted for the problem of chess by Claude E. Shannon in 1950."
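For context, a bare-bones version of that fixed-depth minimax idea (written here in negamax form, with no alpha-beta pruning and no chess-specific detail; the moves, apply, and evaluate callbacks are hypothetical placeholders the caller would supply) looks roughly like this:

```python
# Generic fixed-depth minimax (negamax form). The caller supplies:
#   moves(position)    -> list of legal moves
#   apply(position, m) -> the position after playing move m
#   evaluate(position) -> score from the perspective of the side to move
def negamax(position, depth, moves, apply, evaluate):
    """Return the best achievable score for the side to move."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    best = float("-inf")
    for move in legal:
        child = apply(position, move)
        # Negate: a good score for the opponent is a bad score for us.
        best = max(best, -negamax(child, depth - 1, moves, apply, evaluate))
    return best
```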
Giraffe is a chess engine that uses self-play to discover all of its domain-specific knowledge. "Minimal hand-crafted knowledge is given by the programmer," he said.
Results? Lai said, "The results showed that the learned system performs at least comparably to the best expert-designed counterparts in existence today, many of which have been fine tuned over the course of decades."
OK, not at super-Grandmaster levels, but impressive enough. "With all our enhancements, Giraffe is able to play at the level of an FIDE [Fédération Internationale des Échecs, or World Chess Federation] International Master on a modern mainstream PC," he stated. "While that is still a long way away from the top engines today that play at super-Grandmaster levels, it is able to defeat many lower-tier engines, most of which search an order of magnitude faster."
Addressing the value of Lai's work, MIT Technology Review stated, "In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move." Giraffe, said the review, taught itself to play chess by evaluating positions much more the way humans do.
The technology at play here, said MIT Technology Review, is a neural network, which processes information in a way inspired by the human brain. "It consists of several layers of nodes that are connected in a way that changes as the system is trained."
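To make that concrete, here is a toy sketch of what it means for those connections to change as a system is trained: the weights are nudged by gradient descent to reduce error on example data. This is generic supervised training on a one-layer model, not Giraffe's procedure, which the paper describes as deep reinforcement learning through self-play.

```python
# Toy illustration of training: connection weights start untrained and are
# repeatedly adjusted to shrink the error between predictions and targets.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # toy inputs
true_w = np.array([0.5, -1.0, 2.0, 0.0])
y = X @ true_w                             # toy targets

w = np.zeros(4)                            # untrained connection weights
lr = 0.1
for step in range(100):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)       # gradient of mean squared error
    w -= lr * grad                         # the connections change a little
print("learned weights:", np.round(w, 2))
```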
What is more, the "deep" neural networks of today have become quite powerful and can outperform humans in tasks such as facial and handwriting recognition.
Lai submitted the paper in partial fulfillment of MSc degree requirements in Advanced Computing at Imperial College London. The Imperial College High Performance Computing service provided the computing power required for the project. He expressed gratitude for "the hundreds of people who played thousands of games against Giraffe on the Internet Chess Club."
Quoted in MIT Technology Review, he said, "Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time."