Google DeepMind has today reached a breakthrough that has been pursued for more than 20 years. The team has taught a computer program to play the ancient game of Go, long considered the most challenging game for an artificial intelligence to master. Not only can the program play the game, it has proved to be genuinely good at it.
The computer program, AlphaGo, was developed by Google DeepMind specifically to beat professional human players at Go. The group challenged three-time European Go champion Fan Hui to a series of matches, and for the first time ever, software beat a professional player in all five games played on a full-sized board. The team announced the breakthrough in a Nature paper published today.
Facebook’s AI team has also been working towards this milestone, and coincidentally, just one day before Google DeepMind announced its achievement, Mark Zuckerberg wrote in a public Facebook post that his team was “getting close”.
The history of Go dates back some 2,500 years to ancient China. The game is played by placing black and white stones on a 19 x 19 grid, and the aim is to surround, or capture, the opponent’s pieces; the winner is the player who ends up controlling more than 50 percent of the board. The reason Go is so difficult for computers is that there are an estimated 10 to the power of 700 possible variations of the game. By comparison, chess has only around 10 to the power of 60 possible scenarios.
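The scale of these numbers can be sanity-checked with a back-of-the-envelope calculation. A rough sketch is below; note that the branching-factor and game-length figures used are common ballpark assumptions, not figures from this article, and estimates of this kind vary widely depending on what is counted.

```python
from math import log10

# Rough game-tree size estimate: branching_factor ** plies.
# The inputs are ballpark assumptions: chess averages roughly 35 legal
# moves over roughly 80 plies; Go averages roughly 250 legal moves over
# roughly 150 plies on a full 19 x 19 board.
def tree_size_log10(branching_factor, plies):
    """Return log10 of branching_factor ** plies (the exponent of the estimate)."""
    return plies * log10(branching_factor)

print(f"chess: about 10^{tree_size_log10(35, 80):.0f}")
print(f"go:    about 10^{tree_size_log10(250, 150):.0f}")
```

Even under these conservative assumptions, Go's game tree comes out hundreds of orders of magnitude larger than chess's, which is why exhaustive search is hopeless and why AlphaGo's result is significant.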
The milestone is hugely significant for several reasons. Most notably, it will change the way computers search through possible sequences of actions, helping AI programs plan a path from one state to another and navigate using logic.
Following this announcement, the Google DeepMind team has challenged Lee Sedol of South Korea, long considered the greatest player of the modern era, to a match scheduled for March 2016.