by Konstantin Bodragin
In case you haven’t heard, Google just made a pretty big splash in the Artificial Intelligence (AI) world. The company’s London-based DeepMind group announced that its software had beaten Fan Hui, the best Go player in Europe. Now the company plans to take on the competition at the highest level in Seoul, South Korea.
Computers have been able to consistently beat humans at tic-tac-toe for decades, and it’s considered a solved game in that regard. Deep Blue beat chess grandmaster Garry Kasparov in 1997, and it’s been quite a few years since even a top-level player posed a serious challenge to AI. Go is different. The 2,500-year-old Chinese game is exponentially more complex than chess, and nobody thought a human player would be beaten this year.
Go is played with black and white stones on a 19×19 grid; chess is played on an 8×8 board. The rules are simpler, with no specific directives for how a particular piece can move, but the strategy of the game is far, far more complex. Where a chess player typically has a choice of about 20 moves each turn, a Go player must choose one move out of a possible 200. According to DeepMind’s team, there are more possible positions in Go than atoms in the universe. Determining who is winning is in itself a challenge, and top players frequently rely on instinct rather than logical analysis. The game posed such a challenge that most experts thought solving it would take years.
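To see why that difference in branching factor matters so much, here is a rough back-of-the-envelope sketch in Python. The per-turn move counts come from the paragraph above; the game lengths assumed here (about 80 half-moves for chess, about 150 for Go) are common approximations used purely for illustration, not figures from DeepMind.

```python
def tree_size(branching_factor: int, plies: int) -> int:
    """Naive upper bound: branching_factor ** plies move sequences."""
    return branching_factor ** plies

chess = tree_size(20, 80)    # roughly 10**104
go = tree_size(200, 150)     # roughly 10**345

# len(str(n)) - 1 gives floor(log10(n)) for these positive integers
print(f"chess: ~10^{len(str(chess)) - 1} possible move sequences")
print(f"go:    ~10^{len(str(go)) - 1} possible move sequences")
print("atoms in the observable universe: ~10^80, for comparison")
```

Even this naive bound, which ignores transpositions and illegal continuations, shows why the brute-force search that worked for chess cannot simply be scaled up to Go.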
“Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years,” said Demis Hassabis, DeepMind’s chief executive.
Facebook is also working on an AI to beat Go, though it apparently lags a bit behind. Neither company, however, set out to develop a Go-beating heuristic “if-this-then-do-that” engine; both built AI that could, and did, independently learn to play at a very high level.
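To make that distinction concrete, below is a minimal, hypothetical Python sketch of what a learned move policy looks like. This is not DeepMind’s code; AlphaGo, as published, combines deep convolutional networks with Monte Carlo tree search, and the random weights here merely stand in for parameters that a real system would fit to expert games and refine through self-play.

```python
import numpy as np

BOARD_POINTS = 19 * 19  # 361 intersections on a Go board

def policy(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Turn per-point board features into a probability for every move.

    Note that no hand-written Go rules appear here: the behavior lives
    entirely in `weights`, which a real system would learn from data.
    """
    scores = features @ weights        # one raw score per board point
    scores -= scores.max()             # shift for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()         # softmax over all 361 points

# Stand-ins for illustration: random features and *untrained* weights.
rng = np.random.default_rng(0)
features = rng.normal(size=(BOARD_POINTS, 32))  # 32 made-up features per point
weights = rng.normal(size=32)

move_probs = policy(features, weights)
print("suggested move index:", int(move_probs.argmax()))
```

The learning itself, fitting those weights from large numbers of expert positions and then improving them through self-play, is the part this sketch omits, and it is where the real research effort went.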
Their work is the culmination of years of effort on the part of many of the world’s top scientists. “There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” said Oxford philosophy professor Nick Bostrom. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”
But that is exactly what is so exciting about this: the technology is progressing fast, faster in fact than many imagined it would, and practical, real-world AI is now right around the corner. “DeepMind’s techniques can help our smartphones not only recognize images and spoken words and translate from one language to another, but also understand language. These techniques are a path to machines that can grasp what we’re saying in plain old English and respond to us in plain old English—a Siri that actually works,” writes Cade Metz for Wired.
But that would be only the beginning. AI that can work in an evolving, complex environment like a game of Go can certainly be useful for making some more serious decisions, too. That’s exactly what Professor Zoubin Ghahramani of the University of Cambridge thinks: “That could be used for decision-making problems – to help doctors make treatment plans, for example, in businesses or anywhere where you’d like to have computers assist humans in decision making,” he said.
With the multitude of sensor data available to the modern military commander, knowing everything that is happening at any given moment is nigh on impossible. A smart, dedicated AI that can not only synthesize the data but also reach useful decisions based on it would prove an invaluable tool for any modern army, putting it in a league above the rest.
This reality is likely still a long way away, but Google just brought us one significant step closer to it.