In 2017, a poker-playing AI called Libratus made headlines when it defeated four top human players. Now, Libratus' technology is being adapted to serve the US military.
Libratus, Latin for "balanced," was created by researchers from Carnegie Mellon University to test ideas for automated decision making based on game theory. Early last year, the professor who led the project, Tuomas Sandholm, founded a startup called Strategy Robot to adapt his lab's game-playing technology for government use, such as in war games and simulations used to explore military strategy and planning. In August, public records show, the company received a two-year US Army contract worth up to $10 million.
Libratus' defeat of poker pros in 2017 was seen as a milestone in AI because the card game involves hidden information and bluffing, complex features absent from board games such as chess and Go that computers had previously mastered.
Libratus was built on a technology called computational game theory. It won more than $1.8 million in play money from the poker champions by calculating how they might respond to its decisions. The software devised powerful betting strategies and even showed the ability to bluff.
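Libratus' approach belongs to the counterfactual-regret-minimization family of game-theory algorithms, whose core building block is regret matching: play each action in proportion to how much you regret not having played it in the past. As a rough, self-contained illustration (a toy sketch for rock-paper-scissors, not Libratus' actual code), two regret-matching agents trained against each other converge, on average, to the game's Nash equilibrium of mixing all three actions equally:

```python
# Toy regret-matching self-play for rock-paper-scissors.
# Payoff matrix for the row player: rows/columns are rock, paper, scissors.
PAYOFF = [
    [0, -1, 1],   # rock:    ties rock, loses to paper, beats scissors
    [1, 0, -1],   # paper:   beats rock, ties paper, loses to scissors
    [-1, 1, 0],   # scissors: loses to rock, beats paper, ties scissors
]
N = 3

def get_strategy(regret_sum):
    # Mix actions in proportion to positive cumulative regret;
    # fall back to uniform when no action is regretted.
    positives = [max(r, 0.0) for r in regret_sum]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1.0 / N] * N

def train(iterations):
    # Slightly asymmetric starting regrets so the deterministic
    # dynamics don't sit motionless at the equilibrium.
    regrets = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]
    strategy_sums = [[0.0] * N, [0.0] * N]
    for _ in range(iterations):
        strategies = [get_strategy(r) for r in regrets]
        for p in range(2):
            for i in range(N):
                strategy_sums[p][i] += strategies[p][i]
        # Regret update: how much better each pure action would have
        # done against the opponent's current mix than our own mix did.
        for p in range(2):
            opp = strategies[1 - p]
            action_values = [
                sum(PAYOFF[i][j] * opp[j] for j in range(N)) for i in range(N)
            ]
            current = sum(strategies[p][i] * action_values[i] for i in range(N))
            for i in range(N):
                regrets[p][i] += action_values[i] - current
    # The *average* strategy over all iterations approximates a Nash
    # equilibrium; the current strategy may keep cycling.
    return [
        [s / sum(strategy_sums[p]) for s in strategy_sums[p]] for p in range(2)
    ]

avg1, avg2 = train(100000)
print([round(x, 2) for x in avg1])  # each probability close to 1/3
```

Poker requires far more than this, hidden cards mean regrets must be computed over information sets rather than single states, but the same principle of driving average regret to zero through self-play underlies Libratus' strategy computation.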
Sandholm told Wired that the approach can be applied to other games, as well as to military simulations. He declines to discuss specifics of Strategy Robot's projects, which include at least one other government contract, but says the technology can tackle simulations that involve making decisions in a simulated physical space, such as where to place military units. Libratus' poker technique suggests Strategy Robot might deliver military personnel some surprising recommendations. Pro players who took on the bot found that it flipped unnervingly between tame and hyper-aggressive tactics, all the while relentlessly notching up wins as it calculated paths to victory. "It's weird because it doesn't seem that it overwhelms you, but then you look at the score and realize what's happened," Sandholm says.
The Pentagon is pushing to make broader use of AI technology. In 2017, then US defense secretary James Mattis explained that his department lagged behind technology companies in the adoption of technologies like machine learning. That same year, the Pentagon started a program called Project Maven, intended to employ commercially available AI techniques on US missions. Its initial project used machine learning to flag objects in drone surveillance video, with help from AI-savvy startups and large companies—including Google.
Other nations, too, are exploring military uses of AI. Russian president Vladimir Putin has said that whoever leads in AI “will become the ruler of the world.” Military applications feature prominently in China’s national AI strategy. In 2017, China’s National Defense University hosted a national war-gaming contest in which human teams took on an AI system.
The growing military interest in AI unsettles some technologists who are advancing the underlying technology. Some of Google’s AI researchers joined the thousands of employees who protested against the company’s work on Project Maven. Sandholm believes concerns about US military use of AI are overblown. The technology is important to help the Pentagon keep the US safe and improve operational efficiency, he says. “I think AI’s going to make the world a much safer place,” Sandholm says.