Artificial Intelligence – the Next Cold War?

A report by the Pentagon’s Defense Science Board argues that the United States needs to prepare for artificial intelligence (AI) in warfare – because other countries might prepare for it first.

The report, cited by PDDNet.com, says that “while evident that the DoD is moving forward in the employment of autonomous functionality, it is equally evident that the pull from diverse global markets is accelerating the underlying tech base and delivering high-value capabilities at a much more rapid pace.” The report recommends a wide variety of experimental projects and suggests that the DoD reach out to private, “non-traditional R&D communities.”

The report operates on the premise that autonomous weapons capabilities will eventually be developed, and that the US needs to avoid an AI Cold War. Doing that involves beefing up the U.S.’s autonomous arsenal, most of which, the authors say, can be employed in non-lethal military applications. The writers are aware of the public fear of “the use of autonomous weapons systems with potential for lethality,” and note that development of technology that smacks of killer robots “may meet with resistance unless DoD makes clear its policies and actions across the spectrum of applications.”

So, most autonomous systems might be focused on piloting vehicles or on communication between human warfighters – or at least, those are the applications the report focuses on. The exception to this rule is a recommendation for a “minefield of autonomous lethal UAVs,” which could prevent unwanted incursions into American-held zones on land or underwater. These would be designed to be “cascaded,” deploying smaller automated weapons in order to control specific areas.

According to the report, “for some specific algorithm choices—such as neuromorphic pattern recognition for image processing, optimization algorithms for decision-making, deep neural networks for learning, and so on—the ‘reasoning’ employed by the machine may take a strikingly different path than that of a human decision-maker.”