
The Pentagon is eager to begin incorporating autonomous systems into exercises and operations. Military leaders have been pushing new concepts of operation built around human-machine teaming for years, with the goal of augmenting soldiers’ ability to carry equipment, sense their environment, and respond to threats with help from ground robots, small tactical drones, and even armed robots.

However, it appears that frontline military personnel are more apprehensive than their commanders about teaming with unmanned systems. This is the conclusion of new research published in the U.S. Air Force’s Journal of Indo-Pacific Affairs, based on a survey of 800 officer cadets and midshipmen at the Australian Defence Force Academy.

The survey showed that “a significant majority would be unwilling to deploy alongside fully autonomous” lethal autonomous weapons systems, or LAWS, and that “the perceived safety, accuracy, and reliability of the autonomous system and the potential to reduce harm to civilians, allied forces, and ADF personnel are the most persuasive benefits,” as opposed to other factors such as cost savings.

So how do you get the troops who have to fight alongside robots to trust them? A recently published paper from the Naval Postgraduate School offers a new look at the problem from an operator’s perspective. 

In it, Marine Corps Maj. Daniel Yurkovich argues that the “inability to (a) understand artificial intelligence (AI) and (b) train daily, will compound to create an atmosphere of mistrust in valuable systems that could otherwise improve the lethality of Infantry Marines.”

Under an approach called interactive machine learning, the key to building that trust might be to let operators help train the AI-powered machines that serve beside them, rather than simply handing a soldier, Marine, or airman a robot and sending the pair off to war together.

“Teaching and developing AI agents within a simulated environment by the end user indicate there is the potential for better trust in the AI agent by the end-user when placed as a teammate” within a human-machine team, Yurkovich wrote.

According to defenseone.com, interactive machine learning brings not just the algorithm designer but also the user into the process of designing updates. This allows users to “interactively examine the impact of their actions and adapt subsequent inputs to obtain desired behaviors…even users with little or no machine-learning expertise can steer machine-learning behaviors through low-cost trial and error or focused experimentation with inputs and outputs.”
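
To make that description concrete, here is a minimal sketch of what such a human-in-the-loop training cycle could look like, assuming a scikit-learn-style classifier that supports incremental updates. The scenario, the feature vectors, and the operator_label stand-in are hypothetical illustrations, not drawn from Yurkovich’s paper or the survey.

```python
# Hypothetical sketch of an interactive machine-learning loop: an operator
# reviews each model decision, supplies a corrected label, and the
# correction is folded back into the model immediately.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])  # e.g. 0 = "no threat", 1 = "flag threat" (illustrative)

# Seed the model with a handful of designer-provided examples.
model = SGDClassifier()
X_seed = rng.normal(size=(10, 4))
y_seed = (X_seed.sum(axis=1) > 0).astype(int)
model.partial_fit(X_seed, y_seed, classes=classes)

def operator_label(x):
    """Stand-in for the human teammate's judgment (assumed ground truth)."""
    return int(x.sum() > 0)

# Interactive loop: the user examines each prediction and steers the
# model through low-cost trial and error, no ML expertise required.
for step in range(100):
    x = rng.normal(size=(1, 4))
    pred = model.predict(x)[0]
    truth = operator_label(x[0])
    if pred != truth:
        # Feed the corrected example straight back into the model.
        model.partial_fit(x, [truth])
```

The essential feature of the pattern is the tight feedback cycle: the operator sees every decision the system makes, corrects the ones that are wrong, and each correction updates the model on the spot, which is what allows users with little or no machine-learning background to shape the system’s behavior through focused experimentation.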