
The goal: to give human decision-makers operating in a command-and-control capacity organized, fused, and integrated combat data from robots in real time. The manned-unmanned teaming and human-machine interface demonstrated more coordinated human decision-making alongside robots. 

US Army soldiers operating modified Bradley Fighting Vehicles acquired enemy targets by operating combat drones armed with guns and sensors, as part of an exercise intended to refine unmanned systems for targeting and attack operations. 

The modified Bradleys, called Mission Enabling Technologies-Demonstrators (MET-Ds), controlled several Robotic Combat Vehicles (RCVs) at ranges of up to 2,000 meters. 

These new fighting vehicles have 360-degree cameras, a 25-mm main gun, and control touchscreens. “The RCVs are M113 surrogate platforms that also have 360 cameras and fire 7.62 mm machine guns,” the Army report stated. 

Unmanned vehicles could carry ammunition, cross bridges into enemy fire, perform forward recon missions to test enemy defenses, coordinate with air attack assets and—when directed by human authorities—destroy enemy targets with mounted weapons. 

These kinds of technical advances not only expand attack options and combat lethality while better protecting soldiers from enemy fire, but also help expand the battlefield by expediting air-ground networking and longer-range operations. 

The unmanned vehicles could also accompany dismounted soldiers on patrol as a way to reinforce their mission without placing manned crews in danger of enemy fire. 

The robot vehicles could call for fire, carry supplies and ammunition, survey forward terrain, provide targeting data or, if called upon by a human operator, potentially fire weapons. 

However, for ethical and tactical reasons, the military maintains its clear position that humans must make decisions regarding the use of lethal force, despite advances in algorithms enabling greater autonomy. 

The doctrinal stance is also grounded in a recognition that even the most advanced computer algorithms are not sufficient to replace the problem-solving and decision-making abilities of human cognition. There is concern, however, that potential adversaries will not adhere to similar doctrine, according to nationalinterest.org.