Mission: Teaching Robots to Drive at Battlefield

An iRobot 310 Small Unmanned Ground Vehicle belonging to Combat Logistics Battalion 31, 31st Marine Expeditionary Unit, sits staged with 3-D printed lens covers aboard the USS Wasp (LHD-1) while underway in the Pacific Ocean, April 17, 2018. Marines with CLB-31 are now capable of ‘additive manufacturing,’ also known as 3-D printing, the technique of replicating digital 3-D models as tangible objects. The 31st Marine Expeditionary Unit partners with the Navy’s Amphibious Squadron 11 to form the Wasp Amphibious Ready Group, a cohesive blue-green team capable of accomplishing a variety of missions across the Indo-Pacific. (U.S. Marine Corps photo by Cpl. Stormy Mendez/Released)


The challenges facing military robots are not simple. Unlike the self-driving cars being developed by Google, Uber and others, military robots will operate in complex environments that lack standardized markings such as lanes, street signs, curbs and traffic lights.

Now, robots are learning how to be better mission partners to soldiers, starting with how to find their way with minimal human intervention. The project is being carried out by scientists at the US Army Research Laboratory (ARL) and Carnegie Mellon University’s Robotics Institute.

According to ARL researcher Maggie Wigness, “environments that we operate in are highly unstructured compared to [those for] self-driving cars. We cannot assume that there are road markings on the roads, we cannot assume that there is a road at all. We are working with different types of terrain.”

While the training of self-driving cars “requires a tremendous amount of labeled training data,” Wigness said, this is not the case in Army-relevant environments, “so we are focusing more on how to learn from small amounts of labeled data.” Specifically, in the ARL project, the robots are trained to navigate environmental features following examples provided by humans.

Luis Navarro-Serment, a senior project scientist at Carnegie Mellon University’s Robotics Institute, offered an example.  “Say there’s a puddle. Humans will generally move to avoid the puddle. By observing, the robot can learn to do the same. It’s a form of emulation,” Navarro-Serment said.
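To make that emulation idea concrete, here is a minimal sketch in Python, assuming the robot can label the terrain under the demonstrator’s vehicle at each step; the terrain names, the trajectory, and the frequency-to-cost mapping are all invented for illustration and are not ARL’s or CMU’s actual method. Terrain the human drives over often becomes cheap to traverse; terrain the human avoids, like the puddle, becomes expensive.

```python
from collections import Counter

# Terrain label under the vehicle at each step of a human-driven
# demonstration (hypothetical labels and trajectory).
demo_terrain = ["road", "road", "road", "grass", "road",
                "road", "road", "road", "road", "mud"]

counts = Counter(demo_terrain)
total = sum(counts.values())

# Terrain the demonstrator favors gets low cost; terrain the
# demonstrator avoids gets high cost. Laplace smoothing (+1)
# keeps terrain never seen in the demo finite but costly.
def traversal_cost(terrain: str) -> float:
    freq = (counts.get(terrain, 0) + 1) / (total + len(counts) + 1)
    return 1.0 - freq

for t in ["road", "grass", "mud", "water"]:
    print(t, round(traversal_cost(t), 3))
```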

In the ARL project, humans assigned weights to the various features in the environment to help the robot learn to resolve conflicting commands, according to defensesystems.com. “For example, we train a robot to drive on road terrain and avoid the grass, so it learns the grass is bad to drive on and the road is good to drive on,” ARL researcher John Rogers said. But the team then gave the robot an additional command: avoid the field of view of a sniper. The robot, he said, “needs to balance these two goals simultaneously. It needs to break one of the behaviors.” Presumably, with proper weighting of the factors, the robot will opt to drive on the grass.

“In the ultimate vision of this research, the robot will be operating alongside soldiers while they are performing the mission and doing whatever specific duties it has been assigned,” Rogers said.
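As a rough illustration of how such weighted, conflicting objectives can be combined (a sketch only; the grid values, feature names, and weights below are hypothetical, not ARL’s code), a planner can sum per-feature cost maps, so that a heavily weighted sniper-visibility penalty outweighs the learned preference for road over grass:

```python
import numpy as np

# Hypothetical per-cell feature costs over a 2-D grid map.
# Lower cost = more desirable to traverse (assumed convention).
terrain_cost = np.array([
    [0.1, 0.1, 0.9],   # 0.1 ~ road, 0.9 ~ grass
    [0.1, 0.1, 0.9],
    [0.1, 0.1, 0.9],
])
sniper_visibility = np.array([
    [1.0, 1.0, 0.0],   # 1.0 = cell lies in the sniper's field of view
    [1.0, 0.5, 0.0],
    [0.0, 0.0, 0.0],
])

# Weights encode the relative importance of each behavior.
# With w_sniper >> w_terrain, staying hidden dominates, so a
# planner will "break" the stay-on-the-road behavior and cut
# across the grass, the trade-off Rogers describes.
w_terrain, w_sniper = 1.0, 5.0
total_cost = w_terrain * terrain_cost + w_sniper * sniper_visibility

# A search algorithm (A*, Dijkstra, etc.) would then find the
# minimum-cost path through total_cost.
print(total_cost)
```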

The ARL project, along with other Robotics Collaborative Technology Alliance (RCTA) projects, is scheduled to conclude in the fall of 2019 with a demonstration of the new learning technology.