Human-Machine Teaming Reaches New Record



The level of human trust in military autonomous systems influences their teaming performance. In fact, the U.S. Defense Science Board identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

A new situational awareness technology has been developed to address these problems.

U.S. Army Research Laboratory scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative, supported by the Office of the Secretary of Defense.

They did so by enhancing agent transparency, which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

ARL’s Dr. Jessie Chen, senior research psychologist, explains: “As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust in the systems and make appropriate decisions.”

According to phys.org, the Situation awareness-based Agent Transparency, or SAT, model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment.

At the first SAT level, the agent provides the operator with basic information about its current state, goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints and affordances it considers when planning its actions. At the third level, the agent provides information about its projection of future states, predicted consequences, the likelihood of success or failure, and any uncertainty associated with those projections.
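Because each level adds information on top of the previous one, the model lends itself to a layered data shape. Below is a minimal, illustrative sketch in Python: the names (SATLevel, TransparencyReport, render) and fields are hypothetical, invented here only to organize the information listed above, and do not represent ARL software or a formal specification of the SAT model.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Optional

class SATLevel(IntEnum):
    """The three cumulative transparency levels of the SAT model."""
    PLANS = 1        # current state, goals, intentions, plans
    REASONING = 2    # reasoning process, constraints/affordances
    PROJECTIONS = 3  # projected future states, likelihood of success, uncertainty

@dataclass
class TransparencyReport:
    """Hypothetical container for what an agent reports to its operator."""
    # Level 1: basic state and intent
    current_state: str
    goal: str
    plan: list[str]
    # Level 2: reasoning behind the plan
    rationale: Optional[str] = None
    constraints: list[str] = field(default_factory=list)
    # Level 3: projections and uncertainty
    predicted_outcome: Optional[str] = None
    success_likelihood: Optional[float] = None  # e.g. 0.0 to 1.0
    uncertainty: Optional[str] = None

def render(report: TransparencyReport, level: SATLevel) -> dict:
    """Expose only the fields appropriate to the requested transparency level."""
    out = {"state": report.current_state, "goal": report.goal, "plan": report.plan}
    if level >= SATLevel.REASONING:
        out.update(rationale=report.rationale, constraints=report.constraints)
    if level >= SATLevel.PROJECTIONS:
        out.update(outcome=report.predicted_outcome,
                   p_success=report.success_likelihood,
                   uncertainty=report.uncertainty)
    return out
```

In this framing, the experimental manipulation described below amounts to choosing which SATLevel an operator's display renders.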

In one of the projects, IMPACT, a research program on human-agent teaming for the management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort examined the effects of agent transparency levels, based on the SAT model, on human operators’ decision making in military scenarios.

The results suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. The human’s trust in the agent was significantly better calibrated when the agent had a higher level of transparency: operators accepted the agent’s plan when it was correct and rejected it when it was incorrect.

Another project, the Autonomous Squad Member (ASM), on which ARL collaborated with Naval Research Laboratory scientists, centers on a small ground robot that interacts and communicates with an infantry squad. Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance.

Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module in which user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes promote transparent interaction with the agent. The study found positive effects of agent transparency on the human’s task performance without an increase in perceived workload.
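To make the idea of an at-a-glance transparency module concrete, here is a rough, hypothetical sketch of how SAT-style information categories might be mapped to icons in a one-line status strip. The glyphs, field names, and the at_a_glance function are invented for illustration and are not the ASM’s actual user-tested iconography.

```python
# Hypothetical mapping from SAT-model information categories to display icons.
# The glyphs and categories are illustrative only; the real ASM interface uses
# iconography developed and user-tested by ARL/NRL.
ICONS = {
    "plan": "🧭",        # Level 1: what the robot intends to do
    "motivator": "🎯",   # Level 2: why, i.e. the goal driving the plan
    "projection": "📈",  # Level 3: expected outcome and its uncertainty
}

def at_a_glance(plan: str, motivator: str, projection: str, confidence: float) -> str:
    """Render a one-line status strip an operator can parse at a glance."""
    return (f"{ICONS['plan']} {plan} | "
            f"{ICONS['motivator']} {motivator} | "
            f"{ICONS['projection']} {projection} ({confidence:.0%} confident)")

print(at_a_glance("move to rally point", "cover squad's flank",
                  "arrive in 2 min", 0.85))
```

The design intuition, consistent with the study’s findings, is that packing all three SAT levels into a compact, glanceable strip conveys more of the agent’s intent and reasoning without demanding more of the operator’s attention.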