Human-Machine Teaming: New Approach

In recent years, the U.S. military has developed many programs and prototypes pairing humans with intelligent machines, from robotic mules that help infantry units carry ammunition and equipment to AI-enabled autonomous drones that partner with fighter jets, supporting intelligence collection missions and air strikes. As AI (Artificial Intelligence) becomes smarter and more reliable, the potential ways in which humans can team with unmanned systems, robots, virtual assistants, algorithms, and other non-human intelligent agents seem limitless.

Human-machine teaming is a substantial element of future warfare and a core part of the Department of Defense’s strategy for AI. Successful and effective collaboration between humans and intelligent machines depends in large part on trust, which shapes people’s willingness to use intelligent machines and to accept their recommendations.

Rather than studying trust directly, defense researchers and developers have prioritized technology-centric solutions that “build trust into the system” by making AI more transparent, explainable, and reliable. However, such solutions may not fully account for the human element in the teaming equation.

Brookings.edu suggests that a holistic understanding of trust — one that pays attention to the human, the machine, and the interactions and interdependencies between them — can help the U.S. military move forward with its vision of using intelligent machines as trusted partners to human operators.

Ensuring that autonomous and AI-enabled systems are used in safe, secure, effective, and ethical ways will depend in large part on soldiers having the proper degree of trust in their machine teammates.

In the context of human-machine teaming, trust speaks to an individual’s confidence in the reliability of the technology’s conclusions and its ability to accomplish defined goals. Having too little trust in highly capable technology can lead to underutilization or disuse of AI systems, while too much trust in limited or untested systems can lead to overreliance on AI. 

Building trustworthy AI that is transparent, interpretable, reliable, and exhibits other characteristics and capabilities that enable trust is an essential part of creating effective human-machine teams. But so is having a good understanding of the human element in this relationship.

What does it take for people to trust technology? Are some individuals or groups more likely to feel confident about using advanced systems, while others are more reluctant? How does the environment within which human-machine teams are deployed affect trust? Cognitive science, neuroscience, psychology, communications, social sciences, and other related fields that look into human attitudes and experiences with technology provide rich insights into these questions.

For example, research shows that demographic factors such as age, gender, and cultural background affect how people interact with technology, including issues of trust. 

Stressful conditions and the mental and cognitive pressures of performing complex tasks also influence trust, with research showing that people generally tend to over-trust a machine’s recommendations in high-stress situations. 

Broader societal structures also play a role, with organizational and workplace culture conditioning how people relate to technology. For instance, different branches within the military and even individual units each have a unique organizational culture, including different postures toward technology that are reinforced through training and exercises. 

A focus on the human element is therefore a necessary complement to technology-centric solutions. Insights into the cognitive, demographic, emotional, and situational factors that shape how people interact with technology, as well as the broader institutional and societal structures that influence human behavior, can help augment and refine systems engineering approaches to building trustworthy AI systems.