Can Fighter Pilots Trust AI With Their Lives?
The U.S. military is researching human-machine cooperation by testing how well pilots and artificial intelligence trust each other in aerial combat.

The Air Combat Evolution (ACE) program is DARPA’s attempt to increase the capabilities of the military’s fighter pilots. The idea is that, soon enough, pilots in manned aircraft will be escorted by “Loyal Wingmen” in the form of UAVs. These increasingly capable UAVs will help the pilot evade enemy fighters and air defenses. The intent is that F-35 pilots will feel as though they are sitting in a flying command center rather than in a fighter jet.

The F-35 is equipped with several AI features that make flying the fighter significantly easier than in past generations of jets. The AI also helps manage the immense amount of data that the UAVs and other flight sensors pull in.

So the scenario is this: human pilots are escorted by lethal, intelligent robots capable of making life-threatening decisions, for pilots and targets alike. The main question is how likely pilots are to trust experimental AI software when their lives are potentially on the line.

The main focus of the ACE program is to measure, calibrate, predict, and increase human trust in the performance of combat autonomy.

The program will first test how well humans and AI-capable tools, such as UAVs and computers, perform together in aerial dogfights. It will then move the human-machine teams into a simulation environment to see how well pilots perform when commanding multiple UAVs at once. The simulation will also include adversary UAVs and other countermeasures to threaten the pilot.

Defenseone.com reports that government officials believe the United States is not the only country experimenting with such autonomous weapons. They believe Russia and China have already begun testing AI weapons and systems; however, the edge still goes to the U.S., owing to the high degree of faith placed in the humans operating these AI systems.

The U.S. understands that we are in an age where AI has surpassed the speed and strength of human thinking, to the point where a computer can beat a human grandmaster at chess. However, experience has shown that although AI may beat humans in many fields, humans paired with AI almost always prevail over both other humans and other AI systems. This is the central idea behind the ACE program: helping humans gain the advantage by building their trust in non-human systems.