Artificial intelligence and humans have different risk tolerances when data is scarce, and a recent experiment at the US Defense Intelligence Agency (DIA) suggests those tolerances run opposite to what many would expect. In a real scenario in which human analysts and AI both assessed enemy activity, the AI proved more cautious than the humans about its conclusions when data was limited.
While the results are preliminary, they offer an important glimpse into how humans and AI will complement one another in critical national security fields.
Terry Busch, the technical director for the agency’s Machine-Assisted Analytic Rapid-Repository System, or MARS, described the experiment to defenseone.com. In the first phase, analysts and the machine used available data to determine whether a particular ship was in U.S. waters.
“Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in the United States,” he said. So far, so good. Humans and machines using available data can reach similar conclusions.
The second phase tested something different: conviction. Would humans and machines be equally certain in their conclusions if less data were available? The experimenters severed the connection to the Automatic Identification System, or AIS, which tracks ships worldwide.
In theory, with less data, the human analysts should have been less certain in their conclusions, but the researchers found the opposite. “Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that the machine, and those responsible for doing the machine learning, took far less risk — in confidence — than the humans did,” he said. “The machine actually does a better job of lowering its confidence than the humans do…”
The experiment provides a snapshot of how humans and AI may team up on important analytical tasks. But it also suggests that human judgment has limits when pride is involved.