
One of the challenges involved with the use of artificial intelligence is trust. While machine learning algorithms show promise for defense systems, it remains unclear how much users can trust their output. US intelligence officials have repeatedly noted that analysts cannot rely on black box artificial intelligence systems that simply produce a decision or piece of intelligence—they need to understand how the system came to that decision and what unseen biases (in the training data or otherwise) might be influencing that decision.

The technology that underpins machine learning and artificial intelligence applications is rapidly advancing, and now it’s time to ensure these systems can be integrated, utilized, and ultimately trusted in the field. New software could help the US Department of Defense build confidence in decisions and intelligence produced by these algorithms.

According to c4isrnet.com, the software is designed to increase transparency in machine learning systems by auditing them to provide insight into how they reach their decisions.
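The article does not describe how such an audit is carried out. As a rough, hedged illustration of one common way to probe a black-box model, the sketch below estimates how much each input feature drives the model's output by shuffling one feature at a time and measuring the drop in performance (permutation importance). Every name, the toy model, and the toy data here are hypothetical and are not drawn from the Mindful program.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Estimate each feature's contribution to a black-box model's score
    by shuffling one column at a time and measuring the drop in the metric."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
            drops.append(baseline - metric(y, model(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Toy usage: a "model" whose output depends only on feature 0,
# so the audit should attribute almost all importance to that feature.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]
score = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)  # higher is better
print(permutation_importance(model, X, y, score))
```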

The Mindful software program was delivered by BAE Systems to the Defense Advanced Research Projects Agency (DARPA). It was developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory. The system stores relevant data in order to compare the current environment to past experiences and deliver findings that are easy to understand.
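The article says only that the system compares the current environment to stored past experiences; it does not explain how. As a minimal sketch of that general idea, the snippet below retrieves the stored past cases most similar to a current input using a simple nearest-neighbor distance, so the resulting "finding" can be read as a list of precedents. The function name, fields, and toy data are all hypothetical, not a description of Mindful's implementation.

```python
import numpy as np

def explain_by_precedent(query, past_inputs, past_outcomes, k=3):
    """Return the k stored past cases most similar to the current input,
    so an analyst can see which prior experiences a comparison rests on."""
    distances = np.linalg.norm(past_inputs - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return [
        {"case": int(i), "distance": float(distances[i]), "outcome": past_outcomes[i]}
        for i in nearest
    ]

# Toy usage: three stored "experiences" described by two numeric features.
past_inputs = np.array([[0.0, 1.0], [5.0, 5.0], [0.2, 0.9]])
past_outcomes = ["benign", "threat", "benign"]
print(explain_by_precedent(np.array([0.1, 1.0]), past_inputs, past_outcomes, k=2))
```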
