Cyber Security On The Battlefield – From AI To Autonomy

Artificial intelligence software in battlefield autonomous systems can be vulnerable to cyber attacks. Researchers are looking for ways to make the systems’ machine learning algorithms more secure. These algorithms are what the machines rely on to make decisions and adapt on the battlefield. The research project, led by Purdue University in partnership with Princeton University, is part of the US Army Research Laboratory (ARL) Army Artificial Intelligence Institute (A2I2).

The goal is to develop a robust, distributed and usable software suite for autonomous operations. The prototype system will be called SCRAMBLE, short for “SeCure Real-time Decision-Making for the AutonoMous BattLefield.” Army researchers will be evaluating SCRAMBLE at the Army Research Laboratory’s autonomous battlefield test bed to ensure that the machine learning algorithms can be feasibly deployed and avoid cognitive overload for warfighters using these machines.

According to purdue.edu, there are several points in an autonomous operation where a hacker might attempt to compromise a machine learning algorithm. Before an autonomous machine is even deployed on a battlefield, an adversary could manipulate the process that technicians use to feed data into the algorithms and train them offline. SCRAMBLE would close these hackable loopholes in three ways. The first is through “robust adversarial” machine learning algorithms that can operate with uncertain, incomplete or maliciously manipulated data sources. Second, the prototype will include a set of “interpretable” machine learning algorithms aimed at increasing a warfighter’s trust in an autonomous machine while interacting with it.
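The article does not describe SCRAMBLE's actual algorithms. As an illustration of what operating with "maliciously manipulated data sources" can mean in practice, here is a minimal sketch of a classical robust-statistics idea, a coordinate-wise trimmed mean, which limits how far a small number of poisoned inputs can pull an aggregate estimate. The function name and the example data are hypothetical.

```python
import numpy as np

def trimmed_mean(samples, trim_frac=0.2):
    """Coordinate-wise trimmed mean: sort each coordinate, drop the
    lowest and highest trim_frac fraction of values, and average the
    rest, so a minority of manipulated inputs cannot dominate."""
    arr = np.sort(np.asarray(samples, dtype=float), axis=0)
    k = int(len(arr) * trim_frac)
    return arr[k:len(arr) - k].mean(axis=0)

# Nine honest sensor readings near 1.0, one maliciously manipulated.
readings = [[1.0], [0.9], [1.1], [1.05], [0.95],
            [1.0], [1.02], [0.98], [1.0], [100.0]]

robust = trimmed_mean(readings)      # stays close to 1.0
naive = np.mean(readings, axis=0)    # dragged far off by the outlier
```

The same principle underlies robust aggregation rules studied for adversarial and federated learning, where some fraction of the data (or of the contributing nodes) is assumed to be compromised.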

“The operating environment of SCRAMBLE will be constantly changing for many reasons such as benign weather changes or adversarial cyberattacks,” said David Inouye, a Purdue assistant professor of electrical and computer engineering. “These changes can significantly degrade the accuracy of the autonomous system or signal an enemy attack. Explaining these changes will help warfighters decide whether to trust the system or investigate potentially compromised components.”
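The article does not say how SCRAMBLE detects or explains such environment changes. As a hedged sketch of the general idea, the snippet below flags distribution shift by measuring how far the per-feature mean of incoming data has moved from a trusted reference batch, in units of the reference standard deviation; large scores on a feature would prompt an operator to investigate rather than blindly trust the system. The function name, threshold, and data are hypothetical.

```python
import numpy as np

def drift_score(reference, current):
    """Per-feature shift of the current batch mean relative to a
    trusted reference batch, in units of the reference standard
    deviation. Large values flag features whose input distribution
    has changed (benignly or maliciously)."""
    ref = np.asarray(reference, dtype=float)
    cur = np.asarray(current, dtype=float)
    mu, sigma = ref.mean(axis=0), ref.std(axis=0) + 1e-9
    return np.abs(cur.mean(axis=0) - mu) / sigma

rng = np.random.default_rng(0)
# Reference data: three features drawn from a standard normal.
reference = rng.normal(0.0, 1.0, size=(1000, 3))
# New data: feature 1 has shifted (e.g. weather change or tampering).
shifted = rng.normal([0.0, 4.0, 0.0], 1.0, size=(100, 3))

scores = drift_score(reference, shifted)  # feature 1 stands out
```

Reporting which feature shifted, rather than a single pass/fail signal, is one simple way an "interpretable" monitor can help a human decide whether a degradation looks like weather or like an attack.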

The third strategy will be a secure, distributed execution of these various machine learning algorithms on the multiple platforms involved in an autonomous operation. Ultimately, the goal is to keep all of these algorithms secure even though they are distributed across an entire operational domain.
