Machine Learning Systems And Their Security Vulnerabilities

Robots of the future, photo illust. by Pixabay

While training a neural network, we usually use backpropagation to compute the derivative of the cost function with respect to the network’s weights. During an evasion attack, by contrast, the attacker uses backpropagation to compute the derivative of the cost function with respect to the input: the resulting gradient tells the attacker how to perturb the input so that the model misclassifies it.
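
To make this concrete, here is a minimal sketch of the idea in PyTorch, using the fast gradient sign method (FGSM) as one well-known evasion attack. The model, image, label, and epsilon below are illustrative placeholders, not details from the article.

```python
# Minimal FGSM sketch: backpropagate to the *input*, not the weights.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Perturb a batched `image` so the model's loss on `label` increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()  # gradient of the cost w.r.t. the input pixels
    # Step in the direction that increases the loss, clipped to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The attacker's loop is the mirror image of training: instead of nudging the weights to reduce the loss, it nudges the input to raise it.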

For example, tricking an autonomous vehicle into not recognizing a stop sign is an evasion attack. Autonomous vehicles use object detectors to both locate and classify multiple objects in a given scene (e.g., pedestrians, other cars, or street signs). An object detector outputs a set of bounding boxes, along with the label and likelihood of the most probable object contained within each box.
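
As a rough illustration of that output format, the sketch below runs torchvision's pretrained Faster R-CNN (a stand-in detector; the article does not name a specific model, and this assumes torchvision 0.13+) on a placeholder image and prints each box with its most likely label and score.

```python
# Sketch of a detector's output: boxes, labels, and confidence scores.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)          # placeholder scene, not real data

with torch.no_grad():
    detections = model([image])[0]       # one dict per input image

# Each detection is a bounding box plus the most likely label and its score.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:                      # keep confident detections only
        print(f"label={label.item()} score={score:.2f} box={box.tolist()}")
```

An evasion attack on a detector succeeds when the stop sign's box either disappears entirely or its score drops below the detection threshold.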

In an interview with jaxenter.com, David Glavas, a researcher at the Technical University of Munich (TUM) and an expert on adversarial machine learning (the study of security vulnerabilities in machine learning systems), said the most dangerous type of attack is the evasion attack, which deliberately manipulates a system's input to evade detection. Such attacks can be performed with relatively little knowledge about the target system.

In late 2018, researchers showed that they could make a stop sign “disappear” from a detector's view by placing adversarial stickers on the sign. The attack tricked state-of-the-art object detection models into not recognizing the stop sign over 85% of the time in a lab environment and over 60% of the time in a more realistic outdoor setting.
