Vehicle Collision Avoidance Systems Threatened

Deep-learning algorithms excel at using shape and color to recognize the differences between humans and animals or cars and trucks, for example. These systems detect objects reliably under a wide range of conditions and, as such, are deployed across many industries, often in safety-critical roles.

Deep-learning neural networks are highly effective at many tasks; however, they were adopted so quickly that their security implications were not fully considered. While these image-processing systems are vital for protecting lives and property, the algorithms can be deceived by parties intent on causing harm.

New adversarial techniques developed by engineers at Southwest Research Institute (SwRI), Texas, can make objects “invisible” to image detection systems that use deep-learning algorithms. These techniques can also trick systems into thinking they see another object, or can change the apparent location of objects. By exposing these weaknesses, the work aims to help developers mitigate the risk of compromise in automated image processing systems.

Security researchers working in “adversarial learning” are finding and documenting vulnerabilities in deep-learning and other machine-learning algorithms. Research Engineer Abe Garza of the SwRI Intelligent Systems Division and Senior Research Engineer David Chambers developed futuristic-looking patterns that, when worn by a person or mounted on a vehicle, trick object-detection cameras into thinking the objects aren’t there, that they’re something else, or that they’re in another location. Malicious parties could place these patterns near roadways, potentially creating chaos for vehicles equipped with object detectors, according to eurekalert.org.

“These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability,” said Garza. “The patterns don’t need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify the object as long as they sense some part of the pattern.”

The patterns are designed so that object-detection camera systems interpret them in a very specific way. A pattern disguised as an advertisement on the back of a stopped bus could make a collision-avoidance system think it sees a harmless shopping bag instead of the bus. If the vehicle’s camera fails to detect the true object, it could continue moving forward and hit the bus, causing a potentially serious collision.
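
To illustrate the idea (the SwRI patterns themselves are not public), the minimal sketch below pastes a patch over part of an image and compares a pretrained detector’s output before and after. The detector, the image path `bus_scene.jpg`, and the random placeholder patch are all assumptions for illustration; a real adversarial pattern would be optimized against the target detector rather than random.

```python
# Sketch only: compare a pretrained detector's predictions on a clean image
# and on the same image with a patch pasted over part of an object.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector used as a stand-in for any camera-based object detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_tensor, score_threshold=0.5):
    """Run the detector and return the labels it reports above a confidence threshold."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] > score_threshold
    return output["labels"][keep]

# Hypothetical street scene containing a bus.
image = to_tensor(Image.open("bus_scene.jpg").convert("RGB"))

# Paste a small patch over part of the object -- it need not cover it entirely.
patch = torch.rand(3, 100, 100)            # placeholder; a real attack optimizes this pattern
patched = image.clone()
patched[:, 200:300, 300:400] = patch       # placement is illustrative

print("clean:  ", detect(image))           # should include the 'bus' class
print("patched:", detect(patched))         # an optimized patch could change or remove it
```

With a random patch the detections usually barely change; the point of adversarial-learning research is that a carefully optimized pattern occupying only a fraction of the object can flip or suppress the detection entirely.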

The team has created a framework capable of repeatedly testing these attacks against a variety of deep-learning detection programs.
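
The framework itself has not been released, but a test harness of that kind might look like the hedged sketch below: loop a set of candidate attack patterns over several pretrained detectors and record whether the target object is still found. The detector choices, the COCO “bus” label, and the patch placement are assumptions for illustration, not the SwRI implementation.

```python
# Sketch of a repeatable attack-testing loop: every candidate patch is applied to
# every scene and evaluated against several deep-learning detectors.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

DETECTORS = {
    "faster_rcnn": torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT"),
    "retinanet":   torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT"),
    "ssd":         torchvision.models.detection.ssd300_vgg16(weights="DEFAULT"),
}
for m in DETECTORS.values():
    m.eval()

BUS_LABEL = 6  # COCO class id for 'bus' in torchvision detection models

def still_detected(model, image, label=BUS_LABEL, threshold=0.5):
    """Return True if the detector still reports the target class on the attacked image."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > threshold
    return bool((out["labels"][keep] == label).any())

def run_trials(scene_paths, patches):
    """Apply each candidate patch to each scene and test every detector."""
    results = []
    for path in scene_paths:
        image = to_tensor(Image.open(path).convert("RGB"))
        for name, model in DETECTORS.items():
            for i, patch in enumerate(patches):
                attacked = image.clone()
                _, h, w = patch.shape
                attacked[:, :h, :w] = patch   # placement is illustrative
                results.append((path, name, i, still_detected(model, attacked)))
    return results
```

A harness like this makes the vulnerability measurable: a pattern counts as successful against a given detector only if the target object disappears from (or is relabeled in) its output across repeated trials.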