Cybersecurity Researchers Can Make Self-Driving Cars Hallucinate

We all experience a sort of “visual hallucination” when we mistake the silhouette of a chair or a clothing rack, glimpsed out of the corner of our eye, for a person. But what if the same thing happened to, say, a smart car?

Kevin Fu, a professor of engineering and computer science at Northeastern University who specializes in finding and exploiting vulnerabilities in emerging technologies, has managed to make self-driving cars hallucinate.

This is an entirely new kind of cyberattack on machine learning systems, one Fu and his team call “Poltergeist attacks,” and it could have disastrous consequences in the wrong hands. Poltergeist does more than jam or interfere with technology, as some other cyberattacks do: it creates false but coherent realities, optical illusions for computers that rely on machine learning to make decisions.

According to Techxplore, Poltergeist exploits the optical image stabilization found in most modern cameras, which is designed to detect the photographer’s movement and shakiness and adjust the lens so that photos are not blurry. As Fu explains, the stabilization system relies on a sensor inside the camera; if an attacker hits the acoustic resonant frequency of the sensor’s materials, they can make the sensor report false readings.
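The sketch below illustrates the underlying idea in a few lines of Python. It is a toy simulation, not the researchers’ code: the sampling rate, resonant frequency, and injected amplitude are all assumed values, and the aliasing effect shown is a mechanism documented in acoustic sensor-injection research generally.

```python
import numpy as np

# Toy simulation of acoustic injection into a stabilization sensor.
# All numbers here (sampling rate, resonant frequency, amplitude) are
# assumptions for illustration, not values from the Poltergeist research.
FS = 8_000        # assumed sensor sampling rate, Hz
F_RES = 19_400    # assumed acoustic resonant frequency of the sensor, Hz
DURATION = 0.05   # seconds of simulated readings

t = np.arange(0, DURATION, 1 / FS)
true_motion = np.zeros_like(t)  # the camera is actually perfectly still

# A tone at the resonant frequency couples into the sensor. Because it is
# far above the Nyquist rate (FS / 2), it aliases down to a low frequency
# that looks to the stabilization loop like genuine hand shake.
alias_freq = abs(F_RES - round(F_RES / FS) * FS)
injected = 0.3 * np.sin(2 * np.pi * alias_freq * t)  # amplitude assumed

sensor_reading = true_motion + injected
print(f"false 'shake' frequency seen by the stabilizer: {alias_freq:.0f} Hz")
print(f"peak false reading: {sensor_reading.max():.2f} (arbitrary units)")
```

The stabilization controller then moves the lens to cancel motion that never happened, smearing the frame instead of sharpening it.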

Fu and his team were able to fire sound waves matched to that resonant frequency at camera lenses, blurring the resulting images and conjuring fake silhouettes out of the blur patterns. When such corrupted images are fed to the machine learning system of an autonomous vehicle, Fu says, it begins to mislabel objects.
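A second toy sketch, under the same caveats, shows what that attacker-induced lens motion does to a frame: a simple horizontal blur (a stand-in for the hijacked stabilizer sweeping the lens sideways) smears a sharp object into an ambiguous band.

```python
import numpy as np

# Minimal sketch, assuming a simple linear motion blur stands in for the
# smear a hijacked stabilization lens imprints on a frame.
def motion_blur(image: np.ndarray, length: int = 11) -> np.ndarray:
    """Blur a grayscale image horizontally, as if the lens swept sideways."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=image
    )

# Toy frame: a bright two-pixel bar on a dark background, a stand-in for a
# thin object such as a pole or a pedestrian's silhouette.
frame = np.zeros((32, 32))
frame[:, 15:17] = 1.0

blurred = motion_blur(frame)

# The sharp bar smears into a wide, faint band -- exactly the sort of
# ambiguous blob a downstream vision model can mislabel.
print("original bar width (pixels above 0.5): ", int((frame[16] > 0.5).sum()))
print("blurred band width (pixels above 0.05):", int((blurred[16] > 0.05).sum()))
```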

Fu and his team were able to add, remove, and modify objects in how autonomous cars and drones perceived their environments. By disrupting a driverless car’s object detection algorithm, they could make the silhouettes and phantoms conjured by Poltergeist attacks register as people, stop signs, or whatever else the attacker wants the car to see, or fail to see.

Some potentially lethal examples include making a driverless car see a stop sign where there isn’t one, possibly causing a sudden stop on a busy road, or tricking a car into not seeing an object directly in front of it.
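To make the “not seeing an object” failure mode concrete, here is one last toy sketch. The detector below is a deliberately naive placeholder, not any real self-driving stack: it flags an obstacle when some image column is bright enough, and the same blur as above is enough to make the obstacle vanish from its view.

```python
import numpy as np

# Hypothetical stand-in for an object detector -- NOT a real AV pipeline.
# It reports an obstacle when any image column is bright enough on average.
def naive_detector(image: np.ndarray, threshold: float = 0.5) -> bool:
    return bool((image.mean(axis=0) > threshold).any())

def horizontal_blur(image: np.ndarray, length: int = 11) -> np.ndarray:
    # Same attacker-induced smear as in the previous sketch.
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=image
    )

scene = np.zeros((32, 32))
scene[:, 15:17] = 1.0  # a narrow obstacle directly ahead

print("clean frame, obstacle detected:  ", naive_detector(scene))                  # True
print("blurred frame, obstacle detected:", naive_detector(horizontal_blur(scene)))  # False
```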

If these vulnerabilities aren’t fixed, such threats will only become a bigger problem for consumers, companies, and the tech world as a whole as machine learning and autonomous technologies grow more common.