Cybersecurity to Protect Artificial Intelligence

A new study shows how vulnerable compressed AI models are to adversarial attack, and offers a solution. Connected (IoT) devices such as smartphones and security cameras will soon be running more artificial intelligence software to speed up image- and speech-processing tasks. A compression technique known as quantization is smoothing the way by making deep learning models smaller to reduce computation and energy costs. Quantization constrains an input from a continuous or otherwise large set of values (such as the real numbers) to a discrete set (such as the integers).
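
To make the idea concrete, here is a minimal sketch of uniform quantization in Python with NumPy; the function name, bit widths, and random weights are illustrative choices, not the researchers' code:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Map a float array onto 2**num_bits evenly spaced discrete levels."""
    levels = 2 ** num_bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min) / levels          # width of one quantization step
    codes = np.round((x - x_min) / scale)     # integer codes in [0, levels]
    return codes * scale + x_min              # back onto a discrete float grid

weights = np.random.randn(1000).astype(np.float32)
weights_q = quantize(weights, num_bits=4)
print(np.unique(weights_q).size)              # at most 16 distinct values remain
```

Storing the integer codes instead of 32-bit floats is what shrinks the model; the rounding step is also where small errors enter, which is what the attack exploits.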

But smaller models, it turns out, make it easier for malicious attackers to trick an AI system into misbehaving.

MIT and IBM researchers offer a solution: add a mathematical constraint during the quantization process to reduce the odds that an AI will fall prey to a slightly modified image and misclassify what it sees.

“Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models,” says Song Han, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of MIT’s Microsystems Technology Laboratories. “With proper quantization, we can limit the error.”
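
The article does not spell out the constraint, but one way to limit error amplification is to bound each layer's Lipschitz constant, so that a small perturbation (from rounding, or from an attacker) cannot grow as it passes from layer to layer. The following PyTorch sketch of such a penalty is a hypothetical illustration under that assumption; the function name, the penalty weight of 0.1, and the spectral-norm formulation are not the researchers' published method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: keep each linear layer's spectral norm (its largest
# singular value, i.e. its Lipschitz constant) at or below 1, so no layer
# can amplify a small input perturbation as it propagates forward.
def lipschitz_penalty(model: nn.Module) -> torch.Tensor:
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, nn.Linear):
            sigma = torch.linalg.matrix_norm(module.weight, ord=2)
            penalty = penalty + (sigma - 1.0).clamp(min=0.0) ** 2
    return penalty

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
# The penalty is simply added to the task loss during training.
loss = F.cross_entropy(model(x), y) + 0.1 * lipschitz_penalty(model)
loss.backward()
```

A chain of layers that are each non-expansive is itself non-expansive, so the error introduced at the quantization step stays bounded end to end rather than compounding.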

The team plans to improve the technique further by training on larger datasets and applying it to a wider range of models, according to mit.edu.

In making AI models smaller so that they run faster and use less energy, Han is using AI itself to push the limits of model compression technology.