What Does AI Safety Have to Do With Homeland Security?

As companies worldwide rush to join the AI craze, experts fear that crucial security details are being overlooked. A top security official warns that cyber security must urgently be built into artificial intelligence systems, or malicious attacks could have a “devastating” effect.

Lindy Cameron, CEO of the UK’s National Cyber Security Centre, told BBC News it is absolutely necessary to have secure systems in place now, in the early stages of AI development.

AI is slowly being integrated into more and more aspects of our daily lives, and in the not-so-distant future it may play a part in our homes and cities, in national security, and even in fighting wars. But along with the benefits come the risks, and experts are worried.

According to BBC News, the concern is that companies competing to secure their position in a growing market will be so focused on getting their systems out as fast as possible that they won’t be thinking about the risks of misuse.

“The scale and complexity of these models is such that if we don’t apply the right basic principles as they are being developed in the early stages, it will be much more difficult to retrofit security,” says Cameron.

AI systems may easily be used as tools by attackers, or may themselves be subverted by those seeking to do harm.

For many years, a small group of experts has specialized in a field called ‘adversarial machine learning’, which looks at how AI and machine learning systems can be tricked into giving bad results.

Take, for example, an AI trained to recognize images. According to the BBC, researchers placed stickers on a ‘stop’ road sign, which made the AI mistake it for a speed limit sign, a result with potentially serious consequences for self-driving cars.
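
To make the idea concrete, here is a minimal sketch in Python of an FGSM-style evasion attack on a toy linear classifier. The weights, the synthetic “image,” and the perturbation budget are all invented for illustration; they are not taken from the study the BBC describes, where the attack targeted a real vision model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100                              # number of "pixels" in our toy image

# Hypothetical trained weights of a linear 'stop sign' scorer (illustrative).
w = rng.normal(size=d)

def predict(x):
    """Probability the model assigns to the class 'stop sign'."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Construct a clean image the model confidently labels 'stop sign'
# (its logit is pinned to 5, i.e. probability ~0.99).
z = rng.normal(size=d)
x_clean = z + (5.0 - z @ w) / (w @ w) * w
print(f"clean P(stop sign)       = {predict(x_clean):.3f}")

# FGSM-style attack: nudge every pixel slightly against the gradient.
# For a linear scorer, the gradient of the logit with respect to x is w.
epsilon = 0.1                        # ~10% of typical pixel magnitude
x_adv = x_clean - epsilon * np.sign(w)
print(f"adversarial P(stop sign) = {predict(x_adv):.3f}")
print(f"largest per-pixel change = {np.abs(x_adv - x_clean).max():.2f}")
```

The point of the sketch is the disproportion: a change of at most 0.1 per pixel, the digital analogue of a few stickers, is enough to flip the model’s confident prediction.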

Another danger is ‘poisoning’ the data from which the AI is learning: deliberately creating bias by injecting bad data into the training process.
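
One simple form of poisoning is label flipping. The sketch below, using scikit-learn on synthetic data, shows how an attacker who can relabel part of the training set can bias a model into missing one class. The dataset, the model, and the 40% flip rate are illustrative assumptions, not drawn from any real incident.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, say, a malicious-vs-benign training set.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"recall on class 1, clean training data:    "
      f"{recall_score(y_test, clean.predict(X_test)):.3f}")

# Attacker relabels 40% of the class-1 training examples as class 0,
# deliberately biasing the model toward overlooking class 1.
rng = np.random.default_rng(0)
ones = np.flatnonzero(y_train == 1)
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"recall on class 1, poisoned training data: "
      f"{recall_score(y_test, poisoned.predict(X_test)):.3f}")
```

Comparing the two recall figures shows the bias: the poisoned model systematically fails to flag the very class the attacker wants to hide.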

These dangers do not come only from hackers seeking to cause disruption; they may also pose a risk to wider national security.

For example, AI used to analyze satellite imagery may be “tricked” into either missing real objects or seeing an array of fake ones.

These concerns, previously theoretical, are now emerging as real-world attacks. This seems to be happening first where AI is used to improve cyber security by detecting attacks: adversaries are seeking ways to subvert those systems so their malicious software can move undetected.

This phenomenon will inevitably reach every field of our lives, from grocery shopping to homeland security, and experts must keep pushing for regulations and for security to be built in before it is too late.