How to Protect Satellite Images from Deep Fake?



Deep fake, a term coined from “deep learning” and “fake”, has been gaining momentum in a growing number of fields. AI-manipulated videos of celebrities and world leaders purportedly saying or doing things they never actually said or did have become a growing concern for governments and security agencies. Now, images of the Earth are also subject to deep fakes. An adversary might fool computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point when no bridge exists; forces trained to advance along a route toward that bridge would then be surprised to find it is not there.

Many image-recognition systems can be fooled by adding small visual changes to the physical objects in the environment themselves, such as stickers added to stop signs that are barely noticeable to human drivers but can throw off machine vision systems.
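As a rough illustration of how such adversarial perturbations are computed, the sketch below implements the fast gradient sign method (FGSM), one of the simplest and best-known techniques of this kind. It is not the specific method behind the stop-sign stickers; the model, input shapes, and epsilon value are placeholder assumptions.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# most increases the classifier's loss, producing an image that looks
# unchanged to a human but can flip the model's prediction.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` with a small adversarial perturbation.

    `image` is a batched tensor with pixel values in [0, 1]; `label` holds
    the true class indices. The perturbation is bounded by `epsilon`.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```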

China has recently been leading the efforts in using generative adversarial networks (GANs) to trick computers into seeing objects in landscapes or in satellite images that aren’t there. Todd Myers, automation lead for the CIO-Technology Directorate at the US National Geospatial-Intelligence Agency, says “the Chinese have already designed; they’re already doing it right now, using GANs … to manipulate scenes and pixels to create things for nefarious reasons.”

How does GAN technology work? First described in 2014, GANs represent a big evolution in the way neural networks learn to see and recognize objects, and even to tell truth from fiction, according to defenseone.com. Say you ask a conventional neural network to figure out which objects are what in satellite photos. The network will break the image into multiple pieces, or pixel clusters, calculate how those pieces relate to one another, and then make a determination about what it is looking at, or whether the photo is real or doctored. It is all based on the experience of looking at lots of satellite photos. GANs reverse that process by pitting two networks against one another.
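As a concrete, heavily simplified picture of that “conventional network”, the sketch below defines a small convolutional classifier that scans local pixel clusters of a 64×64 satellite tile and outputs a single real-versus-doctored score. The architecture, tile size, and layer widths are illustrative assumptions, not details from the article.

```python
# A toy discriminator-style classifier: convolutions aggregate local
# pixel clusters into higher-level evidence, and a final linear layer
# produces one "is this tile real?" logit.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.LeakyReLU(0.2),
        )
        self.score = nn.Linear(128 * 8 * 8, 1)  # single real-vs-fake logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.score(h.flatten(1))
```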

The second, adversarial network learns how to construct, or generate, x, y, and z in a way that convinces the first neural network, the discriminator, that something is there when, perhaps, it is not.
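A minimal sketch of that adversarial game follows, reusing the Discriminator class from the previous sketch: a generator maps random noise to fake tiles, the discriminator is trained to separate real tiles from generated ones, and the generator is trained to fool it. All names and hyperparameters here are arbitrary illustrative choices.

```python
# One adversarial training step of a toy GAN for 3x64x64 satellite tiles.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random noise vector to a fake 3x64x64 satellite tile."""
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def gan_step(gen, disc, real_tiles, g_opt, d_opt, noise_dim: int = 100):
    """One training step; `disc` is the Discriminator sketched above."""
    bce = nn.functional.binary_cross_entropy_with_logits
    batch = real_tiles.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to separate real tiles from generated ones.
    fake_tiles = gen(torch.randn(batch, noise_dim))
    d_loss = (bce(disc(real_tiles), real_labels)
              + bce(disc(fake_tiles.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator score its fakes as real.
    g_loss = bce(disc(fake_tiles), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In the attack scenario described above, the real tiles would be genuine satellite imagery of the target landscape, so a well-trained generator learns to synthesize terrain, including features such as bridges, that the discriminator scores as authentic.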

Many researchers have found GANs useful for spotting objects and sorting valid images from fake ones. In 2017, Chinese scholars used GANs to identify roads, bridges, and other features in satellite photos.

The concern is that the same technique that can discern real bridges from fake ones can also help create fake bridges that AI can’t tell from the real thing.

The military and intelligence community can defeat GANs, Myers claims, but doing so is time-consuming and costly, requiring multiple, duplicate collections of satellite images and other pieces of corroborating evidence. “For every collect, you have to have a duplicate collect of what occurred from different sources,” he said. “The biggest thing is the funding.”
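The article does not say how such corroboration is implemented in practice; the toy sketch below only illustrates the underlying idea, assuming two co-registered grayscale collects of the same area scaled to [0, 1] and an arbitrary disagreement threshold. Tiles where the independent collects disagree would be flagged for a human analyst to review.

```python
# Toy "duplicate collect" check: compare two independent collections of
# the same area and flag the tiles where they disagree.
import numpy as np

def flag_discrepancies(collect_a: np.ndarray, collect_b: np.ndarray,
                       tile: int = 64, threshold: float = 0.15):
    """Return (row, col) coordinates of tiles where the two collects disagree.

    Both inputs are co-registered grayscale images in [0, 1] with identical
    shapes. A tile whose mean absolute difference exceeds `threshold` is
    flagged for review.
    """
    assert collect_a.shape == collect_b.shape
    flagged = []
    rows, cols = collect_a.shape
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            diff = np.abs(collect_a[r:r + tile, c:c + tile]
                          - collect_b[r:r + tile, c:c + tile])
            if diff.mean() > threshold:
                flagged.append((r, c))
    return flagged
```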

U.S. officials confirmed that data integrity is a rising concern. Lt. Gen. Jack Shanahan, who runs the Pentagon’s new Joint Artificial Intelligence Center, said: “We have a strong program protection plan to protect the data. If you get to the data, you can get to the model.”

But when it comes to public, open-source data and imagery used by everybody, the question of how to protect it from manipulation remains open.