
Smartphone makers like Apple and Samsung typically use biometric technology in their phones so that people can use fingerprints to easily unlock their devices instead of entering a passcode. Hoping to add some of that convenience, major banks are increasingly letting customers access their checking accounts using their fingerprints.

However, deep learning technologies can be used to weaken biometric security systems. Fake digital fingerprints created by artificial intelligence can fool fingerprint scanners on smartphones, raising the risk that hackers could exploit the vulnerability to steal from victims' online bank accounts.

A recent paper by New York University and Michigan State University researchers found that the software behind fingerprint scanning systems can be fooled. The discovery underscores how criminals could potentially use cutting-edge AI technologies to do an end run around conventional cybersecurity.

In a previous paper, other researchers discovered that they could fool some fingerprint security systems using either digitally modified or partial images of real fingerprints. These so-called MasterPrints could trick biometric security systems that verify only certain portions of a fingerprint image rather than the entire print.
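To see why partial matching weakens security, consider a toy model. The numbers and the "match any stored template" rule below are illustrative assumptions for this sketch, not any real vendor's matching scheme: a sensor enrolls several partial templates per finger and accepts a touch if it matches any one of them, so even a small per-template match rate compounds.

```python
import random

# Toy model (an illustrative assumption, not a real vendor's matcher):
# the sensor stores k partial templates per finger and accepts a touch
# if it matches ANY one of them.
def accept_probability(p_single, k):
    """Chance an attacker's print is accepted, assuming it matches each
    stored partial template independently with probability p_single."""
    return 1.0 - (1.0 - p_single) ** k

# A per-template match rate of just 2% compounds across 12 templates:
p = accept_probability(0.02, 12)
print(f"analytic false-accept rate: {p:.1%}")  # roughly a 1-in-5 chance

# Monte Carlo check of the same toy model:
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < 0.02 for _ in range(12))  # does any template match?
    for _ in range(trials)
)
print(f"simulated false-accept rate: {hits / trials:.1%}")
```

The point of the sketch is only the compounding effect: a matcher that needs to fool any one of many partial templates is far easier to attack than one that must match a whole print.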

In the new paper, the researchers used neural networks, software that learns patterns from large sets of training data, to create convincing-looking digital fingerprints that performed even better than the images used in the earlier study. Not only did the fake fingerprints look real, but they also contained hidden properties, undetectable by the human eye, that could confuse some fingerprint scanners.

The fake fingerprints, dubbed DeepMasterPrints, were created with a variant of neural network technology called generative adversarial networks (GANs), the same technique used to create the convincing-looking but fabricated photos and videos known as "deep fakes." GANs pit two neural networks against each other to produce realistic images that can fool image-recognition software. Using thousands of publicly available fingerprint images, the researchers trained one neural network to recognize real fingerprint images, and trained the other to create its own fake fingerprints.

They then fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were. Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
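The adversarial loop described above can be sketched in miniature. This is a deliberately tiny stand-in, not the paper's architecture: the "data" are just scalars clustered around 4.0 instead of fingerprint images, the generator is a one-parameter-pair linear map, and the discriminator is a logistic regression. The structure of the loop, alternating a discriminator step (tell real from fake) with a generator step (fool the discriminator), is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: scalars clustered around 4.0 (a toy stand-in for real
# fingerprint images, which the paper's much larger networks consume).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * z + g_b, z ~ N(0, 1)
d_a, d_c = 0.1, 0.0   # discriminator: D(x) = sigmoid(d_a * x + d_c)
lr = 0.05

for _ in range(500):
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    x = real_batch(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(d_a * x + d_c), sigmoid(d_a * fake + d_c)
    d_a -= lr * (np.mean((d_real - 1.0) * x) + np.mean(d_fake * fake))
    d_c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: update g_w, g_b so D(fake) moves toward 1,
    # i.e. the fakes get better at fooling the discriminator.
    d_fake = sigmoid(d_a * (g_w * z + g_b) + d_c)
    upstream = (d_fake - 1.0) * d_a          # chain rule through D
    g_w -= lr * np.mean(upstream * z)
    g_b -= lr * np.mean(upstream)

print(f"generator output at z=0: {g_b:.2f}")  # drifts toward the real mean
```

Over the training loop the generator's output shifts toward the "real" distribution, which is the same dynamic, at vastly larger scale, that lets a GAN learn to emit fingerprint-like images.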

The newly developed DeepMasterPrints show that AI technology can be used for nefarious purposes, which means that banks, smartphone makers, cybersecurity vendors, and other firms using biometric technology must continually improve their systems to keep pace with rapid advances in AI.