Deepfake (“deep learning” and “fake”) is an artificial intelligence-based human image synthesis technique used to combine and superimpose existing images and videos onto source images or videos. The doctored images and videos can be used to generate propaganda and deceptive media like fictionalized political speeches or pornography.

DARPA, the U.S. military’s research division, has spent $68 million on digital forensics technology to flag forgeries during the past two years.

For DARPA, spotting and countering deepfakes is a matter of national security, reports the Canadian Broadcasting Corporation (CBC). DARPA's Media Forensics program, created to automate existing forensics tools, has recently turned its attention to AI-made forgery.

One of the technologies against deepfakes was developed by Siwei Lyu, a professor at the State University of New York at Albany, and one of his students. They studied several deepfakes and realized that the faces in them rarely, if ever, blink, and when they do blink, the eye movement is unnatural. This is because deepfakes are trained on still images, which tend to show a person with his or her eyes open, as reported by technologyreview.com.
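To illustrate the idea, the sketch below uses the eye aspect ratio (EAR), a standard blink-detection heuristic computed from six landmark points around the eye: when the eye closes, the vertical distances shrink and the ratio drops. This is a minimal, self-contained illustration of blink counting in general, not Lyu's actual detector; the landmark coordinates, threshold, and frame-count parameters here are hypothetical examples.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks p1..p6 (p1/p4 are the horizontal corners):
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|).
    Roughly 0.25-0.35 for an open eye; near zero when the eye is closed."""
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least min_frames consecutive frames with EAR below threshold.
    (Both parameter values are illustrative, not tuned.)"""
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Toy landmark sets: a wide-open eye vs. a nearly closed one.
open_eye = [(0, 0), (2, 1), (4, 1), (6, 0), (4, -1), (2, -1)]
closed_eye = [(0, 0), (2, 0.2), (4, 0.2), (6, 0), (4, -0.2), (2, -0.2)]

print(round(eye_aspect_ratio(open_eye), 3))    # high EAR: eye open
print(round(eye_aspect_ratio(closed_eye), 3))  # low EAR: eye closed
print(count_blinks([0.33, 0.33, 0.05, 0.05, 0.33, 0.33]))  # one blink
```

A forensic check along these lines would run a facial-landmark detector on each video frame, feed the resulting EAR series into something like `count_blinks`, and flag clips whose blink rate over their duration falls far below the human norm of several blinks per minute.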

Hany Farid, a computer scientist and digital forensics expert at Dartmouth College, sees this as a losing battle: "The adversary will always win, you will always be able to create a compelling fake image, or video, but the ability to do that if we are successful on the forensics side is going to take more time, more effort, more skill and more risk."

Some videos and images shown by CBC demonstrate DARPA's deepfake-spotting artificial intelligence in action, flaws included. People who want to create misleading deepfakes may always be a step ahead of those trying to stop them, but DARPA's digital forensics program still has another two years of research ahead of it, according to futurism.com.