Deep Fake Videos Challenged

Deep-fake videos created with AI depict people saying things they never actually said. They can be used for various nefarious purposes, perhaps most troublingly for political disinformation.

Some recent research promises to support fake-detection efforts. Researchers at Binghamton University (SUNY) and Intel created software that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye.

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. 

This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
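
To make the mechanism concrete, here is a minimal sketch of how a remote PPG trace might be pulled from ordinary video: average the green channel over a patch of facial skin frame by frame, then band-pass the result to the human heart-rate range. The function name, the region-of-interest convention, and the filter settings are illustrative assumptions, not the researchers' actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rppg_signal(frames, roi, fps):
    """Extract a crude remote-PPG trace from a face video.

    frames: sequence of HxWx3 RGB arrays, one per video frame
    roi:    (top, bottom, left, right) bounds of a skin patch
    fps:    frame rate of the video in Hz
    """
    top, bottom, left, right = roi
    # Mean green-channel intensity of the patch, one value per frame;
    # green is the channel most affected by blood-volume changes.
    trace = np.array([f[top:bottom, left:right, 1].mean() for f in frames])
    trace = trace - trace.mean()

    # Keep only the plausible heart-rate band, roughly 0.7-4 Hz (42-240 bpm).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    return filtfilt(b, a, trace)
```

On a real video, the dominant frequency of the returned trace corresponds to the subject's pulse, which is exactly the signal PPG-based monitoring reads out.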

Deep fakes do contain such circulation-induced shifts in color, but they fail to recreate them with high fidelity. The researchers found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.
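
One way to picture that coherence test (a toy illustration only, not the researchers' actual method): extract PPG traces from several facial regions, say the forehead and both cheeks, and check how well they agree. In a real face every region is driven by the same pulse, so the traces should correlate strongly; in a synthesized face they need not. This sketch reuses rppg_signal from above, and the 0.5 threshold is purely made up.

```python
from itertools import combinations

import numpy as np

def ppg_coherence(frames, regions, fps):
    """Mean pairwise correlation between PPG traces taken from
    several facial regions. Real faces pulse in sync across the
    face; weak agreement hints that the face was synthesized."""
    traces = [rppg_signal(frames, r, fps) for r in regions]
    pairs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(traces, 2)]
    return float(np.mean(pairs))

# Hypothetical use: regions = [forehead, left_cheek, right_cheek]
# score = ppg_coherence(frames, regions, fps=30)
# likely_fake = score < 0.5   # illustrative threshold, not FakeCatcher's
```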

According to spectrum.ieee.org, the inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. These same three researchers followed this study with another demonstrating that the approach can be used not only to reveal that a video is fake, but also to identify which software was used to create it.
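
FakeCatcher itself is a deep-learning model; as a stand-in for the general recipe it embodies, the toy sketch below turns the PPG signals above into a small feature vector and hands it to an off-the-shelf classifier. The features, the logistic-regression choice, and the variable names are all assumptions for illustration, not the published system.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

def ppg_features(frames, regions, fps):
    """Per-clip feature vector: each region's PPG signal strength
    plus the cross-region agreement described above."""
    traces = [rppg_signal(frames, r, fps) for r in regions]
    pairs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(traces, 2)]
    return np.array([t.std() for t in traces] + [float(np.mean(pairs))])

# Given labeled clips (rows of X built with ppg_features, y with 1 = fake):
# clf = LogisticRegression().fit(X_train, y_train)
# print(clf.predict(X_test))
```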

Will a newer generation of deep-fake generators someday be able to outwit this physiology-based approach to detection? No doubt that will eventually happen. But for the moment, it is encouraging to know that a promising new means of thwarting fraudulent videos exists.