Detecting Deepfakes: We Are Not Getting Better, But the Technology Is

Image provided by Pixabay


New research from UCL shows that humans can detect AI-generated speech only 73% of the time, in both English and Mandarin.

But first, what are Deepfakes?

According to Techxplore, deepfakes are synthetic media made to resemble a real person’s voice or appearance. They are created with generative artificial intelligence, a type of machine learning that trains an algorithm on the patterns and characteristics of a dataset, such as video or audio of a real person, so that it can then produce convincing new sound or imagery.

When deepfake technology was in its infancy, it required huge amounts of data to produce a satisfactory replica of a person’s voice or appearance; the latest pre-trained algorithms can now recreate a person’s voice from just a three-second clip.

The researchers at UCL generated 50 deepfake speech samples in both English and Mandarin using a text-to-speech (TTS) algorithm trained on two publicly available datasets, one for each language.

These generated samples, along with real ones, were played for 529 participants, who were asked to identify which were real and which were fake. The participants were able to identify the fake speech only 73% of the time, and even after training to recognize aspects of deepfake speech, their results improved only slightly.
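To put the 73% figure in context, it helps to compare it against pure guessing. The sketch below uses an exact binomial calculation with illustrative numbers (the study does not report trials per listener, so 100 clips is an assumption for demonstration): 73% is far above chance, yet still means roughly one in four fakes slips through.

```python
from math import comb

def binom_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided probability of getting at least `successes` correct
    answers out of `trials` under pure guessing (p = 0.5)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Illustrative only: suppose a listener labels 100 clips and gets
# 73 right (the 73% rate reported in the study).
p = binom_p_value(73, 100)
print(f"Probability of doing this well by guessing: {p:.2e}")
print(f"Fakes missed at 73% accuracy: {100 - 73} out of 100")
```

The tiny p-value shows listeners clearly do better than coin-flipping, which is why the practical concern is the 27% of fakes that still pass, not whether humans beat chance.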

The researchers state that their next step is to develop better automated speech detectors as part of ongoing efforts to create detection capabilities that counter the threat of artificially generated audio and imagery.
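The study does not describe how such detectors work, but the general idea is to extract acoustic features that tend to differ between natural and synthesized audio and classify on them. The toy sketch below illustrates the pattern with one classic feature, spectral flatness, on synthetic stand-in signals; the signals, the feature choice, and the threshold are all illustrative assumptions, not the UCL team's method.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of the geometric to the arithmetic mean of the power
    spectrum. One of many acoustic features a detector might use."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)

# Toy stand-ins, NOT real recordings: "natural" audio as a tone with
# a realistic noise floor, "synthetic" audio as an unnaturally clean
# tone with almost no noise.
natural = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.standard_normal(t.size)
synthetic = np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(t.size)

# A threshold on the feature acts as a minimal one-feature classifier.
for name, clip in [("natural", natural), ("synthetic", synthetic)]:
    label = "fake" if spectral_flatness(clip) < 0.01 else "real"
    print(f"{name}: flatness={spectral_flatness(clip):.4f} -> {label}")
```

Real detectors combine many such features (or learned embeddings) in a trained model rather than a hand-set threshold, but the pipeline, features in and a real/fake decision out, is the same.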

There are still benefits to generative AI audio technology, of course, such as granting accessibility to people with limited speech or those who have lost their voice to illness. Nevertheless, criminals are already using this technology to harm both individuals and companies, and the problem is likely only going to get worse.

Senior author of the study Professor Lewis Griffin (UCL Computer Science) said, “With generative artificial intelligence technology getting more sophisticated and many of these tools openly available, we’re on the verge of seeing numerous benefits as well as risks. It would be prudent for governments and organizations to develop strategies to deal with abuse of these tools, certainly, but we should also recognize the positive possibilities that are on the horizon.”