AntiFake Is Your Armor Against Audio Deepfakes

Advances in artificial intelligence have spurred rapid progress in realistic speech synthesis. While the technology could improve lives through personalized voice assistants and accessibility-enhancing communication tools, it has also fueled a wave of audio deepfakes, in which synthesized speech is misused to deceive both humans and machines.

In response to this evolving threat, Ning Zhang, an assistant professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, has developed AntiFake, a novel defense mechanism designed to prevent unauthorized speech synthesis before it happens.

According to Techxplore, traditional deepfake detection methods evaluate audio to uncover synthetic speech only after an attack has occurred. AntiFake takes a more proactive stance: it uses adversarial techniques to prevent the synthesis of deceptive speech in the first place, making it harder for AI tools to extract the voice characteristics they need from recordings. The code for AntiFake is reportedly freely available to users.

Zhang explains: “AntiFake makes sure that when we put voice data out there, it’s hard for criminals to use that information to synthesize our voices and impersonate us. The tool uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it’s completely different to AI.”
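To make the idea concrete, here is a minimal sketch of that kind of adversarial perturbation, not AntiFake's actual code. It assumes a hypothetical pretrained speaker encoder `embed(waveform) -> embedding`, standing in for the feature extractors that voice-cloning systems rely on, and the values of `epsilon`, `steps`, and `lr` are illustrative. The sketch optimizes a small additive perturbation that pushes the voice embedding away from the original while keeping the change to the audio nearly inaudible.

```python
# Illustrative sketch of adversarial voice protection (NOT AntiFake's code).
# Assumes `embed` is a pretrained speaker encoder: waveform -> embedding.
import torch

def protect(waveform: torch.Tensor, embed, steps: int = 200,
            epsilon: float = 0.005, lr: float = 1e-3) -> torch.Tensor:
    """Return a perturbed copy of `waveform` (1-D tensor, values in [-1, 1])."""
    original = embed(waveform).detach()           # embedding to move away from
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = waveform + delta
        # Push the perturbed embedding away from the original voice:
        # maximizing embedding distance = minimizing cosine similarity.
        loss = torch.nn.functional.cosine_similarity(
            embed(perturbed), original, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation tiny so it stays imperceptible to listeners.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (waveform + delta).detach().clamp_(-1.0, 1.0)
```

A single stand-in encoder is used here for brevity; a real defense would need its perturbations to transfer across many synthesis models, which is why, as the article notes below, AntiFake was built to be generalizable and tested against multiple synthesizers.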

Another challenge in the ever-changing landscape of artificial intelligence is making sure AntiFake holds up against adaptive attackers and unknown synthesis models. Zhang and his team therefore built the tool to be generalizable and tested it against five state-of-the-art speech synthesizers, where AntiFake achieved a protection rate of over 95%, even against unseen commercial synthesizers. Its usability was also tested with 24 human participants to confirm the tool is accessible to diverse populations.

AntiFake can currently protect only short clips of speech, but according to Zhang, nothing prevents the tool from being extended to longer recordings, or even music, in the ongoing fight against disinformation.