“Responsible AI” – The Solution to the Deepfake Threat?



Several months ago, thousands of Democratic voters in New Hampshire received a telephone call that sounded like US President Joe Biden urging them not to vote in the state primary. The message was an AI-generated fake, and the incident remains one of the most high-profile examples of the threat posed by deepfakes, especially during the current UK election and the upcoming US election.

The rapid development of AI, and of generative AI (GenAI) in particular, is blurring the line between fact and fiction, with potentially devastating consequences: it can sow distrust in the political process and sway election outcomes.

According to Techxplore, although the UK’s Online Safety Act mandates the removal of identified illegal disinformation, taking down deepfakes once thousands of voters have already seen them is too little, too late. Any technology or law aimed at tackling deepfakes should therefore prevent the harm altogether.

This is why the US launched an AI task force to look into ways of regulating AI and deepfakes, while India plans to introduce penalties both for those who create deepfakes and other forms of disinformation and for the platforms that spread them.

Moreover, tech firms like Google and Meta have introduced policies requiring politicians to disclose the use of AI in election adverts, and seven major tech companies (including OpenAI and Amazon) have agreed to embed “watermarks” in their AI-generated content to help identify deepfakes.

However, watermarking is a flawed solution: the watermarks are unregulated (in both form and use) and easy to remove. Moreover, social media platforms are not the only channel of online communication, so anyone intent on spreading misinformation can simply email deepfakes directly to voters, for example.
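To see how fragile the simplest watermarks are, consider provenance data stored in a file’s metadata, the approach behind content-credential schemes. The Python sketch below, using the Pillow library, copies only the pixels of an image into a fresh file; the filenames are placeholders, and the point is that any metadata-based credential is silently discarded. More robust statistical watermarks exist, but they too can be degraded by cropping, compression, or re-generation.

```python
# Illustrative sketch: stripping a metadata-based watermark.
# Assumes the provenance credential lives in the file's metadata;
# filenames are hypothetical placeholders.
from PIL import Image

original = Image.open("ai_generated.png")  # file carrying provenance metadata

# Copy only the pixel data into a brand-new image object.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))

# The saved copy is visually identical but carries no metadata credential.
clean.save("stripped.png")
```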

This raises the question: given these limitations, how can we protect democracies from the threat of AI deepfakes? Techxplore argues that the answer is to use technology to combat a problem that technology has created, for example by designing and developing a new “responsible AI” mechanism that detects deepfake audio and video at the point of inception. Such a mechanism would work like a spam filter, removing deepfakes from social media feeds and inboxes before they spread.
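As a rough sketch of what such a filter could look like, the Python below gates incoming media on the score of a deepfake classifier. The detector object and its score() method are hypothetical stand-ins for a trained model, not an existing API, and the threshold is arbitrary.

```python
# Sketch of a spam-filter-style gate for incoming media, assuming a
# trained deepfake classifier is available. The detector and its
# score() method are hypothetical stand-ins, not a real library API.
from dataclasses import dataclass

@dataclass
class MediaItem:
    media_id: str
    payload: bytes  # raw audio/video/image bytes

def filter_feed(items, detector, threshold=0.9):
    """Split items into (delivered, quarantined) by deepfake score."""
    delivered, quarantined = [], []
    for item in items:
        # Assumed interface: score() returns the probability that the
        # media is AI-generated, in [0, 1].
        score = detector.score(item.payload)
        if score >= threshold:
            quarantined.append(item)  # never reaches feeds or inboxes
        else:
            delivered.append(item)
    return delivered, quarantined
```

Like a spam filter, such a gate would run before delivery, so a convincing fake never reaches its audience in the first place.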

Moving forward, however, we will need responsible AI solutions that go beyond identifying and eliminating deepfakes: methods for tracing the origins of content and for ensuring transparency and trust in the news users read.
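One hedged sketch of what origin-tracing could build on is cryptographic signing at the point of creation, in the spirit of (but not identical to) content-credential schemes such as C2PA: the creator signs a hash of the media, and anyone holding the creator’s public key can verify that a file really came from that source and was not altered. The example uses the Python cryptography library; the media bytes are placeholders.

```python
# Sketch of provenance-by-signing, in the spirit of content-credential
# schemes (not a specific standard): sign a hash of the media at creation,
# verify it downstream. Media bytes here are placeholders.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()

media = b"...raw video bytes..."
digest = hashlib.sha256(media).digest()
signature = creator_key.sign(digest)  # distributed alongside the media

# A verifier holding the creator's public key checks the claim;
# verify() raises InvalidSignature if the media or signature was altered.
creator_key.public_key().verify(signature, digest)
```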