Situation Deepfakes: A New Risk for the 2024 Presidential Election

Imagine a scenario in which, just a month before an election, a video clip surfaces showing two world leaders with a history of animosity conversing and shaking hands. It could rattle the whole political landscape and even sway voters. And even if experts and journalists point out that no such meeting ever took place, some citizens may feel their worst fears have been confirmed.

In this scenario, which is not far from our reality, an election was swayed and manipulated by a situation deepfake: a convincing depiction of an event that never actually happened.

According to TechXplore, situation deepfakes are the next stage of technologies that are shaking audiences’ perceptions of reality. Researchers at the DeFake Project study how deepfakes are made and what measures voters can take to defend themselves against them.

What is a Situation Deepfake?

A deepfake is created by using artificial intelligence, especially deep learning, to manipulate or generate a face, a voice, or conversational language. All of these can be combined to stage an event, forming a “situation deepfake.”

The DeFake Project researchers found that deepfakes are mostly created by combining one piece of media with another, using a video to animate an image or alter another video, generating a piece of media from scratch with generative AI, or some combination of these techniques, as the sketch below illustrates.
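To show how low the barrier to that last technique has become, here is a minimal Python sketch of generating an image from a plain-text prompt with the open-source diffusers library. The model identifier is a placeholder assumption, not a specific system named in this article; any publicly available text-to-image checkpoint behaves similarly.

```python
# A minimal sketch of generating media from scratch with generative AI.
# The model id below is a placeholder -- substitute any text-to-image
# checkpoint compatible with the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some-org/text-to-image-model",  # placeholder model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough

# One sentence of text is all it takes to produce a plausible scene.
image = pipe("two world leaders shaking hands at a press conference").images[0]
image.save("fabricated_meeting.png")
```

The point of the sketch is not the specific library but the effort involved: fabricating a believable still image now takes a few lines of code and no special expertise.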

As for format, a situation deepfake does not have to be a video: it could be a photograph made to look like a smartphone snapshot, or an image carrying the forged logo of a news agency.

How might someone use this to influence an election? They could portray a candidate acting heroically, or doing something offensive or criminal. The key factor is knowing the target audience: instead of trying to sway the entire voting base, attackers might target conspiracy theorists in key voting districts.

So What Can We Do Against This Risk?

There are both technological and psychological ways to detect and defend against situation deepfakes. Technologically, deepfakes tend to contain telltale traces of their forgery, such as overly smooth skin, odd lighting, or strange background architecture. For signs undetectable by the human eye, the DeFake Project’s detector uses AI to catch traces of manipulation, and the team is aiming to release it in time for the 2024 election.
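The DeFake detector itself is not public, but the general approach of such tools can be sketched: run suspect media through a classifier trained to distinguish real from generated imagery. The minimal Python example below uses the Hugging Face transformers image-classification pipeline; the model name, the "fake" label, and the confidence threshold are placeholder assumptions for illustration, not the DeFake Project’s actual system.

```python
# A sketch of automated deepfake screening with an image classifier.
# "some-org/deepfake-detector" and the "fake" label are assumptions --
# substitute a real detection model and check its actual label names.
from transformers import pipeline

detector = pipeline("image-classification", model="some-org/deepfake-detector")

def looks_fake(image_path: str, threshold: float = 0.8) -> bool:
    """Flag an image when the model's top label is 'fake' with high confidence."""
    results = detector(image_path)  # e.g. [{"label": "fake", "score": 0.97}, ...]
    top = results[0]                # predictions are sorted by score
    return top["label"].lower() == "fake" and top["score"] >= threshold

print(looks_fake("suspicious_clip_frame.jpg"))
```

Automated scores like this are best treated as one signal among several, since detectors can be evaded by newer generation techniques.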

But even without a powerful deepfake detector, content consumers can always rely on psychological tools such as background knowledge, curiosity, and healthy skepticism to deal with this emerging threat.