Deepfake technology has become one of the biggest threats faced by society today, claims a recent report by University College London (UCL). In fact, as much as 90% of online content may be synthetically generated by 2026, according to other estimates.
European law enforcement authorities are concerned about the consequences. Europol, the criminal information hub for law enforcement organizations, has published a detailed overview of the criminal use of deepfake technology, alongside the challenges faced by law enforcement in this field.
Deepfake technology uses artificial intelligence to generate or manipulate audio and audio-visual content. It can produce content that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place. Disinformation campaigns can combine deepfakes with falsified photos, counterfeit websites, and genuine information taken out of context to deceive the audience.
Europol Innovation Lab’s report, entitled Facing Reality? Law enforcement and the challenge of deepfakes, argues that advances in artificial intelligence and the public availability of large image and video databases are increasing both the volume and the quality of deepfake content, facilitating the proliferation of crimes that harness the technology. Contemporary examples cited in the report of the potential use of deepfakes in serious crime include CEO fraud (impersonating the head of a company to trick an employee into transferring large sums of money), evidence tampering, and the production of non-consensual pornography. Law enforcement agencies therefore need to be aware of deepfakes and their impact on future police work.
Drawing on input from law enforcement practitioners, the report identifies a series of upcoming challenges, including risks associated with digital transformation, the adoption and deployment of new technologies, the abuse of emerging technology by criminals, accommodating new ways of working, and maintaining trust in the face of an increase in disinformation.
Much of the deepfake content created today can be identified through manual methods, with human analysts spotting telltale signs in deepfake images and videos. However, this is a labor-intensive task that cannot keep pace at scale, according to hstoday.us.
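To make this kind of review tractable at scale, manual inspection would typically be replaced or triaged by automated scoring. The sketch below is purely illustrative and not from the report: it assumes a hypothetical per-frame detector (here a pluggable `score_frame` function standing in for a trained model) and shows how videos whose average "synthetic" score exceeds a threshold could be queued for human review, so analysts only examine flagged items.

```python
# Illustrative triage pipeline: score every frame of every video with a
# pluggable detector and flag high-scoring videos for human review.
# `score_frame` is a hypothetical stand-in for a real trained classifier.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ScreeningResult:
    video_id: str
    score: float   # mean per-frame "synthetic" probability, 0.0-1.0
    flagged: bool  # True if the mean score exceeds the review threshold


def screen_videos(
    frames_by_video: Dict[str, List[bytes]],
    score_frame: Callable[[bytes], float],
    threshold: float = 0.7,
) -> List[ScreeningResult]:
    """Average the detector's per-frame scores for each video and flag
    those above `threshold` so human analysts only review a shortlist."""
    results = []
    for video_id, frames in frames_by_video.items():
        scores = [score_frame(frame) for frame in frames]
        mean = sum(scores) / len(scores) if scores else 0.0
        results.append(ScreeningResult(video_id, mean, mean > threshold))
    return results
```

In practice the detector itself is the hard part; the point of the sketch is only that automated scoring narrows the workload to a reviewable subset, rather than every item needing manual analysis.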
How are other actors, including the online platforms (e.g., Facebook and TikTok) where most deepfakes are likely to be shared, addressing this threat? Many platform policies use ‘intent’ as their barometer for deciding whether to remove a deepfake. However, defining ‘intent’ can prove challenging and highly subjective, since it rests on the assessment of individual actors. Nonetheless, online platforms could play a pivotal role in helping victims of deepfake technology identify the perpetrator, though how this works in practice remains to be seen.
As for legislation, European law is struggling to keep pace with the evolution of technology and the changing definitions of crime. The most relevant regulatory framework for law enforcement in the area of deepfakes will be the AI regulatory framework proposed by the European Commission, which is still at the proposal stage and not yet applicable, says the report.