A Look into the Cyberthreats Leading to the Elections


2024 is an election year for many countries worldwide, and cybersecurity experts are worried about the destructive influence that cyber threats and AI could have on the process.

Take Britain, for example: as the country faces social and political turmoil, security experts expect most cybersecurity risks to emerge in the months leading up to election day. Something similar happened in 2016, when both the US presidential election and the UK's Brexit referendum were disrupted by disinformation spread on social media platforms.

According to CNBC, cybersecurity experts expect malicious actors to interfere in upcoming elections around the world in various ways, not only through the spread of misinformation, which is itself expected to be worse this year due to the widespread use of AI and the relative ease of creating deepfakes.

Todd McKinnon, CEO of identity security firm Okta, explains: “Nation-state actors and cybercriminals are likely to utilize AI-powered identity-based attacks like phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions.” He adds that we are sure to see “an influx of AI and bot-driven content generated by threat actors to push out misinformation at an even greater scale than we’ve seen in previous election cycles.”

The cybersecurity community is calling for a heightened collective awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risks of such malicious activity. Countries like China, Russia, and Iran are very likely to conduct misinformation and disinformation operations against various global elections, with the help of tools like generative AI.

When it comes to mitigating these risks, the upcoming elections are expected to serve as a “test” of whether social media giants like Google, Facebook owner Meta, and TikTok can keep their platforms free of misinformation. Meta, for example, has already taken steps to add a “watermark” to AI-generated content to alert users that it is not authentic.

The issue with this approach is that deepfake technology is advancing at a dazzling rate, arguably faster than countermeasures can be developed. However, cyber experts say that although it is becoming harder to tell what is real, there are usually some signs that content has been digitally manipulated.

McKinnon concluded that we are certain to see more deepfakes throughout the election process, but notes that there is an easy step everyone can take: verifying the authenticity of content before sharing it.