
Many experts and members of the public have previously expressed concern that AI could become a catalyst for fake news and the spread of misinformation. Last week's events demonstrated that this fear is not only justified but is already producing serious real-world consequences.

A single Twitter post reading “Large Explosion near The Pentagon Complex in Washington D.C. – Initial Report” went viral within just a few hours. Attached to the post was an AI-generated photograph depicting a rising column of smoke, allegedly caused by a nearby explosion.

What is most concerning, beyond the speed with which the generated image was spread and shared across the internet, is that the photograph was posted by a “verified” Bloomberg News account, prompting users to genuinely believe the fake event had occurred.

Twitter’s verification mark, a blue tick next to a user’s handle, is supposed to confer legitimacy on the official accounts of individuals and organizations, but it is now handed out freely, with little scrutiny, to anyone who can spare a mere $8 per month to pay the company directly.

The ease with which this AI-generated photo was shared as genuine across Twitter highlights not only the problematic user verification system recently introduced by the social media company, but also a broader problem with fake AI images in general.

As AI models continue to advance, more and more experts have voiced concern over the malicious potential of AI-generated images and fake news, especially with regard to the upcoming 2024 US presidential elections.