ChatGPT is Spreading Russian Misinformation and Propaganda



A study by the news-monitoring service NewsGuard reveals that popular AI chatbots like ChatGPT are repeating false narratives and inadvertently spreading Russian misinformation.

The investigation tested 10 chatbots (including ChatGPT-4, Meta AI, Microsoft’s Copilot, and others) with 57 prompts designed to probe their responses to known Russian disinformation narratives. The prompts centered on stories associated with John Mark Dougan, an American fugitive who allegedly spreads misinformation from Moscow.

The findings show that the chatbots, which users worldwide increasingly rely on as a source of information, repeated Russian disinformation narratives 32% of the time. Responses were categorized as explicit disinformation, repetition of false claims with a disclaimer, or either a refusal to engage or a debunk.

The study concluded that falsehoods are generated, repeated, and then validated by AI platforms, a concerning cycle in which AI ends up amplifying the very misinformation it helped create.

According to Interesting Engineering, NewsGuard identified 19 significant false narratives linked to the Russian disinformation network (including claims of corruption by Ukrainian President Volodymyr Zelenskyy), which it then used as prompts to test the chatbots, revealing varying degrees of susceptibility to misinformation.

The prospect of AI spreading misinformation has prompted governments around the world to consider regulatory measures, and the findings have fueled growing calls for transparency and accountability in AI development and deployment.

In a related development, OpenAI recently acted against online disinformation campaigns that exploit its AI technology: the company released a report stating that it had identified and halted five covert influence operations orchestrated by state actors and private companies from Russia, China, Iran, and Israel. Notable campaigns detailed in the report include Russia’s Doppelganger and China’s Spamouflage, both of which have used AI tools to advance their strategic objectives for several years.

The report marks a significant milestone in today’s political climate, publicly outlining how major AI technologies are exploited for deceptive purposes in geopolitical contexts.