A recent study by NewsGuard has revealed that several prominent Western AI chatbots, including ChatGPT-4o, Gemini, and Claude, have been spreading Russian disinformation, particularly related to the war in Ukraine. The research highlights how false narratives are absorbed by AI systems through a Russian disinformation network known as Pravda, which has been actively pushing misleading and fabricated stories.
The Pravda network, a central hub for Russian propaganda, has spread 207 false claims, according to NewsGuard, ranging from allegations of U.S. bioweapons labs in Ukraine to fabricated stories about Ukrainian President Volodymyr Zelensky misusing U.S. military aid. These narratives are first published on pro-Russian websites and then picked up by search engines and web crawlers. As a result, chatbots like ChatGPT often end up repeating these false claims in their responses, inadvertently amplifying the disinformation.
In the study, NewsGuard analyzed ten popular chatbots: OpenAI's ChatGPT-4o, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google's Gemini, and Perplexity. It found that 33% of the chatbot responses incorporated disinformation originating from the Pravda network. Notably, 56 of the 450 chatbot-generated responses contained direct links or references to Pravda articles spreading false information, and seven of the chatbots cited Pravda directly as a source. This shows the extent to which Russian disinformation is influencing the outputs of major AI platforms.
Although the Pravda network generates little organic web traffic, it publishes an alarming volume of content: approximately 3.6 million articles per year, according to NewsGuard. These articles find their way into AI training data, inadvertently contaminating the responses of the chatbots. As a result, AI systems that draw on vast datasets to generate responses risk spreading these falsehoods to users worldwide.
With the increasing use of AI in both personal and professional settings, this issue underscores the risk of AI chatbots unintentionally becoming conduits for misinformation. As the technology becomes more integrated into daily life, ensuring its integrity and reliability is crucial to preventing the spread of harmful narratives.