AI and Disinformation: Problems and Solutions



Whether we like it or not, our opinions and narratives are constantly being shaped by social media, and in the age of AI this influence can become dangerous.

The new study “Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence” reveals the potential of AI-powered social bots to spread disinformation, and examines the need for organizations to detect and mitigate these harmful effects.

According to Techxplore, the study was led by a team of researchers from Canada and the UK and uses cutting-edge text mining and machine learning techniques to dissect the behavior of social bots on X (formerly Twitter). After analyzing a dataset of 30,000 English-language posts, the researchers uncovered a complex web of interactions between human and non-human accounts, shedding light on how disinformation spreads online.
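The paper itself does not publish its code or dataset, but the kind of text-mining pipeline it describes can be sketched in a few lines. The toy posts, labels, and model choice below are illustrative assumptions only, not the authors' method: a TF-IDF representation of post text feeding a simple classifier that tries to separate bot-like from human-like content.

```python
# Hypothetical sketch of a bot-detection text classifier.
# The posts, labels, and model below are stand-ins, not the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-in for a labeled corpus of posts (1 = bot-authored, 0 = human-authored).
posts = [
    "BREAKING!!! Click here to see what they are hiding http://spam.example",
    "Just had a great coffee with an old friend this morning.",
    "Retweet if you agree! The truth is being censored #wakeup",
    "Does anyone know a good plumber near downtown?",
    "MUST SHARE: the secret they don't want you to know about!!!",
    "Looking forward to the weekend hike, weather looks great.",
]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.33, stratify=labels, random_state=42
)

# Text mining step (TF-IDF over unigrams and bigrams) feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In practice, a study at this scale would train on tens of thousands of labeled posts and richer features; the point of the sketch is only to show the shape of such a pipeline.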

Dr. Mina Tajvidi, who co-authored the study, explains: “Social bots are not just benign entities; they have the power to influence public opinion and even manipulate markets. Our research underscores the importance of understanding their intentions and detecting their presence early on to prevent the spread of false information.”

The study draws on actor-network theory (ANT), a theoretical framework for examining the dynamics between humans, bots, and the digital landscape. By integrating ANT with deep learning models, the researchers uncovered a symbiotic relationship between actors and the language they use, offering new insights into the spread of disinformation.
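As an illustration rather than the authors' actual pipeline, the idea of modelling "actors" and their "language" together can be sketched by combining account-level signals (posting rate, follower ratios) with text features in a single model. All column names, feature choices, and the small neural network below are assumptions made for the sketch.

```python
# Hypothetical sketch: jointly modelling who posts (actor features) and what is posted
# (language features). Not the study's published method.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy frame: each row pairs a post with account-level signals.
data = pd.DataFrame({
    "text": [
        "Retweet NOW, they are lying to you!!!",
        "Lovely sunset on the walk home today.",
        "URGENT: share before it gets deleted #truth",
        "Trying a new pasta recipe tonight, wish me luck.",
    ],
    "posts_per_day": [480.0, 3.2, 390.0, 5.1],
    "followers_to_following": [0.02, 1.4, 0.05, 0.9],
    "is_bot": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("language", TfidfVectorizer(), "text"),                  # what is said
    ("actor", StandardScaler(), ["posts_per_day",
                                 "followers_to_following"]),  # who says it
])

clf = Pipeline([
    ("features", features),
    ("net", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
])
clf.fit(data.drop(columns="is_bot"), data["is_bot"])
print(clf.predict(data.drop(columns="is_bot")))
```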

“Our findings highlight the need for enhanced detection techniques and greater awareness of the role social bots play in shaping online discourse. While our research focuses on X (formerly Twitter), the implications extend to all social media platforms, where the influence of AI is increasingly prevalent,” said Tajvidi.

The study does acknowledge several limitations, including the lack of metadata and its focus on English-language tweets only. The researchers emphasized the need for future work to explore additional languages and communication modalities in order to build a more comprehensive understanding of social bot behavior.

The researchers conclude that as the digital landscape continues to evolve, vigilance and proactive measures are needed to combat the spread of disinformation. This can potentially be achieved by harnessing the power of AI for good and equipping organizations with the tools to detect and mitigate harmful social bots.