Can Humans Inherit AI Biases?


AI systems can achieve impressive results, holding human-like conversations and producing remarkably well-phrased outputs, which gives the technology an image of high reliability. Furthermore, a growing number of professional fields are incorporating AI tools to support specialists and minimize errors in decision-making.

However, the biases present in AI outputs show that this technology is not without risks. The data used to train AI models reflects past human decisions, so if that data conceals patterns of systematic error, the algorithm will learn and reproduce those errors. In this way, AI systems inherit, and can even amplify, human biases.

Surprisingly, new research by psychologists Lucía Vicente and Helena Matute of Deusto University provides evidence that the reverse can also occur: people can inherit artificial intelligence biases (systematic errors in AI outputs) in their own decisions. Not only can AI inherit biases from human data; people can in turn inherit those biases from AI, with the risk of becoming trapped in a dangerous feedback loop.

According to Techxplore, the researchers conducted a series of three experiments in which volunteers performed a medical diagnosis task. One group of participants was assisted by a biased AI system (one that exhibited a systematic error), while a control group worked unassisted. The participants assisted by the biased AI made the same type of errors as the AI, while the control group did not, showing that the AI's recommendations influenced participants' decisions.

The most significant finding was that after interacting with the AI system, those volunteers continued to mimic its systematic error even when they switched to performing the diagnosis task unaided. This shows that biased information from an artificial intelligence model can have a lasting negative impact on human decisions.

The finding that humans can inherit AI bias underscores the need for further psychological and multidisciplinary research on human-AI interaction, as well as evidence-based regulation to guarantee fair and ethical AI. Such regulation should consider not only the technical features of AI systems but also the psychological aspects of human-AI collaboration.

This research was published in Scientific Reports.