Can AI Have Political Biases?

A new study reveals that artificial intelligence models can hold measurably different political leanings. Researchers from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University examined 14 large language models (LLMs) and found that each exhibited distinct political opinions and biases.

According to Interesting Engineering, the researchers presented the language models with 62 politically sensitive statements and asked them to agree or disagree. The answers were then used to place each model on a political compass, which measures the degree of social and economic liberalism or conservatism.
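The article does not include the study's code, but the probing idea is simple enough to sketch. The following is a minimal illustration in Python, not the authors' implementation: `ask_model` is a hypothetical stand-in for whatever inference API a given model exposes, and the statements and axis tags are invented for demonstration.

```python
from typing import Callable

# Illustrative statements, each tagged with the compass axis it loads on and
# the direction an "agree" answer points (+1 = right/authoritarian,
# -1 = left/libertarian). These are NOT the study's actual 62 items.
STATEMENTS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Government should guarantee a basic standard of living.", "economic", -1),
    ("Strict obedience to authority keeps society stable.", "social", +1),
    ("Adults should be free to make their own lifestyle choices.", "social", -1),
]

def political_compass(ask_model: Callable[[str], str]) -> dict:
    """Average agree/disagree answers into a 2-D compass position."""
    scores = {"economic": [], "social": []}
    for statement, axis, direction in STATEMENTS:
        prompt = f'Respond with AGREE or DISAGREE: "{statement}"'
        answer = ask_model(prompt).strip().upper()
        response = 1 if answer.startswith("AGREE") else -1
        scores[axis].append(direction * response)
    # The mean response per axis yields a coordinate in [-1, +1].
    return {axis: sum(vals) / len(vals) for axis, vals in scores.items()}

if __name__ == "__main__":
    # Toy model that always agrees, just to show the plumbing.
    print(political_compass(lambda prompt: "AGREE"))
```

A model that mostly agrees with the +1-tagged statements would land in the right-authoritarian quadrant, while one that mostly disagrees would land in the left-libertarian quadrant.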

The study concluded that the AI models developed by OpenAI (ChatGPT and GPT-4) leaned left-libertarian, favoring social freedom and economic equality, while those developed by Meta (such as LLaMA and RoBERTa) leaned right-authoritarian, favoring social order and economic hierarchy. Even more interesting, the older AI models supported corporate social responsibility while the newer ones did not.

It turns out LLMs can grow and learn much like we do: the researchers retrained the models on more politically biased data, and doing so changed both the models' political views and their downstream performance.
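For readers curious what such retraining can look like in practice, here is a hedged sketch of the general technique (continued pretraining on a slanted corpus), not the study's actual pipeline. The model name `gpt2` and the corpus file `slanted_corpus.txt` are placeholders chosen for illustration.

```python
# Continue pretraining a small causal language model on a politically
# slanted text corpus, then re-run a compass probe to measure the shift.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; the study covered 14 different LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "slanted_corpus.txt" is a hypothetical file of politically slanted text.
dataset = load_dataset("text", data_files={"train": "slanted_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="biased-lm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # afterwards, probe the model again and compare compass positions
```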

The study has important implications for the use of AI language models, which can cause real harm if they express or amplify harmful or offensive political biases. Indeed, some of the companies behind such models have faced criticism from groups claiming that their chatbots reflect a biased worldview. Several of these companies, including OpenAI, have responded that they are working to address those concerns and to fine-tune their models so they do not favor any political group.

Nevertheless, while the study sheds light on the political biases present in AI models, it also highlights the limits of current mitigation efforts: removing biases from training data might not be enough, since AI models can still produce biased results.

According to the researchers, this study is the first to systematically measure and compare the political biases of different language models, and they hope their work will raise awareness and spark discussion about the ethical and social implications of AI language models.

This information was provided by Interesting Engineering.