Study Reveals Cultural Bias in AI Responses Across Languages



A recent study has uncovered that large language models (LLMs) exhibit distinct cultural tendencies depending on the language in which they generate content. Researchers from MIT and Tongji University investigated how these models behave in English and Chinese, aiming to assess the potential cultural biases embedded within AI-generated text.

As LLMs continue to gain popularity, questions about their objectivity and accuracy have grown. The study, published in Nature Human Behaviour, is one of the first to examine whether these models convey different cultural values in different linguistic contexts.

According to TechXplore, the researchers focused on two cultural aspects: social orientation and cognitive style. Social orientation refers to whether individuals emphasize independence or interdependence, while cognitive style describes whether people process information in a holistic or analytic manner. Prior studies suggest that Western cultures typically lean towards individualism and analytical thinking, whereas Eastern cultures, such as Chinese, tend to emphasize interdependence and holistic thinking.

The researchers focused on two models: OpenAI's ChatGPT, popular in the West, and Baidu's ERNIE, popular in China. The study found that when prompted in Chinese, both models produced responses reflecting more interdependent and holistic tendencies than their English responses. ChatGPT in particular generated more collaborative, community-focused content in Chinese and more independent, individualistic content in English; ERNIE displayed the same cultural pattern.

Interestingly, the study suggests that AI models can be made more culturally neutral, or aligned with specific values, through "cultural prompts": instructing the model to adopt the perspective of a person from a particular culture. This approach lets users adjust the generated content to the cultural context they want.
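In practice, a cultural prompt is just a persona instruction prepended to the user's request before it is sent to the model. The minimal sketch below illustrates the idea with a hypothetical helper function; the prompt wording is illustrative and not taken from the study:

```python
# Minimal sketch of "cultural prompting": prepend a persona instruction
# so the model answers from a chosen cultural perspective.
# The exact wording here is an assumption; the study's prompts may differ.

def cultural_prompt(culture: str, question: str) -> list[dict]:
    """Build a chat-style message list that frames the model as a
    member of the given culture before asking the question."""
    system = (
        f"You are an average person raised and living in {culture}. "
        "Answer the following question from that cultural perspective."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The resulting list can be passed to any chat-style LLM API.
messages = cultural_prompt(
    "China", "Should I prioritize my own goals or my family's?"
)
print(messages[0]["content"])
```

The same question, wrapped with different `culture` values, can then be compared across runs to probe how strongly the persona shifts the model's social orientation and cognitive style.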

The findings of this study open the door for future research into the cultural biases present in AI models and offer a pathway for developing more culturally sensitive or adaptable technologies.