Study: People Cooperate Less When They Know AI Is Involved

A new behavioral experiment published in PNAS Nexus has found that people tend to be less cooperative, trusting, and fair when they know they’re interacting with an AI system, even if a real person ultimately benefits from the outcome. The findings raise questions about the use of AI in any context that requires human-level trust, such as training or interviews.

According to TechXplore, the research used well-known strategic interaction games to examine human responses when faced with artificial intelligence instead of another person. Over 3,500 participants played a series of online games—including the Prisoner’s Dilemma, Ultimatum Game, and Trust Game—designed to measure collaboration, fairness, and mutual decision-making.

In each case, participants played against either a human or an AI system (specifically, the large language model ChatGPT) acting on behalf of a human. In theory, the results should have been the same, since the AI was only making decisions for a real person. But in practice, players behaved differently.
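For readers unfamiliar with these games, the sketch below illustrates the payoff logic of a one-shot Prisoner’s Dilemma and one way a cooperation rate could be compared across the two conditions (a human counterpart versus an AI acting on a human’s behalf). The payoff values, choice sequences, and function names are illustrative assumptions, not details taken from the study.

```python
# Illustrative one-shot Prisoner's Dilemma payoffs (hypothetical values,
# not the point scheme used in the PNAS Nexus experiment).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # cooperator is exploited
    ("defect",    "cooperate"): (5, 0),  # defector exploits the cooperator
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play_round(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return the (player A, player B) payoffs for a single round."""
    return PAYOFFS[(choice_a, choice_b)]

def cooperation_rate(choices: list[str]) -> float:
    """Share of rounds in which a participant chose to cooperate."""
    return sum(c == "cooperate" for c in choices) / len(choices)

if __name__ == "__main__":
    # Hypothetical choice records for one participant in each condition.
    vs_human = ["cooperate", "cooperate", "defect", "cooperate"]
    vs_ai = ["defect", "cooperate", "defect", "defect"]
    print("Cooperation rate vs. human counterpart:", cooperation_rate(vs_human))  # 0.75
    print("Cooperation rate vs. AI counterpart:", cooperation_rate(vs_ai))        # 0.25
```

In the study itself, the comparison of interest was of this kind: whether participants shared, trusted, and cooperated less when they believed the other side’s moves came from an AI rather than directly from a person.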

When they knew an AI was involved, people were significantly less likely to share, trust, or cooperate, even when the AI acted fairly. Prior experience with AI tools made little difference. And when participants had the option to delegate their own decisions to AI, they were more likely to do so if they knew the other side wouldn’t find out.

The results suggest a deeper discomfort with AI in human-facing roles—particularly those involving moral or strategic judgment. This phenomenon, often called “algorithm aversion,” reflects a tendency to resist AI participation in social processes, even when the outcomes are logical or beneficial.

As AI becomes more integrated into the systems people use every day, understanding how they react to AI decision-making will be critical. Designing systems that incorporate AI without eroding trust or cooperation may be just as important as technical performance.