AI Trained on 10 Million Choices Sheds Light on Human Decision-Making


A newly developed artificial intelligence model is showing promising results in simulating human behavior with a level of accuracy not previously achieved in cognitive science. The model, called Centaur, is designed to predict how people make decisions across a wide range of situations, including unfamiliar ones.

Developed at Helmholtz Munich’s Institute for Human-Centered AI, Centaur is based on a large dataset—over ten million individual decisions made by more than 60,000 participants across 160 controlled psychological experiments, according to the press release. These included studies on risk-taking, reward processing, moral reasoning, and more. The dataset, named Psych-101, was specifically structured to train a language model on cognitive behavior.

Earlier models were built on rigid rules or task-specific designs. Centaur, by contrast, learns generalizable patterns in human decision-making. It processes tasks described in natural language and predicts not only the choices people make but also their likely response times, offering a layered view of the decision process. This capacity to model cognition dynamically brings researchers closer to replicating the underlying mechanisms of thought.
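To make the natural-language input concrete, the sketch below shows one plausible way a trial history from a simple choice experiment could be rendered as a prompt for a language model. The task wording, field names, and the `<<` completion marker are illustrative assumptions, not the actual Psych-101 format.

```python
# Hypothetical sketch: turning a two-option choice task and its trial
# history into a natural-language prompt that a language model could
# complete with the participant's next choice. All names and the exact
# phrasing are assumptions for illustration.

def format_trial(trial: dict) -> str:
    """Render one past choice and its outcome as a sentence."""
    return (f"You pressed {trial['choice']} and received "
            f"{trial['reward']} points.")

def build_prompt(history: list, options: list) -> str:
    """Assemble the task instructions plus trial history into a prompt."""
    lines = ["In this task you repeatedly choose between "
             + " and ".join(options) + "."]
    lines += [format_trial(t) for t in history]
    # The model would be asked to complete the next choice after "<<".
    lines.append("You press <<")
    return "\n".join(lines)

history = [
    {"choice": "J", "reward": 7},
    {"choice": "K", "reward": 2},
    {"choice": "J", "reward": 6},
]
prompt = build_prompt(history, ["J", "K"])
print(prompt)
```

A model trained on many such transcripts can then assign probabilities to the candidate completions ("J" or "K"), which is what lets it predict choices in tasks it was never explicitly programmed for.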

One of the model’s potential applications is in clinical psychology. By simulating how individuals with mental health conditions approach decisions, the model may help researchers identify behavioral markers for conditions like anxiety and depression. Future versions of the dataset will incorporate psychological profiles and demographic factors, expanding its relevance for personalized mental health research.

Beyond psychology, the model may offer tools for decision-making in fields such as healthcare, social policy, and behavioral economics. The ability to simulate human reasoning in response to complex, real-world scenarios could assist in designing more effective interventions or services.

The research team emphasizes ethical deployment, advocating for transparent and open systems. Their goal is to maintain full control over data and model behavior, ensuring that such tools are used responsibly—particularly in sensitive domains.

As research progresses, the team plans to examine how internal computations within the model correspond to actual cognitive strategies. This could provide further insight into how individuals process information, and how those processes differ across populations.

The findings were published in Nature and mark a step forward in the use of AI to explore human cognition at scale.