Artificial intelligence may be better at understanding human emotions than previously believed. A recent study by Swiss researchers reveals that leading generative AI systems are not only capable of interpreting emotionally complex situations but actually outperform humans on standardized emotional intelligence (EI) assessments.
The research, conducted by the University of Geneva and the University of Bern, evaluated six large language models (LLMs): ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Claude 3.5 Haiku, Microsoft Copilot, and DeepSeek V3, according to TechXplore. These systems were subjected to five well-established EI tests commonly used in psychological research and workplace assessments. The tests present emotionally charged scenarios, such as conflict, betrayal, or stress, and evaluate responses based on their emotional appropriateness.
In one scenario, a character is praised for an idea that was actually stolen from a colleague. Participants must identify the most constructive response, such as speaking to a supervisor rather than retaliating or remaining silent. Across multiple such situations, the AI models consistently chose the most emotionally intelligent responses.
According to the researchers, the AI systems scored an average of 82% on these tests, while human participants averaged just 56%. This suggests that LLMs can recognize, assess, and respond to emotional dynamics in ways that align with established psychological principles.
The study went a step further by asking one model, ChatGPT-4, to generate entirely new EI assessments. These new scenarios were then tested on over 400 human participants and judged to be just as valid and realistic as traditional tests, despite taking a fraction of the time to create.
This ability to both understand emotional context and create new, credible assessments highlights a growing role for AI in fields previously thought to require inherently human judgment. While researchers emphasize that expert oversight remains essential, they suggest applications in education, coaching, personnel training, and conflict resolution may benefit from integrating emotionally intelligent AI.
As large language models become more nuanced, their potential to support, not just simulate, emotionally sensitive human interactions is expanding rapidly.
The research was published in the journal Communications Psychology.