Study Finds AI Models Are Spontaneously Categorizing the World Like Humans


A new study published in Nature Machine Intelligence reveals that large language models (LLMs) may be independently developing cognitive-like abilities to classify and interpret natural objects—mimicking a key element of human perception.

Researchers from the Chinese Academy of Sciences and South China University of Technology tested several AI systems, including ChatGPT-3.5 and Gemini Pro Vision, to determine whether LLMs can spontaneously sort information in a human-like manner. Using “odd-one-out” tasks, in which a model is shown three objects and asked which one does not belong, drawn from 1,854 natural objects ranging from animals and food to tools and vehicles, the team collected more than 4.7 million AI-generated responses.
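
As an illustration only (not code from the study), the sketch below shows what a single odd-one-out trial might look like in practice. The object list, the `query_model` helper, and its random placeholder answer are all assumptions made so the example is self-contained and runnable; a real experiment would send the prompt to ChatGPT-3.5 or Gemini Pro Vision instead.

```python
# Minimal sketch of an "odd-one-out" trial, assuming a hypothetical
# query_model() helper in place of a real call to an LLM.
import random

# Toy stand-ins for the 1,854 natural objects used in the study.
OBJECTS = ["apple", "hammer", "banana", "screwdriver", "bus", "cat"]

def query_model(triplet):
    """Ask which of three objects is least like the other two.
    Returns the index of the chosen object within the triplet."""
    prompt = (
        "Which of these three objects is the odd one out? "
        + ", ".join(triplet)
        + ". Answer with exactly one of the three words."
    )
    # A real experiment would send `prompt` to the model under test;
    # this placeholder answers at random so the sketch runs on its own.
    return random.randrange(3)

# Run a handful of trials; the study aggregated millions of such judgments.
for _ in range(5):
    triplet = random.sample(OBJECTS, 3)
    odd = query_model(triplet)
    print(f"{triplet} -> odd one out: {triplet[odd]}")
```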

The results showed that the models organized these objects along 66 distinct conceptual dimensions. These went far beyond simple categories such as “fruit” or “furniture,” covering perceptual features such as texture as well as abstract ones such as emotional relevance and suitability for children. This pattern of classification mirrors how humans intuitively group objects, not just by type but also by context and meaning.
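
The article does not spell out how those dimensions were extracted, so the following sketch is only a rough illustration under an assumed pipeline: odd-one-out choices are tallied into a pairwise similarity matrix, which is then reduced to a few latent dimensions with a plain SVD (the authors' actual embedding method may differ). All names and counts are made up.

```python
# Illustrative only: turn toy odd-one-out choices into a similarity
# matrix and factor it into latent dimensions (the study reports 66).
import numpy as np

objects = ["apple", "banana", "hammer", "screwdriver", "bus", "cat"]
n = len(objects)
sim = np.zeros((n, n))

# Toy trial log: (indices of the triplet, position of the odd one out).
trials = [((0, 1, 2), 2), ((2, 3, 4), 2), ((0, 2, 3), 0), ((1, 4, 5), 1)]
for triplet, odd in trials:
    kept = [i for pos, i in enumerate(triplet) if pos != odd]
    sim[kept[0], kept[1]] += 1  # the two non-odd items count as similar
    sim[kept[1], kept[0]] += 1

# Low-rank decomposition: each row of `dims` is one latent dimension
# along which the objects vary.
u, s, vt = np.linalg.svd(sim)
k = 2  # keep only a couple of dimensions for this toy example
dims = vt[:k]
for d, row in enumerate(dims):
    print(f"dimension {d}: " + ", ".join(f"{o}={w:+.2f}" for o, w in zip(objects, row)))
```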

Particularly striking was the finding that multimodal models, which integrate both visual and textual input, showed even greater alignment with human cognition. These systems processed visual and semantic cues simultaneously, bringing their organizational structures even closer to the way our brains handle object recognition.

Neuroimaging comparisons added another layer of insight: the way objects are represented in brain regions involved in human object categorization showed significant overlap with the representational patterns found in the AI models' responses, according to Interesting Engineering. This suggests that LLMs may be converging on functionally similar strategies to those of the human mind, at least when it comes to sorting and contextualizing information.

However, the researchers caution that while this pattern recognition resembles human understanding, it is not grounded in lived experience or sensory interaction. AI systems do not “understand” objects emotionally or physically. Instead, they reflect statistical patterns learned from massive datasets.

Still, the study challenges the belief that LLMs merely echo their training data. If these models are beginning to build internal conceptual frameworks of their own, it could signal a step toward more intuitive, human-compatible artificial intelligence, and possibly even toward general-purpose AI.