AI Does Not Pose an Existential Risk to Humans

In recent years, the rapid advancement of artificial intelligence (AI) has sparked concerns about the technology surpassing human capabilities and posing a significant risk to human existence. However, new research from the University of Bath and the Technical University of Darmstadt in Germany challenges this notion. According to a study presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), large language models (LLMs) do not represent an existential threat to humanity. The study finds that LLMs cannot independently learn or acquire new skills without explicit guidance, which ensures they remain controllable, predictable, and safe.

The research highlights that while LLMs are adept at following instructions and demonstrate proficiency in language tasks, they do not possess the ability to develop complex reasoning skills autonomously. Despite being trained on increasingly large datasets, LLMs are unlikely to evolve beyond their current capabilities. According to TechXplore, Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and a co-author of the study, notes that the fear that LLMs threaten humanity diverts attention from more pressing issues and hinders the technology’s potential benefits.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the research team examined what are termed “emergent abilities” of LLMs—tasks they were not specifically trained to perform. For example, LLMs can answer questions about social situations without prior training, a capability previously thought to indicate an inherent understanding of such contexts. The study reveals that this performance actually stems from “in-context learning” (ICL), where models use provided examples to complete tasks.
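
To make the distinction concrete, the sketch below (a hypothetical illustration, not code from the study) shows how in-context learning works in practice: the model is never retrained, it is simply shown a few worked examples inside its prompt and asked to complete one more. The task wording and examples are invented for illustration.

```python
# Minimal illustration of in-context learning (ICL): instead of fine-tuning,
# the model receives a few solved examples inside the prompt and is asked to
# complete a new one. All situations and answers here are made up.

few_shot_examples = [
    ("Alice forgot her friend's birthday and apologized the next day.",
     "Alice likely feels guilty; her friend may feel hurt but appreciate the apology."),
    ("Ben loudly took credit for a colleague's idea in a meeting.",
     "The colleague likely feels frustrated and undervalued."),
]

new_situation = "Chris cancelled dinner plans at the last minute without explanation."

def build_icl_prompt(examples, query):
    """Concatenate worked examples and the new query into a single prompt."""
    parts = ["Describe how the people involved are likely to feel.\n"]
    for situation, answer in examples:
        parts.append(f"Situation: {situation}\nAnswer: {answer}\n")
    parts.append(f"Situation: {query}\nAnswer:")
    return "\n".join(parts)

prompt = build_icl_prompt(few_shot_examples, new_situation)
print(prompt)  # this text would be sent, unchanged, to any instruction-following LLM
```

Because the "learning" lives entirely in the prompt, the model's apparent grasp of the new task disappears once the examples are removed, which is the behaviour the researchers point to when distinguishing ICL from genuinely emergent reasoning.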

The team’s extensive experiments demonstrated that LLMs’ abilities are the result of their proficiency in following instructions, combined with their memory and language skills, rather than any emergent complex reasoning abilities. Dr. Tayyar Madabushi addressed concerns that larger models might develop dangerous abilities like reasoning or planning, emphasizing that such fears are unsupported by evidence.

He explains that there has been significant discussion, including at the AI Safety Summit last year, about the potential for models to acquire unpredictable and hazardous capabilities as they scale. The study shows that the notion of LLMs developing entirely unexpected and dangerous skills is unfounded.

While concerns about existential risks persist among experts and non-experts alike, Dr. Tayyar Madabushi argues these fears are misplaced. He suggests focusing on practical issues, such as the misuse of AI for generating fake news or committing fraud, rather than hypothetical threats.