As artificial intelligence (AI) continues to evolve at a rapid pace, leading AI developer Anthropic is sounding the alarm about the potentially catastrophic risks posed by AI models. In November 2024, the company unveiled an upgraded version of its Claude model, now capable of controlling a computer, and underscored the need for urgent regulation to address the risks associated with AI.
In a statement, Anthropic emphasized that the “window for proactive risk prevention is closing fast.” AI models have already demonstrated their ability to drive significant advances in scientific research, healthcare, and economic growth. However, the company warns that without carefully targeted regulation, these technologies could also expose society to severe risks. Anthropic argues that the lack of appropriate regulation could lead to a scenario where AI progress is hindered by poorly designed, reactionary policies that fail to address the true dangers.
Anthropic’s urgency is rooted in the rapid development of AI models, some of which are now achieving performance levels previously thought to be years away. The company pointed to its own advancements: Claude 3.5 Sonnet reached a success rate of nearly 50% on real-world coding tasks only six months after Devin, from Cognition Labs, scored 13.5%, a result considered remarkable at the time. With the next generation of AI models on the horizon, capable of planning complex, multi-step tasks, Anthropic sees this as a critical moment to prepare for the risks that could emerge.
The risks associated with AI are not just theoretical. AI models are already showing expert-level knowledge in many scientific fields, where they could potentially be misused. Anthropic has long warned that frontier models could soon pose significant risks in cybersecurity and chemical, biological, radiological, and nuclear (CBRN) domains, areas where the stakes are incredibly high.
Anthropic urges governments to act within the next 18 months to ensure AI is developed responsibly. While progress in AI holds immense promise, careful, strategic regulation will be essential to harness its benefits while mitigating the dangers. Without it, the risks posed by AI could soon become an inescapable reality.