Experts Claim AI Safety is Rooted in Hardware, Not Software

A major new report calls for the regulation of the hardware that underpins all AI (“compute”) to help prevent artificial intelligence misuse and disasters.

Policy options highlighted in the report include a global registry tracking the flow of chips destined for AI supercomputers; "compute caps," built-in limits on the number of other chips each AI chip can connect to; and distributing a "start switch" for AI training across multiple parties, allowing a digital veto of a risky training run before it is fed data.
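The distributed "start switch" idea can be thought of as a unanimous-approval gate: training may proceed only once every designated party signs off, so any single party can withhold approval as a veto. The following is a minimal hypothetical sketch of that logic; the party names and class are illustrative assumptions, not part of the report.

```python
# Hypothetical sketch: a distributed "start switch" modeled as a
# unanimous-approval gate over a fixed set of parties. Any party
# withholding approval acts as a digital veto on the training run.
from dataclasses import dataclass, field


@dataclass
class StartSwitch:
    parties: set[str]                              # who must approve
    approvals: set[str] = field(default_factory=set)

    def approve(self, party: str) -> None:
        if party not in self.parties:
            raise ValueError(f"unknown party: {party}")
        self.approvals.add(party)

    def may_start_training(self) -> bool:
        # Training is allowed only with unanimous approval.
        return self.approvals == self.parties


# Illustrative party names (assumptions for the example only).
switch = StartSwitch(parties={"regulator", "cloud_provider", "chip_maker"})
switch.approve("regulator")
print(switch.may_start_training())  # False: two parties have not approved
switch.approve("cloud_provider")
switch.approve("chip_maker")
print(switch.may_start_training())  # True: unanimous approval
```

A real scheme would likely rely on cryptographic signatures rather than a shared in-memory set, but the gating logic is the same.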

According to Techxplore, the researchers argue that AI chips and data centers offer more effective targets for scrutiny and AI safety governance because these assets have to be physically possessed, whereas data and algorithms can be duplicated and disseminated freely.

The experts also point out that the powerful computing chips required to drive generative AI models are constructed using highly concentrated supply chains that are dominated by a handful of companies—making the hardware itself a strong intervention point for risk-reducing AI policies.

Haydn Belfield, co-lead author of the report from Cambridge's LCFI, states: "Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms. Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control. AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centers often the size of several football fields, consuming dozens of megawatts of power."

He further explains that computing hardware is visible and quantifiable, and that its physical nature means restrictions can be imposed in a way that may soon be nearly impossible with the more virtual elements of AI.

The report provides "sketches" of possible directions for compute governance, with the policy ideas divided into three camps: increasing the global visibility of AI computing, allocating compute resources for the greatest benefit to society, and enforcing restrictions on computing power.

One suggestion is a regularly inspected international AI chip registry that would require chip producers, sellers, and resellers to report all transfers, providing precise information on the amount of compute held by nations and corporations at any one time. Building on this concept, a unique identifier could be added to each chip to prevent industrial espionage and "chip smuggling."
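One way to picture such a registry: each chip carries a unique identifier, production and every subsequent transfer are reported, and current holdings per owner can be derived from the ledger at any time. A transfer whose reported seller does not match the registry's current owner would flag a possible smuggled or unreported chip. The sketch below is a hypothetical illustration; all names and fields are assumptions, not the report's design.

```python
# Hypothetical sketch: an AI chip registry in which producers, sellers,
# and resellers report every transfer, so current holdings per owner can
# be computed at any time. All names and fields are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Transfer:
    chip_id: str   # unique identifier stamped on each chip
    seller: str
    buyer: str


class ChipRegistry:
    def __init__(self) -> None:
        self.owner_of: dict[str, str] = {}  # chip_id -> current owner

    def register_production(self, chip_id: str, producer: str) -> None:
        self.owner_of[chip_id] = producer

    def record_transfer(self, t: Transfer) -> None:
        # Reject transfers of unregistered chips, or transfers where the
        # reported seller is not the registered owner (possible smuggling).
        if self.owner_of.get(t.chip_id) != t.seller:
            raise ValueError(f"invalid transfer for chip {t.chip_id}")
        self.owner_of[t.chip_id] = t.buyer

    def holdings(self, owner: str) -> int:
        # Number of chips currently registered to this owner.
        return sum(1 for o in self.owner_of.values() if o == owner)


# Illustrative usage with made-up entity names.
reg = ChipRegistry()
reg.register_production("chip-001", "FabCo")
reg.record_transfer(Transfer("chip-001", "FabCo", "CloudCorp"))
print(reg.holdings("CloudCorp"))  # 1
```

In practice such a ledger would need tamper-resistant identifiers and independent inspection, but the bookkeeping itself reduces to tracking each chip's chain of custody.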

The report’s authors clarify that their policy suggestions are more “exploratory” than fully fledged proposals, adding that they all carry potential downsides, ranging from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.