Biden Issues First US Federal AI Regulations


Governments and technology experts worldwide agree that the rise of AI and machine learning holds immense potential, but also poses serious risks and must be regulated.

There has been much debate over what needs to be done, with various governing bodies approaching the problem differently. The EU, for example, has already drafted the AI Act, whose current version would ban software that creates unacceptable risks. Now the US government has taken a stand of its own.

According to Cybernews, US President Joe Biden issued earlier this week a “landmark” executive order outlining the government’s first-ever regulations on AI systems. The statement reads: “The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

The statement divides the executive order into eight key components, covering areas such as safety and security standards for AI, consumer privacy, equity and civil rights, consumer protection, support for workers, innovation, and competition.

The regulations include, for example, a requirement that the most advanced AI products be tested to ensure they cannot be used to produce nuclear or biological weapons.

The White House also states that it will protect Americans from AI-enabled fraud and deception, as part of which the Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.

Cybernews notes that the American approach has traditionally favored innovation with minimal government intervention, so tech companies have generally preferred voluntary commitments to responsible development over binding laws.

In July 2023, Biden’s administration announced that seven leading AI companies in the US (including Amazon, Google, Meta, and OpenAI) formally agreed to voluntary safeguards on the technology’s development, and eight more firms involved in AI joined the pledge in September.

Nevertheless, some experts argue that these pledges are not enough. Michael DeSanti, president and chief technology officer at LightPoint Financial Technology, said that voluntary commitments must be combined with laws, because big businesses have repeatedly proven untrustworthy.

The White House may have heard the criticism, because Biden’s document orders the Department of Labor and the National Economic Council to study AI’s effect on the labor market, stating: “AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement.”