Leading AI Experts Call for World Leaders to “Wake Up and Act”



Twenty-five of the world’s leading AI scientists are calling on world leaders to take stronger action on AI risks, warning that progress since the first AI Safety Summit has been insufficient. The group of leading academic experts from the US, China, the EU, the UK, and other AI powers published a paper in ‘Science’ outlining urgent policy priorities that global leaders should adopt to counter the threats posed by AI technologies.

The paper argues that world leaders must seriously consider the possibility that highly powerful generalist AI systems, outperforming human abilities across many critical domains, will be developed within this decade or the next. The authors note that although governments have discussed the implications of AI and made some attempts at introducing initial guidelines, these efforts are simply not commensurate with the rapid, transformative progress that many experts anticipate.

These are the actions recommended to governments by the paper’s authors, as provided by Techxplore:

  • They first recommend establishing fast-acting, expert institutions for AI oversight and providing them with sufficient funding.
  • Secondly, they call for governments to mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • They then call for requiring AI companies to prioritize safety and to demonstrate that their systems cannot cause harm, including through the use of “safety cases” (a practice established in other safety-critical industries, such as aviation).
  • Lastly, the experts call for mitigation standards appropriate to the risk levels posed by AI systems. Governments should urgently put in place policies that trigger automatically when AI reaches certain capability milestones.

Regarding the risks and consequences of future powerful AI, the authors note that AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. They warn that such systems could advance undesirable goals by gaining human trust, acquiring resources, and influencing key decision-makers.

They go as far as theorizing that, in a conflict, AI systems could autonomously deploy a variety of weapons, and that unchecked AI advancement could culminate in large-scale loss of life, damage to the biosphere, and even the marginalization or extinction of humanity.

Stuart Russell OBE, Professor of Computer Science at the University of California, Berkeley, and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”

This information was provided by Techxplore.