Australia’s Plan to Regulate ‘High-Risk’ AI

The Australian government revealed its response on managing safe and responsible AI this week, announced by the federal Minister for Industry and Science, Ed Husic.

Instead of following the EU’s route of enacting a single, overarching AI regulation, the Australian government plans to focus on the high-risk areas of AI implementation with the greatest potential for harm, such as the justice system, surveillance, and self-driving cars.

According to Techxplore, this initiative has raised many questions: How will high-risk areas be defined, and who makes that decision? Should low-risk AI applications face similar regulation? And without a permanent advisory board, how are organizations meant to anticipate risks from new AI technologies and from new applications of existing AI tools?

Nevertheless, it is important to remember that assessing “risk” in new technologies is nothing new, and many existing principles, guidelines, and regulations can be adapted to address concerns about AI tools.

One significant problem with AI regulation is that many tools are already in use in Australian homes and workplaces without regulatory guardrails to manage their risks. And while consumers and organizations need guidance on adopting AI tools appropriately, many of these uses fall outside the “high-risk” areas.

This brings us to the challenge of defining “high-risk” settings: the concept of “risk” is not absolute but is better understood as a spectrum. Risk is not determined by the tool itself or the setting where it is used, but rather arises from contextual factors that create a potential for harm. Furthermore, risks posed by the people and organizations using AI tools must be managed alongside risks posed by the technology itself.

The government stated that the expert advisory body on AI risks will need diverse membership and expertise from across the industry (from various sectors and company sizes), academia (AI computing experts and social scientists), civil society, and the legal profession.

The next step is deciding how the government wants to manage those potential future AI risks. A permanent advisory body could manage risks from future technologies and from new uses of existing tools. Such a body could also advise consumers and workplaces on AI applications at lower levels of risk, particularly where limited or no regulations are in place.

This information was provided by Techxplore.