Experts Explain Why It’s Difficult to Regulate AI



Due to the potential for widespread harm as technology companies roll out AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology.

The Conversation asked experts on technology policy to explain why regulating AI is such a challenge—and why it’s so important to get it right.

To begin with, experts say that the reason to regulate AI is not that the technology is out of control, but that human imagination about it is out of proportion.

“Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on ‘automation bias,’ the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot,” said one of the experts.

Experts say that understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted, or even by the time they are introduced. Without new laws, regulators must stretch old ones to address new problems. Sometimes this creates legal barriers to socially beneficial uses of the technology, or legal loopholes for harmful conduct.