More Transparency Required in AI Development

A new move could change how AI is developed for the US government. In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to follow when building AI for the military, whether that AI is for an HR system or for target recognition.

According to militaryaerospace.com, the AI ethics guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.
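To make the shape of such a process concrete, here is a minimal sketch of how a contractor might record that harm assessment as structured data. Everything in it is an assumption for illustration: the `HarmAssessment` class, its field names, and the example entry are invented here, since the DIU guidelines are a written worksheet rather than code.

```python
from dataclasses import dataclass

@dataclass
class HarmAssessment:
    """Illustrative record of one identified harm, loosely modeled on the
    questions the guidelines ask: who might use the system, who might be
    harmed, what the harm is, and how it might be avoided. All field names
    are assumptions, not part of the actual DIU worksheet."""
    system: str
    users: list[str]             # who might use the technology
    affected_parties: list[str]  # who might be harmed by it
    harm: str                    # what that harm might be
    mitigation: str              # how it might be avoided
    phase: str                   # "planning", "development", or "deployment"

def unmitigated(assessments: list[HarmAssessment]) -> list[HarmAssessment]:
    """Return entries with no mitigation recorded -- the kind of check a
    review gate might run before a system goes live."""
    return [a for a in assessments if not a.mitigation.strip()]

# Hypothetical example entry for an HR-screening system.
assessments = [
    HarmAssessment(
        system="resume-screening model",
        users=["HR staff"],
        affected_parties=["job applicants"],
        harm="systematic rejection of qualified candidates",
        mitigation="bias audit on historical decisions before rollout",
        phase="planning",
    ),
]

print(unmitigated(assessments))  # [] -- every listed harm has a mitigation
```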

The purpose of the guidelines is to make sure that tech contractors stick to the DoD’s existing ethical principles for AI. The DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the US military. 

However, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. Bryce Goodman of the Defense Innovation Unit, who coauthored the guidelines, points out that regulations governing such technology are decided higher up the chain of command. The aim of the guidelines is to make it easier to build AI that meets those regulations, according to technologyreview.com.