US DoD Publishes Ethical Directives on Commercial AI Tech

Users of artificial intelligence (AI)-based technology want to be able to trust, and verify, that their tools protect American interests without compromising collective US values. The US Defense Innovation Unit (DIU) has published new "Responsible AI Guidelines" setting out how it plans to apply the Pentagon's recently adopted ethical AI principles in its commercial prototyping and acquisition efforts.

The guidelines will help the unit operationalize the five principles of ethical AI use that the Pentagon adopted in 2020, based on recommendations from the Defense Innovation Board, an advisory panel to the department.

According to a DIU statement cited by c4isrnet.com, the guidelines have already had the following effects:

  • Accelerated programs by clarifying end goals and roles, aligning expectations, and acknowledging risks and trade-offs from the outset.
  • Increased confidence that AI systems are developed, tested and vetted with the highest standards of fairness, accountability and transparency in mind.
  • Supported changes in the way AI technologies are evaluated, selected, prototyped and adopted, and helped avoid potential bad outcomes.

Established in 2015, DIU has a mission to help DoD organizations make use of commercial innovation. To that end, the unit has been working since March 2020 to integrate the Pentagon's Ethical Principles for Artificial Intelligence into its ongoing AI efforts, consulting experts across industry, government and academia, including researchers at Carnegie Mellon University's Software Engineering Institute.