AI for Military Uses – Unexpected News


A US think tank has found that the civil-military divide over the potential use of artificial intelligence (AI) is not as wide as one might think.

The US Department of Defense (DoD)’s engagement with leading high-tech private-sector corporations provides a valuable conduit to cutting-edge AI-enabled capabilities and access to leading AI software developers and engineers. 

To assess the views of software engineers and other technical staff in the private sector about potential DoD applications of AI, a research team from RAND research organization conducted a survey that presented a variety of scenarios describing how the US military might employ AI. 

The scenarios varied several factors, including the degree of distance from the battlefield, the destructiveness of the action, and the degree of human oversight over the AI algorithm. The survey results found that most US AI experts do not oppose the basic mission of DoD or the use of AI for many military applications.

Among the key findings, according to rand.org:

An unbridgeable divide between Silicon Valley and DoD does not appear to exist. 

Respondents from Silicon Valley technology firms and alumni of universities with top-ranking computer science departments are comfortable with a variety of military applications for AI.

However, there is a meaningful difference in the comfort level for AI applications that involve the use of lethal force – about one-third of respondents from the three surveyed Silicon Valley technology corporations were uncomfortable with lethal use cases for AI.

Another finding: tech workers are most concerned about cyber threats to the United States – more than 75 percent of respondents from all three populations also regarded China and Russia as serious threats to the United States.

The report recommends the establishment of mechanisms to expand collaborations between DoD and Silicon Valley companies regarding threats posed by cyberattacks, a potential application for AI that Silicon Valley engineers see as a critical global threat.

Expansion of engagements among personnel involved with military operations, DoD technical experts, and Silicon Valley individual contributors (nonmanagerial employees) working in technical roles should be explored to assess possible conduits for developing greater trust between the organizations.

The recently published DoD ethical principles for AI demonstrate that DoD itself is uncomfortable with some potential uses for AI. This could serve as the foundation for a conversation with Silicon Valley engineers about what AI should and should not be used for.

The report also recommends assessing the benefits of, and adapting, various types of engagements to help the most innovative and experienced US AI experts learn how DoD accomplishes its mission and discover how their talents and expertise can contribute to solving DoD's and the nation's problems.