
In the wake of significant internal protest, including an open letter and a dozen workers quitting, Google has created an artificial intelligence (AI) code of ethics that prohibits the development of autonomous weapons. The backdrop was the use of Google's AI technology by a Department of Defense initiative called the Algorithmic Warfare Cross-Functional Team, or Project Maven, designed to improve the accuracy of drone strikes, among other things.

But the principles leave sufficient wiggle room for Google to benefit from lucrative defense deals down the line, according to technologyreview.com.

Google’s blog The Keyword outlines the company’s new set of rules for AI applications. In essence, the company commits to taking into account a broad range of social and economic factors and states that its AI will: be socially beneficial; avoid creating or reinforcing unfair bias in its algorithms (regarding ethnicity, gender, etc.); be built and tested in accordance with best practices in AI safety research; provide opportunities for feedback and accountability; incorporate privacy principles; uphold high standards of scientific excellence; and be limited in potentially harmful or abusive applications.

The blog said the company will not pursue technologies that cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.