
The recent call by scientists to prevent artificial intelligence (AI) from being used to develop autonomous weapons brought about many responses.

Founders and lead scientists from 116 tech companies urged the UN in an open letter to adopt a set of measures to prevent artificial intelligence (AI) from being used to develop autonomous weapons. Elon Musk, founder of Tesla and SpaceX and co-founder of OpenAI, and Mustafa Suleyman, co-founder of Google’s DeepMind, were among those who signed the letter.

“These private sector leaders and the Campaign to Stop Killer Robots are frustrated at the slow pace of the diplomatic talks and are worried that countries are not focused on the outcome, which in our view should be new international law that bans fully autonomous weapons systems. So far, no progress has been made,” said Mary Wareham, advocacy director of the Arms Division at Human Rights Watch.

She elaborated that the open letter should not be seen as an attempt to hamper progress in the further development of AI or autonomous systems, but rather as an attempt to regulate the industry because “those companies don’t want to see their good work tarnished by the creation of weapons systems that can kill human beings.”

“If fully autonomous weapons systems were to commit a crime, it would be virtually impossible to hold anybody responsible for that. The bottom line is the ethical one. It’s about whether we’re comfortable allowing a machine to take a human life on the battlefield, in policing, in law enforcement, in border control. We need to start acting before it gets too late, and the window is closing,” Wareham said.

Formal talks on autonomous weapons began at the end of 2016, with 123 countries participating in the negotiations. According to the Campaign to Stop Killer Robots, 19 countries have to date called for a ban on lethal autonomous weapons systems, while over a dozen have publicly supported some form of regulation in this area.