Thanks to the rapid development of AI, autonomous weapons systems (or “killer robots”) will soon become a reality, and many international organizations have called for limits or even bans.

According to Techxplore, killer robots are weapons systems that select a target and fire on it based on sensor inputs rather than human inputs. They are rapidly becoming a reality, and experts are concerned.

Weapons systems with significant autonomy are already in use on the battlefield, though the line between what counts as a killer robot and what doesn't remains blurry. Take, for example, loitering munitions, which increasingly use autonomy to hover above the battlefield and wait to attack until they sense a target.

Bonnie Docherty, lecturer on law at Harvard Law School's International Human Rights Clinic, discussed the matter in an interview with the Harvard Gazette: "The ethical concerns are very serious. Delegating life-and-death decisions to machines crosses a red line for many people. It would dehumanize violence and boil down humans to numerical values."

Docherty raises other risks as well. One is algorithmic bias: because machines may be intentionally programmed to look for certain criteria, discrimination against specific groups of people becomes possible. Another is legal, such as the inability of machines to distinguish soldiers from civilians, and what that might mean for any laws governing such systems.

When it comes to accountability, autonomous weapons systems risk falling into an accountability gap: the weapon system itself cannot be held accountable, and holding the operator accountable is problematic when the machine is operating autonomously.

While there have been international efforts to ban killer robots, they have so far been unsuccessful. One possible reason is that these weapons systems are still in development, unlike long-existing weapons such as landmines and cluster munitions, which were banned only after the harm they had already caused was documented.

When asked about a possible international ban on autonomous weapons systems, Docherty explained: “We are calling for a treaty that has three parts to it. One is a ban on autonomous weapons systems that lack meaningful human control. We are also calling for a ban on autonomous weapons systems that target people because they raise concerns about discrimination and ethical challenges. The third prong is that we’re calling for regulations on all other autonomous weapons systems to ensure that they can only be used within a certain geographic or temporal scope. We’re optimistic that states will adopt such a treaty in the next few years.”