The Good, The Bad, And The Autonomous In Battle

Autonomous machines are developing at a remarkable pace, and we may soon find them fighting our battles for us. But as the technology's possibilities expand, so do the thorny ethical questions it raises.

Representatives from more than 82 countries came together in Geneva earlier this year to discuss the issue, and after deliberations recommended that the “key UN body that sets norms for weapons of war should put killer robots on its agenda,” writes Eyder Peralta for NPR.

Some organisations, like Human Rights Watch and Harvard Law School’s International Human Rights Clinic, are urgently calling for a complete ban on autonomous killer robots before the technology crosses a “moral threshold,” in the words of Harvard Law School’s Bonnie Docherty. Docherty also warns of a robotic arms race that could cloud questions of responsibility.

Not everyone, however, is so certain of the undesirability of autonomous killer robots.

Paul Scharre, who heads a programme on ethical autonomy at the Center for a New American Security, points to Lockheed Martin’s long-range anti-ship missile as a comparison. The LRASM, in the event it loses contact with its human operators, can still search the seas on its own for a target to attack.

“It sounds simple to say things like: ‘Machines should not make life-or-death decisions.’ But what does it mean to make a decision?” Scharre asks. “Is my Roomba making a decision when it bounces off the couch and wanders around? Is a land mine making a decision? Does a torpedo make a decision?”

But robot ethics can be even more complicated than that, argues Ron Arkin, a roboethicist at Georgia Tech. He says the potential benefits of these machines could outweigh the harms.

“They can assume far more risk on behalf of a noncombatant than any human being in their right mind would,” he says. “They can potentially have better sensors to cut through the fog of war. They can be designed without emotion — such as anger, fear, frustration — which causes human beings, unfortunately, to err.”

While this future is still a distant possibility, robots could one day become a kind of precision-guided weapon. A robot could take out snipers in a hostile environment without risking the lives of soldiers, potentially sparing many innocents. Too many innocent lives are lost in war, Arkin says, and “We need to do something about that … technology affords one way to do that and we should not let science fiction cloud our judgment in terms of moving forward.”

Arkin argues that killer robots could one day become so precise that it would be unethical not to use them.