Watch: A Robot Refusing An Order


Researchers at the Tufts University Human-Robot Interaction Lab are working on a question that has troubled science fiction fans for decades: when can (and should) a robot say no?

That robots should carry out instructions exactly as programmed and ordered seems an obvious requirement: what use is a robot that behaves unexpectedly, or one that won't do what it was built for? But following instructions unquestioningly can itself pose serious risks to human lives.

Imagine a robot twice the size of an average man hurtling down the street at the behest of its owner, completely disregarding the safety of bystanders. Or a robot following orders to knock down a building without accounting for the people who might still be inside. When robots finally become part of our daily lives, we will want them to refuse actions that could harm us.

The Tufts team is working on a digital mechanism analogous to human decision making. When completed, a robot equipped with this system will ask itself the equivalent of a series of questions before acting: "Am I capable of doing this?", then "Do I need to do this to perform my job?", and finally "Will doing this violate some moral principle of operation?"
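
To make the idea concrete, here is a minimal sketch of that kind of pre-execution gate. All names (`Command`, `SafetyGatedRobot`, the specific checks) are hypothetical illustrations of the questions described above, not the Tufts team's actual system or API:

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str           # e.g. "walk_forward"
    speaker: str          # who issued the order
    speaker_trusted: bool # does the robot trust this person?

class SafetyGatedRobot:
    """Runs each order through capability, obligation, and
    principle checks before executing it."""

    def can_perform(self, cmd: Command) -> bool:
        # "Am I capable of doing this?"
        return cmd.action in {"walk_forward", "turn", "stop"}

    def is_obligated(self, cmd: Command) -> bool:
        # "Do I need to do this to perform my job?"
        # Here, only a trusted speaker can create an obligation.
        return cmd.speaker_trusted

    def violates_principle(self, cmd: Command) -> bool:
        # "Will doing this violate some moral principle of operation?"
        # Stand-in rule: walking into an obstacle could endanger people.
        return cmd.action == "walk_forward" and self.obstacle_ahead()

    def obstacle_ahead(self) -> bool:
        return True  # placeholder for a real sensor query

    def handle(self, cmd: Command) -> str:
        if not self.can_perform(cmd):
            return "No: I am not able to do that."
        if not self.is_obligated(cmd):
            return "No: I don't take that order from you."
        if self.violates_principle(cmd):
            return "No: that would be unsafe."
        return f"Executing {cmd.action}."

robot = SafetyGatedRobot()
print(robot.handle(Command("walk_forward", "stranger", speaker_trusted=False)))
# -> No: I don't take that order from you.
```

Ordering the checks this way means the cheapest question (capability) is asked first, and an order from an untrusted speaker is rejected before any safety reasoning is needed at all.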

In one experiment, the researchers ordered their robot to walk through a wall it could easily have smashed. The robot said "no": the person giving the order wasn't trusted, and the maneuver could have endangered lives.

So are robots about to start saying "no" to our demands? Is the robot uprising coming? Probably not. But much like Asimov's Three Laws of Robotics, a system like this should make living alongside robots that much safer.