
Units of robots patrolling along borders, communicating, coordinating their actions, helping one another and in general acting like a great team. They’re joined by robotic eyes in the sky who provide intelligence to their earthbound friends. They have a computerized commander in charge of forming strategies, planning and monitoring the team’s actions and getting personally involved only when necessary – like any good commander. This truly intelligent commander knows exactly what the robots in the field are doing, thinking and perceiving, aware of every piece of communication between them; it sees all, knows all. Sounds like science fiction, right? A visit to Dr. Noa Agmon’s robotics lab at Bar Ilan University demonstrates how reality is not that far off.

The AscTec Firefly, at the Bar Ilan University Robotics Lab

“Our vision is autonomous clusters of robots,” said Dr. Noa Agmon, a Bar Ilan robotics researcher specializing in the development of algorithms for robot clusters. “They will be able to decide how to divide the mission among themselves, without requiring human involvement in the process. They will be capable of overcoming difficulties and malfunctions. That’s our vision; until then, a person will control the clusters. You’ll have a security officer who notices a problem and decides to send robots to investigate, or a mission commander who decides, based on previously collected intelligence, that robots should monitor an area. The question is what exactly the other robots in the cluster should do. Do they have to divide the problematic area between themselves, and if so – how? In terms of robotic intelligence, that decision making can be done autonomously even today. Right now the human operator analyzes the problem, but the system knows how to present him or her with the best possible responses. In principle the system could order robots directly the same way it advises operators, but since a human is still in the loop it only provides recommendations.”
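The division question has a simple baseline: split the border into equal segments, and re-divide automatically when a robot is pulled away. The sketch below is a hypothetical minimal illustration (the function and robot names are invented for this example), not the lab’s actual allocation algorithm, which would also have to weigh terrain, threat levels and robot capabilities:

```python
def divide_border(border_length, robot_ids):
    """Split a 1-D border into equal patrol segments, one per robot.

    Hypothetical baseline: a real allocator would also weigh terrain,
    threat levels and each robot's capabilities.
    """
    if not robot_ids:
        raise ValueError("need at least one robot")
    seg = border_length / len(robot_ids)
    # Segment i runs from i*seg to (i+1)*seg along the border.
    return {rid: (i * seg, (i + 1) * seg) for i, rid in enumerate(robot_ids)}

# Three robots share a 90 m border; if one leaves to investigate,
# the remaining two re-divide the same stretch without a human in the loop.
full_team = divide_border(90.0, ["r1", "r2", "r3"])   # 30 m each
reduced_team = divide_border(90.0, ["r1", "r3"])      # 45 m each
```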

“Once a human controls the cluster, you can basically have them make only strategic decisions. In case of major malfunctions, for example, the system can alert operators when robots get lost; if the operator thinks the robots have run into an acute problem, he can get personally involved, but there still won’t be a need for operators to actually control individual robots.” Today robots are individually controlled by a single operator or by a team of operators. “The goal for the immediate future is to have one person operating as many robots as possible, and over time have that operator deal only with increasingly strategic decisions. The human won’t micro-manage; he’ll only decide, for example, whether an object is a bomb or not, or perhaps take momentary control of a robot to move it a short distance before letting it resume normal operation.”

One of the challenges is that robotic movement, as implemented today, can be analyzed – just like human patrols – by hostile elements who collect intelligence on border patrols. Dr. Agmon developed algorithms aimed at confusing enemies, so that even if they monitor border activity for a prolonged period of time they will still find it impossible to predict the robots’ movement patterns. “Our algorithms can make sure that even if an enemy knows everything about me, like Hezbollah militants who constantly monitor Israeli border activities, our algorithms could still tell robots what actions and strategies are most likely to surprise the enemy.” Another, equally important point: unlike soldiers, robots never get tired or depressed; they are always ready and focused on the mission at hand. Robotic patrols take on the physical dangers faced by soldiers, along with the boredom and frustration familiar to anyone who has had to go on patrol.
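That unpredictability can be illustrated with a toy version of a randomized perimeter patrol: at every step the robot keeps its direction with probability p and turns around otherwise, so an observer timing the patrol cannot be certain when a segment will next be visited. This is only an illustrative sketch; choosing p optimally against the adversary’s best response is what the actual algorithms do, and that analysis is not reproduced here:

```python
import random

def random_patrol_step(position, direction, p_continue, n_segments, rng=random):
    """One step of a randomized patrol on a perimeter of n_segments cells.

    position:  current segment index (0..n_segments-1)
    direction: +1 or -1 along the perimeter
    With probability p_continue the robot keeps its direction;
    otherwise it turns around. Illustrative sketch only.
    """
    if rng.random() >= p_continue:
        direction = -direction          # turn around
    position = (position + direction) % n_segments
    return position, direction
```

With p_continue = 1.0 this degenerates to the fully predictable back-and-forth patrol an enemy can time; values strictly between 0 and 1 are what make the visit times hard to anticipate.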

The RoboTICan Komodo, at the Bar Ilan University Robotics Lab

iHLS – Israel Homeland Security


Decision making and cooperation between robots happen not only on the ground but also in the air. An unmanned aircraft can be taught to think and make decisions autonomously, or to cooperate with other robots on the ground and in the air. “The aircraft can navigate in the field with no outside assistance, plotting its course by itself,” explains Dr. Agmon. “It has mission planning algorithms. I could, for example, have it enter a building, tell it to give me a map, and the aircraft will be able to find its own way from there on. It will fly, see an open door and pass through it, for example. It will know which areas it has already covered and which it hasn’t, and move on. It can do all these calculations by itself. It has considerable processing power allowing it to carry out all these activities, but it will also be able to operate as part of a team.” One of the current projects in Dr. Agmon’s lab concerns joint missions carried out by robots on the ground, aided by an aircraft that provides additional points of view. The research is part of the Israel Aerospace Industry ROBIL project, funded by the Ministry of Defense.
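The “knows which areas it already covered” behavior described above is commonly implemented with frontier-based exploration: from its current cell, the vehicle searches the map it has built so far for the nearest cell it has not yet observed, and heads there. A minimal grid-based sketch follows (hypothetical; the article does not describe the lab’s actual planner, and the grid encoding here is invented for illustration):

```python
from collections import deque

def next_frontier(grid, start):
    """Return the nearest unexplored cell reachable through free space.

    grid:  dict mapping (x, y) -> 'free', 'wall' or 'unknown'
    start: the robot's current (x, y) cell, assumed 'free'
    Returns an 'unknown' cell to head toward, or None when the
    reachable map is fully explored. Illustrative sketch only.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            cell = grid.get(nxt)
            if cell == 'unknown':
                return nxt                 # nearest frontier found
            if cell == 'free' and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                            # coverage complete
```

Breadth-first search guarantees the returned frontier is one of the closest unexplored cells, which is why this loop (“go to nearest frontier, sense, repeat”) eventually covers the whole reachable area.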

So what’s holding us up on the way to a defense world comprised entirely of robots?

Dr. Noa Agmon. Photo: Bar Ilan University

“There’s a very large gap between technical capabilities and decision making capabilities. Sensory capabilities require extreme processing power, and there are many technical difficulties having to do with sight (sensing). As for robot cluster algorithms, decision making capabilities are much more advanced than those actually in use today. I believe that ten years from now you could put one operator in charge of fifty drones, instead of two operators being in charge of a single drone. The system itself will be smart enough not only to have the robots communicate among themselves and decide what’s right and wrong, but also to function as a sort of central command – processing all this information and drawing conclusions from it.”

Can machines really replace humans? Can they be more intelligent?

“I don’t think they can be more intelligent, but I do think they can be more useful. In certain ways they can reach human levels or even surpass them. Today, if you give a human a map and ask him or her to plot the quickest course between two points, the human will find it much more difficult to process all that information, while a program will do it very quickly – it depends on the mission. Humans, however, will find it much easier to analyze images – at least for the next few years.”
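The route-planning point is easy to make concrete: on an unweighted road graph, a breadth-first search finds a shortest route almost instantly, a task humans find tedious on a large map. A hypothetical sketch (the graph and node names below are invented; a real planner would use A* with road distances):

```python
from collections import deque

def shortest_path(neighbors, start, goal):
    """Breadth-first shortest path on an unweighted road graph.

    neighbors: dict mapping each node to a list of adjacent nodes.
    Returns the list of nodes from start to goal, or None if
    unreachable. Illustrative sketch only.
    """
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []                      # walk predecessors back to start
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in neighbors.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None
```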

Some elements oppose the integration of autonomous robots into armies, claiming that robots can’t make decisions based on morals like a human soldier can. Do you agree?

“The bottom line is that robotic decision making is defined by humans. If you order a robot to do this but not that, if you define clear limits – people are still in charge. The problems are much more complex than whether or not a given missile should be launched. If the system is almost 100% reliable, it shouldn’t be a problem. People also make mistakes sometimes.”