A new technology could protect robots’ communication networks from malicious hackers. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique that could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.
“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, a researcher from MIT and senior author of the new paper.
“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense, there is even more urgency that we think about this problem.”
“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” says David Hsu, a professor of computer science at the National University of Singapore. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”
According to the university’s website, most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.
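The article does not spell out the aggregation rule, but a simple majority vote captures the flavor. The Python sketch below is illustrative only; the action names are invented.

```python
# A minimal sketch, assuming a simple majority rule; the article does not
# specify the exact aggregation scheme, and the action names are invented.
from collections import Counter

def majority_vote(recommendations):
    """Each robot submits one recommendation; the most common one wins."""
    decision, _ = Counter(recommendations).most_common(1)[0]
    return decision

# Three robots, each voting from its own limited, local observations:
print(majority_vote(["go_left", "go_left", "go_right"]))  # -> go_left
```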
A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.”
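Continuing the hypothetical sketch above, a single attacker forging a few extra identities is enough to flip the outcome:

```python
# One attacker forging four extra identities flips the decision
# without compromising any real robot.
ballots = ["go_left", "go_left", "go_right"] + ["go_right"] * 4
print(majority_vote(ballots))  # -> go_right, the attacker's choice
```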
The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.
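The paper’s actual classifier is not described here; as a rough sketch of the discounting idea, one could greedily cluster votes whose fingerprints nearly coincide (fingerprints are abstracted below as plain feature vectors, an assumption) and keep only one vote per cluster:

```python
# A minimal sketch of the discounting idea, not the paper's classifier.
# Fingerprints are abstracted as plain feature vectors (an assumption);
# votes whose fingerprints nearly coincide collapse into a single vote.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def deduplicate(votes, fingerprints, threshold=0.95):
    """Greedily keep one vote per distinct-looking transmitter."""
    kept_votes, kept_prints = [], []
    for vote, fp in zip(votes, fingerprints):
        if any(cosine(fp, seen) > threshold for seen in kept_prints):
            continue  # probably the same physical transmitter: discount it
        kept_votes.append(vote)
        kept_prints.append(fp)
    return kept_votes

rng = np.random.default_rng(0)
honest = [rng.normal(size=16) for _ in range(3)]   # three real robots
attacker = rng.normal(size=16)                     # one spoofing transmitter
spoofed = [attacker + rng.normal(scale=0.01, size=16) for _ in range(5)]
votes = ["A", "B", "C"] + ["D"] * 5                # attacker floods with "D"
print(deduplicate(votes, honest + spoofed))        # -> ['A', 'B', 'C', 'D']
```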
In their paper, the researchers consider a problem known as “coverage,” in which robots position themselves to distribute some service across a geographic area — communication links, monitoring, or the like. In this case, each robot’s “vote” is simply its report of its position, which the other robots use to determine their own.
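The article names the coverage problem but not the controller. A common choice is a Lloyd-style algorithm, in which each robot steps toward the centroid of the workspace region nearest to it; the sketch below estimates those regions by Monte Carlo sampling, with all numbers invented for illustration.

```python
# A minimal sketch of a Lloyd-style coverage controller, a common choice
# for this problem (the paper's exact algorithm may differ). Each robot
# steps toward the centroid of the region nearest to it, estimated here
# by sampling the workspace; workspace size and gain are invented.
import numpy as np

def coverage_step(positions, workspace=(10.0, 10.0), samples=20_000, gain=0.5):
    rng = np.random.default_rng(1)
    pts = rng.uniform([0.0, 0.0], workspace, size=(samples, 2))
    # Assign every sample point to its nearest robot (its Voronoi region).
    dists = np.linalg.norm(pts[:, None, :] - positions[None, :, :], axis=2)
    owner = dists.argmin(axis=1)
    moved = positions.copy()
    for i in range(len(positions)):
        cell = pts[owner == i]
        if len(cell):
            moved[i] += gain * (cell.mean(axis=0) - positions[i])
    return moved

robots = np.array([[1.0, 1.0], [1.5, 1.2], [2.0, 9.0]])
for _ in range(30):
    robots = coverage_step(robots)
print(robots.round(2))  # the three robots spread out across the square
```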
The paper compares the results of a common coverage algorithm under normal circumstances and the results produced when the new system is actively thwarting a spoofing attack. Even when 75 percent of the robots in the system have been infiltrated by such an attack, the robots’ positions are within 3 centimeters of what they should be. To verify the theoretical predictions, the researchers also implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter.
The MIT researchers found a way to make accurate location measurements using only two antennas, spaced about 8 inches apart. Those antennas must move through space to simulate measurements from multiple antennas, a requirement that autonomous robots meet easily. In the experiments reported in the new paper, for instance, the autonomous helicopter hovered in place and rotated around its axis to make its measurements.
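As a rough illustration of why rotation helps, the phase difference measured between the two antennas at each rotation angle can be matched against the phase difference every candidate arrival direction would predict; only the true direction explains all the snapshots coherently. This is a textbook direction-finding sketch under idealized assumptions (a single propagation path, known geometry, a 2.4 GHz carrier), not the paper’s method.

```python
# A textbook direction-finding sketch under idealized assumptions (single
# propagation path, known geometry, 2.4 GHz carrier); not the paper's method.
import numpy as np

LAM = 3e8 / 2.4e9   # Wi-Fi wavelength, roughly 12.5 cm
D = 0.2             # antenna spacing, about 8 inches, in meters

def phase_delta(rot, arrival):
    """Phase difference across the antenna pair, rig rotated by `rot`."""
    baseline = D * np.array([np.cos(rot), np.sin(rot)])
    wavevec = np.array([np.cos(arrival), np.sin(arrival)])
    return 2 * np.pi / LAM * baseline @ wavevec

true_doa = np.deg2rad(40.0)                 # hidden ground truth
rotations = np.linspace(0, 2 * np.pi, 64)   # the robot spins in place
measured = np.array([phase_delta(r, true_doa) for r in rotations])

# Score every candidate direction by how coherently it explains the data.
candidates = np.deg2rad(np.arange(360))
scores = [
    np.abs(np.exp(1j * (measured
        - np.array([phase_delta(r, c) for r in rotations]))).sum())
    for c in candidates
]
print("estimated arrival:", np.rad2deg(candidates[int(np.argmax(scores))]))
```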
When a Wi-Fi transmitter broadcasts a signal, some of it travels in a direct path toward the receiver, but much of it bounces off of obstacles in the environment, arriving at the receiver from different directions. For location determination, that’s a problem, but for radio fingerprinting, it’s an advantage: The different energies of signals arriving from different directions give each transmitter a distinctive profile.
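As a toy model of that last point, one can treat each multipath component as depositing energy in a lobe around its arrival direction; two transmitters in different spots then produce profiles that barely overlap. All angles and amplitudes below are invented.

```python
# A toy illustration (all angles and amplitudes invented): each multipath
# component deposits energy in a lobe around its arrival direction, so two
# transmitters in different spots yield profiles that barely overlap.
import numpy as np

ANGLES = np.deg2rad(np.arange(0, 360, 4))

def angular_profile(paths):
    """paths: (arrival_angle_rad, amplitude) multipath components."""
    profile = np.zeros_like(ANGLES)
    for angle, amp in paths:
        wrapped = np.angle(np.exp(1j * (ANGLES - angle)))  # wrap to [-pi, pi]
        profile += amp**2 * np.exp(-(wrapped**2) / 0.05)
    return profile / np.linalg.norm(profile)

tx_a = angular_profile([(np.deg2rad(40), 1.0), (np.deg2rad(200), 0.6)])
tx_b = angular_profile([(np.deg2rad(90), 1.0), (np.deg2rad(310), 0.4)])
print("self-similarity: ", round(float(tx_a @ tx_a), 3))   # 1.0
print("cross-similarity:", round(float(tx_a @ tx_b), 3))   # near 0: distinct
```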