New Development to Help Drones Avoid Flying Objects


Event cameras are sensors that cannot interpret a scene visually the way a regular camera does, but they are extremely sensitive to motion, responding to changes in a scene on a per-pixel basis within microseconds. A regular camera, which detects motion by comparing one frame with the next, takes milliseconds to do the same thing. That might not seem like much, but for a fast-moving drone it can easily be the difference between crashing into an obstacle and avoiding it.
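
To make the distinction concrete, here is a minimal sketch of the idealized event-generation model: a pixel fires an event whenever its log-intensity changes by more than a contrast threshold. Real event cameras do this asynchronously in hardware, pixel by pixel; simulating it from a pair of frames, as below, and the threshold value of 0.2 are illustrative assumptions, not the Zurich group's implementation.

```python
import numpy as np

def events_from_frames(prev_log, curr_log, threshold=0.2):
    """Emit (x, y, polarity) events wherever the per-pixel change in
    log-intensity exceeds a contrast threshold (idealized model)."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    # Polarity: +1 where the pixel got brighter, -1 where it got darker.
    return [(int(x), int(y), int(np.sign(diff[y, x]))) for y, x in zip(ys, xs)]

# A bright spot moves one pixel to the right between two samples.
prev = np.zeros((4, 4)); prev[1, 1] = 1.0
curr = np.zeros((4, 4)); curr[1, 2] = 1.0
print(events_from_frames(np.log1p(prev), np.log1p(curr)))
# [(1, 1, -1), (2, 1, 1)] -- an OFF event where the spot left, an ON
# event where it arrived; every static pixel stays silent.
```

Note that only the two changed pixels produce any output: that sparsity is part of why an event stream can be processed with so much less latency than full frames.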

Researchers have developed a technology that uses event cameras on drones to dodge thrown objects. The question the University of Zurich researchers wanted to answer was how much perception latency actually affects the maximum speed at which a drone can move while still being able to successfully dodge obstacles.

Watch the team's demonstration video: https://www.youtube.com/embed/sbJAi6SXOQw

The Robotics and Perception Group at the University of Zurich pioneered the use of event cameras on drones. They found that an event camera can help a drone moving at high speed dodge danger. To validate their research, they hurled soccer balls at a drone as hard as they could and watched whether it could dodge them. The tests mimicked obstacle encounters in high-speed flight, since it is the relative velocity that matters. The researchers say that in each case, motion capture data confirmed that “the ball would have hit the vehicle if the avoidance maneuver was not executed.”

The time it takes a robot (of any kind) to avoid an obstacle is constrained primarily by perception latency, which includes perceiving the environment, processing those data, and then generating control commands.

Depending on the sensor, the algorithm, and the computer you're using, typical perception latency is anywhere from tens to hundreds of milliseconds. The sensor itself is usually the biggest contributor to this latency, which is what makes event cameras so appealing: they can spit out data with a theoretical latency measured in nanoseconds.
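
The cost of that latency is easy to quantify: at a given closing speed, latency translates directly into distance flown before the drone can even begin to react. A back-of-the-envelope sketch, with illustrative speed and latency figures that are not from the study:

```python
def blind_distance(speed_mps, latency_s):
    """Distance covered before the first control command can take effect."""
    return speed_mps * latency_s

# A drone closing on an obstacle at 10 m/s (illustrative figures only).
for label, latency_s in [("slow frame pipeline ", 0.100),
                         ("fast frame pipeline ", 0.010),
                         ("event-based pipeline", 0.001)]:
    print(f"{label}: {blind_distance(10.0, latency_s) * 100:.1f} cm flown blind")
```

The blind distance scales linearly with latency, so cutting latency by two orders of magnitude shrinks it by the same factor.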

When you compare the kind of traditional vision sensors you'd find on a research-grade quadrotor (both mono and stereo cameras) with an event camera, it turns out that the difference is not all that significant, as long as the quadrotor isn't moving too quickly.

As the speed of the quadrotor increases, though, event cameras start to make a difference: a quadrotor with a thrust-to-weight ratio of 20, for example, could achieve maximum safe obstacle avoidance speeds about 12 percent higher than if it were using a traditional camera. Quadrotors this powerful don't exist yet (maximum thrust-to-weight ratios are closer to 10), but we're getting there, according to spectrum.ieee.org.
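
Where does a number like that come from? A toy model makes the shape of the trade-off visible: an obstacle at distance d closing at speed v leaves d/v seconds to react; subtract the perception latency, and the remaining time must be enough to accelerate sideways out of the way. The distances, latencies, and lateral-acceleration model below are illustrative assumptions, not the paper's actual analysis, so its outputs won't match the 12 percent figure.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(dist_m, dodge_m, twr, latency_s):
    """Largest closing speed at which the drone can still displace itself
    sideways by dodge_m before an obstacle dist_m away arrives, assuming
    lateral acceleration ~ twr * G (a deliberately crude model)."""
    t_dodge = math.sqrt(2.0 * dodge_m / (twr * G))  # time to shift by dodge_m
    return dist_m / (latency_s + t_dodge)           # budget: latency + dodge time

# Obstacle seen 5 m out, 0.3 m of sideways displacement needed (illustrative).
for twr in (10, 20):
    v_frame = max_safe_speed(5.0, 0.3, twr, 0.030)  # ~30 ms frame pipeline
    v_event = max_safe_speed(5.0, 0.3, twr, 0.003)  # ~3 ms event pipeline
    print(f"TWR {twr}: frame {v_frame:.0f} m/s vs event {v_event:.0f} m/s")
```

Crude as it is, the model shows why the advantage grows with power: the higher the thrust-to-weight ratio, the shorter the dodge itself takes, so the fixed perception latency eats a larger share of the time budget, and shaving it off buys proportionally more speed.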

Other advantages of event cameras are that they don't suffer from motion blur, and they're much more resilient to challenging lighting conditions, able to work in the dark as well as in high-dynamic-range situations, like looking into the sun.