Self-Driving Cars Get Enhanced Object-Tracking Abilities



Researchers at the University of Toronto Institute for Aerospace Studies (UTIAS) have introduced new tools that improve the safety and reliability of autonomous vehicles by enhancing the reasoning ability of their robotic systems.

They do so by employing multi-object tracking, which robotic systems use to follow the position and motion of surrounding objects (such as vehicles, pedestrians, and cyclists) and plan the path of a self-driving car through densely populated areas. According to Techxplore, tracking information is collected from computer vision sensors (2D camera images and 3D LIDAR scans) and filtered at each time step, 10 times per second, to predict the future movement of the objects in view.
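
To make the per-object filtering step concrete, the sketch below runs a textbook constant-velocity Kalman filter at a 10 Hz update rate, correcting its prediction each time a new detection arrives. This is a generic illustration of the filtering idea described above, not the UTIAS system; the noise values, variable names, and measurements are assumptions.

```python
# Illustrative sketch only: a constant-velocity Kalman filter step for one tracked
# object, updated 10 times per second as new detections arrive. All values here
# (noise matrices, initial state, detections) are assumed for illustration.
import numpy as np

DT = 0.1  # tracking update interval: 10 Hz

# State: [x, y, vx, vy]; constant-velocity motion model
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],   # only position is measured (e.g., a detection centroid)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01          # process noise (assumed)
R = np.eye(2) * 0.1           # measurement noise (assumed)

def predict(x, P):
    """Propagate the track forward one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a new detection z = [x, y]."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One pedestrian-like track, updated over a few consecutive frames
x = np.array([0.0, 0.0, 1.0, 0.5])     # initial state guess
P = np.eye(4)
for z in [np.array([0.11, 0.04]), np.array([0.19, 0.11]), np.array([0.32, 0.14])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```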

The researchers published a paper introducing the Sliding Window Tracker (SWTrack) — a graph-based optimization method that uses additional temporal information to prevent missed objects and improve the performance of tracking methods, particularly when objects are obstructed from the robot’s point of view.
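
To illustrate why reasoning over a window of past frames helps when objects are briefly occluded, the sketch below keeps tracks alive for a few frames and re-links them when a matching detection reappears. This is a simplified nearest-neighbour assignment demo, not the graph-based optimization formulated in the SWTrack paper; the window size, gating threshold, and function names are assumptions.

```python
# Illustrative sketch only, not the SWTrack algorithm. It shows the benefit of a
# temporal window: a track occluded in one frame can be linked to a detection two
# frames later instead of being dropped. WINDOW, GATE, and link_frames are assumed.
import numpy as np
from scipy.optimize import linear_sum_assignment

WINDOW = 3        # number of frames a track may go unseen before it is dropped
GATE = 2.0        # maximum distance allowed for a track-detection link

def link_frames(tracks, detections):
    """Match surviving tracks to the current detections by minimum-cost assignment."""
    if not tracks or len(detections) == 0:
        return [], list(range(len(detections)))
    last_pos = np.array([tr["positions"][-1] for tr in tracks])
    cost = np.linalg.norm(last_pos[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matched, unmatched_dets = [], set(range(len(detections)))
    for r, c in zip(rows, cols):
        if cost[r, c] <= GATE:
            matched.append((r, c))
            unmatched_dets.discard(c)
    return matched, sorted(unmatched_dets)

# Toy scenario: the second object is occluded in frame 1 and reappears in frame 2.
frames = [
    np.array([[0.0, 0.0], [5.0, 5.0]]),   # frame 0: two detections
    np.array([[0.5, 0.1]]),               # frame 1: second object occluded
    np.array([[1.0, 0.2], [5.9, 5.8]]),   # frame 2: it reappears
]

tracks = []
for t, dets in enumerate(frames):
    # Keep tracks that were seen within the sliding window; drop the rest
    tracks = [tr for tr in tracks if t - tr["last_seen"] < WINDOW]
    matched, unmatched = link_frames(tracks, dets)
    for r, c in matched:
        tracks[r]["positions"].append(dets[c])
        tracks[r]["last_seen"] = t
    for c in unmatched:
        tracks.append({"positions": [dets[c]], "last_seen": t})

for i, tr in enumerate(tracks):
    print(f"track {i}: {len(tr['positions'])} observations, last seen in frame {tr['last_seen']}")
```

With a window of a single frame, the occluded object would have been discarded after frame 1 and re-detected as a brand-new track; the window lets the tracker bridge the gap instead.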

The team reports that it trained, tested, and validated its algorithm on field data obtained through nuScenes, a large-scale public dataset for autonomous driving recorded on city roads around the world.

Sandro Papais, a Ph.D. student at UTIAS who worked on the project, said he is looking forward to building on the idea of improving robot memory and extending it to other areas of robotics infrastructure: “This is just the beginning. We’re working on the tracking problem, but also other robot problems, where we can incorporate more temporal information to enhance perception and robotic reasoning.”

Professor Steven Waslander, director of UTIAS’s Toronto Robotics and AI Laboratory, says the advancement outlined in the paper builds on work his lab has been focusing on for several years: “[the lab] has been working on assessing perception uncertainty and expanding temporal reasoning for robotics for multiple years now, as they are the key roadblocks to deploying robots in the open world more broadly. We desperately need AI methods that can understand the persistence of objects over time, and ones that are aware of their own limitations and will stop and reason when something new or unexpected appears in their path. This is what our research aims to do.”