
Though robots don’t have eyes or retinas, technology commonly used by ophthalmologists can give them the means to see and interact with the world in a safer, more natural way.

Researchers have improved a LiDAR system by applying lessons learned from optical coherence tomography (OCT) imaging technology, achieving faster and more accurate results that could eventually be implemented in autonomous vehicles and manufacturing robots. In OCT devices, which are the optical equivalent of ultrasound, the travel time of light waves is measured by comparing their phase to that of identical light waves that have travelled the same distance without encountering an object.

Research published in Nature Communications shows that researchers from Duke University have successfully applied the principles learned from their work on OCT technology to develop a frequency-modulated continuous-wave (FMCW) LiDAR system that achieves submillimeter localization accuracy while providing 25 times the data throughput of previous demonstrations.

The majority of robotic devices today are equipped with LiDAR systems. Because LiDAR systems are sensitive to sunlight, they are difficult to use in many 3D vision applications. A second challenge is improving depth resolution, which is essential for orienting and completing work in large areas such as highways and factories. Researchers are increasingly turning to FMCW LiDAR to address these challenges, as it is much cheaper to manufacture and operates according to principles similar to OCT.
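The basic ranging principle behind FMCW LiDAR can be illustrated with a minimal sketch (the parameters below are illustrative assumptions, not figures from the Duke system): the laser frequency is swept linearly, and mixing the returning light with the outgoing sweep produces a beat frequency proportional to the target’s distance.

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_beat(beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Range for an ideal linear FMCW chirp: R = c * f_beat * T / (2 * B).

    beat_hz      -- measured beat frequency between outgoing and returning light
    bandwidth_hz -- total frequency excursion of the chirp (B)
    chirp_s      -- duration of one chirp (T)
    """
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# Illustrative numbers (assumed): a 1 GHz sweep over 10 microseconds,
# with a measured 1 MHz beat tone between the two signals.
distance = range_from_beat(beat_hz=1e6, bandwidth_hz=1e9, chirp_s=10e-6)
print(f"{distance:.3f} m")  # ~1.499 m
```

Because the range depends on a frequency measurement rather than a direct time-of-flight pulse, the receiver can reject ambient sunlight, which does not beat coherently with the swept laser.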

So what’s next? The researchers plan to develop a new generation of LiDAR-based 3D cameras that can easily integrate 3D vision into a variety of products, allowing robots to see the world the way humans do – in 3D.

Sound interesting? Discover more innovations at AUS&R 2022 – the Unmanned Systems and Robotics Conference – the tenth annual event, by iHLS.