Robots are increasingly deployed in environments that are unsafe or inaccessible for humans, from collapsed buildings and underground tunnels to industrial sites and disaster zones. In many of these settings, visibility is extremely limited or nonexistent. Robots typically rely on cameras and computer vision to navigate, map their surroundings, and identify objects, but most vision systems are designed for visible light. When light disappears, those systems struggle, forcing developers to redesign perception stacks or add costly hardware.
A new machine learning approach aims to remove that limitation. Researchers from the University of Manchester have demonstrated a method that allows robots to operate effectively in total darkness by using infrared cameras paired with image reconstruction algorithms. Infrared sensors can detect reflected radiation even without visible light, but their raw output often lacks the clarity required by standard vision software. The team’s solution uses machine learning to convert infrared imagery into clear, camera-like images that existing robotic vision algorithms can already understand.
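The article does not describe the model in detail, but the core idea, learning a mapping from raw infrared frames to clean camera-like images, is a form of image-to-image translation. The following is a minimal sketch in PyTorch; the IRToRGBNet name, layer sizes, and L1 training loss are illustrative assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class IRToRGBNet(nn.Module):
    """Toy encoder-decoder that maps a 1-channel infrared frame
    to a 3-channel camera-like image with values in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # downsample
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # downsample
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # upsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # upsample
            nn.Sigmoid(),  # constrain output to a valid image range
        )

    def forward(self, ir_frame: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(ir_frame))

# Training would pair raw infrared frames with reference visible-light
# images and minimize a reconstruction loss such as L1 (placeholder data here):
model = IRToRGBNet()
ir_batch = torch.rand(4, 1, 128, 128)    # placeholder infrared frames
rgb_target = torch.rand(4, 3, 128, 128)  # placeholder paired visible images
loss = nn.functional.l1_loss(model(ir_batch), rgb_target)
```

In practice, models of this kind are often trained with adversarial or perceptual losses as well, but any reconstruction objective fits the pattern the researchers describe.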
The key advantage of the method is compatibility. Instead of rewriting navigation or object-recognition software, the system reconstructs infrared data into a form that current algorithms can process without modification. This reduces computational overhead and shortens deployment time, making it easier to adapt robots for low-visibility missions. According to TechXplore, this approach also lowers development costs, since it builds on hardware and software platforms that are already widely used.
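To make the compatibility point concrete, the sketch below feeds a reconstructed frame to an unmodified, off-the-shelf detector from torchvision. The single-convolution reconstructor is just a self-contained stand-in for a trained model like the one sketched above; only the use of a stock, unmodified detector reflects what the article describes.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in for a trained IR-to-RGB reconstruction model; a single 1x1
# convolution keeps this example self-contained and runnable.
reconstructor = nn.Conv2d(1, 3, kernel_size=1)

# Off-the-shelf detector trained on ordinary visible-light images;
# nothing about it is modified for the infrared use case.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

ir_frame = torch.rand(1, 1, 256, 256)  # placeholder raw infrared frame
with torch.no_grad():
    rgb_like = torch.sigmoid(reconstructor(ir_frame))  # IR -> camera-like image
    detections = detector(list(rgb_like))  # unmodified downstream vision stack
```

Because the reconstruction step sits entirely in front of the detector, the rest of the perception stack, and the navigation logic built on top of it, never needs to know the input was infrared.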
For defense and homeland security applications, the implications are straightforward: military and security robots are often required to operate at night, underground, or inside structures where lighting is unavailable or deliberately suppressed. Systems that can “see” without visible light support reconnaissance, search-and-rescue, tunnel mapping, and inspection tasks while reducing reliance on external illumination that could reveal a robot’s position. The ability to reuse existing vision software also simplifies integration into current robotic platforms.
The researchers emphasize that their work is not limited to infrared cameras. The same machine learning framework could be adapted to other sensing modalities, such as thermal imaging or sonar, potentially extending robotic perception into environments affected by smoke, dust, or other visual obstructions. This opens the door to more resilient robotic systems that can maintain situational awareness across a wider range of conditions.
While the work is still at a research stage, it points to a practical path for improving robotic perception in environments where traditional vision fails, without overhauling the systems that robots already rely on to navigate and act.
The research was published here.