Researchers at Stony Brook University have created a navigation system that allows robots to detect objects and navigate spaces beyond their direct line of sight. The technology uses single-photon LiDAR sensors, which are lightweight and commercially available, to sense what is around corners by detecting faint light signals reflected off surfaces.
The research was inspired by the convex mirrors often placed at blind intersections to help drivers see oncoming traffic. Akshat Dave, assistant professor of computer science at Stony Brook and former postdoctoral associate at the MIT Media Lab, explained the approach: “We asked ourselves, what if a robot could use walls the same way — by turning walls into mirrors?” The team’s method enables robots to process indirect light information in order to map hidden areas.
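The mirror analogy can be illustrated with simple geometry: light from a hidden object bounces off a wall, so the wall behaves like a mirror holding a virtual image of that object, and the bounce path length (which a single-photon LiDAR's time-of-flight measurement encodes) equals the straight-line distance to that virtual image. The sketch below is purely illustrative and hypothetical, not the team's actual pipeline; the scene layout, variable names, and wall placement are all assumptions.

```python
import numpy as np

def reflect_across_plane(point, plane_point, plane_normal):
    """Reflect a 3D point across a plane -- the 'wall as mirror' idea."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

# Hypothetical scene (metres): a flat wall lying on the plane x = 0.
wall_point = np.array([0.0, 0.0, 0.0])
wall_normal = np.array([1.0, 0.0, 0.0])

sensor = np.array([2.0, -1.0, 1.0])  # robot's LiDAR, with a view of the wall
hidden = np.array([1.5, 2.0, 1.0])   # object around the corner, out of sight

# The wall acts like a mirror: the hidden object appears at its virtual image.
virtual = reflect_across_plane(hidden, wall_point, wall_normal)

# Find where the ray from the sensor toward the virtual image hits the wall,
# then compare the real bounce path against the straight-line distance.
t = (wall_point - sensor)[0] / (virtual - sensor)[0]
bounce_point = sensor + t * (virtual - sensor)
bounce_path = (np.linalg.norm(bounce_point - sensor)
               + np.linalg.norm(hidden - bounce_point))
direct_to_virtual = np.linalg.norm(virtual - sensor)
print(np.isclose(bounce_path, direct_to_virtual))  # True
```

By mirror symmetry the two distances agree exactly, which is why a sensor that can time faint indirect returns can treat the wall as a window onto the hidden scene.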
Dave also discussed broader applications for the technology. “We want to take this project beyond navigation, to challenges that pose real Non-Line-of-Sight problems, like teaching robots to lift hidden objects, exploring and mapping unreachable areas, and conducting search and rescue operations,” he said. “These systems will be able to see the world in ways we do not.”
The project is titled “Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR.” It is supported by funding from the National Science Foundation (CMMI-2153855) as well as the NSF Graduate Research Fellowship.
More details about this development can be found on the AI Innovation Institute website in an article authored by Ankita Nagpal.