
Over the past few decades, engineering has produced devices with increasingly advanced capabilities. One capability that has recently been significantly enhanced is "spatial computing": the use of computers, robots, and other electronic devices to perceive their environment and build digital representations of it. Together with sensors, mixed reality (MR), and spatial programming, this makes advanced sensing and mapping systems possible.

A recent collaboration between Microsoft's Mixed Reality and AI Lab and ETH Zurich in Switzerland has resulted in a new mixed reality and robotics framework that improves spatial programming applications. After implementing and testing the framework, the researchers presented their findings in a paper published on arXiv.

According to the paper, combining spatial programming with the egocentric sensors on mixed reality devices allows those devices to perceive and interpret human actions and translate them into spatial meaning. The researchers expect these technologies to open up a variety of new possibilities for human-robot collaboration, including mission planning, gesture-based control, and remote operation. All of the systems described in the study rely on Microsoft's HoloLens MR glasses.

The study examined three systems with distinct functions. In the first, a person wearing MR glasses walks through a room and places holograms that serve as waypoints defining a robot's trajectory; the robot then processes these waypoints and follows the resulting path while examining the environment. The second system lets a person communicate with and control the robot more directly, for instance by using hand gestures to direct its movements. The third system focuses on remote operation, allowing a user to control all of the robot's activities from a distance through its camera view.
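The first system's core step, turning a handful of hologram waypoints into a path a robot can follow, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the `Waypoint` structure, the 2D world frame, and the interpolation step size are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    # Position of a hologram placed by the MR headset, expressed in a
    # shared world frame (meters). Hypothetical structure for illustration.
    x: float
    y: float

def interpolate_trajectory(waypoints, step=0.25):
    """Expand sparse hologram waypoints into a dense path by linear
    interpolation between consecutive waypoints, so a robot controller
    can track intermediate positions."""
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        dx, dy = b.x - a.x, b.y - a.y
        dist = (dx ** 2 + dy ** 2) ** 0.5
        n = max(1, int(dist / step))  # number of sub-steps on this segment
        for i in range(n):
            t = i / n
            path.append((a.x + t * dx, a.y + t * dy))
    path.append((waypoints[-1].x, waypoints[-1].y))  # include final waypoint
    return path

# Three holograms placed while walking an L-shaped route through a room.
waypoints = [Waypoint(0.0, 0.0), Waypoint(1.0, 0.0), Waypoint(1.0, 1.0)]
path = interpolate_trajectory(waypoints)
```

In a real system the waypoints would carry full 3D poses and the path would be fed to a motion planner that accounts for obstacles; the sketch only shows the basic idea of densifying user-placed waypoints into a followable trajectory.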