New AI System Teaches Four-Legged Robots to Adapt Movement Like Animals

A new AI-based approach enables four-legged robots to move with greater flexibility and awareness than before, automatically adjusting their gait to the terrain without explicit instructions or prior training on that particular surface. The technology, detailed in Nature Machine Intelligence, is considered a meaningful step toward developing robots that can operate reliably in unpredictable or dangerous environments.

Inspired by the way animals such as dogs adapt their movement, researchers from the University of Leeds and UCL created a system that allows a robot to switch between gaits, such as trotting or running, depending on the terrain beneath it. Instead of relying on fixed commands or being pre-programmed for every situation, the robot uses learned strategies to decide how to move.
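
To give a sense of what a terrain-dependent gait choice involves, here is a deliberately simplified sketch in Python. In the published system this decision is learned end to end rather than hand-coded; the toy thresholds, gait set, and signal names below are illustrative assumptions only.

```python
from enum import Enum


class Gait(Enum):
    WALK = 0
    TROT = 1
    RUN = 2


def select_gait(slip_estimate: float, body_pitch_rate: float) -> Gait:
    """Toy stand-in for a learned gait-selection policy.

    High foot slip or strong body motion suggests rough or loose ground,
    where a slower, more stable gait is preferable; on easy ground a
    faster gait is chosen for speed and efficiency.
    """
    roughness = 0.7 * slip_estimate + 0.3 * abs(body_pitch_rate)
    if roughness > 0.6:
        return Gait.WALK
    if roughness > 0.3:
        return Gait.TROT
    return Gait.RUN
```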

According to TechXplore, the system was developed using deep reinforcement learning. In practice, this means the robot was trained entirely in a simulated environment where it encountered a wide range of virtual terrains. Over several hours, it learned not only how to walk but also how to choose the most effective gait for the conditions. The AI acquired stable, energy-efficient movement in under a day, faster than many animals learn to do the same.
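
As a rough illustration of that training setup, the sketch below rolls out a small neural-network policy on randomized virtual terrains and updates it with a simple policy gradient. The simulator interface (a hypothetical env with reset() and step()), the network sizes, and the reward shaping are assumptions made for the example, not the researchers' actual code.

```python
import torch
import torch.nn as nn


class GaitPolicy(nn.Module):
    """Small network mapping proprioceptive observations to joint targets."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def train(env, policy: GaitPolicy, epochs: int = 1000, horizon: int = 500):
    """Toy policy-gradient loop over randomized simulated terrains.

    `env` is a hypothetical simulator whose reset()/step() return
    proprioceptive observations, a scalar reward (assumed to favor forward
    progress and penalize energy use), and a done flag.
    """
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    for _ in range(epochs):
        obs = torch.as_tensor(env.reset(), dtype=torch.float32)
        log_probs, rewards = [], []
        for _ in range(horizon):
            dist = torch.distributions.Normal(policy(obs), 0.1)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            next_obs, reward, done = env.step(action.numpy())
            rewards.append(float(reward))
            obs = torch.as_tensor(next_obs, dtype=torch.float32)
            if done:
                break
        # Undiscounted reward-to-go for each step of the rollout.
        returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
        loss = -(torch.stack(log_probs) * returns).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```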

Once training was complete, the system was deployed on a real robot without any additional adjustments. It successfully navigated overgrown vegetation, uneven wooden planks, loose materials, and other surfaces it had never encountered during training. The robot also remained stable when physically disturbed, demonstrating the AI's ability to recover from unexpected challenges.

Unlike many robotic systems, this one does not rely on vision or external sensors. Instead, it uses internal motion feedback to adapt in real time. The underlying framework combines several key elements of animal-like motion, such as remembering useful gait patterns and adjusting step by step to changes in the environment.
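
To illustrate what vision-free control can look like in practice, the snippet below assembles a policy input purely from internal signals: joint encoders, an inertial measurement unit, the previous action, and a learned memory of the recent gait. The field names and dimensions are assumptions for the sketch, not the published framework.

```python
import numpy as np


def build_observation(joint_pos, joint_vel, body_ang_vel, gravity_vec,
                      prev_action, gait_memory):
    """Assemble a vision-free policy input from internal signals only.

    Everything here can be read from the robot's own joint encoders and
    IMU, plus a short learned memory of the recent gait; no cameras or
    external sensors are involved. Sizes in the comments are illustrative.
    """
    return np.concatenate([
        np.asarray(joint_pos),     # e.g. 12 joint angles (3 per leg)
        np.asarray(joint_vel),     # 12 joint velocities
        np.asarray(body_ang_vel),  # body angular velocity from the IMU
        np.asarray(gravity_vec),   # gravity direction in the body frame
        np.asarray(prev_action),   # previously commanded joint targets
        np.asarray(gait_memory),   # learned memory of the recent gait pattern
    ])
```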

Although tested on a single robot, the framework is designed to work across different four-legged platforms. It could help improve robotic mobility for tasks like disaster response, inspection of remote infrastructure, or other fieldwork where adaptability is essential and conditions can’t be predicted in advance.