In a groundbreaking advancement for medical robotics, researchers have successfully trained a robot to perform surgical tasks simply by watching videos of seasoned surgeons. This innovation, which leverages imitation learning, marks a significant step toward autonomous robotic surgery.
The research, led by Johns Hopkins University and Stanford University, demonstrated how the da Vinci Surgical System robot was trained to perform basic yet essential surgical tasks, such as manipulating a needle, lifting body tissue, and suturing. Instead of manually programming each step of a procedure, the team used hundreds of surgical videos, recorded by wrist cameras mounted on da Vinci robots during actual surgeries, to teach the robot how to perform these tasks by mimicking the movements of skilled human surgeons.
“It’s really magical to have this model, and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” said Axel Krieger, senior author of the study, according to TechXplore. The model is built on the same machine-learning architecture that powers large language models. Instead of working with text, however, it works in kinematics: the mathematical description of the angles and motions the robotic arms must execute during a surgical procedure.
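For readers who want a concrete picture of what such a model might look like, the sketch below shows a minimal imitation policy in PyTorch: a small vision encoder turns each camera frame into a token, a transformer (the same family of architecture behind large language models) aggregates the frames over time, and a linear head predicts a kinematic action vector per frame. Every class name, dimension, and hyperparameter here is an illustrative assumption, not taken from the study's code.

```python
# Hypothetical sketch of a vision-to-kinematics imitation policy.
# All names and dimensions are illustrative, not from the study's codebase.
import torch
import torch.nn as nn


class SurgicalImitationPolicy(nn.Module):
    """Maps a short window of camera frames to predicted kinematic actions."""

    def __init__(self, action_dim: int = 7, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        # A small CNN turns each frame into a single d_model-sized token.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # A transformer encoder (the LLM-style component) models the
        # temporal relationship between frames.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal_model = nn.TransformerEncoder(encoder_layer, n_layers)
        # The head outputs one kinematic action vector per frame,
        # e.g. joint angles or an end-effector pose.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, height, width)
        b, t, c, h, w = frames.shape
        tokens = self.frame_encoder(frames.reshape(b * t, c, h, w))
        tokens = tokens.reshape(b, t, -1)
        features = self.temporal_model(tokens)
        return self.action_head(features)  # (batch, time, action_dim)


if __name__ == "__main__":
    policy = SurgicalImitationPolicy()
    clips = torch.randn(2, 8, 3, 96, 96)  # two dummy clips of 8 frames
    print(policy(clips).shape)            # torch.Size([2, 8, 7])
```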
The researchers leveraged a massive archive of surgical footage recorded by over 7,000 da Vinci robots operating worldwide, a system on which more than 50,000 surgeons have been trained. Despite its widespread use, the da Vinci system is known for imprecision in its absolute positioning. The team worked around this by training the model on relative movements rather than absolute actions, which are more prone to error. With this approach, just a few hundred examples are enough for the model to learn a procedure and adapt to new, unfamiliar environments.
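The relative-movement idea can be illustrated with a few lines of code. In the toy sketch below, a demonstrated trajectory of gripper positions is converted into per-step deltas, and those deltas are replayed from wherever the robot's (imperfect) absolute pose estimate says it currently is; the shape of the motion is preserved even when the absolute starting point is slightly off. The pose format and function names are assumptions for illustration, not the study's actual preprocessing.

```python
# Illustrative sketch: expressing demonstrated actions as relative motions
# (deltas between consecutive poses) instead of absolute positions, so that
# errors in the robot's absolute kinematics matter less during execution.
# The pose format and function names are assumptions, not study details.
import numpy as np


def absolute_to_relative(poses: np.ndarray) -> np.ndarray:
    """Convert a trajectory of absolute poses (T, D) into per-step deltas (T-1, D)."""
    return np.diff(poses, axis=0)


def replay_relative(start_pose: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Replay relative actions starting from the robot's currently measured pose."""
    trajectory = [start_pose]
    for delta in deltas:
        trajectory.append(trajectory[-1] + delta)
    return np.stack(trajectory)


if __name__ == "__main__":
    # A demonstrated trajectory of 3D gripper positions (toy values, metres).
    demo = np.array([[0.00, 0.00, 0.10],
                     [0.01, 0.00, 0.09],
                     [0.02, 0.01, 0.08]])
    deltas = absolute_to_relative(demo)

    # Even if the robot's absolute pose estimate is off by a few millimetres,
    # replaying the relative motions preserves the shape of the movement.
    offset_start = demo[0] + np.array([0.003, -0.002, 0.001])
    print(replay_relative(offset_start, deltas))
```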
The implications of this research are far-reaching. Previously, programming a robot to perform even one small step of a surgery, such as suturing, could take years of manual coding. With imitation learning, the robot can learn the same tasks in a matter of days by watching video demonstrations, accelerating training, reducing the potential for human error, and paving the way for more accurate surgeries.
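As a rough illustration of how video demonstrations become a training signal, the loop below performs plain behavioral cloning: the policy sketched above is fit with a supervised loss between its predicted kinematics and the surgeon's recorded movements. The random tensors, batch size, and learning rate are placeholders, and the loop assumes the SurgicalImitationPolicy class from the earlier sketch is defined in the same file.

```python
# Hypothetical behavioral-cloning loop: fit the policy's predicted kinematics
# to the surgeon's recorded movements with a plain supervised loss.
# Dataset, batch size, and learning rate are placeholders, not study details.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for (video clip, recorded kinematics) pairs from demonstrations.
frames = torch.randn(64, 8, 3, 96, 96)   # 64 clips of 8 frames each
targets = torch.randn(64, 8, 7)          # matching per-frame kinematic targets
loader = DataLoader(TensorDataset(frames, targets), batch_size=8, shuffle=True)

policy = SurgicalImitationPolicy()       # the class sketched earlier
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(3):                   # a real run would train far longer
    for clip, target in loader:
        predicted = policy(clip)           # (batch, time, action_dim)
        loss = loss_fn(predicted, target)  # imitate the demonstrated motion
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```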
The research team is now looking to expand their approach to full surgeries, aiming to teach robots more complex procedures. With continued advancements in imitation learning, surgical robots are moving closer to autonomy, revolutionizing the future of surgery.