
An intriguing challenge has led to a new solution for robotic dexterous manipulation. Last year, the Real Robot Challenge, organized by the Max Planck Institute for Intelligent Systems in Germany, posed the problem of repositioning and reorienting a cube using a low-cost robotic hand. The teams participating in the challenge were asked to solve a series of object manipulation problems of varying difficulty.

To tackle one of the problems posed by the challenge, researchers at the University of Toronto's Vector Institute, ETH Zurich, and MPI Tübingen developed a system that allows robots to acquire challenging dexterous manipulation skills, effectively transferring these skills from simulation to a real robot.

The system achieved a remarkable 83% success rate on dexterous manipulation tasks performed on the remote TriFinger platform provided by the challenge organizers.

“Our objective was to use learning-based methods to solve the problem… in a low-cost manner,” Animesh Garg, one of the researchers who carried out the study, told techxplore.com. Essentially, Garg and his colleagues wanted to demonstrate that they could solve dexterous manipulation tasks using a TriFinger robotic system, transferring results achieved in simulation to the real world using fewer resources than those employed in previous studies. To do this, they trained a reinforcement learning agent in simulation and created a deep learning technique that can plan future actions based on a robot's observations.

The researchers decided to use “keypoint representation,” a way of representing objects by focusing on the main “interest points” in an image. These are points that remain unchanged irrespective of an image's size, rotation, distortions or other variations.

In their study, the researchers used keypoints in the data fed to their neural network to represent the pose of the cube that the robot was expected to manipulate. They also used the keypoints to compute the reward function, the feedback signal that allows reinforcement learning algorithms to improve their performance over time.
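To make the idea concrete, here is a minimal sketch of a keypoint-based cube pose and reward. The corner layout, the cube size, and the exponential reward shape are assumptions for illustration only; the paper's exact formulation may differ.

```python
import numpy as np

def cube_keypoints(position, rotation, half_extent=0.0325):
    """Represent a cube's pose as the positions of its 8 corners.

    position: (3,) cube centre; rotation: (3, 3) rotation matrix.
    Returns an (8, 3) array of corner ("keypoint") positions.
    """
    corners = np.array([[sx, sy, sz] for sx in (-1, 1)
                                     for sy in (-1, 1)
                                     for sz in (-1, 1)], dtype=float)
    return position + (corners * half_extent) @ rotation.T

def keypoint_reward(current_kp, goal_kp):
    """Dense reward that grows as each corner approaches its goal position.

    Equals 1.0 when the current and goal keypoints coincide exactly,
    and decays smoothly with the mean corner-to-corner distance.
    """
    dists = np.linalg.norm(current_kp - goal_kp, axis=1)
    return float(np.mean(np.exp(-30.0 * dists)))
```

Because the eight corners jointly encode both position and orientation, a single distance-based reward over keypoints penalizes translation and rotation errors at once, without handling quaternions or rotation angles explicitly.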

The researchers trained their reinforcement learning model over the course of a single day in a simulated environment they created using the Isaac Gym learning platform. The simulation ran 16,000 robots in parallel, producing roughly 50,000 environment steps per second of data that was then used to train the network.
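The key to this throughput is stepping thousands of environments as one batched array operation rather than one at a time, which is what GPU simulators like Isaac Gym do. The toy dynamics below are purely an illustrative assumption, not the actual robot simulation.

```python
import numpy as np

class BatchedToyEnv:
    """Toy stand-in for a batch of parallel simulated environments."""

    def __init__(self, num_envs, obs_dim=9, act_dim=9, seed=0):
        self.num_envs, self.act_dim = num_envs, act_dim
        rng = np.random.default_rng(seed)
        # One row of state per environment, stepped together as a matrix.
        self.states = rng.standard_normal((num_envs, obs_dim))

    def step(self, actions):
        """Advance all environments with a single vectorized update."""
        assert actions.shape == (self.num_envs, self.act_dim)
        self.states += 0.01 * actions
        # Toy reward: drive every environment's state toward zero.
        rewards = -np.linalg.norm(self.states, axis=1)
        return self.states, rewards

env = BatchedToyEnv(num_envs=16_000)
obs, rew = env.step(np.zeros((16_000, 9)))
```

Each call to `step` produces 16,000 transitions at once, so the cost per transition is a small fraction of stepping the environments one by one in a Python loop.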

The system is presented in a paper published as a preprint on arXiv.