Google has unveiled two new artificial intelligence (AI) models designed specifically for the rapidly advancing robotics sector. These models, based on Google’s Gemini 2.0 framework, aim to enhance the capabilities of robots across various industries.
The first of these models, Gemini Robotics, is a vision-language-action model that produces physical actions as outputs. This development represents a significant leap in the application of AI for robotics, giving robots not only the ability to perceive and understand their environment but also to interact with it more effectively. The second model, Gemini Robotics-ER (embodied reasoning), focuses on spatial understanding, enabling robots to better navigate complex environments. It also lets developers integrate their own programs, providing a versatile platform for a range of robotic applications.
Google’s launch comes amid increasing momentum in the robotics field, fueled by rapid advancements in AI technology. Over the past few years, the integration of AI with robotics has accelerated, driving innovation in industrial robots, particularly in automated factories.
The Gemini Robotics models are designed to cater to a wide range of robotic systems, including humanoids and other specialized robots used in industrial and commercial settings. Google has tested its Gemini Robotics model using data from its own ALOHA 2 bi-arm robotics platform, with plans to adapt it for more intricate tasks.
Google’s involvement in robotics isn’t new. Although the company sold Boston Dynamics to SoftBank in 2017, it has remained deeply engaged in the sector. The launch of the Gemini Robotics models highlights Google’s continued push to shape the future of AI-driven robots, further bridging the gap between advanced technology and real-world applications.
These new AI tools are not just a boon for large enterprises but also a crucial asset for robotics startups, helping them cut development costs and accelerate time-to-market for their innovations. As robotics continues to evolve, these AI models could play a pivotal role in shaping the next generation of intelligent machines.