On Wednesday, December 11th, Google launched Gemini 2.0, its most advanced artificial intelligence model by far, ushering in what the company calls “a new agentic era” in AI development. This marks a major leap forward in AI capabilities, with models designed not only to understand and process information but also to make decisions and take action on behalf of users.
CEO Sundar Pichai described Gemini 2.0 as a tool that will “make information much more useful,” with its enhanced ability to understand complex contexts, think through multiple steps, and take supervised actions autonomously. Pichai added that the model brings Google closer to its vision of creating a universal assistant that can seamlessly interact with users and their environment.
Gemini 2.0 represents the next phase in the AI arms race among tech giants, following the global impact of OpenAI’s ChatGPT in 2022. At the heart of Gemini 2.0 is Google’s proprietary sixth-generation TPU (Tensor Processing Unit) hardware, Trillium, which powers both the training and execution of the model. Notably, Google has avoided reliance on Nvidia, which has become a dominant player in AI chip manufacturing.
The model is currently available only to developers and trusted testers, with plans to integrate it into various Google products, especially Search. The first release in the Gemini 2.0 family, Gemini 2.0 Flash, promises faster performance and the ability to process varied inputs such as text, images, video, and audio, while delivering outputs like generated images and speech.
Google also teased future innovations, including a version of Project Astra, a smartphone digital assistant capable of responding to both verbal commands and images, potentially competing with Apple’s Siri. With these advancements, Gemini 2.0 sets the stage for AI’s next frontier, transforming how users interact with technology.