Microsoft has unveiled two new chips that are specifically tailored for AI workloads.
The first is the custom-designed “Azure Maia 100” silicon chip optimized for artificial intelligence tasks. Microsoft reportedly hopes to use these new Maia 100 accelerators to power its largest internal AI workloads. It includes 105 billion transistors, making it one of the largest chips built on 5nm process technology.
The second processor is the “Cobalt 100”, an Arm-based CPU built to run general-purpose computing workloads on the Microsoft Cloud. The Cobalt 100 is designed for energy efficiency, optimizing performance per watt.
Microsoft stated that the 64-bit, 128-core chip delivers an “up to 40 percent performance improvement over current generations of Azure Arm chips,” adding that the chips are expected to start rolling out to Microsoft’s data centers early next year, initially powering company services such as Microsoft Copilot and the Azure OpenAI Service.
These new chips show that Microsoft wants to meet the increasing demand for efficient, scalable, and sustainable computing power to take advantage of the latest breakthroughs in AI and cloud technologies. The company also reportedly co-designed software to work with the new hardware, with the end goal being an Azure hardware system that offers maximum flexibility and can also be optimized for power, performance, sustainability, or cost.
Nevertheless, this move does not mean Microsoft is turning a cold shoulder to Nvidia. To complement its custom silicon efforts and “to provide more infrastructure options for customers,” Microsoft is expected to offer new systems that utilize high-end Nvidia H100 Tensor Core GPUs.
Microsoft is the last of the major data center providers to announce its own chips, with Google and Amazon having already developed Arm-based counterparts.
This information was provided by Cybernews.