A research team at Université Laval has developed an optical chip capable of transmitting data at 1,000 gigabits per second—an advancement that could significantly reduce the energy burden of artificial intelligence systems and reshape how data centers operate.
The chip enables data transfer speeds roughly 18 times faster than today's common optical links, which typically peak at around 56 gigabits per second. More importantly, it does so using only about as much energy as it takes to heat one milliliter of water by a single degree Celsius, a leap forward in sustainable computing.
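The headline numbers are easy to sanity-check. The sketch below verifies the "roughly 18 times" speedup and, assuming the standard specific heat capacity of water (about 4.184 J per gram per degree Celsius, not a figure from the study itself), puts the quoted energy budget in joules:

```python
# Back-of-the-envelope check of the article's figures.
# Assumption (not from the study): water's specific heat capacity
# is ~4.184 J/(g*C) and 1 mL of water weighs ~1 g.

new_rate_gbps = 1000   # reported throughput of the new chip
old_rate_gbps = 56     # typical current optical link

speedup = new_rate_gbps / old_rate_gbps
print(f"Speedup: ~{speedup:.1f}x")        # ~17.9x, i.e. "roughly 18 times"

# Energy needed to heat 1 mL of water by 1 degree Celsius:
energy_joules = 4.184 * 1.0 * 1.0          # c * mass(g) * delta_T(C)
print(f"Energy budget: ~{energy_joules:.2f} J")
```

So the comparison amounts to an energy budget on the order of 4 joules, a strikingly small figure for a terabit-class link.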
The innovation comes at a time when the energy demands of AI are growing rapidly. Current AI models require vast computational resources, with data centers relying on tens of thousands of processors to handle complex workloads. Communication between these processors can become a performance bottleneck. This new chip helps overcome that limitation.
According to the press release, what sets the device apart is its use of both light intensity and light phase to transmit data. While traditional optical systems primarily modulate light intensity, this chip also leverages phase modulation to carry significantly more information without additional power consumption.
Published in Nature Photonics, the study outlines how this hybrid approach allows for much denser and more efficient data transfer. The chip’s design builds on the microring modulator, a component already gaining traction in advanced AI hardware. However, most current implementations only use it for light intensity modulation, leaving substantial performance potential untapped.
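The payoff of modulating phase as well as intensity can be illustrated with symbol counting. The study's exact modulation format is not specified here, so the comparison below is generic: an intensity-only scheme with four amplitude levels (such as PAM-4) versus a scheme that combines four amplitude levels with four phase states (such as 16-QAM), which doubles the bits carried per symbol at the same symbol rate:

```python
import math

# Illustrative comparison, not the study's actual format:
# an intensity-only scheme vs. one that also encodes phase.

def bits_per_symbol(levels: int) -> float:
    """Bits carried by one symbol with the given number of
    distinguishable states (log2 of the constellation size)."""
    return math.log2(levels)

intensity_only = bits_per_symbol(4)       # e.g. PAM-4: 4 amplitude levels
intensity_plus_phase = bits_per_symbol(4 * 4)  # e.g. 16-QAM: 4 amplitudes x 4 phases

print(intensity_only, intensity_plus_phase)  # 2.0 4.0
```

Doubling the bits per symbol means doubling throughput without raising the symbol rate, which is why the hybrid approach can carry more information without a proportional increase in power.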
Researchers estimate that the chip could transfer the equivalent of over 100 million books in under seven minutes—highlighting its potential in AI model training, where massive datasets must move quickly between systems.
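The "100 million books in under seven minutes" estimate can be reproduced with one assumed input: the average size of a book. A plain-text novel is on the order of 0.5 MB, which is an assumption for this sketch rather than a figure from the study:

```python
# Rough sanity check of the "over 100 million books in under
# seven minutes" claim. Assumption: ~0.5 MB per plain-text book.

books = 100_000_000
mb_per_book = 0.5                        # assumed average book size
total_bits = books * mb_per_book * 8e6   # MB -> bits
rate_bps = 1000e9                        # 1,000 gigabits per second

seconds = total_bits / rate_bps
print(f"~{seconds / 60:.1f} minutes")    # ~6.7 minutes
```

At that book size the transfer takes about 400 seconds, comfortably under the seven-minute figure; larger assumed book sizes would push the estimate correspondingly higher.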
While the chip is still in the research phase, commercial applications are expected in the near future. Its compatibility with emerging systems positions it as a likely candidate for integration into next-generation AI infrastructure, where speed and energy efficiency are critical priorities.