Why Is Microsoft Drowning Its Servers?

We sometimes tend to think of the internet as some ethereal thing. We connect to the WiFi, and it’s just there – Facebook, YouTube, whatever. The reality, of course, is that it rests on very physical infrastructure: datacentres that fill entire warehouses and consume vast amounts of energy and water to power and cool all those servers. Because of this, they tend to be built where land and power are cheap – far from the dense urban areas where most users live.

The folks at Microsoft, however, decided that just wasn’t good enough and found a new location for their servers: the bottom of the ocean.

Project Natick submerged a 17,230 kg, 3-by-2-metre steel tube in the ocean off the coast of California to test whether the servers inside would function as well at the end of three months underwater as they did at the beginning. According to Microsoft, this is the first time a server farm has been made to work under the sea – and work it did.

There are certainly some advantages to seafloor datacentres. The water at the bottom of the ocean stays at a stable, relatively cold temperature, eliminating the need for much of the vigorous cooling that traditional datacentres require. The team behind the project is also looking into wave-power generating equipment to harvest the hydrokinetic energy of the sea, which would further reduce operating costs.

Even without these benefits, there’s a good argument for putting servers undersea. Because datacentres are so often far from population centres, the distance creates latency problems – the farther data has to travel, the slower the connection between user and server. Since populations the world over tend to congregate in coastal areas, putting servers on the seafloor could place them much closer to where they really need to be.
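
To put rough numbers on the latency argument, here is a minimal back-of-the-envelope sketch (ours, not Microsoft’s): a signal in optical fibre travels at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s, so every kilometre of cable adds around 5 microseconds of one-way delay. The example distances below are purely illustrative assumptions.

```python
# Illustrative sketch, not from the article: best-case propagation delay
# over optical fibre, ignoring routing and processing overhead.

SPEED_IN_FIBRE_KM_S = 200_000  # ~2/3 the speed of light in a vacuum

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay in milliseconds."""
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

# Hypothetical comparison: a datacentre 2,000 km inland versus a
# submerged one 100 km off a coastal city.
print(f"2,000 km inland:  {one_way_latency_ms(2000):.1f} ms each way")  # ~10.0 ms
print(f"  100 km offshore: {one_way_latency_ms(100):.2f} ms each way")  # ~0.50 ms
```

Round-trip times double these figures, and real-world routes are rarely straight lines, so the gap between an inland datacentre and one just offshore is, if anything, larger in practice.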