We have too many data centers. And too few, too


A big issue, this is.

![The Citadel data center in Tahoe Reno, Nevada](/img/citadel-data-center.jpg)

We produce a really, really huge amount of digital data, more every year. The Conversation just explained how MUCH “huge” is, and WHERE we keep it. Most of that data is stored in the “core” of the Internet, that is, inside traditional data servers and cloud data centers. Among those “data warehouses”, the ones with more than five thousand physical servers are called “hyperscale data centers”, and, says The Conversation, “39% of them are in the US, while China, Japan, UK, Germany and Australia account for about 30% of the total.”

Interesting data center factoids

The Conversation mentions that:

  • The largest hyperscale data centers are the China Telecom Data Centre in Hohhot, China, which occupies 10.7 million square feet, and The Citadel in Tahoe Reno, Nevada, which occupies 7.2 million square feet and uses 815 megawatts of power.
  • We produce so much data that around 100 new hyperscale data centers are built every two years.

Much more interesting data center factoids

The Conversation also explains what really matters, in my opinion:

  1. If we keep going like this, in just 110 years the power needed by data centers worldwide “will exceed the total planetary power consumption today”, which of course would be impossible, even if it were not a major cause of global warming and other pollution.
  2. But the really important factoid, the one already unsustainable today, is that currently there are around six hundred hyperscale data centers in the world.
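
That 110-year figure is what simple exponential extrapolation gives you. Here is a minimal sketch of the arithmetic, assuming (purely for illustration, these numbers are my own placeholders, not from The Conversation) that data centers use about 1% of global power today and that their demand grows about 4.3% per year:

```python
import math

# Back-of-envelope extrapolation: how many years until data-center power
# demand equals total planetary power consumption?
# ASSUMED, illustrative inputs (not from The Conversation):
current_share = 0.01   # data centers take ~1% of global power today (assumption)
annual_growth = 0.043  # demand grows ~4.3% per year (assumption)

# Solve current_share * (1 + annual_growth)**t = 1 for t:
years = math.log(1 / current_share) / math.log(1 + annual_growth)
print(f"~{years:.0f} years")  # roughly 110 years under these assumptions
```

Different starting shares or growth rates move the crossover date around, but any steady percentage growth hits the planetary ceiling eventually; that is the point of the quote.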

As in “ONLY six hundred”. We are talking about the containers of our digital data, private and public. That is pretty much our whole current way of living, not Ikea or Wal-Mart stores. That is systemic fragility of the highest order.

Think what would happen if a non-negligible part of just six hundred such warehouses worldwide went simultaneously offline. Not good.

Stop at Zona-M   Never miss a story: follow me on Twitter (@mfioretti_en), or via RSS