For Efficiency’s Sake: It’s About More Than Speed, It’s About Sustainability
Antonio Neri, President & CEO, Hewlett Packard Enterprise
If the Internet were a country, it would be the 5th largest energy-consuming “country” in the world – right behind China, the US, Russia and India.
That startling fact got me thinking recently about that famous line from legendary singer-songwriter Joni Mitchell’s 1970 hit, “Big Yellow Taxi”: “You don’t know what you’ve got…‘til it’s gone.”
As we sit here at the dawn of an incredibly exciting era of artificial intelligence, I’m particularly struck by this line in the context of our energy consumption. Because let’s face it: most of us in the developed world take this thing we call ‘electricity’ almost entirely for granted.
We take for granted that when we come home and turn on the light… that the light will in fact simply turn on. It’s usually not until that moment – when a mid-August heat wave knocks out a power grid and thrusts us into darkness – that we remember how good we had it.
We in the tech sector tend to take energy consumption for granted as well, having been intensely focused on keeping pace with Moore’s Law and the computational prowess that comes with it. That’s the race we have all been running, and tirelessly so.
But Moore’s Law, that comfortable drumbeat of free progress, began faltering a while ago. It may soon come to an end entirely as our high-powered, processor-centric architecture meets the limitations of physics.
At the same time, our computers have been getting more energy efficient, but not nearly fast enough. No linear improvement can keep pace with exponential demand. We now need to do something different. We need a step change, to make computers at least 100 times faster and more efficient than they are today.
Because as things stand, not only does the Internet already account for a whopping 7% of global electricity use, but it continues to grow at 7% per year, more than twice as fast as the rate of total global energy consumption. And recent research suggests that, factoring in the always-on processing demands of AI, information technologies could consume as much as 20% of the world’s electricity generation by 2025 – which is just 7 short years away.
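The arithmetic behind that trajectory can be sketched with a simple compounding calculation. The 7% share and 7% annual growth come from the text; the roughly 3% growth rate for total global energy consumption is an illustrative assumption (the text says the Internet grows more than twice as fast as the total). Faster relative growth alone steadily lifts the Internet’s share; the 20% projection layers AI’s always-on demand on top of that baseline.

```python
def sector_share(share0, g_sector, g_total, years):
    """Project a sector's share of total consumption when the sector
    grows faster than the total (all rates are annual fractions)."""
    return share0 * ((1 + g_sector) / (1 + g_total)) ** years

# Illustrative figures: 7% share today, 7%/yr sector growth (from the text),
# ~3%/yr total-energy growth (assumed), compounded over 7 years to 2025.
projected = sector_share(0.07, 0.07, 0.03, 7)
print(round(projected, 3))  # baseline growth alone pushes the share past 9%
```

Even under these conservative assumptions, the share only ratchets upward; it is the added AI workload that drives projections toward the 20% mark.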
Put simply: despite having run this race and run it well, the current model of information technology cannot sustain an AI-powered world, where we have more of everything computing all the time.
The race we now need to run can no longer be focused on compute power alone. It must be focused on generating the most processing output possible for the least amount of energy. The promise of AI is so great that we cannot risk it coming to an end.
The rise of cryptocurrencies and the infrastructure that supports them provides a glaring example of what can happen when these environmental considerations are not made upfront. The computations Bitcoin “miners” perform to secure the virtual currency and approve transactions already consume more electricity in a year than the whole of Ireland, effectively converting electricity into the world’s fastest-growing currency. And it has only been around since 2009.
Or, take self-driving cars. At today’s global production volume of 100 million cars and vans, we would need roughly the equivalent of 1 billion additional iPhones per year to support the computational requirements of bringing those cars into full, self-driving mode. That’s about 5 times the volume of iPhones produced on the planet last year.
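The numbers above can be checked with quick division. The production and iPhone-equivalent figures come from the text; the ~200 million annual iPhone output is an illustrative assumption consistent with the “about 5 times” claim.

```python
# Back-of-the-envelope check of the self-driving compute claim.
cars_per_year = 100_000_000          # global car/van production (from the text)
extra_iphone_equiv = 1_000_000_000   # added compute, in iPhone-equivalents (from the text)
iphones_last_year = 200_000_000      # assumed annual iPhone output, for illustration

print(extra_iphone_equiv / cars_per_year)      # iPhone-equivalents of compute per car
print(extra_iphone_equiv / iphones_last_year)  # multiple of last year's iPhone production
```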
And it doesn’t end there. Quite the opposite: our energy needs will only further skyrocket as additional processing power enables more demanding algorithms that then lead to a greater demand for data, which in turn requires additional processing power – creating a vicious cycle that will only accelerate in the years to come, unless something is done. Something unconventional.
I’m pleased to say we’ve now made finding these unconventional solutions core to our mission.
For example, we are exploring an unconventional architecture, which we call Memory-Driven Computing, where we take cutting-edge, but existing, technologies – like fast non-volatile memory, high-performance interconnects, and powerful processors – and reassemble them into new architectures that throw away 60 years of convention and compromise. Doing so enables any combination of these technologies to communicate at the speed of memory rather than relegating them to the thousand-times slower world that storage and networking inhabit today.
And, we’re not just tossing out the programming manual and architecture rulebook, we’re even tossing out the transistor! Memory-Driven Computing provides an open framework to connect unconventional, novel kinds of processing elements called “accelerators”. We’re building accelerators that calculate entirely with light, and nanoscale analog computers to do instantly what conventional transistor logic needs hundreds of thousands of lines of code to do.
And we’re not stopping there. To combat the escalation in cost and climate impact, our goal is to increase the energy performance of our product portfolio by 30 times by 2025 while we work to provide our customers custom solutions that reduce data center inefficiencies and optimize management of energy, capacity and costs along the way.
The energy challenges facing our AI future are very real, and today more than ever it is paramount that we collectively make solving them our number one priority. It is our obligation to continue pursuing the AI-powered future we envision because of the promise it holds, but we must do it right.