A week ago Roadrunner, the world’s first petaflop supercomputer and still capable of a staggering one quadrillion floating-point operations per second, was tossed aside like an old shoe, or the PC you bought in 2007. Five years ago it was the fastest computer on the planet, running nuclear warhead degradation simulations for Los Alamos National Laboratory. Today, it’s yesterday’s news.
But speed wasn’t the only issue.
Rather, the race for better, smaller, faster supercomputers now includes an adjective that wasn’t nearly as common five years ago: greener. In other words, more energy efficient. And, not incidentally, cheaper.
“I’ve been to Los Alamos,” Rob Clyde says. “In many cases it’s not just the cost of the electricity, it’s the fact that you just can’t get more.”
Clyde is the CEO of Adaptive Computing, which produces workload-management software to make supercomputers and companies’ private clouds more efficient, more green. I talked to him about what it takes to make supercomputing super-efficient.
It’s a big problem.
One supercomputer that was the fastest in the world from 2010 to 2011, China’s Tianhe-1A, reached 2.566 petaflops. But to do so, it consumed over four megawatts of power, which at $0.10 per kilowatt-hour works out to roughly $3.5 million a year. And four megawatts is about the electricity needed to power 3,500 homes.
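The arithmetic behind that figure is a quick back-of-the-envelope check, using the article's round numbers (four megawatts and $0.10 per kilowatt-hour):

```python
# Back-of-the-envelope check of the annual electricity bill quoted above.
POWER_MW = 4.0          # the article's "over four megawatts"
PRICE_PER_KWH = 0.10    # the article's assumed rate, in dollars
HOURS_PER_YEAR = 24 * 365

annual_kwh = POWER_MW * 1000 * HOURS_PER_YEAR   # megawatts -> kilowatt-hours
annual_cost = annual_kwh * PRICE_PER_KWH
print(f"${annual_cost:,.0f} per year")          # roughly $3.5 million
```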
So, how do you fix it?
Increased efficiency is, of course, key. But that increased efficiency can’t come at the cost of performance — not when supercomputers or major private clouds are running mission-critical applications for governments or companies. Clyde says Adaptive helps with both.
“We run on the fastest supercomputer in the world,” he told me. “And we run on the greenest supercomputer in the world.”
There are essentially three ways to improve efficiency.
The first is to minimize waste by consolidating workloads. In most private clouds and data centers, the average server utilization rate is a pitiful 8.5 percent. Supercomputers, which scientists use intensively, are better utilized, but some are still in operation only half the time. If you can drive the utilization rate up by consolidating workloads and scheduling computing jobs, you can achieve 3-4X energy savings from that alone, Clyde says. Especially when paired with the second strategy: green policies. Even with scheduled workloads, there are times when the data center or supercomputer sits idle.
In those cases, Adaptive’s software, MOAB, simply powers the idle machines down.
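The consolidation idea can be sketched in a few lines. This is a toy first-fit packer, not Adaptive's MOAB scheduler; the job sizes and node count are invented for illustration:

```python
# Squeeze jobs onto as few nodes as possible so the rest can be powered down.
def consolidate(job_loads, node_capacity, node_count):
    """Assign each job to the first node with room; return per-node loads."""
    nodes = [0] * node_count
    for load in sorted(job_loads, reverse=True):   # place big jobs first
        for i, used in enumerate(nodes):
            if used + load <= node_capacity:
                nodes[i] = used + load
                break
        else:
            raise RuntimeError("cluster over-committed")
    return nodes

loads = [30, 20, 50, 10, 40, 20]   # percent of one node each job needs
nodes = consolidate(loads, node_capacity=100, node_count=6)
idle = sum(1 for used in nodes if used == 0)
print(f"{idle} of {len(nodes)} nodes are idle and can be powered down")
```

Here six jobs that would otherwise be spread across six underused machines fit on two, leaving four to be shut off, which is exactly the utilization-driven savings Clyde describes.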
The third is to use more efficient processors, which is not so much about buying one particular type of processor as it is about using the right kind of processor for the right kind of job. GPUs, or graphics processing units, are hyper-efficient at certain types of mathematical calculations. Intel’s Xeon chip is better at other operations. Using the right chips for the right job can yield another impressive slice of planet-friendly power.
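A toy dispatcher illustrates the "right chip for the right job" idea: route each task to the processor class that burns the fewest joules on it. The job types, speeds, and wattages below are invented for illustration, not measured figures:

```python
# energy per job = seconds_per_job * watts; lower joules is better.
PROFILES = {
    # job_type: {processor: (seconds_per_job, watts)}
    "dense_linear_algebra": {"gpu": (1.0, 250), "cpu": (8.0, 130)},
    "branchy_integer":      {"gpu": (6.0, 250), "cpu": (2.0, 130)},
}

def best_processor(job_type):
    """Pick the processor that uses the least energy for this job type."""
    options = PROFILES[job_type]
    return min(options, key=lambda p: options[p][0] * options[p][1])

print(best_processor("dense_linear_algebra"))  # gpu: 250 J vs 1,040 J
print(best_processor("branchy_integer"))       # cpu: 260 J vs 1,500 J
```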
Altogether, the changes add up. Big time.
“Today’s Beacon supercomputer is the top green system in the world,” Clyde told me. “It has the best gigaflops-per-watt rating at 2.5 gigaflops per watt, which is six times more efficient than Roadrunner.”
That’s impressive, considering that five years ago Roadrunner was itself considered one of the most efficient supercomputers. And it’s needed, as energy not only gets more expensive, but we also grow more conscious of conserving and efficiently using the energy we produce.
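Working backward from the article's own numbers (Beacon's 2.5 gigaflops per watt, Roadrunner's one petaflop, and the six-times claim), the implied power draw comes out in the low megawatts. The wattage here is derived, not a quoted specification:

```python
beacon_gflops_per_watt = 2.5
roadrunner_gflops_per_watt = beacon_gflops_per_watt / 6   # ~0.42 GF/W
roadrunner_gflops = 1_000_000            # 1 petaflop = 1,000,000 gigaflops
implied_watts = roadrunner_gflops / roadrunner_gflops_per_watt
print(f"~{implied_watts / 1e6:.1f} MW")  # about 2.4 MW
```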
That same approach works with what is currently the fastest supercomputer in the world, Titan, a hybrid machine that uses both GPUs and traditional CPUs to achieve more than 10 petaflops. And it also works with companies’ internal clouds and data centers, which are growing at startling rates, and using massive amounts of energy.
“We run one of the largest private clouds in the world at a global bank,” Clyde told me. “They have 30,000 to 50,000 servers right now, and they tell us that by the end of the year, that will be close to 100,000. For a typical data center, we can provide 2-2.5X power savings, and for supercomputers, we can often come close to saving them half their energy costs.”
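To get a feel for what a 2.5X reduction means at that fleet size, here is a rough sketch; the 400 watts per server and $0.10 per kilowatt-hour are illustrative assumptions, not numbers from the interview:

```python
SERVERS = 100_000
WATTS_PER_SERVER = 400
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365

def annual_cost(reduction_factor=1.0):
    """Yearly electricity bill in dollars after dividing power draw."""
    kilowatts = SERVERS * WATTS_PER_SERVER / 1000 / reduction_factor
    return kilowatts * HOURS_PER_YEAR * PRICE_PER_KWH

before, after = annual_cost(), annual_cost(2.5)
print(f"${before:,.0f} -> ${after:,.0f} per year")
```

Under these assumptions, a 2.5X power reduction at 100,000 servers translates into tens of millions of dollars a year, which is why utilization software pays for itself at this scale.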
That’s green, clean … and fast.