Even more than other industries, the tech world is unforgiving to large companies that can’t respond to external change. To keep up, Cisco last year revamped the Cisco Development Organization, a group tasked with innovation, making a number of key hires and giving it greater leeway to act within Cisco.
One of the hires was Paul Marcoux, a former American Power Conversion exec and founding member of The Green Grid, a non-profit group advancing efficiency issues in data centers and enterprise computing. Marcoux was tapped to be vice president of green engineering, getting Cisco up to speed with both environmental issues and efficiency.
I spoke with Marcoux a few weeks ago, both to get an idea of how Cisco will update its routers, switches, servers and other equipment, and to hear his take on the problems that data centers face, as well as the opportunities startups have to make a difference (not to mention a profit).
VB: How competitive are Cisco’s products right now, when it comes to efficiency and the broader issue of climate change?
Quite frankly, I would say Cisco is even with the curve. We’re not ahead or behind. The development of green processes within companies is really something that a lot of the major corporations in our industry embraced no more than 18-24 months ago. You can jump across the pond and look at European companies, some of whom embraced climate change as long as 15 years ago and baked it into the culture. It will become part of our DNA also.
VB: What will Cisco do to make itself a leader?
As it stands, our high-end products are actually robust and efficient. But the real impact, on a global level, will come from our ability to effect meaningful change on our lower-end products, and that’s where you’ll see very rapid changes.
The goal is to build intelligence into our power supplies. If you have a virtualized process in the data center, and you need to access more capacity, today you go to a rack and grab another server. In the very near future, it’ll be entire racks. It won’t be 3,000 watts, it’ll be 30,000 watts. The management for that power is what we’re looking to develop, and for other people to develop.
VB: You spend a lot of time talking to data center managers and technical executives. What are you hearing?
They’re overwhelmed and frightened. A lot of their solutions, they feel, are only based around technology, around boxes. But some of the more advanced chief technology and chief information officers understand that it’s more than just boxes. They get that, but they need to acquire a solid understanding of hardware and software and a good understanding of power, cooling and networking.
Sadly, a lot of these elements don’t exist in their bag of tricks. One of the important things we need to understand going into the 21st century is that most data centers are operating on a design that was started in the 60s and 70s. And organizationally, they’re not set up for meaningful change.
VB: What are they looking for?
They’re looking for methodologies to reduce expense, reduce risk, and also do it with a smaller footprint on the environment. The real issue is that within the last five years, the cost of electricity grew by 30 percent. Within the next 2-3 years, it will grow another 30 percent, and by 2020 it will be around 2.5-5 times what it is today.
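Those projections are simple compounding arithmetic. Here is an illustrative sketch; the growth rates and multipliers come from the interview, while the baseline price is a hypothetical placeholder:

```python
# Illustrative arithmetic for the electricity-cost projections above.
# Rates and multipliers are from the interview; the baseline $/kWh is hypothetical.
baseline = 0.10          # $/kWh today (hypothetical example)
near_term_growth = 0.30  # ~30 percent over the next 2-3 years

cost_in_3_years = baseline * (1 + near_term_growth)
print(f"In ~3 years: ${cost_in_3_years:.3f}/kWh")  # In ~3 years: $0.130/kWh

# By 2020: roughly 2.5x to 5x today's price
low, high = baseline * 2.5, baseline * 5
print(f"By 2020: ${low:.2f}-${high:.2f}/kWh")  # By 2020: $0.25-$0.50/kWh
```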
The cost of energy will be one of the gating factors of success in a data center. How you manage that success will clearly rely on your ability to install sensors and real-time monitoring systems, and how you handle all these as a system.
VB: What do you typically find yourself recommending to data centers?
If the focus is immediate power reduction, the first step is to enable energy saving features in your servers. The second is to take a look at virtualization.
A big problem is that most people don’t fully understand virtualization. The industry forgot the other side of virtualizing, which is in the facility. So when they virtualize, they can drive energy consumption below the facility design limits. When you do that, your cooling systems become unstable and can crash, and so can your power systems. This is not well understood in the industry.
The reason a lot of virtualization projects never show up as savings is that the facilities department, which is not part of IT, will do what they think is right: install a process on their cooling system that adds an artificial load in direct proportion to the load you saved through virtualization. The device they install is called a hot-gas bypass. It takes the output of the cooling system and pipes it back into the input, so it keeps the chillers loaded.
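The energy accounting Marcoux describes can be sketched as a toy model. All figures here are hypothetical; the point is only that a hot-gas bypass keeps the chillers working at the facility’s design point no matter how much IT load virtualization removes, so the cooling-side savings never materialize:

```python
# Toy model of the hot-gas-bypass effect described above.
# All numbers are hypothetical placeholders, not measured data.

def facility_power(it_load_kw, design_load_kw, cooling_overhead=0.5, bypass=True):
    """Total facility draw. With a hot-gas bypass, the chillers stay
    loaded at the design point regardless of actual IT heat output."""
    cooled_load = design_load_kw if bypass else it_load_kw
    return it_load_kw + cooled_load * cooling_overhead

before = facility_power(1000, 1000)                       # 1500 kW pre-virtualization
after_bypass = facility_power(600, 1000)                  # 1100 kW: cooling still at full tilt
after_retuned = facility_power(600, 1000, bypass=False)   # 900 kW if facilities retune instead
print(before, after_bypass, after_retuned)
# The 200 kW of cooling savings virtualization should have delivered
# vanish into the bypass.
```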
VB: That’s funny, in a sad way. And nobody realizes what’s going on?
Most companies now, when they employ these solutions, aren’t metering and managing them. They have one electrical meter sitting on the side of the building somewhere. So they can’t tell what’s going on.
VB: I guess adding an energy monitoring system would be one of your top suggestions, then. What else?
Analyze your power and cooling systems. If you have legacy equipment, run the numbers. To remove that equipment and install modern cooling equipment, which is upwards of 97 percent efficient, is typically such an advantage that it becomes a no-brainer.
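“Run the numbers” here is a straightforward payback calculation. In this sketch, only the roughly-97-percent efficiency figure comes from the interview; the load, legacy efficiency, electricity price, and install cost are hypothetical placeholders:

```python
# Hypothetical payback calculation for replacing legacy cooling equipment.
# Only the ~97%-efficient figure for modern gear is from the interview.

load_kw = 500              # heat load to remove (hypothetical)
legacy_efficiency = 0.70   # hypothetical legacy plant efficiency
modern_efficiency = 0.97   # modern equipment, per the interview
price_per_kwh = 0.10       # hypothetical $/kWh
hours_per_year = 8760
install_cost = 400_000     # hypothetical replacement cost, $

legacy_draw = load_kw / legacy_efficiency   # kW drawn by the old plant
modern_draw = load_kw / modern_efficiency   # kW drawn by the new plant
annual_savings = (legacy_draw - modern_draw) * hours_per_year * price_per_kwh
payback_years = install_cost / annual_savings
print(f"Saves ${annual_savings:,.0f}/yr, payback in {payback_years:.1f} years")
```

With numbers like these the replacement pays for itself in a couple of years, which is the “no-brainer” Marcoux has in mind.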
The next idea is to ensure that you’re using rudimentary hot-and-cold aisles. And if you’re using a lot of blades and pizza-box servers, you may not even need an under-floor cooling system. Overhead will be quite sufficient. They refer to it as close coupling: you’re coupling your cooling to the heat-generation system, which is your server system. From there, you can take on free cooling. If you’re in a northern, cooler environment, you can use plate-and-frame heat exchangers. On cooler days or in the winter, you can use the outside air to cool your data center inexpensively. You don’t have to go to Siberia; you can get some value right here in San Jose. Anywhere it gets to 60 degrees or below, there’s an opportunity for significant savings.
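The 60-degree rule of thumb lends itself to a quick estimate: given hourly ambient temperatures, count how often free cooling could carry the load. The threshold is from the interview; the temperature data below is fabricated for illustration:

```python
# Rough free-cooling estimate from hourly ambient temperatures (deg F).
# Threshold follows the 60-degree rule of thumb above; temps are made up.
FREE_COOLING_MAX_F = 60

hourly_temps_f = [48, 52, 55, 58, 61, 66, 71, 73, 68, 62, 57, 51]  # sample day

free_hours = sum(1 for t in hourly_temps_f if t <= FREE_COOLING_MAX_F)
fraction = free_hours / len(hourly_temps_f)
print(f"Free cooling available {free_hours}/{len(hourly_temps_f)} hours "
      f"({fraction:.0%})")  # Free cooling available 6/12 hours (50%)
```

Run against a real year of weather data for a given site, the same loop gives a first-order estimate of how many chiller-hours the climate could displace.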
Finally, you should commission your data center, or re-commission it if it’s been operating over 40 years. That validates that all your set points have been brought back to your design requirements. Things do drift. It’s like tuning up a car.
VB: What are the opportunities for startups and entrepreneurs to address some of the problems?
The ironic fact is, every state has unique energy requirements, from billing to consumption. The quagmire of energy rules is very confusing to IT folks. A startup, by producing and delivering understandable ways to impact a server connected to a specific utility, [would be] incredibly valuable to IT folks. They’re more than willing to pay for that.
In virtualization, the cutting edge is moving away from the server to the network area, to the physical power and cooling aspects of data centers. Your systems actually have to be able to anticipate the load of IT virtualization and begin to control the power systems ahead of time. It would be fabulous if you could have a traffic cop that anticipated your needs and began to cool a specific area in a data center. The virtualized world may be digital, but it will always work in an analog world. Heat is an analog function. You can’t make something hot or cool instantaneously.
VB: What about power generation?
There are many, many valid types of systems — fuel cells, wind power, photovoltaics, high-speed turbines, and others. All of them are going to become players in the data center. There’s plenty of room, because data centers are operated in very localized regions, with very localized requirements. You may have abundant wind energy, so that’ll be an alternative for you. You may be down in the desert, so abundant sunlight is available. You may be near the Arctic, so geothermal could be useful, or you may be down in Saudi Arabia where gas is abundant, so micro-turbines become a possibility, as do fuel cells.
Will one outshine the others? I don’t think so. I think there’s room for everything. What the companies making the stuff have to do is identify their niche.
VB: Any last words, for the data center guys?
One of the themes that IT managers are not familiar with is free money. They’ve probably heard of free cooling, but not free money. But it actually exists, and it comes from the utility companies. The utilities are very willing to work with IT people with energy-efficient systems, in terms of both engineering and installation. It’s not because they feel philanthropic. It’s simply good business. If they don’t have to build another power plant, their return on their previous investments is much faster.
The IT manager has to understand that there’s a whole world out there that’s willing to help. It’s very expensive, but there are multiple solutions.
If you liked this Q&A, please check out our others:
Byron Acohido, author, “Zero Day Threat”, on who to blame for identity theft
John Antal, chief of staff and military/historical director at Gearbox Software, making “Brothers in Arms: Hell’s Highway”
Wagner James Au, author of “The Making of Second Life”, on life in a virtual world
Jeff Boyd, CEO of Miles Electric Vehicle, on the future of cars
Jim Crowley, CEO of Turbine, on keeping the online game machine humming
Jon Goldman, chairman of Foundation 9, on game development as a model
Seth Goldstein, CEO of Social Media, on social networking’s future
Henk Rogers, Tetris pioneer, on saving the earth
Curt Schilling, founder of 38 Studios and Boston Red Sox pitcher, on starting a fantasy online game
Dwayne Spradlin, CEO of InnoCentive, on expanding R&D crowdsourcing