The Hot Cost of Cooling Data Centers

With the price of oil forecast to stay above $50 per barrel for the next seven years, it's time to think long and hard about the impact electricity costs will have on information-technology budgets.

For example, Google engineers have already warned their bosses that the cost of the electricity needed to run the company’s servers will soon be a lot greater than the purchase price of the servers themselves.

American Power Conversion CTO Neil Rasmussen, who admittedly has a vested interest in the topic, takes it a step further, estimating that the total cost of ownership of a rack over a 10-year period ranges from $80,000 to $150,000, with electricity accounting for about 20% of those dollars (see Baseline’s Electricity Cost Calculator, p. 82).
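Working through Rasmussen’s numbers makes the stakes concrete. Here is a minimal arithmetic sketch in Python, using only the figures cited above:

    # Rasmussen's estimated 10-year total cost of ownership per rack.
    tco_low, tco_high = 80_000, 150_000
    electricity_share = 0.20  # electricity's approximate share of that TCO

    elec_low = tco_low * electricity_share    # $16,000 over 10 years
    elec_high = tco_high * electricity_share  # $30,000 over 10 years
    print(f"10-year electricity spend per rack: ${elec_low:,.0f} to ${elec_high:,.0f}")

In other words, every rack on the floor carries $16,000 to $30,000 in electricity charges over its lifetime before a single application runs on it.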

While electricity costs are rising largely because of factors outside I.T.’s control, such as burgeoning oil demand in China and natural disasters in the Gulf of Mexico, other factors are squarely within I.T.’s grasp.

The most obvious of those factors is the way technology shops deploy blade servers, whose power consumption can run to 30 kilowatts or more per rack.

Typically, the heat those servers throw off forces I.T. departments to bring in additional air conditioning to cool the data center, which in turn consumes still more electricity; as much as 50% of the power a data center draws is essentially wasted because of inefficient architectures.
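To see how quickly that compounding effect adds up, consider a rough sketch for a single densely packed blade rack. The 30-kilowatt draw and the 50% overhead figure come from the numbers above; the electricity rate of 8 cents per kilowatt-hour is an assumption for illustration only and will vary by region:

    # Back-of-envelope cost of powering and cooling one densely packed blade rack.
    rack_kw = 30.0            # per-rack draw cited above for blade servers
    hours_per_year = 24 * 365
    rate_per_kwh = 0.08       # assumed electricity rate, $/kWh (illustrative)

    it_kwh = rack_kw * hours_per_year   # 262,800 kWh per year for the servers
    it_cost = it_kwh * rate_per_kwh     # roughly $21,000 per year

    # If half of all power drawn by the data center is lost to cooling and other
    # inefficiencies, the facility draws roughly twice the IT load.
    total_cost = it_cost / 0.5          # roughly $42,000 per year

    print(f"Servers alone: ${it_cost:,.0f} per year")
    print(f"With cooling overhead: ${total_cost:,.0f} per year")

Even allowing for cheaper power or a lighter load, the cooling overhead roughly doubles the bill for every kilowatt the servers consume.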

This double whammy often goes unseen by I.T. because, more often than not, the company’s electric bill is managed by a facilities department that has little interaction with the technology group.

So all the company’s CFO sees is spiraling electricity costs with no specific root cause attached.

Of course, server and chip vendors such as Sun, AMD and Intel have their usual assortment of solutions that require I.T. organizations to buy the next generation of systems based on processors that consume less power. Sun is touting its Niagara processor, while AMD is promoting its PowerNow architecture for the Opteron processor. IBM, on the other hand, cools its BladeCenter systems with vector fans that crank out about as many decibels as a jet engine.

But for the vast majority of servers already in the field, heating and cooling requirements remain a daily challenge. The only real way to deal with the issue is to get a fairly deep understanding of how heat flows through your data center, and then take steps to even out the distribution of that flow to prevent any one system from exceeding its thermal threshold.

Unfortunately, data center design has become a lost art in an I.T. world racked by waves of downsizing over the last five years. But there is help available from companies such as APC, Emerson Network Power, IBM and Hewlett-Packard that have made strides toward creating more efficient cooling systems for data centers.

APC has even gone a step further with “data center in a box” solutions that take into account the thermodynamic needs of the systems it supports. The company’s argument: The concept of developing custom data centers for each customer’s installation is obsolete. Instead, customers can significantly lower the cost of building a data center by wiring together modular racks that manage heat dissipation and cooling requirements as a fundamental part of their design, as opposed to moving racks around a custom-built room with its own quirks concerning the ebb and flow of heat.

Whatever solution an organization hits upon matters less than recognizing the problem in the first place.

That recognition, in turn, creates the opportunity to set up a task force across I.T. and Facilities that maps out a two-year plan for reducing the amount of energy consumed by the data center. And maybe as an added incentive, half of the money saved could be dropped directly into the organization’s bottom line, while the other half could be used to fund the necessary upgrades. That way, a data center modernization effort could probably pay for itself.

Michael Vizard is editorial director at Ziff Davis Media’s Enterprise Technology Group. He can be reached at [email protected].