Data Center Efficiency: Shedding a 10-Ton Air Conditioner

Last summer, Cimarex Energy encountered a crisis in its Tulsa, Okla., data center. Its data systems, as well as the seismic and geologic data critical to its oil and gas exploration and production business, were at risk due to overheating equipment.

The server room was at capacity: There was simply no space left for additional equipment. And the company learned the hard way that it had packed in more gear than its power and cooling systems could handle. The densely packed racks were causing servers and other I.T. devices to overheat.

It’s no wonder the data center ran into problems.

Exhaust from one device was sucked into the air intake of others nearby. Equipment, including Network Appliance enterprise storage devices and Hewlett-Packard servers, was failing even though the air conditioning was running at full throttle.

“NetApp tech support was replacing two or three drives a week in the array due to failure, and was telling us just as often that if we didn’t do something about the heat issues we were having, they were going to discontinue support for these devices,” explains Cimarex network engineer Rodney McPhearson, who was charged with finding a solution to the problem. “We were seeing temperature logs running from 74 to 76 degrees on the front side of the rack row, and above 100 degrees behind the racks. This was causing drive failures in our HP servers also.”

The weekly equipment failures were getting costly, and McPhearson knew it was time for a new approach to cooling as the data center expanded into a new server room.