Intel Data Center Efficiency Project Should Save $200 Million Annually

By Ericka Chickowski  |  Posted 2008-10-20

As at many organizations, Intel's data centers provide valuable resources to every part of the business, but they are also a tremendous resource drain on the company. With hundreds of data centers spread around the globe, the chip maker faces the same struggle as other enterprises in keeping data center costs under control.

Several years ago, Intel's IT operations staff decided to meet these issues—poor utilization rates, server assets maintained for up to eight years and overall disorganization—head-on with an efficiency initiative that the group expects will save the organization up to $200 million annually.

This isn’t some bogus green data center initiative, executives say. This is an efficiency exercise that saves the green that matters most in these tough economic times.

“Our main concern is not about being green,” says Brently Davis, manager of Intel’s data center efficiency project. “It is more so about being efficient, figuring out how we can run our computing environment better.”

Project Genesis

Davis and his team started their push for data center efficiency back in 2006.

“We found out we were spending almost $1 billion per year in the data center environment,” Davis says, “and we said, ‘No, we’ve got to be able to do something better.’

“At Intel, we do an awesome job building chips and running the factories that we build those chips in, but we did not do a very good job at managing the data centers the same way. So we wanted to get to a standard environment where processes were in place, where governance was in place and where we could start managing those data centers better.”

Intel developed Davis’ group within its IT operations department just so it could tackle such issues. “You’ve always got to pull people in to help look horizontally,” he says. “A lot of times, the focus of IT groups is myopic: They like to concentrate on their own vertical. But you’ve got to have a horizontal overview, and I think that’s what we try to help them do.”

One of the first things that Davis did to make that happen was to bring in the financial folks to get a clear picture of spending beyond the overarching price tag—to see where the dollars were going. That’s the key step in any efficiency project, he says.

“You can’t do this if you don’t understand what you’re spending,” Davis says. “We brought in our finance team and said, ‘Pull all of this stuff together so we can figure out what we’re spending. Put these numbers together, get it validated and make sure it makes sense.’”

Doing that makes it easier to garner buy-in among what Davis calls the “but why-ers.” He explains them this way: “Whenever you explain what you’re trying to do, the first question that comes out of a ‘but why-er’ is, ‘But why do we have to go through this? But why do I have to move my servers? But why do I have to close that data center?’

“These are people who are normally your customers, who are, in some cases, part of your teams, who have differing agendas and motivations. And they say, ‘No, I don’t want to go do that. That’s not the approach I’d like to take.’ But once you show them that overall picture, they understand what the key value is for what you’re trying to deliver.”

Standardization Begets Better Utilization

With the business case laid out and the “but why-ers” on board, Intel is now putting all the logistical puzzle pieces into place. The first step has been to standardize environments and practices to reduce redundancies and improve the way all the data centers work together. As Davis says, when Intel surveyed its data center landscape in 2006, it had 150 centers and “there was no synergy to anything; we were all over the place.”

Standardization begets improved utilization rates, he notes, adding that Intel's goal was to eschew “normal” data center utilization rates and shoot for the moon with unheard-of rates of 80 to 90 percent.

“Everybody tells you the same story—it’s always 10 or 15 percent utilization on a box; it’s never 80 to 90 percent,” he says. “We wanted to increase that. If we could do that, we could begin to consolidate. After we standardized, we had processes in place, and we began to increase utilization.”

This was enabled through a number of strategies, including virtualization, grid computing and cloud computing. And it was coupled with efforts to do a better job refreshing servers—replacing them with fewer servers along the way.

When the program started, Intel was on track to grow from 90,000 servers to 225,000 servers by 2014, Davis reports. The goal, he says, is to keep that number at about 100,000 in six years' time while significantly reducing the cost and power draw of each server.

“The only way we could do that was by getting off the old hardware,” he recalls. “We were just as guilty as everyone else. We were sitting on servers that were possibly seven or eight years old. We needed to start refreshing those servers to reduce the power consumption in the data center.”

Davis’ team set a retirement date of four years for servers, which they identified as the point at which Intel saw diminishing returns on maintenance contracts and the like. As they refresh, they take advantage of the higher utilization rates and the dual-core and quad-core technology their own colleagues innovated in order to put fewer servers in the environment.

“We don’t have to take out one server and bring back another one,” he says. “We’re taking out four servers right now and bringing back one.”

During 2008, Intel will refresh 20,000 servers, with similar projections for the next couple of years.

“Of the 85,000 servers we’re sitting on today, we’re going to refresh [about] 60,000,” Davis says.
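The consolidation arithmetic implied by these figures can be sketched in a quick back-of-envelope calculation. The 4-to-1 replacement ratio and the server counts come from the article; treating that ratio as uniform across every refreshed server is a simplifying assumption, not something Intel stated.

```python
# Back-of-envelope server consolidation math, using figures from the article.
# Assumption: the 4-to-1 replacement ratio applies uniformly to every
# refreshed server, which is a simplification.

current_fleet = 85_000   # servers Intel is "sitting on today"
to_refresh = 60_000      # servers slated for refresh
ratio = 4                # "taking out four servers ... bringing back one"

new_servers = to_refresh // ratio        # replacements installed
remaining = current_fleet - to_refresh   # servers not yet refreshed
fleet_after = remaining + new_servers

print(f"Replacements installed: {new_servers:,}")                  # 15,000
print(f"Fleet after refresh:    {fleet_after:,}")                  # 40,000
print(f"Net reduction:          {current_fleet - fleet_after:,}")  # 45,000
```

A net reduction of roughly 45,000 boxes on the refreshed portion alone suggests how the 4-to-1 consolidation ratio gives Intel the headroom to absorb demand growth (the trajectory that would otherwise have required 225,000 legacy servers) while holding the fleet near 100,000.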

Air Economizer

Of course, as utilization rises and data centers grow denser, the real problem is keeping up with power and cooling needs. Intel is currently addressing this with an air economizer proof-of-concept project that it hopes will help rein in costs. Intel found that using an air economizer that expels hot air outdoors and draws outside air in (rather than using localized air conditioners) could reduce the company's power consumption by 74 percent over a 10-month period. That could mean potential savings of $2.8 million annually for a 10-megawatt data center.

This may sound like sacrilege to the average data center manager, who might be concerned about the lack of control over humidity and air quality in such a setup. Intel did report that it saw layers of dust and large humidity fluctuations during the experiment. But the company challenged industry beliefs, stating that there was no increase in server failure rate as a result. How Intel will move forward with this is still up in the air, but the air economizer setup is certainly on the table.

Intel has already begun to see significant savings from the program, Davis says. In 2007, the company shut down 21 data centers; this year, it will close 20 to 24 more. By 2014, it will have reduced its total data center footprint from 450,000 square feet to 300,000 square feet, and his team's efforts are projected to save between $1.4 billion and $1.8 billion.

Though company leaders may not have foreseen today's state of the economy several years ago, their forward-thinking strategy has put Intel in a good position. Last week, Intel reported a better-than-expected gross profit margin of 58.4 percent for its third quarter, a number that company officials told The Wall Street Journal was sustained in large part by companywide efforts to reduce costs.

Paul Otellini, Intel’s CEO, told the Journal that Intel has cut $3 billion in annual spending since 2006. “These actions put us in an excellent operating position for changing economic conditions,” he said.