Burt’s Bees: What IT Means When You Go Green
By Ted Hein | Posted 2008-11-26
Virtualization proved to be virtuous and cost-effective for the environmentally conscious Burt’s Bees. Get a breakdown of what it took to really go green with virtual storage, measuring power consumption and following ISO standards for the environment.
Being green is part of Burt’s Bees’ DNA. The Durham, N.C.-based health and beauty products company is committed to being a good steward of the environment. In pursuit of that goal, Burt’s Bees focuses on what it calls the “triple bottom line,” which ranks people and the planet alongside profits.
Established in 2006, Burt’s Bees’ ECOBEES (Environmentally Conscious Organization Bringing Ecologically Empowered Solutions) volunteer force develops and champions environmentally friendly business practices within the company and among its partners. For the group’s Energy Reduction Team, this includes looking at ways to make the company’s IT operations green.
CIO Ted Hein discusses how server and storage virtualization, along with other green IT initiatives, have helped Burt’s Bees accomplish this objective—and save money at the same time.
When we began looking for ways to make Burt’s Bees’ IT operations green, our data center quickly emerged as the biggest area of opportunity. For each 100 watts saved in this energy-intensive facility, we could reduce our cooling load by an additional 20 to 30 watts, magnifying our impact.
But there were also more traditional motives for taking a fresh look at how we did things. Our business was undergoing tremendous growth, driving the need for new application servers and ever-expanding data storage. At the same time, many of our existing servers were past their usable lifespan or needed to be upgraded to support our rapid growth. All told, we would need 21 new servers to support our operations—a significant cost in capital, as well as in provisioning time and effort.
Virtualization offered a clear strategy for reducing our server hardware consumption. Previously, many of our servers with direct-attached storage were either underutilized, such as our Microsoft Exchange server, or overutilized: with routine monthly growth of 15 to 20 gigabytes (GB), our ERP system consistently operated at capacity.
Virtualization also promised improvements in reliability: Instead of having to invest in making each Tier 1 or Tier 2 application fault-tolerant, we could make everything on a given cluster fault-tolerant at no additional expense.
From a disaster recovery perspective, we could eliminate the need to run exercises on dissimilar hardware in a disaster recovery center (a nightmare of low-level device-driver problems) and simply leverage virtual machines as secondary servers—a much faster, easier and more rational approach.
We began by moving most of our core applications to a virtual server environment based on VMware. Instead of purchasing those 21 new servers, we balanced 53 virtual machines across a cluster of three highly efficient and powerful physical servers for a net energy saving of 102,972 kilowatt-hours a year, plus a reduction of 32,377 kilowatt-hours a year in the energy needed to cool the data center. At that point, we’d saved enough energy to power 7.5 three-bedroom houses, but we didn’t stop there.
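For readers who want to sanity-check these figures, here is a quick back-of-the-envelope sketch in Python. The per-house consumption is inferred from the numbers above, not stated in the article, and the cooling ratio is simply the two reported figures compared against the 20-to-30-watt rule of thumb mentioned earlier.

```python
# Rough sanity check of the savings figures quoted above.
server_savings_kwh = 102_972   # annual energy no longer drawn by retired servers
cooling_savings_kwh = 32_377   # annual cooling energy avoided as a result

total_kwh = server_savings_kwh + cooling_savings_kwh
print(f"Total annual savings: {total_kwh:,} kWh")  # 135,349 kWh

# Cooling reduction as a fraction of server savings -- consistent with the
# "20 to 30 watts of cooling per 100 watts saved" rule of thumb.
print(f"Cooling multiplier: {cooling_savings_kwh / server_savings_kwh:.0%}")  # 31%

# Implied consumption per home if this total powers 7.5 three-bedroom houses
print(f"Implied kWh per house per year: {total_kwh / 7.5:,.0f}")
```

The implied figure of roughly 18,000 kWh per house per year is in the right ballpark for a large U.S. home of that era, which suggests the article's numbers are internally consistent.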
At the same time we were virtualizing our servers, we made the transition from direct-attached storage to more scalable and efficient virtualized storage. However, our initial efforts—which used high-capacity, cheaper Serial ATA-based network-attached storage—came up short on performance and scalability. That led us to look for a better storage area network (SAN) solution.
We evaluated NetApp’s core storage system and data-reduction technologies and determined that these solutions could support the needs of our VMware environment and significantly reduce our physical storage footprint. We also liked the company’s approach to data deduplication, as we felt it would offer us more efficient storage.
Instead of the usual method, in which only entire files are checked for duplication, NetApp looks for identical 4KB chunks within different files. For example, think about how many times your company logo appears in documents and presentations and how much expensive network disk space that takes. This system locates each of those instances—even across different servers and logical disk units—and replaces them with pointers. It’s seamless to users and technical staff, but it can yield a tremendous savings in storage.
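The idea can be illustrated with a minimal sketch of block-level deduplication. This is not NetApp's actual implementation; it simply shows the principle: split files into fixed 4KB blocks, hash each block, store each unique block once, and represent files as lists of pointers to stored blocks. The file names and "logo" bytes below are hypothetical.

```python
import hashlib

BLOCK_SIZE = 4096  # 4KB blocks, the granularity described in the article

def dedupe(files):
    """Store each unique 4KB block once; files become lists of block hashes."""
    block_store = {}   # hash -> block bytes, stored only once
    file_index = {}    # filename -> list of block hashes (the "pointers")
    for name, data in files.items():
        pointers = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)  # duplicate blocks stored once
            pointers.append(digest)
        file_index[name] = pointers
    return block_store, file_index

# Two hypothetical documents that embed the same company-logo bytes
logo = b"\x89PNG-logo-bytes" * 400            # ~6KB of shared content
files = {
    "deck.pptx": logo + b"Q3 sales deck",
    "memo.docx": logo + b"All-hands memo",
}
store, index = dedupe(files)
raw = sum(len(d) for d in files.values())
stored = sum(len(b) for b in store.values())
print(f"Raw: {raw} bytes, stored after dedup: {stored} bytes")
```

Because the first 4KB block of both files is identical logo data, it is stored only once and both files point to it, which is where the storage savings come from. Production systems add details this sketch omits, such as byte-for-byte verification of blocks whose hashes collide.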
We put the new system through some rigorous testing—doing our best to crash the NetApp server—but it didn’t break a sweat. This built up our confidence in the direction we’d taken with both our virtual server and our virtual storage platforms.