Promoting Virtual Efficiency

By David F. Carr  |  Posted 2007-07-18

University of Pittsburgh Medical Center had runaway growth in its server and storage infrastructure. Here's what it did.

One of the technological habits UPMC had to break was setting up a separate server (or several of them) for every application. There are good reasons why it has become common to run Windows and Unix server applications in an isolated environment. If no other applications are running on a server, anything that goes wrong is likely to be the fault of either the application itself or the operating system. Problems become both more likely and harder to diagnose when multiple applications run on the same server and compete for resources. For that reason, software vendors and internal support organizations tend to favor separate servers, where the operating system can be configured specifically for that application.

But the downside is that it can be very wasteful, particularly given that the processor and memory resources of modern servers now often exceed the requirements of a single application. Many enterprises report utilizing less than 15% of the computing resources on the average standalone Windows server, according to Microsoft.

"We realized that even though we were an award-winning I.T. shop, we were also a mirror of the industry with all these individually configured systems," Sikora says.

Virtualization makes it possible to put many applications on one physical server but give each its own virtual server—a separate instance of the operating system that is logically isolated from the other operating systems and applications running on that same computer. Although such tricks are old hat on the mainframe (where the notion of virtual machines dates to the 1960s), virtualization of Windows servers is still relatively new. For that, UPMC is taking advantage of VMware's ESX Server, running on IBM xSeries Windows servers. ESX Server provides a specialized operating environment that can host multiple guest operating systems, each running within a compartmentalized virtual machine that prevents applications from interfering with each other.

High-end Unix systems have supported "logical partitions" for years, but IBM has made them significantly more flexible in recent years. It used to be that every logical partition on an IBM AIX Unix system was assigned a fixed number of processors on that server. Now, IBM's Unix systems support partitions that can grow and shrink as needed, based on the demand for processor and memory resources, as well as "micro-partitions" that can be assigned as little as one-tenth of a processor's capacity—making it possible to pack many more partitions onto a single physical server.
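The packing arithmetic behind micro-partitioning is simple capacity accounting: each partition's fractional entitlement is summed against the server's physical processors. A minimal sketch, with illustrative numbers (not UPMC's actual partition layout):

```python
# Illustrative sketch of micro-partition packing: each logical partition
# (LPAR) is entitled to a fraction of a CPU, with 0.1 as the minimum unit.

MIN_ENTITLEMENT = 0.1  # smallest share of one processor an LPAR may hold

def partitions_fit(entitlements, physical_cpus):
    """Check whether a set of LPAR entitlements fits on a server."""
    for e in entitlements:
        if e < MIN_ENTITLEMENT:
            raise ValueError(f"entitlement {e} is below the {MIN_ENTITLEMENT} minimum")
    return sum(entitlements) <= physical_cpus

# Ten lightweight partitions at 0.3 CPU each plus two heavier ones
# share an 8-core server with capacity to spare.
lpars = [0.3] * 10 + [2.0, 2.5]
print(partitions_fit(lpars, 8))   # True: 7.5 of 8 CPUs committed
```

Twelve dedicated servers collapse into one this way because the fractional entitlements, unlike whole physical boxes, can add up to less than the hardware on hand.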

This capability is based on the IBM "hypervisor"—a layer of software that acts as a server's bedrock operating system, intercepting calls to system hardware from the "guest" operating system instances running on virtual machines. On IBM's System p servers, the hypervisor is implemented as firmware (instructions embedded in hardware rather than software), which AIX has exploited since the release of version 5.3 in 2004.

By taking full advantage of virtualization on its IBM p595 servers—high-end machines equipped with 64 processor cores and high-availability features—UPMC was able to do more with fewer of them. In one specific example, where UPMC was expanding and upgrading its PeopleSoft software for financial management, the use of virtualization techniques made it possible to avoid the purchase of an entire p595. According to IDC's review, that saved $3.1 million in up-front purchasing and provisioning costs, and a total of $6.7 million over three years when maintenance, data-center and system administration staffing expenses are factored in.

Prior to the Transformation Project, the server infrastructure for eRecord already took advantage of Unix logical partitions, but UPMC was able to push this advantage much further by utilizing IBM's micro-partitioning technology. Where before the application consumed 68 CPUs of capacity for production systems and another 26 for development (a total of 94), micro-partitioning allowed UPMC to squeeze the total computing horsepower requirement to between 40 and 50 CPUs.
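Those eRecord figures work out to roughly a 47% to 57% reduction in required capacity. A quick check of the article's arithmetic:

```python
# Worked version of the eRecord consolidation figures from the article.
before = 68 + 26                 # production + development CPUs pre-consolidation
after_low, after_high = 40, 50   # post micro-partitioning range

savings_low = (before - after_high) / before
savings_high = (before - after_low) / before
print(f"CPUs before: {before}")   # 94
print(f"Capacity reduction: {savings_low:.0%} to {savings_high:.0%}")
```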

UPMC wound up deploying one p595 where three were originally planned, for a savings of more than $12 million on Unix infrastructure. Windows virtualization saved another $11 million to $12 million, and UPMC is revising these estimates upward as virtualization efforts continue.

By that math, the advantages seem obvious. But it's not always obvious to the business managers who depend on a particular application, or the I.T. people who provide direct support for it, that they won't be giving up something by moving from a dedicated server to a virtual one. While virtualization technology simulates some of the advantages of running on a dedicated server, the application will in fact be running in a shared environment. And there can be other issues, such as the memory conflict Sikora's team ran into with Cerner.

More often, concerns about the virtual environment turn out to be unfounded, says Kevin Muha, a senior systems architect with UPMC and leader of the Windows virtualization effort. "At first, people are like, 'Aw, I don't want a virtual machine.' But in the end, they say, 'Hey, this really isn't so different. In fact, this is better.' It's just a matter of getting over that hump," he says.

One key advantage is that setting up a new virtual machine is much faster than buying, installing and configuring a separate physical server, Muha points out. If developers need a new test system, it only takes a few minutes of work from an administrative control panel to copy a standardized system image—a snapshot of a fully configured operating system—place it on a partition of a running server and activate it. When the test system is no longer needed, it can simply be deleted and its system resources reassigned for another purpose, Muha says. On the Windows platform, VMware also provides a capability called VMotion that makes it possible to move an application from one server to another, even while the software is running—a handy trick when it comes to taking server hardware out of service for maintenance, without disrupting the applications running on top of it.

But even as staff resistance fades, vendors can be another obstacle, especially when they refuse to support their software in a virtual environment. One that particularly annoyed Sikora and Muha (although not quite enough for them to name the vendor publicly) demanded that UPMC deploy its software across 41 servers at a cost of $523,000, even though the two saw no reason it couldn't run on multiple virtual machines on six physical servers, at a cost of $63,000. The compromise they settled on was to virtualize only the Web servers used by the application; the rest would run on dedicated hardware, per the vendor's dictate. The result: a 13-server configuration costing $250,000.
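The article's own figures bear out Muha's "saved the company half" claim—the compromise came in at about 52% below the vendor's quote:

```python
# Cost comparison from the article's vendor-standoff example.
vendor_quote  = 523_000  # 41 dedicated servers, as the vendor demanded
upmc_proposal =  63_000  # 6 physical hosts running virtual machines
compromise    = 250_000  # 13 servers, with only the Web tier virtualized

saved = vendor_quote - compromise
print(f"Saved ${saved:,} ({saved / vendor_quote:.0%} of the quote)")
```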

"So, we saved the company half," Muha says, but he still thinks the vendor needs to get a clue. When its technicians came out to help with the deployment, he had to stop them from setting up each Web server individually. "I had to tell them, 'No, no, no, we'll make this first one into a master template, we'll schedule the provisioning of 20 more, and then we'll go to lunch,'" he says. When they got back from lunch, all that was left of the job were a few tweaks to the IP addresses and other parameters assigned to the virtual machines.
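The template-then-clone workflow Muha describes—configure one master image, then stamp out copies that differ only in per-machine parameters—can be sketched as follows. Note that `clone_from_template` and its parameters are hypothetical stand-ins for whatever the virtualization platform actually provides, not VMware's real API:

```python
# Hypothetical sketch of template-based provisioning: build one master
# image, then schedule clones that differ only in name and IP address.
# clone_from_template() is a made-up placeholder, not a real VMware call.

def clone_from_template(template, name, ip_address):
    """Stand-in for a clone operation; returns a record of the request."""
    return {"template": template, "name": name, "ip": ip_address}

MASTER = "web-server-master"   # the one fully configured Web server

# Schedule 20 clones, then "go to lunch" while they provision.
clones = [
    clone_from_template(MASTER, f"web-{i:02d}", f"10.0.1.{i}")
    for i in range(1, 21)
]
print(len(clones), clones[0]["name"])   # 20 web-01
```

The point of the pattern is that per-machine work shrinks to the "few tweaks" Muha mentions—IP addresses and similar parameters—while everything else is inherited from the template.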

"What they said would take 80 hours we got done in eight," Muha says. "So, what are the odds that they're going to be around in five years if they don't learn to work with us?"

In much the same way that virtualization improved server utilization, it also improved storage efficiency. Even though UPMC had already invested in storage area network (SAN) technology, too much of its storage capacity was stranded in "SAN islands" dedicated to different applications and running on disparate hardware. Of 98 terabytes of installed enterprise storage, UPMC estimated that only 41 terabytes were actually being used. By using IBM's SAN Volume Controller to create a single pool of combined storage capacity, UPMC was able to reclaim wasted disk space and simplify its setup.
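Put as a utilization figure, the storage numbers above mean only about 42% of installed capacity was in use before pooling, with 57 terabytes stranded across the SAN islands:

```python
# Storage utilization before pooling, from the article's figures.
installed_tb = 98   # total enterprise storage across the SAN islands
used_tb = 41        # capacity actually in use, per UPMC's estimate

print(f"Utilization: {used_tb / installed_tb:.0%}")   # 42%
print(f"Stranded:    {installed_tb - used_tb} TB")    # 57 TB
```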

Virtualization savings come not only from fewer servers but from lower power, cooling and floor space requirements. While giving a tour of the data center in UPMC's Forbes Tower office building, Sikora points out the path that network cables follow out of each server chassis, out of the racks and over runways to network switches. Even though the physical servers hosting multiple virtual machines have multiple network ports, they still represent far fewer ports than would have been required for an equivalent number of non-virtualized servers—which would have meant more racks, more cabling and more network switches, all adding cost and complexity.


David F. Carr is the Technology Editor for Baseline Magazine, a Ziff Davis publication focused on information technology and its management, with an emphasis on measurable, bottom-line results. He wrote two of Baseline's cover stories on the role of technology in disaster recovery, one on the response to the tsunami in Indonesia and another on the City of New Orleans after Hurricane Katrina. David has been the author or co-author of many Baseline Case Dissections on corporate technology successes and failures (such as the role of Kmart's inept supply chain implementation in its decline versus Wal-Mart, or the successful use of technology to create new market opportunities for office furniture maker Herman Miller). He has also written about the FAA's halting attempts to modernize air traffic control, and in 2003 he traveled to Sierra Leone and Liberia to report on the role of technology in United Nations peacekeeping. David joined Baseline prior to the magazine's launch in 2001 and helped define popular elements of the magazine such as Gotcha!, which offers cautionary tales about technology pitfalls and how to avoid them.
