Virtualization Is the New Clustering

By David Strom  |  Posted 2008-07-30

As enterprises become more involved in virtualizing their application servers, they are finding that virtualization can deliver more than just better utilization of their computing resources. High availability and rapid server failover, previously the province of clustered servers, are now available through virtualization for less money and less hassle than pure-play clustering products have required.

The combination of better resource use, reduced power and cooling in the data center, and more manageable applications delivery has made virtualization a very popular solution. As IT shops gain more expertise in delivering virtualized applications, they can also get a better handle on the kinds of load balancing and availability issues that once were the exclusive domain of clustering solutions.

Indeed, virtualization continues to complement clustering, and for applications that don’t require the up-to-the-nanosecond, transaction-level failover that clustering provides, it is sometimes a less expensive replacement.

“We are at a point where constantly changing business requirements, coupled with perennial pressure on IT to lower costs, are forcing the two to come together and scale infrastructure resources in and out in real time based on workloads,” says Anurag Chauhan, a consultant on data center technologies with Accenture in Chicago. “This is creating an architecture in which resources are virtualized, but also aggregated or clustered for higher processing loads through smart policies.”

The intersection of both technologies has produced a big business benefit that is widening the appeal: disaster-recovery protection. “In the past, you needed to buy another physical server in case the primary machine went down,” says Bob Williamson, a senior vice president at Steeleye, a specialized virtualization vendor. “By using virtualization and hosting these servers at a remote location, enterprises can use the machines if their data center goes out. That lowers the entry cost for deploying wider-area disaster recovery and opens up this protection to a whole new set of companies that haven’t been able to consider it before.”

Clustering may still be required in specific cases. “If you are using some sort of transaction-processing system where you need to preserve the state of the server, then you are going to need some special-purpose clustering solution,” says Ken Oestreich, director of product management for Cassatt, which makes automation tools for managing virtualized sessions. “But the majority of the applications in the data center don’t need this level of granularity.”

Driving the Convergence

Several factors have come together to make this trend possible. First, the mainstream virtualization vendors, such as Citrix, Microsoft, Novell and VMware (see table below), are getting more adept at migrating applications between virtual and physical servers. This makes it easier for IT managers to virtualize their applications and to understand how those applications can be duplicated in the event of a communications outage or disaster.

Microsoft is helping things along by giving away its Hyper-V virtualization software as part of the Windows Server 2008 64-bit edition. With a few mouse clicks, you can turn a physical application into a virtual one. Novell’s PlateSpin and Cassatt’s Active Response both have migration tools that can move workloads from physical to virtual environments and vice versa with relative ease.

Also giving virtualization a boost is the fact that IT managers are doing a better job of understanding which of their applications can tolerate longer failovers, thus making them more suitable for virtualized solutions. “Eighty percent of data center applications don’t need to preserve their session state,” Oestreich says. “If you can tolerate a failover of one or two minutes on your less critical applications, then you can deploy virtualization.”

In those cases, virtualization is a way to deliver business continuity at relatively low cost. But IT must understand how long an outage is acceptable while a failed resource is restarted, and must know which applications are not transaction-based and can therefore tolerate small gaps in uptime.

“There are still times when you need clustering, such as when you can’t afford to lose a single transaction and have to restart this transaction on the new machine after a failover,” says Carl Drisko, an executive and data center evangelist at Novell. “If your virtual machine [VM] goes down, anything being processed in memory is going to be lost.”

In addition, there are specialty vendors that focus on providing better automation and high-availability orchestration solutions. “This software allows multiple systems to watch one another and restart on another system after a failure,” explains Dan Kusnetzky, a virtualization consultant based in Osprey, Fla. “In this case, it is possible to substitute the combination of virtual machine software and orchestration/automation software for clustering software.”
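As a rough illustration of that watch-and-restart pattern, here is a minimal heartbeat watchdog sketched in Python. The host names, heartbeat port and helper functions are hypothetical, and the restart step is a placeholder where a real tool would call a hypervisor management API; none of this is any particular vendor's orchestration product.

```python
# Minimal watchdog sketch: probe each host's heartbeat port and, when a
# host stops answering, restart its VM images on a surviving host.
# Inventory, port number and helpers are illustrative assumptions.

import socket
import time

# Hypothetical inventory: which VM images run on which physical host.
INVENTORY = {
    "host-a": ["vm-web-1", "vm-web-2"],
    "host-b": ["vm-db-1"],
}

HEARTBEAT_PORT = 9900                 # assumed port each host answers on
MISSED_BEATS_BEFORE_FAILOVER = 3


def host_is_alive(host, timeout=2.0):
    """Treat a successful TCP connect to the heartbeat port as a live host."""
    try:
        with socket.create_connection((host, HEARTBEAT_PORT), timeout=timeout):
            return True
    except OSError:
        return False


def restart_vm_on(vm_name, target_host):
    """Placeholder: a real tool would call the hypervisor's management API."""
    print(f"restarting {vm_name} on {target_host}")


def watch(poll_interval=10):
    """Loop forever, counting missed heartbeats and failing over after three."""
    missed = {host: 0 for host in INVENTORY}
    while True:
        for host, vms in INVENTORY.items():
            if host_is_alive(host):
                missed[host] = 0
                continue
            missed[host] += 1
            if missed[host] == MISSED_BEATS_BEFORE_FAILOVER:
                survivors = [h for h in INVENTORY if h != host and host_is_alive(h)]
                if survivors:
                    for vm in vms:
                        restart_vm_on(vm, survivors[0])
        time.sleep(poll_interval)


if __name__ == "__main__":
    watch()
```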

“The amount of time it takes to start an existing VM is a lot less than it takes to boot a physical server because you’re just taking the VM image and loading it into memory,” says Novell’s Drisko. “This can save minutes of precious time, especially when compared to the boot time of a heavily loaded Windows Server. We’ve been able to bring up a VM in milliseconds with the right kinds of network-optimized storage interconnect fabric.”

Having this fast-restart feature has made it easier for enterprises to manage patches and server operating system updates, because they can readily bring up a new instance of their most current OS environment with a virtual machine.

Managing and Monitoring

Virtualization vendors are offering more mature management and migration features, and their tools can examine the actual operation of the applications hosted inside the virtual machines. Take a server farm with a dozen machines, all delivering a Web application.

If an enterprise has sized that farm for peak load, there will be plenty of other times when many of those machines are doing little or no work. The ideal solution would spin up new application-server instances when load rises and spin idle ones down when it falls, in order to meet a particular service-delivery metric while keeping power and cooling costs to a minimum.
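A toy scaling policy makes the idea concrete. The sketch below is not any vendor's feature; the response-time target and the instance bounds are assumed numbers, and the function simply answers how many application-server VMs should be running given the latest measurement.

```python
# Toy scale-out/scale-in policy: compare a measured service metric against
# a target and nudge the VM count up or down within fixed bounds.
# Thresholds and limits here are illustrative assumptions.

def desired_instance_count(current, avg_response_ms,
                           target_ms=200.0, min_instances=2, max_instances=12):
    """Return how many application-server VMs should be running."""
    if avg_response_ms > 1.2 * target_ms:       # too slow: add capacity
        wanted = current + 1
    elif avg_response_ms < 0.5 * target_ms:     # lots of headroom: shed an idle VM
        wanted = current - 1
    else:
        wanted = current                        # within the comfort band: hold steady
    return max(min_instances, min(max_instances, wanted))


# A dozen servers sized for peak load, but off-peak traffic is light:
print(desired_instance_count(current=12, avg_response_ms=60.0))   # -> 11
print(desired_instance_count(current=4, avg_response_ms=350.0))   # -> 5
```

A real controller would run this check on a schedule, smooth the metric over several samples, and hand the result to the hypervisor to actually start or stop VMs.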

“If you have only physical-machine monitoring in place, you don’t always know if a machine has gone down in a virtual environment,” says Drisko.

The problem is that not all monitoring tools can deliver the same level of insight and automated responses. For example, VMware’s High Availability software can detect whether a VM is actually running on a host, but it can’t deliver any insight into whether the application inside the VM—such as an Exchange or database server—is operational. Plus, VMware is only looking at the VMs and not the physical server applications.

Steeleye and Cassatt can monitor both physical and virtual applications. If the VM’s host dies, or if the application inside the VM hangs, these tools can automatically bring up another host and a working VM with the same IP address, so the application keeps running.
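To show what application-level monitoring adds over a simple is-the-VM-running check, here is a small Python sketch that probes the application's own port and, if it gets no answer, triggers a failover to a standby VM that takes over the same IP address. The address, port and fail_over_to_standby helper are hypothetical stand-ins, not the Steeleye or Cassatt APIs.

```python
# Application-level health check: a VM can be "up" while the application
# inside it is hung, so probe the application's own listening port.
# The IP, port and failover helper below are illustrative placeholders.

import socket

SERVICE_IP = "10.0.0.50"    # assumed floating IP that clients connect to
SERVICE_PORT = 5432         # e.g., the database port inside the VM


def application_responds(ip, port, timeout=3.0):
    """Return True only if the application itself accepts a connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False


def fail_over_to_standby():
    """Placeholder: start the standby VM and move SERVICE_IP to it through
    the hypervisor or network management layer."""
    print(f"standby VM started; {SERVICE_IP} reassigned")


if not application_responds(SERVICE_IP, SERVICE_PORT):
    fail_over_to_standby()
```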

To match workloads with resources, Novell’s Drisko suggests this analysis (a minimal sketch of the same steps in code follows the list):

1. Identify what resources are contained in the data center.

2. Determine what workloads are running on them.

3. Examine where those workloads should be assigned.

4. Understand how to optimize them as your workloads and needs change.
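As a minimal illustration of those four steps, the sketch below treats them as data: an inventory of hosts, the workloads measured on them, a simple greedy placement, and a list of hosts left idle that could be powered down or repurposed. Every host name, capacity and demand figure is a made-up number for illustration.

```python
# 1. Resources in the data center (host -> capacity in arbitrary "CPU units").
hosts = {"host-a": 16, "host-b": 16, "host-c": 8}

# 2. Workloads currently running (workload -> measured demand, same units).
workloads = {"web": 4, "app": 6, "db": 8, "batch": 3}

# 3. Assign each workload, largest first, to the fullest host that still
#    fits it (a simple best-fit-decreasing packing).
placement, free = {}, dict(hosts)
for name, demand in sorted(workloads.items(), key=lambda w: -w[1]):
    target = min((h for h in free if free[h] >= demand),
                 key=lambda h: free[h], default=None)
    if target:
        placement[name] = target
        free[target] -= demand

# 4. Hosts left completely idle are candidates to power down or repurpose
#    as workloads and needs change.
idle = [h for h, capacity in free.items() if capacity == hosts[h]]
print(placement)   # e.g., {'db': 'host-c', 'app': 'host-a', ...}
print(idle)        # e.g., ['host-b']
```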