Complex Challenges

By David Strom  |  Posted 2009-05-12

As you dive deeper into virtual machine technology, you need a solid understanding of the issues involved, particularly when you run your VMs on what are called "bare-metal hypervisors."

But bare-metal hypervisors have their own complexities. The first issue is how well they can be managed in typically dense enterprise environments, including adding new VM guests to a physical server, converting existing physical machines into VM guests and performing routine OS upgrades and maintenance on the guest machines.

Burton Group’s Wolf recently assembled a long list of requirements to help enterprises select the most appropriate hypervisor and notes that “only VMware’s ESX satisfies all of our criteria. All the others are missing some elements, such as live migration in Microsoft’s Hyper-V, enterprise-class support in Virtual Iron and role-based access controls in Citrix’s XenServer.” He recommends looking carefully at your own needs and focusing on these management tasks before picking a vendor.

The second issue involves the subtle differences between hypervisor and nonhypervisor environments, even from the same vendor. “You need to keep in mind that there are different sets of features between the various versions of VMware Workstation and ESX,” says James Sokol, senior vice president and CTO for the Segal Group, a benefits consultancy based in New York. “There have been occasions where a new feature in VMware Workstation was not yet available on ESX.”

Segal uses 12 physical servers running ESX with an average of 15 guests on each. VMware makes two versions of ESX: The free version, ESXi, has a smaller memory footprint and lacks features that come with the full-blown, fee-based ESX version.

Microsoft’s Hyper-V comes in two versions as well: One comes as part of its 64-bit Windows Server 2008 OS, and the other, Hyper-V Server, is a standalone, free version that’s labeled a bare-metal hypervisor but is basically a command-line version of Windows that runs its guest VMs. Some people find that Hyper-V Server isn’t much of an operating system, particularly for denser virtualized setups that require multiple network cards and storage adapters.

“We found Hyper-V Server to be next to impossible to configure, since we use 10 adapters in three-node clusters on each of our servers,” says Frank Smith, an IT manager at Lionbridge Technologies, based in Waltham, Mass. The company ended up using the built-in Hyper-V technology in Windows Server 2008, which is much easier to configure and “requires very little understanding of the underlying physical architecture.” Lionbridge is now running 20 physical servers with up to 25 guest VMs per server.

A third issue is ensuring you have the right hardware to run your hypervisor, because hypervisors can be very picky about the processor and other internals. Part of this decision is choosing the right CPU family, because there are differences in how Intel and AMD chipsets deal with hypervisors. “There is no compatibility between the two platforms,” says Burton Group’s Wolf. “If you start using Intel for your virtual servers, you should stick with that processor family as you add new physical servers.”

In addition to the CPU, any hypervisor needs the BIOS’s hardware virtualization support enabled as well, and most modern servers from the major vendors include this. “A lot of our older servers couldn’t run Hyper-V, and we had to buy some new hardware,” Lionbridge’s Smith says.
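
Because both the processor family and the BIOS setting matter, it can help to script a quick inventory check before earmarking a server as a virtualization host. The sketch below, written in Python for a Linux host and offered purely as an illustration (none of the companies quoted here describe using such a script), reads /proc/cpuinfo to report the CPU vendor, which is useful for keeping new purchases in the same processor family per Wolf’s advice, and to see whether the Intel VT-x (vmx) or AMD-V (svm) flag is present. The flag only shows what the processor supports; whether the BIOS actually has the feature enabled still has to be confirmed in the firmware setup.

    #!/usr/bin/env python
    # Minimal sketch: report CPU vendor and hardware-virtualization flags on Linux.
    # The vmx/svm flags show processor support only; BIOS enablement must be
    # verified separately in the server's firmware setup.

    def check_virt_support(cpuinfo_path="/proc/cpuinfo"):
        vendor, flags = "unknown", set()
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("vendor_id"):
                    vendor = line.split(":", 1)[1].strip()
                elif line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return vendor, "Intel VT-x"
        if "svm" in flags:
            return vendor, "AMD-V"
        return vendor, None

    if __name__ == "__main__":
        vendor, support = check_virt_support()
        print("CPU vendor: %s" % vendor)
        if support:
            print("Virtualization extensions reported: %s" % support)
            print("Confirm they are enabled in the BIOS before installing a hypervisor.")
        else:
            print("No VT-x/AMD-V flag found; this server may not run a bare-metal hypervisor.")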

One way to go is to buy servers with the hypervisor preinstalled. This option is attractive because you are guaranteed that the server has been optimized for the hypervisor. Both ESX and XenServer come preinstalled on a number of vendors’ products, including Dell’s PowerEdge R and M series, Hewlett-Packard’s ProLiant DL and BL series, and IBM’s BladeCenter HS21 XM.

“The uptake on preinstalled hypervisors hasn’t been as much as the vendors predicted, but it makes a lot of sense,” says Burton Group’s Wolf. While Sokol hasn’t done this yet, he is considering it for his future server purchases at Segal.

You also want to make sure that your server has plenty of room for additional dual in-line memory module (DIMM) RAM sticks and enough PCI slots for additional network and storage adapters. “We wanted a very high guest VM density, so we ended up having to buy new servers that could support 32 DIMM slots and lots of PCI cards,” says Lionbridge’s Smith.

Segal’s Sokol says good hypervisor planning means balancing the number of guest VMs with bulking up on the RAM required to best provision each guest VM. “You want to run as many guests per host as possible to control the number of host licenses you need to purchase and maintain,” he says. “We utilize servers with dual quad-core CPUs and 32GB of RAM to meet our hosted server requirements.”

Lionbridge’s Smith says a good rule of thumb for Windows guest VMs is to use a gigabyte of RAM for every guest VM that runs on your servers.
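
Those two guidelines, pack as many guests per host as the hardware allows while budgeting roughly a gigabyte of RAM for each Windows guest, reduce to simple arithmetic. The short Python sketch below illustrates the calculation; the hypervisor overhead and the consolidation target are assumed numbers for illustration, not figures reported by Segal or Lionbridge.

    # Back-of-the-envelope VM capacity planning. All figures below are
    # illustrative assumptions, not numbers reported by Segal or Lionbridge.

    def guests_per_host(host_ram_gb, overhead_gb, ram_per_guest_gb):
        """How many guests fit on one host after the hypervisor's share is set aside."""
        return int((host_ram_gb - overhead_gb) // ram_per_guest_gb)

    def hosts_needed(total_guests, per_host):
        """Round up, since the last host may run partially loaded."""
        return -(-total_guests // per_host)

    if __name__ == "__main__":
        HOST_RAM_GB = 32        # e.g. a dual quad-core server with 32GB of RAM
        OVERHEAD_GB = 2         # assumed RAM reserved for the hypervisor itself
        RAM_PER_GUEST_GB = 1    # Smith's rule of thumb for Windows guests
        TOTAL_GUESTS = 180      # hypothetical consolidation target

        per_host = guests_per_host(HOST_RAM_GB, OVERHEAD_GB, RAM_PER_GUEST_GB)
        print("Guests per host: %d" % per_host)                      # -> 30
        print("Hosts (and host licenses) needed: %d"
              % hosts_needed(TOTAL_GUESTS, per_host))                # -> 6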

