Scaling Up or Out?

By David F. Carr  |  Posted 2008-06-16
Google proved that a large distributed server infrastructure can work as well as, if not better than, consolidated high-end servers. But what works for Google may not work for everyone. The decision to scale up or out depends on your organization’s needs.

Understanding patients’ needs for medical supplies, treatments and medications is critical for cost-conscious health care providers. If providers can predict prescription refill rates, they can better manage low-cost mail-order drug fulfillment, saving them and their patients money. Surveillance Data Inc. (SDI) was founded on that principle, mining the mountain of data produced by hospitals, health care facilities and pharmacies for trends, patterns and anomalies to help providers optimize their services.

Offering this type of monitoring and analysis requires tremendous computing power and storage. SDI took the approach that has worked so well for Google, Yahoo and Facebook: a distributed, or horizontal, infrastructure composed of low-end commodity servers. The scaled-out server farm was an efficient computational engine, but it eventually pushed the limits of SDI’s performance and storage capacity.

“We hit diminishing returns,” says SDI CIO Don Ragas. “As the volume of data was going up, the throughput of the servers was degrading.”

“The guy who had the job prior to me was basically mimicking the Google strategy,” Ragas continues. “He was doing scale-out on—well, I shouldn’t say cheap servers because they were pretty good—HP 385s and things like that. But he ran the whole data center on that approach, and when a server was maxed out, he would add another server.”

Solving SDI’s capacity problem required scaling the computing infrastructure up, or vertically, with high-end, fault-tolerant servers that have multiple processors and boatloads of memory. When SDI consolidated its data warehouse, which processes 60 TB of data, onto a single Unisys ES7000/one server, it eliminated all the network overhead of multiple database servers communicating to share data.

“We did not have to add staff by going with the scale-up strategy,” Ragas says. “If we had continued with scale-out, we would have had to add more people.”
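The overhead Ragas describes is easiest to see in a sketch. The Python fragment below is a deliberately simplified, hypothetical illustration of scatter-gather aggregation in a scaled-out warehouse; it is not SDI’s or Unisys’ actual software, and the node count, shard function and merge step are all invented for the example:

```python
# Minimal sketch of scatter-gather aggregation in a scaled-out warehouse.
# Hypothetical illustration only -- not SDI's or Unisys' actual software.

from collections import defaultdict

NODES = 4  # commodity servers in the scale-out tier (assumed)

def shard(record_key: str) -> int:
    """Hash-partition a record across the commodity nodes."""
    return hash(record_key) % NODES

def scatter(records):
    """Distribute records to nodes; in production this is a network send."""
    shards = defaultdict(list)
    for key, value in records:
        shards[shard(key)].append(value)
    return shards

def gather_sum(shards):
    """Each node computes a partial sum; a coordinator merges them.
    Every merge implies cross-server traffic -- the overhead that
    disappears when the warehouse lives on one large server."""
    partials = [sum(values) for values in shards.values()]
    return sum(partials)

if __name__ == "__main__":
    refills = [("patient-%d" % i, 1) for i in range(1_000)]
    print(gather_sum(scatter(refills)))  # 1000
```

On a single large server, the scatter and the final merge happen in shared memory rather than over the network, which is the traffic SDI shed by consolidating.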

Many enterprises at some point face the same question about their server infrastructure: Scale up or scale out? While businesses of all sorts have discovered uses for grids, clusters and server farms populated by racks of commodity servers, vertical scaling remains alive and well in the enterprise, where many technology managers still tout the virtues of scaling up one or more high-powered servers.

At the high end, servers with eight or more processor sockets tend to come from IBM, Hewlett-Packard and Sun Microsystems and mostly run Unix. However, Unisys has built a business around offering servers with as many as 32 sockets to support the largest installations of Windows Datacenter Edition. HP’s Superdome servers, built around the Intel Itanium processor, can also run Windows, or a combination of Windows, Linux and HP-UX Unix in separate partitions.

The number of sockets installed may become less significant with the advent of multicore processors, which duplicate the core functions of two or more processors on a single chip. In other words, a 32-socket server fully loaded with dual-core processors would in some ways match the capacity of a 64-processor machine. However, two cores on a single chip may not have the clock speed, memory access and input/output capacity of two processors on separate chips. Still, multicore designs are boosting processing power overall, with quad-core processors now becoming more prevalent. Sun has introduced an eight-core processor, and the trend toward higher numbers of cores is likely to continue.
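The arithmetic behind that comparison, and the caveat that follows it, can be made concrete. The figures below are illustrative defaults, not benchmarks of any particular machine:

```python
# Back-of-the-envelope core math; illustrative figures, not benchmarks.

def nominal_cores(sockets: int, cores_per_socket: int) -> int:
    """Raw core count: the sense in which 32 dual-core sockets
    'match' a 64-processor machine."""
    return sockets * cores_per_socket

assert nominal_cores(32, 2) == nominal_cores(64, 1) == 64

# The caveat: the cores on a chip share that socket's memory and I/O
# interfaces, so per-core bandwidth shrinks as cores per socket rise.
for cores in (1, 2, 4, 8):  # single- through eight-core chips
    print(cores, "core(s) per socket ->", 1 / cores, "memory interface per core")
```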

The number of processor sockets in a server is also only one measure of scalability. The highest of high-end servers sport greater memory capacity, as well as maintenance, reliability and expansion features you won’t find on typical two-socket models. For example, some models allow processors, disks and other components to be swapped out while the server is running. These features pay off in reliability and ease of maintenance, in addition to scalability.



David F. Carr is the Technology Editor for Baseline Magazine, a Ziff Davis publication focused on information technology and its management, with an emphasis on measurable, bottom-line results. He wrote two of Baseline's cover stories on the role of technology in disaster recovery, one on the response to the tsunami in Indonesia and another on the City of New Orleans after Hurricane Katrina. David has been the author or co-author of many Baseline Case Dissections on corporate technology successes and failures (such as the role of Kmart's inept supply chain implementation in its decline versus Wal-Mart, or the successful use of technology to create new market opportunities for office furniture maker Herman Miller). He has also written about the FAA's halting attempts to modernize air traffic control, and in 2003 he traveled to Sierra Leone and Liberia to report on the role of technology in United Nations peacekeeping. David joined Baseline prior to the magazine's launch in 2001 and helped define popular elements of the magazine such as Gotcha!, which offers cautionary tales about technology pitfalls and how to avoid them.
 
 
 
 
 
 
