Scale Diagonally

By David F. Carr  |  Posted 2008-06-16
Google proved that a large distributed server infrastructure works as well—if not better—than consolidated high-end servers. But what works for Google may not work for everyone. The decision to scale up or out depends on your organization’s needs.

The scale-up vs. scale-out decision is not always an either/or situation, explains Sun’s Atwood. “There’s nothing that says when you do a cluster it has to be two-processor servers.” Some Sun customers in finance and telecommunications are running clusters of Sun’s big 72-processor servers, he says, and many others are pursuing what Atwood calls “diagonal scaling”: clustering multiple midsize to large servers.

SDI’s Ragas concedes that a scale-up strategy comes with its own challenges in Windows environments, where it is less established. For example, some of the software he runs on the ES7000/one, including the Oracle database and SAS analytics, isn’t tuned to run in a 64-bit Windows environment without customization and workarounds, he says.

On Unix, scaling up makes sense more often than not, says Mark Graham, an independent consultant who specializes in configuration and support of HP Superdome servers. He sees greater potential for virtualization on high-end machines than most companies are willing to embrace, simply because of the value application owners place on having their own separate servers. “It’s really kind of an emotional thing, which is frustrating to someone like me who sees the value in consolidation,” Graham says.

The company that puts one Superdome in place of a dozen other servers will reap savings in power consumption, data center floor space and administration, Graham says, so the total cost of ownership is likely to be better, despite the premium pricing of a high-end server.
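Graham’s argument can be made concrete with a back-of-envelope model. The sketch below is illustrative only: the `annual_tco` function and every figure in it are hypothetical assumptions, not numbers from the article. The point is simply that amortized hardware, power, floor space and administration all belong in the comparison, and that a premium-priced machine can still come out ahead once the recurring costs of a dozen boxes are added up.

```python
# Back-of-envelope TCO comparison: one consolidated high-end server
# vs. a dozen commodity servers. ALL figures below are hypothetical
# placeholders -- substitute your own hardware quotes, facilities
# rates and staffing estimates before drawing any conclusion.

def annual_tco(hardware_cost, years, power_kw, rate_per_kwh,
               rack_units, cost_per_u_year, admin_hours_year, admin_rate):
    """Rough annual cost: amortized hardware + power + floor space + admin."""
    amortized = hardware_cost / years                     # straight-line amortization
    power = power_kw * 24 * 365 * rate_per_kwh            # draw at steady load
    space = rack_units * cost_per_u_year                  # data center floor space
    admin = admin_hours_year * admin_rate                 # sysadmin labor
    return amortized + power + space + admin

# One consolidated high-end server (hypothetical figures)
scale_up = annual_tco(hardware_cost=400_000, years=4, power_kw=8,
                      rate_per_kwh=0.10, rack_units=20,
                      cost_per_u_year=200, admin_hours_year=300,
                      admin_rate=75)

# Twelve two-socket commodity servers (hypothetical figures)
scale_out = 12 * annual_tco(hardware_cost=8_000, years=4, power_kw=0.5,
                            rate_per_kwh=0.10, rack_units=2,
                            cost_per_u_year=200, admin_hours_year=120,
                            admin_rate=75)

print(f"scale-up:  ${scale_up:,.0f}/yr")
print(f"scale-out: ${scale_out:,.0f}/yr")
```

With these made-up inputs the consolidated box comes out ahead, and the gap is driven almost entirely by administration hours, which is exactly where Graham says the savings accumulate; change the staffing assumption and the answer flips.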

Chris Reavis, director of enterprise infrastructure at wind and geothermal power specialist PPM Energy, believes both scale up and scale out have their place. As a former technical marketing manager at high-end server vendor Silicon Graphics, he has seen the issue from both the vendor and customer sides. In fact, one of his roles at SGI was consulting with customers on horizontal vs. vertical scaling strategies.

“Typically, for the back-end database and anything to do with heavy business logic, compute I/O or data I/O, I tend to go with vertical scaling,” Reavis says. “On the other hand, for front-end commodity computing, it tends to make sense to go with a simple rack-and-stack strategy for Web servers, FTP servers and e-mail servers.”

Reavis says he has seen the horizontal strategy work at operations as big as AOL and Yahoo, and it’s not particularly tough to manage for an organization with some competency in using automated deployment and systems management tools. He also has experienced success using Linux-based grid computing for data analysis.

Reavis has resisted the trend toward server virtualization and consolidation, considering it simpler to run applications on individual servers than to deal with the complexities of virtualization.

“I work with a lot of third-party code that may or may not play well with others, so I don’t want to mess around,” he says. “It’s just not worth the business risk. But for things like Oracle or our e-business applications, I go with scale up.”

For Reavis’ midsize enterprise at PPM Energy, four-socket Dell servers with quad-core Intel processors provide plenty of computing power. Even having worked for a vendor of high-end server hardware—and seen the value it brought to the financial services and scientific computing applications where it was employed—he concluded it would be overkill for PPM.

“If I were in banking or insurance, I would take a totally different approach,” Reavis says. “In some ways, we’re a little closer to a Yahoo, where we can afford to throw away a front-end box that dies.”

David F. Carr is the Technology Editor for Baseline Magazine, a Ziff Davis publication focused on information technology and its management, with an emphasis on measurable, bottom-line results. He wrote two of Baseline's cover stories on the role of technology in disaster recovery: one on the response to the tsunami in Indonesia and another on the City of New Orleans after Hurricane Katrina.

David has been the author or co-author of many Baseline Case Dissections on corporate technology successes and failures (such as the role of Kmart's inept supply chain implementation in its decline versus Wal-Mart, or the successful use of technology to create new market opportunities for office furniture maker Herman Miller). He has also written about the FAA's halting attempts to modernize air traffic control, and in 2003 he traveled to Sierra Leone and Liberia to report on the role of technology in United Nations peacekeeping.

David joined Baseline prior to the magazine's launch in 2001 and helped define popular elements such as Gotcha!, which offers cautionary tales about technology pitfalls and how to avoid them.