Primer: Grid Computing

By David F. Carr  |  Posted 2006-05-06

Grid computing harnesses the power of many computers for a common task.
What is it? An approach to pooling the computational resources of many computers—typically, low-cost ones built from commodity components—to accomplish tasks that otherwise would require a more powerful computer or supercomputer. A true grid should be more flexible than a traditional server cluster or server farm, where many machines are assigned to perform the same function in parallel. That is, it ought to be possible to dynamically reassign computers participating in a grid from one task to another, or to make grids at different locations cooperate to perform a demanding task.
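To make that concrete, here is a minimal sketch in Python of the scheduling pattern described above. It is illustrative only: local threads stand in for remote machines, all names are invented for the example, and a real grid would rely on middleware to handle scheduling, security and data movement across a network.

    # Illustrative sketch: a coordinator hands independent work units to
    # whichever worker is free next, the way a grid dynamically assigns
    # tasks to available nodes. Threads stand in for remote machines.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def work_unit(chunk):
        # Placeholder for a compute-heavy task, such as running one
        # simulation scenario or analyzing one slice of telescope data.
        return sum(x * x for x in chunk)

    # Split one big job into independent pieces.
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]

    # Four workers play the role of four grid nodes; each worker that
    # finishes a chunk immediately picks up the next pending one.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(work_unit, c) for c in chunks]
        total = sum(f.result() for f in as_completed(futures))
    print(total)

The property the sketch preserves is that no piece of work is tied to a particular machine, which is what lets a grid reassign computers from one task to another.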

Who came up with this definition? Argonne National Laboratory computer scientist Ian Foster has been the most outspoken advocate of what he calls The Grid, a vision in which computing power will eventually flow worldwide like electricity on the power grid. According to Foster and his colleagues, a grid becomes a grid when it crosses organizational boundaries (for example, between companies and independent departments) and uses standard protocols to accomplish a significant task.

What are some examples? SETI@home, the Search for Extraterrestrial Intelligence organization's effort to harness idle computer cycles on the world's PCs to analyze radio-telescope data for evidence of intelligent signals, is a grid built on volunteer resources. Hewlett-Packard has designed a corporate version of such a "cycle scavenging" grid for an automaker, using idle time on engineering workstations to perform simulation tasks. There are many other examples of academic grids performing scientific number-crunching, sometimes with computers at many universities linked to solve a given problem. The early commercial examples also often have a scientific or engineering bent, such as genetic analysis within biotech firms or oil field analysis by petroleum companies. Other number-crunching applications include analytic models run by financial institutions.
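The "cycle scavenging" idea behind SETI@home and the HP design can be sketched in a few lines of Python. This is a deliberate simplification with invented names: a real volunteer-computing agent such as the BOINC client also watches for keyboard and mouse activity, checkpoints partial results and throttles its own CPU use, while this version gates only on load average (a Unix-specific measure).

    # Simplified sketch of a cycle-scavenging agent: fetch and process
    # work units only while the machine looks idle. Names are invented;
    # os.getloadavg() is available on Unix-like systems only.
    import os
    import time

    IDLE_THRESHOLD = 0.5  # treat a 1-minute load average below this as idle

    def fetch_work_unit():
        # Stand-in for downloading a work unit from the project's server.
        return range(1_000_000)

    def process(unit):
        # Stand-in for the real analysis (signal search, simulation, etc.).
        return sum(unit)

    for _ in range(10):  # a real agent would loop for as long as it runs
        one_minute_load, _, _ = os.getloadavg()
        if one_minute_load < IDLE_THRESHOLD:
            result = process(fetch_work_unit())
            # A real agent would upload the result to its coordinator here.
        time.sleep(60)  # re-check idleness once a minute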

What about similar buzzwords? There's a lot of overlap, particularly with utility computing and on-demand computing. Utility computing, however, is also associated with a particular business model, in which users or organizations pay only for the computing cycles they consume. On-demand computing refers to letting the amount of computing power available to applications or organizations expand and contract with demand.

Similarly, many grid computing standardization efforts focus on using XML Web services to let nodes in a grid communicate with each other. And service-oriented architecture (SOA) is often mentioned as being complementary to grid computing because defining the components of a system as loosely coupled services is one way of dividing up the processing workload between nodes. However, a system built around SOA principles and Web services is not necessarily a grid, and not all grids incorporate Web services.
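As a toy illustration of nodes exposing their capabilities as services, the following Python sketch uses XML-RPC from the standard library; the grid standards of this era actually specified richer SOAP-based protocols (such as the Web Services Resource Framework), and the port and method names here are made up.

    # Toy example of a grid node exposing a compute operation as an
    # XML-based web service. XML-RPC stands in for the heavier protocols
    # grid standards define; the port and names are illustrative.
    from xmlrpc.server import SimpleXMLRPCServer

    def run_task(numbers):
        # A loosely coupled service that any other node could invoke
        # without knowing anything about the machine hosting it.
        return sum(numbers)

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(run_task, "run_task")
    server.serve_forever()

Another node would invoke it with xmlrpc.client.ServerProxy("http://localhost:8000").run_task([1, 2, 3]); the caller depends only on the service interface, not on the hardware or location behind it.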

Where do I get this technology? The Globus Alliance, formed by Foster and his allies, is working on standards for grid computing and offers an open-source software package called the Globus Toolkit. However, most grids today are either custom-built or created with proprietary technology from vendors such as Platform Computing and United Devices. Sun Microsystems offers racks of computers pre-configured for grid computing, and will sell you time on The Sun Grid, a utility computing offering. IBM and HP offer their own assortments of hardware, software, utility computing services and consulting.

In addition, SAS has produced a grid version of its statistical analysis package, in partnership with Platform Computing, and SAP is also grid-enabling some of its software.

David F. Carr is the Technology Editor for Baseline Magazine, a Ziff Davis publication focused on information technology and its management, with an emphasis on measurable, bottom-line results. He wrote two of Baseline's cover stories on the role of technology in disaster recovery, one on the response to the tsunami in Indonesia and another on the City of New Orleans after Hurricane Katrina. David has been the author or co-author of many Baseline Case Dissections on corporate technology successes and failures (such as the role of Kmart's inept supply chain implementation in its decline versus Wal-Mart, or the successful use of technology to create new market opportunities for office furniture maker Herman Miller). He has also written about the FAA's halting attempts to modernize air traffic control, and in 2003 he traveled to Sierra Leone and Liberia to report on the role of technology in United Nations peacekeeping. David joined Baseline prior to the magazine's launch in 2001 and helped define popular elements of the magazine such as Gotcha!, which offers cautionary tales about technology pitfalls and how to avoid them.