Spiral Development Beats Spiraling Costs

There is more to the FAA’s spiral development process than an iterative, try and try again methodology.

“It is being commonly practiced—and in some cases, commonly malpracticed,” says Barry Boehm, director of the Center for Software Engineering at the University of Southern California.


Many corporate technology projects start with a prototype that is refined through many iterations until it’s declared a production system.

But unless the development cycles are organized around a risk assessment, Boehm says it’s easy to fall victim to “hazardous spiral look-alikes.” For example, a project may win approval on the basis of a dynamite prototype with a fatal flaw—the developers made it look impressive by giving the demo computer far more memory and network bandwidth than would ever be available in the field. The project fails because of an unsound architecture.

Connecting development cycles with risk makes all the difference, says Boehm, who has promoted the spiral approach since the 1980s, after working on government contracts for TRW.

The key, he says, is to organize the project plan around eliminating the biggest risks as early as possible. Each development cycle includes a reassessment of risks and assumptions, the creation of a functioning prototype, an evaluation of lessons learned, and a go/no-go decision on the next phase.
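The cycle Boehm describes can be sketched in a few lines of Python. This is a minimal illustration, not FAA or Boehm code; the risk names, severity scores, retirement factor, and threshold are all invented for the example.

```python
# Sketch of one risk-driven spiral cycle (illustrative only; the
# risks, scores, and 0.5 threshold below are invented assumptions).

def spiral_cycle(risks, threshold=0.5):
    """Run one spiral iteration: attack the biggest risk first,
    then make a go/no-go call on the next phase."""
    # Reassess: sort so the riskiest assumption is tackled first.
    risks.sort(key=lambda r: r["severity"], reverse=True)
    top = risks[0]

    # Prototype against the top risk, then fold in lessons learned.
    # Here we simply assume the prototype retired half of that risk.
    top["severity"] *= 0.5

    # Go/no-go: proceed only if no remaining risk exceeds the threshold.
    go = all(r["severity"] <= threshold for r in risks)
    return go, risks

risks = [
    {"name": "response time on large queries", "severity": 0.9},
    {"name": "site-specific airspace rules", "severity": 0.4},
]
go, risks = spiral_cycle(risks)
print(go)  # True: the worst risk dropped from 0.9 to 0.45
```

The point of the sketch is the ordering: the prototype is aimed at the single biggest risk, and the go/no-go decision falls out of the reassessment rather than being tacked on at the end.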

The spiral approach is a reaction to the “waterfall” methodology, commonly illustrated as an irreversible flow down a series of steps, from the original determination of objectives and requirements to the production software that emerges at the bottom. The waterfall approach can be appropriate when the requirements and development challenges are clearly understood, Boehm says. But it can be disastrous to commit to requirements up front, before their impact is clear.

Boehm often uses the example of a government contract TRW won from “one of those agencies that prefers not to see its name in print.” The only detail he can share is that the contract required a 3-second response time on queries against a large database. After agreeing to these terms, TRW discovered it could meet that requirement only with an exotic client-server architecture that would inflate the project cost to $100 million. Relaxing the requirement to a 4-second response time, good enough to satisfy 90% of the users, would have cut development costs to $30 million.

The FAA’s best example of spiral development at work comes from a Free Flight project called the passive Final Approach Spacing Tool, or pFAST, that was supposed to increase the rate of landings at major airports. This tool was targeted at Terminal Radar Approach Control (TRACON) facilities as part of a $224 million program called the Center TRACON Automation System (CTAS) that grew out of NASA research.

CTAS also includes the Traffic Management Advisor (TMA), which is designed to smooth out the rate at which En Route controllers, who manage cross-country flights, hand off planes to their TRACON counterparts, who bring them in for a landing. TRACON controllers would then use pFAST to schedule the final arrival sequence and runway assignments.

Together, TMA and pFAST demonstrated promising results at the Dallas-Ft. Worth International Airport. The FAA began talking about a 3-5% improvement at airports nationwide. But problems cropped up at Los Angeles International Airport. Suddenly, it became apparent that what had worked in Dallas wasn’t going to cut it at LAX, with its more congested airspace and constellation of smaller, surrounding airports. Tailoring software to match peculiarities of each location is a big issue for many air traffic control systems, and it looked like pFAST would require extensive rework every time.

If the project had continued, the FAA would have had to either significantly expand the budget “or go in with bulldozers and make every site look like Dallas,” says Free Flight Director John Thornton.

But the story doesn’t end there. At the same time controllers in Los Angeles were panning pFAST, they were working with project engineers on an alternative. By adapting code from the Traffic Management Advisor, they conceived a new tool, called CTAS Terminal, that can deliver improvements similar to those promised by pFAST, while addressing complexities pFAST missed.

For the FAA, spiral development is a way of resolving usability problems and winning over controllers and their union.

“I think you really do need to have their involvement because if they don’t want to use [the new system], they’re not going to,” says Ellen Bass, a University of Virginia professor specializing in human factors who worked on the FAA’s ill-fated Advanced Automation System, developed in the ’90s.

Sometimes what your assessment shows is that your original plan was way too optimistic, as happened with pFAST.

“A big part of this is not to throw good money after bad—and that’s why the pFAST decision was so important,” says Amr El Sawy, general manager of Mitre’s Center for Advanced Aviation Systems Development, a federally funded research lab.

Jim Marple, a spiral development and risk management expert at the Software Productivity Consortium who has led training programs at the FAA, contrasts that with the Advanced Automation System, which was cancelled after a $2.6 billion investment. In his opinion, not the consortium’s, “the total problem with AAS was they didn’t pull the plug soon enough.”