Disarming Hidden IT Project Time Bombs

By Ron Bisaccia

As you read this, danger lurks in IT project portfolios around the world—perhaps even yours. Don’t think so? Neither did Bridgestone, Kmart, Hawaiian Telcom, JCPenney or Avon. At least, they didn’t think so before their failed IT projects brought down the CIO, the company or both.

We’ve all heard these IT project horror stories. Huddled around the glow of the conference room projector, we shake our heads at the lack of foresight (even hubris) of those who went before us. “How could they not see it coming?” we wonder. How, indeed?

After 20 years of being called in to rescue very large and very troubled IT projects, I can assure you: The overwhelming majority do not see it coming, despite excruciatingly detailed projections and planning.

How can you tell whether an IT project will be a boon or a bomb? More importantly, how can you spot the land mines in time to change course? The following three disciplines can help.

1. Measure effort, not cost.

Let’s start with understanding just how much risk your project poses. Over the past decade, most experts have treated project risk as roughly proportional to project size. But what actually determines a project’s size, and how size affects the outcome, has never been clear. Is it budget, schedule, team size (full-time equivalents [FTEs]) or effort (person-months)?

Budget (or cost) is a popular yardstick. Conventional wisdom says that projects above $10 million to $15 million have a much lower success rate than those with smaller budgets. Should we then focus risk mitigation on projects greater than $15 million and breathe easy about the rest? No.

As a measure of risk, budget is too easily skewed. For example, does the project involve expensive yet simple-to-implement hardware or software? Is the work performed onshore or offshore? Are the costs of internal FTEs included in the budget? The answers to these questions can have a dramatic impact on your ability to rely on budget to gauge risk.

Instead, our experience aligns much more closely with a 2007 study that examined the relationship of all four size dimensions to the risk of project failure. It concluded that cost is a poor indicator of risk.

The best indicator? Effort. In fact, the study found that any project estimated to exceed 2,400 person-months of effort is virtually guaranteed to blow its budget and schedule.
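
To make the arithmetic concrete, here is a minimal sketch of how effort, rather than budget, changes the risk picture. Person-months are simply average team size multiplied by schedule length. The project names, staffing figures and budgets below are hypothetical, invented only to illustrate the point.

    # Minimal sketch: person-month effort vs. the 2,400 person-month ceiling.
    # All project figures below are hypothetical, for illustration only.

    EFFORT_RISK_THRESHOLD = 2_400  # person-months, per the 2007 study cited above

    def person_months(avg_team_size_fte, schedule_months):
        """Effort is average full-time staffing multiplied by schedule length."""
        return avg_team_size_fte * schedule_months

    # The expensive project is not necessarily the risky one.
    projects = {
        "ERP consolidation": {"fte": 120, "months": 24, "budget_musd": 12},
        "Hardware refresh":  {"fte": 8,   "months": 12, "budget_musd": 25},
    }

    for name, p in projects.items():
        effort = person_months(p["fte"], p["months"])
        flag = "HIGH RISK" if effort > EFFORT_RISK_THRESHOLD else "within range"
        print(f"{name}: {effort:,.0f} person-months, ${p['budget_musd']}M budget -> {flag}")

In this hypothetical portfolio, the hardware refresh carries the bigger budget but a fraction of the effort, which is exactly why cost alone misleads.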

2. Know your risk tipping point.

Perhaps more relevant, project risk does not grow linearly. Instead, it follows a fairly pedestrian arc as effort increases, until it reaches a certain threshold: a tipping point. Beyond that, the risk of project failure screams into the stratosphere.

Projects that fall below the tipping point have a much lower risk of failure and can usually be estimated using traditional techniques. However, projects above the tipping point have a much higher risk of failure and therefore require a risk-adjusted approach to project estimation.

Where is this tipping point? That is the hard part, because it varies from organization to organization. However, its general position can be found with these three dimensions (a simple screening sketch follows the list):

  • Estimated work: more than 2,000 person-months
  • Schedule: more than 18 months
  • Team size: more than 20 FTEs
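
To show how these dimensions might be turned into an early-warning check, here is a minimal screening sketch. The threshold values are the figures above; the function, the sample project numbers and the wording of the output are illustrative assumptions, not part of any standard tool or method.

    # Rough screening sketch against the tipping-point dimensions above.
    # Thresholds come from this article; everything else is illustrative.

    TIPPING_POINT = {
        "estimated work": 2_000,  # person-months
        "schedule":       18,     # months
        "team size":      20,     # FTEs
    }

    def exceeded_dimensions(effort_pm, schedule_months, team_fte):
        """Return the tipping-point dimensions this project exceeds."""
        actuals = {"estimated work": effort_pm, "schedule": schedule_months, "team size": team_fte}
        return [dim for dim, limit in TIPPING_POINT.items() if actuals[dim] > limit]

    # Hypothetical project: 2,880 person-months, 24-month schedule, 120 FTEs.
    flags = exceeded_dimensions(2_880, 24, 120)
    if flags:
        print("Above the tipping point on: " + ", ".join(flags) + ". Use a risk-adjusted estimate.")
    else:
        print("Below the tipping point; traditional estimation techniques should suffice.")

A project that trips one or more of these flags is a candidate for the risk-adjusted estimation approach described above.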

On the surface, these dimensions appear to measure the overall size of a project. Yet size is really a proxy for the true driver of risk: complexity. Executing a complex project within the tight schedule and budget constraints of most corporate environments is fraught with peril. (And despite popular belief, the choice of methodology, agile versus waterfall, has no appreciable effect on large-project success rates.)

The first risk of complexity stems from ambiguous design and specification. The more complex the project, the more design effort is needed to accurately identify and estimate scope. Unfortunately, the pace of business demands a “figure-out-the-difficult-parts-later” approach to estimation, which virtually ensures that you’ll bake some special surprises into your project.

Surprises can include not truly understanding the business or functional requirements until the project is well under way, counting on software modules that end up not playing nicely together, or underestimating the difficulty of merging and migrating legacy data.