Opinion: John Parkinson on the Ins and Outs of Benchmarking your IT Department

Late in 2005, the CIO of a Fortune 20 company whom I have known for many years asked me to look at benchmarking his IT organization against its peers. On the surface, this was a reasonable request. The CIO wanted to know how well he was doing against some key performance indicators, and whether there were any areas where he was significantly behind the competition on policy, practices and performance.

However, in my view, there were two major potential issues with the request. First, just who were his peers? He's in an industry with only a dozen or so global players, only a handful of whom are really of comparable size. Roughly half of these run their IT largely in-house; the other half are largely outsourced. Valid analysis would therefore be difficult, and meaningful comparisons hard to make. Even if we were to drop industry as a selection criterion, there are only a few companies as large as his, and they are extremely diverse, making useful comparison even harder.

I knew exactly whom he wanted to include in the benchmark study, but beyond the small potential sample size, there is a second problem: the confidentiality of data and results. If the sample is too small, it becomes too easy to use publicly available information to "decode" the supposedly anonymous benchmarks and arrive at a ranked list of your competitors' performance; that's why getting other companies to participate is generally difficult. These two factors, plus the cost of collecting and analyzing the information required, make benchmarking for the largest organizations extremely challenging. And all too often the results can be meaningless or misleading—or both.

Here's an example of how benchmarking results can lead you astray. A few years ago I worked on a major reorganization of an IT group that had had very good comparative benchmarking results for many years. According to the benchmarks, they were the leader in their industry and among the best performers in any industry. Yet when we were done with the reorganization—which introduced new information-driven policies and practices as well as a new organization structure—they had reduced head count from around 700 to just over 250 without any loss of delivery volume or quality. That result calls into question their years of benchmark comparisons and "excellent" performance ratings: Either everyone in the comparison set has the same chronic problems, or there is something wrong with the measurement and analysis process.

Read the full story on eWEEK.com: Opinion: John Parkinson on the Ins and Outs of Benchmarking your IT Department