The Perils of Not Being Parallel

If you work at a company that depends on just about any type of software to maintain a competitive edge over your rivals, the foundation you count on to provide that edge is quite literally changing right beneath your feet.

Driving this fundamental change to all things software is a new generation of multicore processors that everybody knows about but, as yet, does not fully appreciate. That's because, just as with every great innovation that comes our way, we tend to use new things to do the same old things faster. In this case, we're using multicore processors to run existing single-threaded applications faster.

What we're not doing en masse is writing applications in parallel so that different parts of the application execute simultaneously. Generally referred to as parallel programming, this approach to software has the potential to give the companies that adopt it early a substantial first-mover advantage over their rivals.
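To make the distinction concrete, here is a minimal sketch in Java (the class and variable names are illustrative, not drawn from anything in this column) that computes the same sum twice: once serially on a single thread, and once by splitting the work across one task per core.

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            int[] data = new int[10_000_000];
            Arrays.fill(data, 1);

            // Serial version: one thread walks the entire array while
            // every other core sits idle.
            long serial = 0;
            for (int v : data) serial += v;

            // Parallel version: split the array into one chunk per core
            // and sum the chunks simultaneously.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            List<Future<Long>> parts = new ArrayList<>();
            int chunk = data.length / cores;
            for (int i = 0; i < cores; i++) {
                final int lo = i * chunk;
                final int hi = (i == cores - 1) ? data.length : lo + chunk;
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (int j = lo; j < hi; j++) sum += data[j];
                    return sum;
                }));
            }
            long parallel = 0;
            for (Future<Long> part : parts) parallel += part.get();
            pool.shutdown();

            System.out.println("serial=" + serial + " parallel=" + parallel);
        }
    }

The two versions do identical arithmetic; the only difference is that the second is restructured so each core gets its own slice of the work. That restructuring, multiplied across an entire application, is what parallel programming demands.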

That first-mover advantage is exactly what Jeffrey Birnbaum, chief technology architect at Merrill Lynch, expects to see play out. It's still early days for parallel programming, and there is an acute shortage of parallel programming talent, but Birnbaum is advising his colleagues that they ignore the most radical change in computing in the past 30 years at their peril. As he puts it, the opportunity to leapfrog competitors by making the right investments is definitely at hand.

The problem for the industry as a whole is that the past four generations of programmers have been living in the shadow of a Moore's Law construct that promised them their applications would double in performance every 18 months, even if they did next to nothing to make that happen. As a result, many of them have become lax about optimizing code to get the best performance out of it.

Worse yet, Java and Microsoft application development tools have allowed programmers to develop applications at a high level of abstraction. The end result is that many of them are no longer familiar with the actual memory architecture of the underlying system. But building a successful parallel application requires familiarity with the memory system underneath it. In other words, we all need to get close to the metal again.
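For a sense of what getting close to the metal means in practice, consider false sharing, a classic parallel-performance trap that is invisible at the abstraction level most programmers work at today. The sketch below is hypothetical (the class name, array sizes and cache-line assumptions are mine, and actual timings depend entirely on the hardware): two threads each increment their own counter, so there is no data race, yet when the counters sit in adjacent array slots they typically share a 64-byte cache line and the cores spend much of their time invalidating each other's caches. Pad the counters apart and the same work usually runs several times faster.

    public class FalseSharing {
        static final int ITERATIONS = 100_000_000;

        // Each thread increments its own slot, so the two runs do the
        // same work; the only difference is how far apart the slots
        // sit in memory.
        static long time(long[] counters, int a, int b) throws Exception {
            Thread t1 = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) counters[a]++;
            });
            Thread t2 = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) counters[b]++;
            });
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join(); t2.join();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            // Slots 0 and 1 almost certainly share a 64-byte cache line.
            System.out.println("adjacent: " + time(new long[16], 0, 1) + " ms");
            // Slots 0 and 15 are 120 bytes apart, far enough to land on
            // different lines on typical hardware.
            System.out.println("padded:   " + time(new long[16], 0, 15) + " ms");
        }
    }

Nothing in the Java source hints that the second run should be faster; the difference lives entirely in the memory system underneath, which is precisely why parallel programmers cannot afford to ignore it.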

Birnbaum fears we may be laying the groundwork for a major software crisis because most universities around the world have not taught the limited pool of programmers we do have how to develop applications in parallel. Some, like the University of Waterloo in Ontario, Canada, are reworking their curricula. However, corporations are going to need to fund the retraining of multiple generations of programmers. And, as we all know, retraining anybody who has been doing something a particular way for a number of years is a hit-or-miss proposition.

Longer term, the rise of multicore processors will foster changes in the design of systems themselves. For the past 10 years, system design has tended to lean heavily on complex processors that owe much of their lineage to the concepts first brought forth by RISC processors. An alternative approach may be to rely more on a large number of simpler Pentium-class processors sharing the same memory core.

What might emerge from this is a model in which the number of cores available on the market doubles every 18 months, while the power of the individual processors attached to those cores remains relatively stable. Some pundits refer to this doubling of core counts as the "New Moore's Law."

Of course, there will still be some need for complex processors attached to multiple cores for certain compute-intensive applications. However, hundreds of low-cost Pentium-class processors sharing the same memory core could theoretically handle most data-processing applications just fine. That approach would cost less, use less electricity and produce fewer carbon emissions.
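One caveat worth attaching to that "theoretically": Amdahl's law, the standard way to quantify this kind of claim, puts a ceiling on what any number of processors can deliver. If p is the fraction of a program that can run in parallel and N is the number of processors, the best possible speedup is S(N) = 1 / ((1 - p) + p/N). A program that is 95 percent parallel therefore tops out at a speedup of 20 (that is, 1/0.05) no matter how many cores it is given, and reaches only about 16.8 at N = 100. The many-simple-cores bet, in other words, pays off only for workloads that are overwhelmingly parallel.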

It will be a while before all the great minds work through the new system permutations made possible by multicore processors. But we can already see their potential today, so the only open question is how many corporations will have the intellectual capital at hand to take advantage of them. Those that do are likely to become the leading players in their vertical industry segments, while those that don't may spend years wondering just what went wrong.