Primer: Dual-Core Chips

By Kevin Fogarty

New designs make microchips an almost two-for-one deal.

  • What are they? The next real design change in the engines that drive every computer, from the lowliest desktop to the highest-end Unix server. They're the same kind of chips we've been using, but instead of one area on the chip primarily responsible for crunching data, these chips have two or more.

  • What's the advantage? More power from almost the same space. A dual-core chip is about 70% to 80% faster than a single-core chip with the same number of transistors, says Jonathan Eunice, research director at Illuminata, a consultancy that specializes in high-end systems. Dual-core chips are limited, however, by having only one system bus, the path through which all data passes on the way to the processing cores. That creates a bottleneck, because a single channel must feed data to two cores. But since two separate single-core chips deliver only about 85% more power than a single chip, dual-core models are a credible alternative to two-chip systems. They are also cheaper, mainly because building the components needed to support two separate chips costs more than getting one chip to do twice the work. We won't know how much cheaper until Intel and AMD ship their dual-core chips late next year.
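The trade-off above can be worked through with the article's own figures (70% to 80% gain for one dual-core chip versus roughly 85% for two separate chips). The numbers below are the quoted estimates, not benchmarks; the calculation is just an illustration of how close the two designs come.

```python
# Illustrative comparison using the scaling figures quoted in the article.
# Baseline: one single-core chip = 1.00 units of throughput.
single_core = 1.00
dual_core_low, dual_core_high = 1.70, 1.80  # 70%-80% faster than baseline
two_chips = 1.85                            # ~85% more power than baseline

# How much of a two-chip system's throughput does one dual-core chip deliver?
efficiency_low = dual_core_low / two_chips
efficiency_high = dual_core_high / two_chips
print(f"dual-core delivers {efficiency_low:.0%}-{efficiency_high:.0%} "
      "of a two-chip system's throughput")
```

On these figures a dual-core chip lands within roughly 3% to 8% of the two-chip system, which is the arithmetic behind calling it a credible, cheaper alternative.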

  • Why change to two cores? Complexity. Within two years, chipmakers will be putting a billion transistors on each chip. That's a lot of potential, but designing an effective use for 50 million transistors is hard enough; engineering the layout and connections for a billion is almost incomprehensibly difficult. Instead, chipmakers are using proven designs, subdividing each chip into several processing areas.

  • Who's doing it? IBM was first; Hewlett-Packard and Sun have also put dual-core chips in their Unix servers. Intel and AMD have promised to deliver dual-core 64-bit chips in late 2005.

  • Doesn't Intel already do something like this? Kind of. Hyper-Threading is a technique in which a chip basically fools software into thinking there are two chips in a machine instead of one. It takes advantage of the downtime a processor often has in the middle of a job while it waits for data to be delivered from various memory locations. Hyper-Threading schedules another job into those idle periods, delivering an extra 25% to 35% of oomph in the process, according to Eunice.
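The idea of filling a processor's idle wait time can be sketched in software. This is only an analogy: Hyper-Threading does the interleaving in hardware inside one core, while the sketch below uses ordinary OS threads and simulated stalls (the `sleep` calls stand in for waits on memory).

```python
# Software analogy for Hyper-Threading: while one task is stalled waiting
# for data, another runnable task can use the otherwise-idle time.
import threading
import time

def task(work_s, wait_s):
    time.sleep(wait_s)  # simulated stall, e.g. waiting on a memory fetch
    time.sleep(work_s)  # simulated computation

start = time.time()
threads = [threading.Thread(target=task, args=(0.1, 0.2)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Interleaved, both tasks finish in about 0.3s; run strictly one after the
# other they would need about 0.6s.
print(f"elapsed: {elapsed:.2f}s")
```

The hardware version never doubles throughput, because the two jobs still share one core's execution units; hence the 25% to 35% figure rather than 100%.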

  • What's the downside to dual-core? Dual-core chips make it awfully hard to decide what, exactly, a processor is. Software makers often charge according to the number of processors in a machine. Having two processing cores complicates the equation. Should a license for a dual-core system cost the same as for a two-processor system?

    What's not in doubt is that system makers are moving quickly toward multi-core chips to save money and design effort. In the process, they're putting into desktop machines almost the same power as in the dual-processor servers for which they charge a premium. And that may change what customers are willing to pay for "server-class" machines.

    This article was originally published on 2004-10-01
    eWeek
