4GB: The Next RAM Barrier

Back in the early days of DOS, we had a memory barrier of 640kB.

I know, it seems quaint now, something that you can find on the chipsets of audio greeting cards rather than real computers, but we spent a lot of time juggling applications to fit in that space. We had special hardware cards that could address more memory, and we had swapping programs (remember Quarterdeck?) that allowed us to run bigger apps. (And for those of us who are really old, we even remember the 64kB barrier of the earliest Apple // computers!)

Now we are approaching another memory barrier, only this time it is 4GB. That is the largest amount of memory that 32-bit processors can address. It is a problem, particularly for servers, and it has this eerie sense of deja vu all over again for me.
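If you want to see where the 4GB figure comes from, here is a minimal back-of-the-envelope sketch in C (my own illustration, not tied to any particular chip or vendor): a 32-bit address can take on 2^32 distinct values, one per byte, and 2^32 bytes works out to 4GB no matter how much physical RAM you bolt onto the box.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* A 32-bit address has 2^32 possible values, one per byte of memory. */
    uint64_t addressable_bytes = 1ULL << 32;   /* 4,294,967,296 bytes */

    printf("Addressable bytes: %llu\n",
           (unsigned long long)addressable_bytes);
    printf("In gigabytes:      %llu GB\n",
           (unsigned long long)(addressable_bytes >> 30));  /* prints 4 */
    return 0;
}
```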

Four gigs seemed like a lot of memory just a few years ago. We didn’t really need to worry, and our desktop operating systems seemed comfortable within that limit. Then Microsoft got greedy with Vista, RAM got much cheaper, and apps got bigger. Before you knew it, we were once again running out of headroom.

What is driving this demand for more memory is the popularity of both virtualization and database servers. Virtualization is especially memory-intensive: if you want to take advantage of this technology, you have to bulk up your machine with lots of RAM and disk. And the more RAM you throw at a database server, the happier it is.

Another big consumer of RAM is the video card and the way it interacts with system memory. Some video chips share system memory with the PC, which means that when you run graphics-intensive operations, you take some of that RAM away from your applications. Again, we’ve heard this tune before. And most of us haven’t really paid much attention to the video card in our servers because we didn’t think it needed much horsepower there. After all, we weren’t planning on running GTA4 on our servers, right?