Johns Hopkins Uses SSD to Reach for the Stars

By Samuel Greengard

Over the past few years, solid-state drives (SSDs) have moved from the periphery of computing into the mainstream. Apple, HP, Lenovo, Toshiba, Sony and Samsung are among the vendors hawking the technology in PCs and storage devices. The drives are silent, fast and less susceptible to physical damage.

Market research firm IDC expects worldwide SSD shipments to increase at a compound annual growth rate of 51.5 percent from 2010 to 2015. The drives are appearing in personal computers and are increasingly used for mass storage and cloud computing, where they can deliver up to a 10-fold speed increase over conventional hard drives and as much as a 90 percent reduction in power consumption.

It’s a concept that appeals to Johns Hopkins University, which has turned to high-end SSDs from OCZ Technology to meet the demanding performance and storage requirements of a digital multiverse project, the Sloan Digital Sky Survey. The enormous database of astronomical objects, currently about 13 terabytes (plus 400 terabytes of flat files), is being built under a $2.1 million grant from the National Science Foundation. It provides a graphical view that serves as both a microscope and a telescope for examining objects. The publicly accessible site currently receives about 500,000 hits a day.

The project will allow astronomers to perform their own data analyses through remote access to the entire database, without the need to download tens to hundreds of terabytes of data over the Internet. “The project is changing the face of science,” notes Alexander Szalay, a professor of physics and astronomy at Johns Hopkins University. “The database has become a major attraction for both scientists and the public.”

Two years ago, Johns Hopkins’ IT specialists began examining ways to build a more efficient and effective infrastructure to support the rapidly growing data environment. That led to the adoption of more than 400 OCZ Deneva 2 SSDs for storage. Total storage capacity is approximately 6 petabytes, spread across 90 performance nodes.

The environment runs on about 100 physical servers; there is no virtualization, in order to maximize raw speed. Random-access data is streamed directly from the SSDs into co-hosted GPUs over the system backplane.

This takes advantage of general-purpose computing on graphics processing units (GPUs) for scientific and engineering workloads. Two major benefits are the elimination of access latency by the SSD tier of the storage hierarchy and the elimination of network bottlenecks, the latter achieved by co-locating storage and processing on the same server.
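
To make the idea concrete, here is a minimal Python sketch of that pattern, assuming a hypothetical object-catalog file on the local SSD and the CuPy library for GPU arrays; it illustrates the approach and is not the project's actual code. Each chunk moves from the drive straight into GPU memory over the server's own bus, with no network hop in between.

# Illustrative sketch only: stream catalog records from a local SSD into GPU
# memory for processing. The file path, record layout and use of CuPy are
# assumptions for illustration, not details of the Johns Hopkins deployment.
import numpy as np
import cupy as cp  # GPU array library; assumes an NVIDIA GPU with CuPy installed

CHUNK_RECORDS = 1_000_000            # records read per batch (hypothetical)
RECORD_DTYPE = np.dtype([            # hypothetical catalog record layout
    ("ra", np.float64),              # right ascension
    ("dec", np.float64),             # declination
    ("flux", np.float32, (5,)),      # fluxes in five bands
])

def stream_chunks(path):
    """Yield GPU-resident flux arrays read directly from the local SSD."""
    with open(path, "rb") as f:
        while True:
            buf = np.fromfile(f, dtype=RECORD_DTYPE, count=CHUNK_RECORDS)
            if buf.size == 0:
                break
            # Host-to-device copy over the local bus; storage and GPU share
            # the same server, so no network transfer is involved.
            yield cp.asarray(buf["flux"])

def mean_flux(path):
    """Example reduction: average flux per band across the whole file."""
    total = cp.zeros(5, dtype=cp.float64)
    count = 0
    for flux in stream_chunks(path):
        total += flux.sum(axis=0)
        count += flux.shape[0]
    return cp.asnumpy(total / count)

In practice, the chunk size, record layout and reduction would be tuned to the actual catalog and hardware; the sketch simply shows why keeping the drives and GPUs in the same box removes a whole class of bottlenecks.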

“The problem with hard drives handling so much data is that they are slower; there is typically a latency of about 5 milliseconds while the disk head settles,” Szalay says. The adoption of the SSDs has resulted in “stunning” performance gains: while a typical hard drive is capable of about 150 I/O operations per second, standard SSDs deliver between 30,000 and 60,000, he notes.

What’s more, the center has begun deploying even faster SSDs capable of close to 500,000 I/O operations per second. “In some cases, we’re seeing I/O operations take place 10,000 times faster than in the past,” Szalay reports.
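
Taken at face value, those round numbers imply per-drive speedups in the hundreds to thousands. The back-of-the-envelope Python calculation below (illustrative only, using the article's figures rather than measured benchmarks) makes the ratios explicit; the larger end-to-end gains Szalay describes would also reflect parallelism across hundreds of drives and the removal of network hops, not per-drive IOPS alone.

# Rough comparison of the I/O figures quoted in the article (round numbers,
# not measured benchmarks).
hdd_iops = 150                  # typical hard drive, per Szalay
ssd_iops = (30_000, 60_000)     # standard SSDs
fast_ssd_iops = 500_000         # newer drives being deployed

print(f"Standard SSD vs. HDD: {ssd_iops[0] // hdd_iops}x to {ssd_iops[1] // hdd_iops}x per drive")
print(f"Fast SSD vs. HDD:     {fast_ssd_iops // hdd_iops:,}x per drive")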

Moreover, the power consumed by the drives is “negligible,” Szalay adds. “We have managed to construct an environment that offers maximum performance along with a high level of energy efficiency. This is a revolutionary project that is enabled by ongoing advances in storage technology.”