Primer: Networked Storage
By Sean Gallagher | Posted 2002-02-04
Where is storage headed? New technologies merge network-attached storage and storage area networks for a lower-cost Internet Protocol-based architecture.
The conventional wisdom is that network-attached storage is easy to set up but slow, and a storage area network is difficult but fast. New storage technologies merge these models into a lower-cost Internet Protocol-based architecture; real products, however, are a year or two away.
A SAN is a private, high-speed network that sits outside the local area network. Since it does not affect normal corporate traffic, the approach is well-suited to moving large amounts of data. SANs are too expensive, however, to make connecting large numbers of systems to storage devices appealing, and because vendors vary on the specifics of implementation, management can be difficult.
NAS servers connect directly to the corporate backbone, which makes them easy and inexpensive to set up but hard on network traffic. NAS servers run a very thin, proprietary operating system optimized for network file access. Though growing rapidly in popularity for business and Internet applications, NAS lacks the performance and quality-of-service guarantees required for high-end data-center applications.
Companies managing multiple storage architectures can achieve faux convergence through "virtualization," which puts an open-protocol access point in front of storage networks and systems. Some virtualization devices use iSCSI (below), while others use NAS protocols; all simplify management by reducing the number of interfaces.
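The virtualization idea described above can be sketched in a few lines of code: a single access point presents one namespace and routes each request to whichever backend holds the data. This is a hypothetical illustration only; the class and method names are invented, and plain dictionaries stand in for real NAS and SAN systems.

```python
# Hypothetical sketch of storage virtualization: one access point
# routes requests to different backend storage systems. Here, dicts
# stand in for a NAS share and a SAN volume; all names are invented.

class VirtualizationLayer:
    def __init__(self):
        # Map path prefixes to the backends an administrator registers.
        self.backends = {}

    def register(self, prefix, backend):
        self.backends[prefix] = backend

    def read(self, path):
        # Longest-prefix match decides which backend serves the request;
        # clients see a single namespace and one management interface.
        for prefix in sorted(self.backends, key=len, reverse=True):
            if path.startswith(prefix):
                return self.backends[prefix][path]
        raise KeyError(path)

nas_share = {"/files/report.doc": b"quarterly numbers"}
san_volume = {"/db/orders.dat": b"order records"}

v = VirtualizationLayer()
v.register("/files", nas_share)
v.register("/db", san_volume)

print(v.read("/files/report.doc"))  # served from the NAS share
print(v.read("/db/orders.dat"))     # served from the SAN volume
```

The point of the sketch is the single front door: clients never learn which architecture sits behind a given path, which is what lets the two storage models coexist under one interface.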
The technologies that connect systems and devices have improved greatly in the past five years. Emerging IP versions increase interoperability and reach.
The American National Standards Institute (ANSI) combined several standards it had built in the mid-'90s into the Fibre Channel (FC) spec. FC, the de facto foundation of SANs, can transfer data at 100 MBps to 4 Gbps, has wide vendor support and can reach up to six miles over optical fiber. FC lets you set quality-of-service levels. But it's expensive, at $900 per port on average, and compatibility among vendors is a problem.
In 2001, the Internet Engineering Task Force (IETF) upgraded the aging SCSI standard to iSCSI, which puts commands within IP packets. Used with Gigabit Ethernet (below), it competes with FC on speed and distance. It's also relatively inexpensive to buy and, being based on well-known standards, easy to manage. But vendor support isn't wide, and implementations vary.
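iSCSI's core idea, carrying SCSI commands as payload on an ordinary IP network, can be illustrated with a toy encapsulation. The 10-byte READ(10) command descriptor block below follows the standard SCSI layout, but the wrapper "PDU" is deliberately simplified for clarity and is not the real iSCSI wire format defined by the IETF.

```python
import struct

# Toy illustration of iSCSI's core idea: a standard SCSI command
# block becomes the payload of an IP/TCP stream. The length-prefix
# "PDU" here is invented for clarity; it is NOT the real iSCSI format.

SCSI_READ_10 = 0x28  # standard SCSI READ(10) opcode

def build_cdb_read10(lba, blocks):
    # 10-byte READ(10) command descriptor block:
    # opcode, flags, 4-byte logical block address,
    # group number, 2-byte transfer length, control byte.
    return struct.pack(">BBIBHB", SCSI_READ_10, 0, lba, 0, blocks, 0)

def encapsulate(cdb):
    # Toy "PDU": a 2-byte length prefix, then the SCSI command bytes,
    # ready to be written onto a TCP socket like any other data.
    return struct.pack(">H", len(cdb)) + cdb

cdb = build_cdb_read10(lba=2048, blocks=16)
pdu = encapsulate(cdb)
print(len(cdb), len(pdu))  # prints "10 12"
```

Because the transport is plain TCP/IP, the same switches, routers and management tools used for ordinary network traffic carry the storage commands, which is where iSCSI's cost and manageability advantages come from.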
The Institute of Electrical and Electronics Engineers (IEEE) finished work in 1998 on a speedier version of the popular Ethernet protocol. Gigabit Ethernet is better-standardized and less expensive than FC but doesn't support quality of service. And its reputation for being superfast is relative: To reach speeds of 80 MBps to 90 MBps, it requires a "jumbo frame" mode that not all hardware supports.
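The jumbo-frame point above comes down to arithmetic: at gigabit speeds, the number of frames a host must process per second, not raw bandwidth, is often the bottleneck. A back-of-envelope sketch, assuming the textbook 40 bytes of IPv4-plus-TCP header per frame:

```python
# Why jumbo frames matter: per-frame processing cost often limits
# Gigabit Ethernet throughput on the host. Back-of-envelope figures;
# 40 bytes approximates IPv4 + TCP headers with no options.

def frames_per_second(mbytes_per_sec, mtu):
    # Frames a host must handle each second to sustain a data rate.
    payload = mtu - 40  # application bytes carried per frame
    return int(mbytes_per_sec * 1_000_000 / payload)

# Sustaining 80 MBps, the low end of the article's figure:
print(frames_per_second(80, 1500))  # standard 1,500-byte frames
print(frames_per_second(80, 9000))  # 9,000-byte jumbo frames: ~6x fewer
```

Roughly six times fewer frames means roughly six times fewer interrupts and header-processing passes per second, which is why hardware that lacks jumbo-frame support struggles to reach the 80 MBps to 90 MBps range.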
IBM, Intel, Microsoft and others merged competing proposals for next-generation I/O to create InfiniBand, which uses virtual direct connections between devices and processors (instead of a PCI bus, say) to move data at 500 MBps to 6 Gbps. The spec supports 128-bit IP addressing, quality-of-service controls and distances of just over half a mile. InfiniBand 1.0 was released in 2001; whether it performs as promised won't be seen until hardware ships in earnest.
The IETF's development work includes tunneling FC packets over IP networks (FC/IP); connecting FC gateways over IP (iFCP); and running IP packets over FC.
By Q2, the IEEE hopes to complete the 10 Gigabit Ethernet standard, which would make Ethernet as fast as, or faster than, FC.
The growing popularity of the NAS architecture is driving the search for faster, IP-based protocols.
The Storage Networking Industry Association is a leading proponent of SAN-NAS convergence.