New World Storage Order
There’s an odd thing happening in the land of storage these days. Vendors have come up with so many different answers to increasing storage utilization rates that entire divisions within some companies are competing with each other.
At the root of the conflicting approaches to the next generation of storage management are two distinct sets of technologies. The first set is collectively known as “storage virtualization offerings,” while the second set is commonly referred to as “scalable network-attached storage” (NAS).
Both technologies are trying to solve many of the same problems. In the wake of all the hype around server virtualization, IT organizations now want to increase the utilization rates of storage arrays, which typically hover around the 50 percent mark. That’s a noble goal, but accomplishing it takes a little more nuance than installing a virtual machine on a server.
In particular, application performance is very sensitive to the amount of I/O bandwidth available. If more data is loaded on a specific storage array, there are more applications trying to use the same limited amount of I/O bandwidth. So while an IT organization can increase storage utilization rates, it can wind up doing more harm than good.
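The trade-off can be sketched with some back-of-the-envelope arithmetic. The capacity and bandwidth figures below are invented purely for illustration, but the shape of the problem holds for any array with a fixed amount of I/O bandwidth:

```python
# Hypothetical illustration: consolidating more applications onto one
# storage array raises utilization, but shrinks each application's
# share of the array's fixed I/O bandwidth.

ARRAY_CAPACITY_TB = 10.0      # assumed total capacity of the array
ARRAY_BANDWIDTH_MBPS = 800.0  # assumed total I/O bandwidth of the array
APP_DATA_TB = 1.0             # assumed data footprint per application

def consolidate(num_apps):
    """Return (utilization in percent, I/O bandwidth per app in MB/s)."""
    utilization = 100.0 * (num_apps * APP_DATA_TB) / ARRAY_CAPACITY_TB
    per_app_bandwidth = ARRAY_BANDWIDTH_MBPS / num_apps
    return utilization, per_app_bandwidth

for apps in (5, 8, 10):
    util, bw = consolidate(apps)
    print(f"{apps} apps -> {util:.0f}% utilization, {bw:.0f} MB/s per app")
```

Pushing utilization from 50 to 100 percent in this toy model halves each application’s share of I/O bandwidth, which is exactly the kind of self-inflicted harm described above.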
Vendors in the storage virtualization space have been trying to deal with this issue by relying more on caching software. Companies that build scalable NAS offerings have essentially built servers with processors that manage the I/O process. And both camps are looking forward to the day—in the next year or so—when 10 Gigabit Ethernet adapters are widely deployed, which should help mitigate a lot of the performance issues typically associated with increasing storage utilization rates.
The question that IT organizations will have to resolve is whether it makes more sense to adopt storage virtualization or NAS clusters. A survey conducted by Ziff Davis Enterprise, the parent company of Baseline, found that the biggest obstacles to adopting storage virtualization are the amount of complexity involved (34 percent), the fact that it might add another source of failure (20 percent) and a belief that NAS technology already provides the benefits of virtualization (18 percent).
Not many IT organizations are familiar with NAS cluster technology, however. The same study found that 42 percent of the IT executives surveyed said they were not likely to consider NAS clusters, and 22 percent said their organizations needed more training on NAS cluster technologies.
The good news is that companies such as Exanet, Hewlett-Packard (HP), Isilon and NetApp are making it a lot easier to deploy NAS clusters. This means IT can think about scaling storage without adding more people to manage those arrays or signing additional service contracts. Perhaps best of all, the cluster essentially creates a virtual pool of storage on a high-performance I/O platform, which increases utilization rates while providing built-in high availability because each node in the cluster can dynamically back up any other node.
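The cluster idea can be sketched in a few lines of code. This is a simplified model, not any vendor’s implementation; the class name, node count and capacities are all invented for illustration:

```python
# Hypothetical sketch of a NAS cluster presented as one virtual pool.
# Each node contributes capacity; if a node fails, its peers keep the
# pool available, which is the built-in high availability described above.

class NasCluster:
    def __init__(self, node_capacity_tb, num_nodes):
        self.node_capacity_tb = node_capacity_tb
        self.nodes = {f"node{i}": "up" for i in range(num_nodes)}

    def pool_capacity_tb(self):
        # Clients see a single pool spanning every healthy node.
        up_nodes = sum(1 for state in self.nodes.values() if state == "up")
        return up_nodes * self.node_capacity_tb

    def fail(self, node):
        # Simulate a node failure; the surviving nodes absorb its load.
        self.nodes[node] = "down"

    def available(self):
        # The pool stays online as long as any node is up.
        return any(state == "up" for state in self.nodes.values())

cluster = NasCluster(node_capacity_tb=4, num_nodes=4)
print(cluster.pool_capacity_tb())  # one 16 TB pool, not four 4 TB islands
cluster.fail("node2")
print(cluster.pool_capacity_tb(), cluster.available())  # smaller pool, still up
```

The design point is that capacity and availability are properties of the pool rather than of any single array, which is what lets utilization rise without a single point of failure.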
Adding to the confusion, some vendors promote both storage virtualization and NAS cluster technologies. HP, for example, recently acquired storage virtualization vendor LeftHand Networks after previously acquiring clustered file-serving specialist PolyServe. And IBM has storage virtualization offerings of its own, but also resells NetApp products. Meanwhile, vendors selling storage virtualization products are benefiting from all the hype surrounding server virtualization, which is being widely adopted using products from VMware, Microsoft and Citrix.
There are multiple paths toward creating virtual pools of storage that companies should consider. If all an IT organization needs to do is increase storage utilization with little regard to performance, there are plenty of storage virtualization options. But if a company needs to increase utilization while maintaining application performance, then NAS clusters are well worth a look.
Regardless of which path IT organizations decide to take, doing nothing is no longer an option. The amount of data that needs to be managed will continue to grow unabated no matter what direction the economy takes.
The only question is how IT organizations will manage all that data now that they can no longer just throw hardware at the problem—along with the service fees and software licenses that accompany every piece of hardware. In other words, it’s a brand-new day in the world of storage.