Managing Virtual Storage on Your SAN

By David Strom  |  Posted 2009-07-15

IT managers have more motivation than ever to keep their storage area networks, or SANs, up and running: More and more data is being stored on these virtualized networks, and adoption continues to grow. In a December 2008 survey of 388 executives conducted for SHARE, an association of corporate users of technology, storage virtualization was under way at half of the respondents’ companies. However, management was still a challenge, and many of the implementations were at departmental, rather than enterprise, levels.

So what can you do to keep your SAN properly maintained? Here are several best practices:

Understand how much effective storage you really have. Just because you have a terabyte disk doesn’t mean that much space is actually available: After you set up RAID groups and subtract the overhead imposed by your SAN solution, it can be considerably less. Then take advantage of deduplication tools to further reduce your storage needs.
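As a rough illustration, the back-of-the-envelope calculation below (all figures hypothetical; actual overhead varies by vendor and RAID level) shows how quickly raw capacity shrinks, and how deduplication claws some of it back:

```python
# Back-of-the-envelope estimate of usable SAN capacity.
# All figures are illustrative; actual overhead varies by vendor,
# RAID level and file system.

raw_tb = 1.0                 # marketed capacity of a "terabyte" of disk
raid_overhead = 0.25         # e.g., parity disks in a small RAID 6 group
system_overhead = 0.10       # snapshot reserve, metadata, hot spares

usable_tb = raw_tb * (1 - raid_overhead) * (1 - system_overhead)
print(f"Usable: {usable_tb:.2f} TB of {raw_tb:.2f} TB raw")   # 0.68 TB

# Deduplication stretches what remains; a 30 percent reduction
# (within the 20-50 percent range Organic Valley reports below)
# lets the same space hold more logical data:
effective_tb = usable_tb / (1 - 0.30)
print(f"Effective after dedup: {effective_tb:.2f} TB")        # 0.96 TB
```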

Also, if you plan on using virtual servers with your SAN, keep in mind that “it is very easy to consume disk space very quickly because you can easily copy new virtual machines,” says James Sokol, CTO of Segal, a benefits consultancy in New York. Sokol suggests a periodic audit to determine which virtual servers are really needed for testing and operations.
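A minimal sketch of the kind of audit Sokol describes might simply total the disk space each virtual machine consumes. The /datastore path and .vmdk extension here are assumptions to adapt to your own platform:

```python
# Minimal sketch of a periodic virtual-machine disk audit.
# Assumes one folder per VM under a hypothetical /datastore mount,
# with .vmdk disk images; adapt path and extension to your platform.

import os
from collections import defaultdict

DATASTORE = "/datastore"   # hypothetical mount point

usage = defaultdict(int)
for root, _dirs, files in os.walk(DATASTORE):
    for name in files:
        if name.endswith(".vmdk"):
            vm = os.path.basename(root)        # folder name = VM name
            usage[vm] += os.path.getsize(os.path.join(root, name))

# Report the biggest consumers first, so stale test VMs stand out.
for vm, size in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vm:30s} {size / 1e9:8.1f} GB")
```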

Organic Valley, a large organic farming cooperative based in La Farge, Wis., uses two NetApp SANs holding 10 terabytes each. It was able to save more than $500,000 in capital and maintenance costs by implementing various space-saving technologies and being careful about storing duplicate data on its SANs.

“We were able to remove a lot of duplicate data and saw 20 percent to 50 percent savings in some of our volumes as a result because we make use of virtual servers that are running the same configurations,” says George Neill, Organic Valley’s IT director.

Consider how your disaster recovery needs will influence your SAN configuration. Organic Valley has used its SANs to run a disaster recovery center at another location 30 miles from its headquarters. “If we had had a better idea of what our retention policies and recovery time objectives were before we started, we would have seen an even greater benefit from our SANs,” Neill says. “You really must understand which of your applications need which levels of recovery in your production systems.”

Manage changing storage needs. The essence of any storage solution is how well it adapts to changing situations, particularly a large influx of data or the need to migrate data to other volumes. Key features include the ability to dynamically provision new volumes, take snapshot copies of volumes and remotely mirror data to a disaster recovery location.
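Thin provisioning is what makes much of this dynamic allocation possible: a volume advertises more logical space than it physically consumes. The toy model below is purely schematic and reflects no particular vendor’s implementation:

```python
# Toy model of thin provisioning: the host sees the full logical size,
# but physical blocks are committed only as data is written.
# Purely schematic; no vendor's actual behavior is implied.

class ThinVolume:
    def __init__(self, logical_tb: float):
        self.logical_tb = logical_tb   # size advertised to the host
        self.used_tb = 0.0             # physically committed so far

    def write(self, tb: float) -> None:
        if self.used_tb + tb > self.logical_tb:
            raise IOError("logical volume full")
        self.used_tb += tb             # commit physical space on demand

vol = ThinVolume(logical_tb=10.0)      # host sees 10 TB immediately
vol.write(1.5)                         # only 1.5 TB physically consumed
print(f"{vol.used_tb:.1f} TB committed of {vol.logical_tb:.1f} TB advertised")
```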

“All our SAN volumes are virtualized using FalconStor’s IPStor NSS 6.0,” says Frank Smith, manager of corporate IT core infrastructure at Lionbridge Technologies, which is based in Waltham, Mass. “This allows us a huge amount of flexibility, so we can move volumes from one RAID drive or array to another. We don’t suffer any downtime during these migrations or when we add or remove storage.”

Part of this management task is being able to view your storage allocation and monitor its growth in real time, on a single, easy-to-follow dashboard. Ideally, it shows which drive letters are mapped to which hosts and which disk arrays make up each drive, as well as the characteristics of the disks in each logical storage group.
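Here is a sketch of the kind of host-to-drive-to-array view such a dashboard presents. The inventory records are invented; in practice they would come from your SAN vendor’s management interface:

```python
# Sketch of a host -> drive letter -> array mapping report, the kind
# of at-a-glance view described above. The inventory data is made up;
# a real report would be fed by the SAN's management interface.

inventory = [
    {"host": "exch01", "drive": "E:", "array": "EVA-1", "raid": "RAID 5",  "tb": 2.0},
    {"host": "exch01", "drive": "F:", "array": "EVA-1", "raid": "RAID 1",  "tb": 0.5},
    {"host": "sql01",  "drive": "D:", "array": "EVA-2", "raid": "RAID 10", "tb": 4.0},
]

print(f"{'Host':8s} {'Drive':6s} {'Array':8s} {'RAID':8s} {'Size (TB)':>9s}")
for lun in sorted(inventory, key=lambda r: (r["host"], r["drive"])):
    print(f"{lun['host']:8s} {lun['drive']:6s} {lun['array']:8s} "
          f"{lun['raid']:8s} {lun['tb']:9.1f}")
```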

“With NetApp, we have control over the management of our storage volumes that we never had before with our EMC SAN,” says Organic Valley’s Neill. “We can over-allocate our storage volumes and dynamically reallocate them, which has dramatically reduced the time to plan and manage our volumes. Now we don’t have to shut the system down to reprovision storage, and what is even better, it all takes just seconds.”

In the past, if Organic Valley had to resize a volume with its EMC SAN, it would take the better part of a day to complete. “With NetApp, we have visibility into our application activity, and that enables us to understand what’s happening now, map trends and make intelligent plans for our future needs,” Neill explains.

Segal is using three Hewlett-Packard StorageWorks Enterprise Virtual Array (EVA) SANs containing 30 terabytes. The company found that the HP disk virtualization worked great when migrating its Microsoft Exchange servers from dedicated storage to its SAN. “We didn’t have to pick and choose the individual disks,” Sokol says, “and we could let the EVA optimize it for us and not have to waste a lot of disk space in the process.”

Bill Hassell currently works for a financial services provider that stores terabytes of data for several thousand banks throughout the United States. “We recently transitioned from EMC Symmetrix storage to Axiom disks provided by Pillar Data Systems,” he says. “This was part of a disaster recovery project to replicate data more efficiently, as well as improve production database performance.

“Lowered cost was also a big part of the upgrade project, and that was achieved with Pillar’s help. We also needed close to real-time replication to our data recovery site because we didn’t want to upgrade the speed of the link to the site.”
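The trade-off Hassell describes is easy to sanity-check: replicating only changed data keeps the daily transfer within what an existing link can carry. The figures below are hypothetical:

```python
# Rough feasibility check for asynchronous replication over an
# existing WAN link. All numbers are hypothetical.

daily_change_gb = 200     # data changed per day at the production site
link_mbps = 100           # capacity of the link to the recovery site

megabits_to_send = daily_change_gb * 8_000    # 1 GB ~= 8,000 megabits
hours_needed = megabits_to_send / link_mbps / 3600
print(f"{hours_needed:.1f} hours/day of link time to ship the changes")
# -> about 4.4 hours/day; shipping full copies instead would not fit
```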

One of the reasons for the change was the Axiom’s simple administration when the financial services provider needs to change its storage allocation. “With the Axiom system, we make use of a Web page and it’s more than adequate to assign LUNs [logical unit numbers] to different hosts,” Hassell explains.

“I have looked at the user interface of Fujitsu’s and EMC’s latest products, and the Axiom interface is far simpler and more intuitive. You can change the storage allocation at any time—without having to hire a consultant or learn a series of cryptic command-line parameters.” Hassell did need some significant consulting help from FalconStor, but he says it was easy to move up the learning curve with the storage management software.

An idea suggested by Gene Ruth, a senior storage analyst with the Burton Group, is to use quality-of-service definitions to ease storage configuration management. “For example, rather than modeling the intricacies of thin provisioning or RAID sets, identify that the equipment supports specific features characterized by parameters associated with each feature,” he says.

“Why should users specify what disks go into a RAID set or what RAID level to use? Instead, users would define the level of redundancy and expected performance and let the equipment figure it out from there.”
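One way to picture Ruth’s suggestion is a small service-level catalog that maps a requested level of redundancy and performance to concrete storage parameters. The tier names and mappings below are hypothetical; a real product would expose its own policy vocabulary:

```python
# Sketch of quality-of-service-driven provisioning in the spirit of
# Ruth's suggestion: the administrator asks for a service level, and
# the system picks the RAID level and disk tier. All names and
# mappings here are hypothetical.

SERVICE_LEVELS = {
    # name:    (redundancy, performance tier)
    "gold":    ("RAID 10",  "15k FC disks"),
    "silver":  ("RAID 5",   "10k FC disks"),
    "bronze":  ("RAID 6",   "SATA disks"),
}

def provision(volume_name: str, size_tb: float, level: str) -> dict:
    """Translate a service level into concrete storage parameters."""
    raid, tier = SERVICE_LEVELS[level]
    return {"volume": volume_name, "size_tb": size_tb,
            "raid": raid, "tier": tier}

print(provision("exchange_logs", 0.5, "gold"))
# -> {'volume': 'exchange_logs', 'size_tb': 0.5,
#     'raid': 'RAID 10', 'tier': '15k FC disks'}
```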

Develop cross-team collaboration and training. Often, separate teams manage the SAN and the servers, and they need to cooperate and cross-train so that responsibility for all network resources is shared. One IT manager at a New York-based legal firm realized that the firm wasn’t keeping the right set of hot spares for its SAN after upgrading to larger disk drives at its disaster recovery location, a decision that each group thought the other was responsible for tracking.

“Sometimes, it is hard to debug a performance problem between the SAN and the virtual servers that are using its storage because the two groups handling server and storage management start pointing fingers at each other,” Ruth says. “Very often, the SAN guys can’t tell how much data is passing by their switches or what its latency is because they don’t have the right tools to monitor these things. There are very few companies that offer large organizations instrumentation for their SANs.”

Ruth suggests looking at tools such as NetApp’s SANscreen, Symantec/Veritas CommandCentral Storage, Akorri BalancePoint and Virtual Instruments’ NetWisdom for these purposes.

Finally, think about hiring a local VAR [value-added reseller] who has experience with your particular configuration, and work that relationship. “Even the smallest SANs cost $50,000, and some companies don’t want to spend anything additional,” points out Adam Kuhn, a Washington, D.C.-based IT manager. “But they need to budget something for proper implementation by a good integrator.

“You need to know what the best practices are for your particular equipment, and you should leave it to someone who has done this hundreds of times.” Kuhn adds that you need to work with someone who knows “the secret sauce that makes it all work.”