Look at Deduplication Technologies
By David Strom | Posted 2011-07-28
Best practices for managing storage area networks include developing a disaster recovery program and implementing storage virtualization, tiers and clustering options.
• Look at deduplication technologies. Deduplication is built into a variety of SAN and backup products, and it can offer big benefits in terms of storage costs and backup windows.
Lovell Hopper, manager of the Technology Services Branch at the California Emergency Management Agency in Sacramento, uses IBM’s Tivoli Storage Manager to handle 35TB of usable storage. The agency started several years ago with 200 physical servers, but it has since consolidated that collection onto 80 virtual machines running on 20 Hewlett-Packard blade servers. “It is cheaper to buy a new blade than to buy the additional RAM for an older server,” Hopper says. “Plus, we can purchase higher density and faster CPUs with faster storage fabric interfaces.”
The agency cut its backup times from hours to minutes with Data Domain’s backup solution and reduced its storage needs by a ratio of 50:1. “We have lots of geospatial data, and we saved tons of space when we performed dedupes on the backups,” says Hopper.
Utah Education Network had a similar result. “We had 16-hour backup times with our Oracle database, but with direct-to-disk and deduplication, this dropped to four hours,” says Peterson.
Strand’s Bell says it’s important “to have a way to manage your bandwidth usage if you’re going to be doing replication across a wide-area network or lower speed connections.” He purchased FalconStor backup technology to minimize the amount of data that’s replicated down to the sub-block level. “It’s very efficient, and we see about 25 percent of the traffic going across our WAN as a result of this technology,” he adds.
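The vendors above don’t publish their internals, but the core idea behind block-level deduplication is simple: split data into blocks, fingerprint each block with a cryptographic hash, and store each unique block only once, keeping a “recipe” of hashes for reassembly. A minimal, purely illustrative sketch (fixed-size blocks; real products such as Data Domain’s typically use variable, content-defined chunking):

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; commercial products vary this


def dedupe(data: bytes, store: dict) -> list:
    """Split data into blocks, keep each unique block once, return a recipe of hashes."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy of each block is stored
        recipe.append(digest)
    return recipe


def rehydrate(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe of block hashes."""
    return b"".join(store[d] for d in recipe)


store = {}
payload = b"A" * 8192 + b"B" * 4096  # two identical 4KB blocks plus one unique block
recipe = dedupe(payload, store)
print(len(recipe), len(store))  # 3 blocks referenced, only 2 physically stored
assert rehydrate(recipe, store) == payload
```

The same principle explains the WAN savings Bell describes: if the remote site already holds a block with a given hash, only the fingerprint, not the block, needs to cross the wire.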
• Thin provisioning is in the thick of things. You can create a lot of wasted space when you set up a new SAN array, but thin-provisioning products such as Symantec/Veritas Storage Foundation Manager, XenServer Essentials and Dell/3Par can cut that down substantially. Here’s how:
When you provision your SAN, you generally don’t know exactly how much storage you’ll need. This often makes you err on the high side, creating volumes large enough to meet your requirements for the life of the server. The same thing happens when you create the individual virtual machines on each virtual disk partition.
“We saw a lot of value with thin provisioning,” says Peterson of Utah Education Network. “We have oversubscribed our storage by about 20:1, and we don’t have any issues. We can increase SAN disk utilization to 60 to 80 percent.
“Without thin provisioning, we would have had to come up with really good storage guesses and spend more time managing the storage and purchasing a lot more disk drives. Thin provisioning gives us a fairly good life-cycle-management process, and we can easily move data between storage tiers.”
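The mechanics behind Peterson’s 20:1 oversubscription are straightforward: a thin pool lets volumes claim logical capacity up front but consumes physical blocks only when data is actually written. A toy model (the `ThinPool` class and its numbers are hypothetical, chosen to mirror the ratio quoted above):

```python
class ThinPool:
    """Toy thin-provisioning pool: volumes commit logical capacity up front,
    but physical blocks are consumed only on actual writes."""

    def __init__(self, physical_blocks: int):
        self.physical_blocks = physical_blocks
        self.used = 0
        self.logical_committed = 0

    def create_volume(self, logical_blocks: int) -> dict:
        # Logical commitments may far exceed physical capacity (oversubscription).
        self.logical_committed += logical_blocks
        return {"size": logical_blocks, "written": 0}

    def write(self, volume: dict, blocks: int) -> None:
        if self.used + blocks > self.physical_blocks:
            raise RuntimeError("pool exhausted: time to add physical disks")
        volume["written"] += blocks
        self.used += blocks


pool = ThinPool(physical_blocks=100)
volumes = [pool.create_volume(200) for _ in range(10)]  # 2,000 logical vs. 100 physical
pool.write(volumes[0], 30)
print(pool.logical_committed / pool.physical_blocks)  # oversubscription ratio: 20.0
print(pool.used)  # only 30 physical blocks actually consumed
```

The catch, which real arrays handle with alerts and auto-grow policies, is the `RuntimeError` path: when aggregate writes approach physical capacity, you must add disks before the pool runs dry.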
• Redundancy isn’t what it used to be. “Redundancy can be your friend in times of failure,” Peterson notes, “but, if taken to an extreme, it can make it hard to find unexpected failures. Make something only as complex as it needs to be.”
• Document everything. “You can’t document enough of your storage network,” he says. “This is important: Nothing is worse than going to one of your servers and finding unlabeled cables coming out of it and not knowing where they go.”