Location, Location, Location
By David Strom | Posted 2009-11-02
How to keep your company alive in the face of disaster.
The first decision that should be part of any business continuity plan is where to locate the backup data center. There are four basic strategies you can pursue:
1. Make your headquarters data center your secondary center, with a high-bandwidth connection to a remote primary data center. If you go this route, you will want your primary Internet connections routed to that remote primary data center.
That was the case for Waltham, Mass.-based Oco, which built a data center a few miles away from its headquarters. “Our headquarters could go away tomorrow, and our customers wouldn’t even know, because none of our customer data is hosted here,” says Joe Taylor, network operations manager.
2. Combine resources with a nearby organization. A few municipal IT managers are using each other’s data centers as their backups and sending encrypted data across on a regular basis. This works only if you have matching storage area network (SAN) hardware—and if there’s a certain level of trust between the organizations.
The city of Maryville, Tenn., took this route and partnered with the IT department of neighboring Alcoa, Tenn., which matched Maryville’s SAN with a duplicate of its Pillar Data Systems storage technology.
“We didn’t need two-second recovery time at our DR site,” says Terry McCoy, Maryville’s IT director. “Our counterparts in Alcoa were thinking about this at about the same time. Since we needed to increase our disk space, they purchased our existing Pillar from us, and we bought a newer model. Our goal is to fail over to their data center and them [to fail over] to ours.”
Teaming up was also a good solution for the Alvarado, Texas, school district and the nearby Glen Rose school district. “I needed a more reliable solution than sending backup tapes offsite,” says Kyle Berger, executive director of technology services for the Alvarado district. “We linked up with another school district 150 miles away and have become DR sites for each other. Now we are expanding our network to other districts around the state.”
Each district uses Compellent SANs to enable the replication of data, as well as the encryption of sensitive data, such as student and financial records. “There is a level of trust involved,” Berger says, “but each participating district is required to have quarterly health checks on their SAN to make sure they are up-to-date on software and security. These are done by our common VAR to make sure they are all consistent.”
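The partner-site pattern comes down to a scheduled job that pushes new or changed data to the peer on a regular cycle and verifies what arrived. The districts do this with Compellent SAN replication; purely as an illustration (the directory paths, file-level sync, and checksum verification below are hypothetical, not the districts’ actual tooling, and encryption is assumed to be handled by the SAN or transport layer), a nightly sync could be sketched as:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to verify each replicated file at the partner site."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source: Path, target: Path) -> list[Path]:
    """Copy files that are new, or newer than the partner's copy, and
    verify each transfer with a checksum comparison."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves the modification time
            if sha256(src) != sha256(dst):
                raise IOError(f"verification failed for {src}")
            copied.append(src)
    return copied
```

In production this logic lives inside the SAN vendor’s block-level replication engine; the file-level sketch is only meant to make the “send data across on a regular basis, then verify it” idea concrete.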
3. Use one of your own remote offices for an offsite location. That’s what GreenBank in Greeneville, Tenn., did, with two of its offices. “We back up 400 gigabytes of data every night between two ExaGrid disk-based backup systems that are located 100 miles apart,” says IT Manager Jason O’Dell.
“Both data centers are live full time. Now I don’t have to rely on my network administrator to track down a backup tape and hope that he’s got the right one to start a restore. My help desk people can restore data with just a few mouse clicks.”
The Louisiana Supreme Court based in New Orleans changed its strategy after Hurricane Katrina. “Going through Katrina, we were battle-tested,” recalls Peter Haas, the court’s director of technology services. “We had a much clearer understanding of what everything meant. We started looking for tools to remove single points of failure [without] creating a lot of added overhead or staff.”
The court ended up deploying a backup data center in one of its offices 250 miles away. “I can put someone there in three hours if I have to,” he says, “and it is far enough away that it is out of harm’s way with the hurricanes that threaten us. Everything is monitored remotely, and we can fail over individual pieces of our infrastructure in a matter of minutes and not have to worry about how long routine maintenance will take.”
The court is using a variety of CA products, including XOsoft High Availability software and ARCserve for backups. “We had a situation with a faulty system attendant mailbox on our Microsoft Exchange server,” Haas explains. “We used our WAN sync tool to fail over to our remote data center and then were able to rebuild our Exchange server in our main data center to fix the problem. We have even done these failovers using a laptop on our VPN and a Verizon 3G broadband cellular data connection. It was that easy.”
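Behind a failover like the court describes is a monitor that probes each service at the primary site and, after repeated misses, redirects to the remote data center. As a minimal sketch (the hosts, ports, and three-strikes threshold are illustrative assumptions, not details of the court’s CA XOsoft configuration), the decision logic might look like:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to the service succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active_site(primary: tuple[str, int],
                     secondary: tuple[str, int],
                     failures_needed: int = 3) -> str:
    """Probe the primary several times; fail over only after every probe
    misses, so a single dropped packet does not trigger a site switch."""
    misses = sum(1 for _ in range(failures_needed)
                 if not is_reachable(*primary))
    return "secondary" if misses == failures_needed else "primary"
```

Because the check is just a TCP probe, it can run from anywhere on the WAN or VPN, which is what makes the “failover from a laptop on a 3G connection” scenario plausible.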
4. Use the cloud for your backup data center. Using the cloud for DR makes sense for the United States Golf Association in Far Hills, N.J., because it supplements the organization’s in-house operations and gives an extra measure of coverage for the data housed on its critical servers.
“If I went the route of keeping a second data center internally for DR, the equipment would basically sit idle,” says Jessica Carroll, the association’s IT director. “That is a great deal of cash outlay, and with the systems becoming outdated after several months, it’s not the most efficient backup plan. The cloud offers smart flexibility and expedient recovery.”
Another golfing association is also using the cloud. Up until last year, the Professional Golf Association Tour in Ponte Vedra, Fla., was paying an annual subscription to have access to a remote relocation facility.
“The problem with our older DR subscription model is that we had to go to the premises and load tapes and applications, and then our staff would have to get there,” says Steve Evans, the association’s CIO. “The hurricane issue makes it difficult to get your people on a plane ahead of the evacuation schedules, so, with this model, it would typically take us about four days to get everything up and running.”
As the tour created new applications and expanded its enterprise resource planning applications, it found this subscription model outdated and unsatisfactory, and ended up choosing CDW’s hosting facilities in Madison, Wis., for cloud-based continuous replication of key servers. “With this new system, we can be up in just a few hours, and we can have a lot more flexibility to test individual applications for failover or to schedule particular business upgrades more easily,” Evans says.