Four Migration Cycles, No More

By Pierre Colin

Romancing the Clone: How Avaya overcame major challenges untangling data from its Nortel acquisition.

Cleansing the data that touched the core applications required in-depth functional and integration testing. However, the four-hour window available during the cut-over did not permit extensive testing. To solve this, the migration process was staged in four cycles:

Cycle 1 brought the environment up at Avaya. All of the hardware was built out for both the Clean Room and the Avaya data center. The data brought in was clean, although not necessarily up to date. Testing during this cycle identified corrupted data or transactions broken by either the cleansing or the migration. 
Cycle 2 tested the data migration techniques for the first time. The data went through a three-hop approach that would eventually be used during the cut-over process. Cycle 2 measured the duration of each step and exposed many areas for improvement. Data brought in during Cycle 1 was backed up and overwritten by Cycle 2 data on the target systems at Avaya. In-depth testing was conducted again.

Cycle 3 was a rehearsal for the complete cut-over process scheduled for two weeks later, leaving room only for minor adjustments. Cycle 3 testing was limited to the few defects that had not been resolved during Cycle 2.

Cycle 4 was the cut-over -- 96 hours of intense, orchestrated, minute-by-minute migration effort. Of this total window, only four hours were allocated to basic testing before the Go/No-Go decision.

Beating the physics

The issue at hand was one of simple math: pushing 30 terabytes of data through a 300 Mbps circuit takes nine days and six hours -- and 300 Mbps was the biggest pipe among the many that would be used. The challenge was to make nine days fit into less than 24 hours, the target duration for each of the three cut-over hops.
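The arithmetic behind that nine-days figure can be checked directly (decimal units assumed, ignoring protocol overhead):

```python
# Back-of-the-envelope transfer time: 30 TB through a 300 Mbps circuit.
TB = 10**12                      # bytes in a decimal terabyte
dataset_bits = 30 * TB * 8       # 30 TB expressed in bits
link_bps = 300 * 10**6           # 300 megabits per second

seconds = dataset_bits / link_bps
days = seconds / 86400
print(f"{days:.2f} days")        # ≈ 9.26 days, i.e. roughly nine days and six hours
```

At line rate with zero overhead, the transfer alone consumes the better part of ten days -- far beyond the 24-hour budget for a single hop.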

To break through this limitation, we deployed a combination of advanced data replication technologies traditionally used for disaster recovery. These technologies leverage de-duplication and compression algorithms. When we first migrated the data, they performed a differential analysis between source (Cycle 2 data) and target (Cycle 1 data), and instead of pushing the entire 30-terabyte dataset through the chain, they sent only the deltas.
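The core idea of that differential analysis can be sketched as hash-based block comparison. This is a simplified illustration, not the vendors' actual algorithm; the block size and helper names are assumptions:

```python
import hashlib

BLOCK = 4096  # hypothetical fixed block size


def block_hashes(data: bytes):
    """Digest each block; these hashes stand in for the target's known state."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def deltas(source: bytes, target_hashes):
    """Return only the (offset, block) pairs that differ from the target."""
    out = []
    for offset in range(0, len(source), BLOCK):
        block = source[offset:offset + BLOCK]
        idx = offset // BLOCK
        if (idx >= len(target_hashes)
                or hashlib.sha256(block).hexdigest() != target_hashes[idx]):
            out.append((offset, block))
    return out


# Only the second block changed, so only one block crosses the wire.
cycle1 = b"a" * BLOCK + b"b" * BLOCK
cycle2 = b"a" * BLOCK + b"c" * BLOCK
print(len(deltas(cycle2, block_hashes(cycle1))))  # 1
```

Because Cycle 1 had already seeded the target systems, each later cycle paid only for the blocks that actually changed, not for the full 30 terabytes.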

Even then, we faced another challenge caused by the data cleansing. Replication technologies deliver good results for systems in a steady state, but some databases were completely reorganized after cleansing. This made the source significantly different from the target, causing the replication technologies to go 'crazy'. We overcame this anticipated hurdle by using block/disk-level replication for the third hop -- Clean Room to Avaya -- which was database- and application-agnostic, relying on the fact that the cleansing routines would produce fairly similar output at each cycle.
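Why a reorganization defeats delta replication, while stable cleansing output does not, is easy to demonstrate: delta transfer only wins when successive images are mostly identical block-for-block. A minimal sketch, with illustrative block and row sizes of my own choosing:

```python
import hashlib
import random

BLOCK = 1024  # illustrative block size


def changed_blocks(a: bytes, b: bytes) -> int:
    """Count blocks whose digests differ between two equal-length images."""
    return sum(
        hashlib.sha256(a[i:i + BLOCK]).digest()
        != hashlib.sha256(b[i:i + BLOCK]).digest()
        for i in range(0, len(a), BLOCK)
    )


rows = [f"row-{n:08d}".encode().ljust(16) for n in range(4096)]
baseline = b"".join(rows)  # 64 blocks of 1024 bytes

# Cleansing that rewrites a record in place: almost every block survives.
stable = bytearray(baseline)
stable[0:16] = b"row-fixed".ljust(16)

# A full reorganization shuffles the rows: nearly every block differs.
shuffled = list(rows)
random.Random(0).shuffle(shuffled)
reorganized = b"".join(shuffled)

print(changed_blocks(baseline, bytes(stable)))   # 1 block changed
print(changed_blocks(baseline, reorganized))     # most of the 64 blocks changed
```

This is the bet the team made on the third hop: as long as the cleansing routines emitted near-identical output each cycle, block-level deltas stayed small regardless of what the databases or applications did internally.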

In the end, the outcome exceeded our target: all applications successfully made it through the three hops of the cut-over process in less than 90 hours.

This article was originally published on 2011-12-05
eWeek
