Avaya's Integration Challenge
In December 2009, Avaya Inc. acquired the Enterprise Business division of Nortel Networks, which had filed for Chapter 11 bankruptcy earlier that year. Avaya was one of several buyers of various portions of the company. The mission for the Avaya IT organization was to extract relevant data from shared systems while meeting the systems-integration challenges inherent in any large acquisition.
As a first step in the acquisition process, Avaya entered into a Transition Services Agreement (TSA) with Nortel, under which Nortel would continue to administer the IT systems that ran the acquired business. These systems were spread across five data centers and comprised 47 applications, among which four SAP landscapes supported annual revenue estimated at $1.4 billion. Approximately three-quarters of these applications shared data owned by buyers of other Nortel business units. Avaya IT personnel could not access or migrate the systems that now belonged to Avaya until custom data cleansing had removed master and transactional data belonging to others.
As the costly TSA evolved, the number one objective rapidly became to end it as soon as possible. The Nortel hardware was shared across multiple buyers and could not be physically moved. With time of the essence, a decision was made to replicate the hardware configurations found at Nortel in the Avaya Global Data Center. Hence, the project got its name: “Clone.”
The high-risk, “Big Bang” migration was the only option for bringing Avaya applications and data over: the acquired landscape was so highly interfaced that it wouldn’t allow for a phased approach. It was not possible to move a few cleansed applications at a time to the Avaya data center while keeping them interfaced with uncleansed ones, because non-Avaya data would likely flow into the Avaya systems and breach data-privacy and legal requirements.
Finally, given the unusual nature and magnitude of the combined IT divestiture-acquisition, the IT organization was allowed 96 hours to migrate the environment. Although exceptional, even a 96-hour window still looked far too small to take 30 Terabytes of live data through intensive, database record-level data cleansing while conducting two consecutive data center migrations.
The Clean Room – the term used by project staff for the specially-built, temporary data center – provided a “no-man’s land” where Nortel representatives could develop, test and execute custom code to remove non-Avaya data. Once done, they would hand the environment over to Avaya IT.
The cut-over process was executed in three stages: first, the data was migrated from the five Nortel data centers to the Clean Room; second, the data was cleansed; finally, the Avaya-only data was migrated from the Clean Room to the Avaya corporate data center.
Cleansing data that touched the core applications required in-depth functional and integration testing. However, the four-hour window available during the cut-over didn’t permit extensive testing. To solve this, the migration process was staged in four cycles:
Cycle 1 brought the environment up at Avaya. All of the hardware was built out for both the Clean Room and the Avaya data center. The data brought in was clean, although not necessarily up to date. Testing during this cycle identified corrupted data or transactions broken by either the cleansing or the migration.
Cycle 2 tested the data migration techniques for the first time. The data went through the three-hop approach that would eventually be used during the cut-over process. Cycle 2 measured the duration needed for each step and exposed many areas for improvement. Data brought in during Cycle 1 was backed up and overwritten by Cycle 2 data on the target systems at Avaya. In-depth testing was conducted again.
Cycle 3 was a rehearsal for the complete cut-over process scheduled two weeks later, leaving room only for minor adjustments. Cycle 3 testing was limited to a few defects that had not been resolved during Cycle 2.
Cycle 4 was the cut-over -- 96 hours of intense, orchestrated, minute-by-minute migration effort. Of this total window, only four hours were allocated to basic testing before the Go/No-Go decision.
Beating the physics
The issue at hand was one of simple math: pushing 30 Terabytes of data through a 300 Mbps circuit takes nine days and six hours -- 300 Mbps being the biggest pipe among the many that would be used. The challenge was to make nine days fit into less than 24 hours, the target duration for each of the three cut-over hops.
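The nine-day figure can be checked with back-of-the-envelope arithmetic (a sketch assuming decimal units, 1 TB = 10^12 bytes, and an ideal, fully utilized link with no protocol overhead):

```python
# Ideal transfer time for 30 TB over a 300 Mbps circuit.
DATA_TB = 30
LINK_MBPS = 300

bits_to_move = DATA_TB * 10**12 * 8           # total payload in bits
seconds = bits_to_move / (LINK_MBPS * 10**6)  # ignores overhead and contention
days, rem = divmod(seconds, 86_400)
hours = rem / 3_600

print(f"{int(days)} days, {hours:.1f} hours")  # -> 9 days, 6.2 hours
```

In practice, protocol overhead and contention only make the picture worse, which is why a straight full copy was never an option within the 96-hour window.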
To break through this limitation, we deployed a combination of advanced data replication technologies traditionally used for Disaster Recovery, which leverage de-duplication and compression algorithms. Once the data had been seeded, these technologies performed a differential analysis between source (Cycle 2 data) and target (Cycle 1 data) and, instead of pushing the entire 30-Terabyte dataset through the chain, sent only the deltas.
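The delta-only idea can be illustrated with a toy model (a minimal sketch, not the actual replication product: fixed 4 KB blocks and SHA-256 digests are assumptions, and real disaster-recovery replication also de-duplicates and compresses):

```python
import hashlib

BLOCK = 4096  # comparison granularity (assumed; real products vary)

def block_hashes(data: bytes):
    """Digest of every fixed-size block in the image."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_blocks(source: bytes, target: bytes):
    """(index, payload) pairs for blocks the target lacks or holds stale."""
    src, tgt = block_hashes(source), block_hashes(target)
    return [(i, source[i * BLOCK:(i + 1) * BLOCK])
            for i, h in enumerate(src)
            if i >= len(tgt) or tgt[i] != h]

# Cycle 1 seeded the target; the source then changed in one place.
cycle1 = bytes(i % 251 for i in range(BLOCK * 8))
cycle2 = bytearray(cycle1)
cycle2[3 * BLOCK] ^= 0xFF
delta = delta_blocks(bytes(cycle2), cycle1)
print(len(delta), "of 8 blocks cross the wire")  # -> 1 of 8 blocks
```

Applying the delta on the target reproduces the source image while moving only a fraction of the data -- which is how a nine-day full copy collapses into an hours-long differential pass.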
Even then, the data cleansing posed another challenge. Replication technologies give good results for systems in steady state, but some databases were completely reorganized after cleansing. This left the source significantly different from the target, making the replication technologies go ‘crazy’. We overcame this anticipated hurdle by using block/disk-level replication for the third hop -- Clean Room to Avaya -- which was database- and application-agnostic, relying on the fact that the cleansing routines would produce fairly similar output at each cycle.
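Why a reorganized database defeats delta replication can be seen with a toy comparison (a sketch assuming fixed-offset 4 KB blocks and SHA-256 digests; real replication products may chunk differently): an in-place change touches one block, while a one-byte shift -- the kind of thing a database rebuild can cause -- changes every block downstream.

```python
import hashlib

BLOCK = 4096  # fixed-offset comparison granularity (an assumption)

def changed_blocks(a: bytes, b: bytes) -> int:
    """Count fixed-offset blocks whose digests differ between two images."""
    return sum(
        hashlib.sha256(a[i:i + BLOCK]).digest()
        != hashlib.sha256(b[i:i + BLOCK]).digest()
        for i in range(0, len(a), BLOCK)
    )

steady = bytes(i % 251 for i in range(BLOCK * 100))

updated = bytearray(steady)          # steady state: one record rewritten in place
updated[50 * BLOCK] ^= 0xFF

reorganized = steady[1:] + b"\x00"   # re-org: everything shifts by one byte

print(changed_blocks(steady, bytes(updated)))  # -> 1   (tiny delta)
print(changed_blocks(steady, reorganized))     # -> 100 (effectively a full resend)
```

This is why deterministic cleansing output mattered: as long as each cycle’s Clean Room image came out nearly identical to the previous cycle’s, the third-hop block-level deltas stayed small.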
In the end, the outcome exceeded our target goal: All applications successfully made it through the three hops of the cutover process in less than 90 hours.
• 10 month project duration (Aug 2010 - May 2011)
• 2,254 tasks executed by 228 resources from 11 companies around the globe in less than 96 hours during the cut-over process
• 9,479 modifications made to the cut-over plan before baselining
• 47 applications migrated
• Complete ‘interim’ data center built and dismantled
• 178 Servers installed and configured
• 4 enterprise-class storage arrays deployed
• Thousands of lines of ‘one-time use only’ cleansing code developed
• 30TB of data migrated 4 times through the entire process
• Several hundred firewall rules implemented
• 30+ Extranets (ENCs) created with 3rd parties
• Hundreds of application interfaces re-configured
• 200+ test cases executed
• 445 helpdesk scripts created
• 35 IT self-service online transactions developed
• Onboarded a new outsourcer
Extraordinary level of commitment
The project had been a race since Day One. Its success was only possible thanks to an extraordinary level of commitment from all of the teams, who spent countless hours of work and private time to keep the project on schedule.
The migration team had direct CIO access and daily involvement. We received unconditional support from Avaya’s IT and executive-level leadership, who were fully energized to make the project a success.
Managing people and diversity
One of the biggest challenges was driving for results while collaborating with people from different companies, often with opposing aspirations or direction. This was identified very early as a high risk, and appointing seasoned people managers to lead the project mitigated much of it.
The project required a strong team and strict discipline to achieve success and Avaya’s partners hit the mark. The team listened and responded quickly when adjustments were needed.
The core project team was assisted by a small external and vendor-neutral team of executive Project Managers supported by a Management Consultant Sponsor. This model proved to be effective on numerous occasions given the extremely complex vendor landscape involving 11 companies.
Emphasis on Risk Management
Prioritizing time and schedule on a project of this magnitude has significant impacts on the level of risk. With time at a premium, it might have seemed more efficient to skip creating contingency plans. Nevertheless, the project team spent considerable effort ensuring a Plan B was always readily available.
The project ran for 10 months as a 24x7 initiative. Using a global team of resources doubled the team’s horsepower: there wasn’t a single morning when progress had not been made since the night before.
Pierre Colin was the lead technical and cut-over architect on the Avaya team for the Nortel integration project. He is an IT Director at Avaya under the leadership of CIO Steve Gold.