Eckerd Takes It To The Next Level

By David F. Carr  |  Posted 2003-06-09
When Eckerd decided in 2001 to bring technology operations in-house, Chief Information Officer Ken Petersen bet his job that he and his staff could do it better than IBM.

After IBM's ten years of running Eckerd's information systems, the team didn't have to look far to find places where it could improve on IBM's way of doing things.

For example, the processes IBM had in place for collecting sales data and loading it into Eckerd's Oracle-based data warehouse required "a lot of handholding," says Terri Cull, director of store systems. Batch jobs that consolidated sales reports and updated the data warehouse used to analyze trends often got tangled up and stopped, and data center operators frequently had to restart the stalled jobs by hand.

Cull says that was just one example of "manual or near-manual processes" in which computerized jobs had to be actively monitored or they wouldn't work correctly.
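The usual cure for that kind of babysitting is to make the restart itself automatic. As a rough illustration of the idea, not Eckerd's actual tooling, a wrapper that restarts a stalled or failed batch job a bounded number of times before escalating to an operator might look like this in Python (the command and time budgets are hypothetical):

```python
import subprocess
import time

def run_with_retries(cmd, max_attempts=3, timeout_s=3600, backoff_s=60):
    """Run a batch job, restarting it automatically if it stalls or fails,
    instead of waking an operator for every hiccup."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = subprocess.run(cmd, timeout=timeout_s)
            if result.returncode == 0:
                return True  # job completed cleanly
        except subprocess.TimeoutExpired:
            pass  # job stalled past its time budget; treat it as a failure
        if attempt < max_attempts:
            time.sleep(backoff_s * attempt)  # back off before restarting
    return False  # only now does a human need to get involved
```

Only jobs that exhaust their retries ever reach a person, which is the difference between a "near-manual process" and a lights-out one.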

"That kind of speaks to what the outsourcer's motivation is," Petersen says, suggesting that labor-intensive processes weren't as big a concern for IBM, since it was in the business of supplying technical staff. But they were a concern for Eckerd, as it worked to create a lean and effective technology department. IBM declined repeated requests for comment.

Eckerd decided to bring computing back in-house in order to cut its outsourcing fees and regain control of strategic initiatives like its Quantum Leap supply-chain improvement project.

A side benefit has been making its information processing and technology services operations more responsive. For instance, under IBM, the data available for merchandisers trying to analyze sales patterns was often two or three days old—which Petersen likens to "flying blind." Managers need current data to be able to see things like the profit margin on sales and how fast items are selling, he says. So eliminating delays in updates is a top priority.

Part of the answer turned out to be more efficient use of one of IBM's own technologies, the "Maestro" job scheduler (formally known as Tivoli Workload Scheduler).

The challenge was to unravel dependencies between jobs and ensure that the process would run without interruption. Dependencies exist wherever one job must be completed before the next job can begin. A simple example: Before pharmacy sales can be loaded into the data warehouse, they must be run through an insurance-claims processing system. So if anything goes wrong in claims processing, none of the other steps that depend on the pharmacy sales data can proceed.
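The standard technique for unraveling such dependencies is a topological sort: list each job's prerequisites, then compute an order in which every job runs only after everything it depends on has finished. A minimal sketch in Python, using a hypothetical job graph loosely modeled on the pharmacy example (the job names are illustrative, not Eckerd's):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical job graph: each job maps to the jobs that must finish first.
deps = {
    "claims_processing": [],
    "pharmacy_load":     ["claims_processing"],   # claims must clear first
    "front_store_load":  [],
    "warehouse_refresh": ["pharmacy_load", "front_store_load"],
}

# static_order() yields jobs so every prerequisite precedes its dependents.
order = list(TopologicalSorter(deps).static_order())
```

A scheduler like Maestro maintains exactly this kind of graph, which is also what tells it that a failure in claims processing must hold up the warehouse refresh but not the front-store load.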

Maestro works by launching computing jobs across multiple computers in the right order to accomplish a given task—in this case, extracting data from multiple operational systems, transforming it into the format the data warehouse expects, and loading it into its Oracle database.

Eckerd programmers asked Maestro to automatically monitor whether any task was taking too long. If so, the scheduler would capture a snapshot of the functions being executed and variables in memory for analysis.
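The article doesn't describe Maestro's internals, but the snapshot-on-overrun idea can be sketched generically: run the task under a watchdog timer, and if the deadline passes, record what every thread is doing for later analysis. A hedged Python analogue (not IBM's implementation):

```python
import sys
import threading
import time
import traceback

def watched(fn, deadline_s, snapshots):
    """Run fn(); if it overruns deadline_s, append a stack snapshot of every
    live thread to `snapshots` for offline analysis, then let fn() finish."""
    done = threading.Event()

    def watchdog():
        if not done.wait(deadline_s):
            # Deadline blown: record what each thread was doing at that moment.
            for frame in sys._current_frames().values():
                snapshots.append("".join(traceback.format_stack(frame)))

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return fn()
    finally:
        done.set()

# A deliberately slow task trips the watchdog; a fast one does not.
slow_snaps = []
watched(lambda: time.sleep(0.3), deadline_s=0.05, snapshots=slow_snaps)
```

The payoff is that the evidence of where a job was stuck is captured at the moment it overran, rather than reconstructed the next morning from incomplete logs.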

By methodically analyzing these logs and whittling away at problems, they turned the process into one that would run reliably overnight, rather than taking two or three days.

A bigger problem arose when the data warehouse needed to be reorganized—a frequent requirement when the company starts selling a major, new product line in its stores. Previously, reconfiguring the data repository to work with new sales data required six weeks of planning and one week of downtime for the data warehouse.

Through more methodical work, Eckerd turned the warehouse reorganization process into one that can be planned within a week and executed within 24 hours—while the data warehouse is offline.

Given that this is a periodic requirement—something that comes up several times a year, but not constantly—that outage is considered acceptable under the current service level agreement with the merchants.

Getting it there involved writing programs to manipulate data in mainframe systems, as well as Unix scripts and Oracle stored procedures to streamline the process of reorganizing the data warehouse.

Eckerd also improved basic field technical support to keep stores operating smoothly. Senior Director of Store Support Mike Amato says he and members of his staff who had backgrounds in store operations brought a new appreciation of what was important.

For example, a nationwide shortage of pharmacists has drug chains competing over recruiting and retention. Amato says he has seen employees quit over the steady drip of small but constant frustrations, like prescription label printers that break down regularly.

So he emphasizes preventive maintenance—using failure statistics to determine how often printers should be serviced, for example, rather than waiting for them to break down.
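One simple way to turn failure statistics into a service schedule—a back-of-the-envelope sketch with made-up numbers, not Eckerd's actual model—is to assume a constant failure rate and pick the longest interval that keeps the chance of a breakdown before the next scheduled visit below a threshold:

```python
import math

def service_interval_days(failures, printer_days, max_fail_prob=0.10):
    """Longest service interval (in days) that keeps the probability of a
    breakdown before the next scheduled service under max_fail_prob,
    assuming a constant (exponential) failure rate estimated from the field."""
    rate = failures / printer_days                 # failures per printer-day
    # P(fail before t) = 1 - exp(-rate * t); solve for t at max_fail_prob.
    return -math.log(1.0 - max_fail_prob) / rate

# Hypothetical field data: 40 failures over 20,000 printer-days of operation.
interval = service_interval_days(40, 20_000)       # roughly 53 days
```

Tightening the acceptable failure probability shortens the interval, which is the knob a support manager would turn for equipment—like pharmacy label printers—where a breakdown costs far more than a service call.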

"I wouldn't have had that perception of pharmacy if I had not worked behind counters and in pharmacies," Amato says. "Technology is something we want to be invisible to the user."

The effort has paid off, Amato says. Previously, he got about 180 calls per month from field personnel complaining about technical problems that hadn't been solved or hadn't been addressed quickly, he says. The support staff whittled that number to fewer than 10 in February. "That's a heck of a record," he says proudly.

Petersen says the improvement in support was also one of the things that got the most attention within Eckerd. "There wasn't a day that went by after we made this change that I didn't get an e-mail from the field thanking me for doing this," he says.

David F. Carr is the Technology Editor for Baseline Magazine, a Ziff Davis publication focused on information technology and its management, with an emphasis on measurable, bottom-line results. He wrote two of Baseline's cover stories focused on the role of technology in disaster recovery, one on the response to the tsunami in Indonesia and another on the City of New Orleans after Hurricane Katrina.

David has been the author or co-author of many Baseline Case Dissections on corporate technology successes and failures (such as the role of Kmart's inept supply chain implementation in its decline versus Wal-Mart, or the successful use of technology to create new market opportunities for office furniture maker Herman Miller). He has also written about the FAA's halting attempts to modernize air traffic control, and in 2003 he traveled to Sierra Leone and Liberia to report on the role of technology in United Nations peacekeeping.

David joined Baseline prior to the launch of the magazine in 2001 and helped define popular elements of the magazine such as Gotcha!, which offers cautionary tales about technology pitfalls and how to avoid them.