Customer Cruelty

By Ericka Chickowski | Posted 2009-05-15

Big IT projects sometimes go wrong in spectacular ways, with some common themes running through the disaster stories like fault lines.

Nothing’s worse for a company than an IT fiasco that directly hits the way customers interact with the organization.  In some extreme cases, IT may as well hand disgruntled customers over to the competition on a silver platter. Don’t believe us? Read on.

HSBC Disaster

When an IT failure wastes a financial institution’s money, that’s a headache. When it erodes the organization’s reputation, it’s a life-threatening head injury.

HSBC USA learned this lesson the hard way in August 2008, when system malfunctions kept thousands of customers from accessing their accounts for the better part of a week. The culprit was a disk failure within the legacy mainframe system in HSBC’s Amherst-based data center, which handled the bank’s account and transaction information. According to the company, the failure occurred during a storage system upgrade.

Most experts agreed that something as simple as a disk failure should not jeopardize a financial institution’s uptime for five days; disaster recovery governance should cover something as mundane as hardware on the fritz. While HSBC did not disclose the details of its difficulties, one unnamed insider told Bank Technology News that HSBC was running a decades-old legacy core processing system that likely suffered complications following the disk failure.

"It's not surprising, given the age of that software and that internal code would be holding everything together," the source told BTN.

Some experts speculated that the crash of the core processing system, combined with a sluggish batch recovery process, could have slowed IT staff’s efforts to return the system to service.

The end result was spotty service for many customers, some of whom were unable to view their account balances, bank online or even use their debit cards. Regardless of the technical details, the failure shows how a simple upgrade can result in a loss of service and customer confidence if IT doesn’t have its ducks in a row before rolling up its sleeves.

Lessons Learned: Financial systems failures say bad things to customers seeking stability.

London Stock Exchange Sells Customers Short

On September 8, 2008, thousands of UK-based traders were champing at the bit to start the trading day after news broke that the U.S. government was planning to bail out mortgage lenders Fannie Mae and Freddie Mac. Most brokers expected their trades across the London Stock Exchange that Monday to net them their most lucrative commissions of the year. Then an LSE technical glitch got in the way.

Traders were locked out of the LSE trading platform for nearly the whole day (more than seven hours in total) due to what LSE called “connectivity problems.” The financial papers quoted traders expressing their frustration over millions of pounds in unearned commissions that day, a wound made more painful by LSE’s lack of communication about the specific problems behind the shutdown. Just days earlier, the exchange had finished extolling the virtues of a shiny new upgrade to its trading platform, TradElect.

LSE was oblique about the exact technical issue, stating only that it was not caused by an inability to handle trading volume or by any problem associated with the upgrade.

If true, that goes to show that no matter how much work goes into a new project’s bells and whistles, the effort is for naught if the fundamental infrastructure isn’t sound.

Lessons Learned: Poor communication exacerbates customer-facing IT issues.


