Gotcha! False Alarms
By David F. Carr | Posted 2004-08-01
Don't be alarmed by false alerts.
Automated systems have a tendency to fill the world with alerts: fire alarms, low-inventory warnings, squawking medical devices in a hospital, and collision alarms in planes or, increasingly, automobiles. Whether you're designing the alarm-generating software or merely deciding which alarms to turn on in a packaged product, the challenge is finding the right balance between too many alarms and too few.
Problem: Too many false alarms train users to ignore valid warnings.
Resolution: You must combat a "better safe than sorry" attitude among systems engineers, says James P. Bliss, a professor at Old Dominion University who has studied this "Boy Who Cried Wolf" problem extensively in the context of aviation, air traffic control and vehicle warning systems. "We all have a limited resource called attention," Bliss says. "When a series of false alarms are generated by a system, it becomes less attention-worthy because of its history of unreliability."
False alarms can often be reduced when systems are designed to respond to multiple warning signs before they sound (for example, a smoke alarm that detects smoke, light and heat, rather than just smoke). Another way to improve user response is to provide more information about why the alarm sounded, making it easier to distinguish critical alarms from minor or false alarms, Bliss says.
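The multi-sensor approach Bliss describes can be sketched in a few lines of code. This is a minimal illustration, not any real alarm product's logic; the sensor names and the two-signal threshold are assumptions chosen to match the smoke-alarm example.

```python
def should_alarm(readings, required=2):
    """Sound the alarm only when at least `required` independent
    warning signs agree, rather than on any single sensor."""
    tripped = sum(1 for value in readings.values() if value)
    return tripped >= required

# A smoke reading alone stays quiet, cutting nuisance alarms from
# burnt toast or steam...
print(should_alarm({"smoke": True, "light": False, "heat": False}))  # False
# ...but smoke corroborated by heat trips the alarm.
print(should_alarm({"smoke": True, "light": False, "heat": True}))   # True
```

The design trade-off is explicit in the `required` parameter: raising it suppresses more false alarms at the cost of slower response to real events.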
Problem: Everyday applications overwhelm users with alerts.
Resolution: Identify where internal applications have gone overboard and sacrificed usability.
Excessive pop-up alerts and interruptions such as legal disclaimers used to be a problem with consumer Web sites, says Bruce "Tog" Tognazzini, a computer-human interaction expert and a principal of the Nielsen Norman Group. He notes that Web retailers who didn't learn to streamline their sites went out of business. But these problems are still prevalent in many business applications for a company's internal users, he says: "The feedback loop is not there the way it is in retail."
Problem: Consulting company lawyers leads to more alerts.
Resolution: Appeal to a higher authority, even the CEO, to weed out lawyer-mandated alerts.
Lawyers tend to err on the side of too many alerts, Tognazzini says. Sometimes the lawyers have legitimate concerns about, for example, the liability of failing to warn users about the limitations of a system's security, reliability or accuracy. But do users need to click to acknowledge this warning every time they log on, or only the first time?
If you have to appeal to a higher authority, Tognazzini suggests translating the impact into dollars. "Executives tend to care about money," he says. For example, time wasted on excessive alarms can be translated into lost productivity and, ultimately, into dollars.
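A back-of-the-envelope calculation makes the productivity argument concrete. The figures below are illustrative assumptions, not numbers from the article:

```python
def annual_alarm_cost(employees, alarms_per_day, seconds_per_alarm,
                      hourly_rate, workdays=250):
    """Estimate the yearly dollar cost of time spent dismissing
    needless alerts across a workforce."""
    hours_lost = (employees * alarms_per_day * seconds_per_alarm
                  / 3600 * workdays)
    return hours_lost * hourly_rate

# Hypothetical case: 1,000 employees, 5 pop-ups a day, 10 seconds
# each, at a $40 loaded hourly rate.
print(round(annual_alarm_cost(1000, 5, 10, 40)))  # 138889
```

Even a ten-second interruption, multiplied across a company, translates into six figures a year, which is the kind of number Tognazzini suggests putting in front of executives.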
Problem: Critical systems often tolerate a high ratio of false warnings to legitimate threats.
Resolution: Even "unreliable" systems can be useful, according to Mark St. John, a human factors expert at Pacific Design and Engineering Group of San Diego. St. John says that the air warfare systems he is designing for the U.S. Navy detect a large number of potential threats that turn out to be false alarms. But rather than sound blaring alarms, the system color-codes targets on an operator's radar screen based on the probability that they may be real threats. "The system is telling you, 'I'm just dumb automation, but I can tell you when things are likely,'" St. John says.
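The graded display St. John describes can be sketched as a simple mapping from estimated threat probability to a display color. The thresholds and colors here are assumptions for illustration, not the Navy system's actual values:

```python
def threat_color(probability):
    """Map an estimated threat probability to a radar display color,
    conveying likelihood instead of sounding a binary alarm."""
    if probability >= 0.8:
        return "red"      # likely hostile: demands operator attention
    if probability >= 0.4:
        return "yellow"   # ambiguous: worth monitoring
    return "white"        # probably benign: no interruption

print(threat_color(0.9))  # red
print(threat_color(0.5))  # yellow
print(threat_color(0.1))  # white
```

The point of the design is that low-confidence detections stay visible without demanding a response, so operators are never trained to ignore the display the way they learn to ignore a blaring alarm.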