By John Moore | Posted 2006-05-15
New threats to your computer infrastructure emerge every day. Baseline's Security Survival Guide provides tips and techniques to help you safeguard your organization.
Step No. 3: Enact An Incident Response Plan
UPS subscribes to a number of such services "that allow us to understand what's going on," says Paul Abels, the company's manager of security policy strategy and business continuity planning. UPS also maintains a strategic relationship with an antivirus vendor. The relationships help UPS stay on top of the threat environment, which puts the company in a position to "react ahead of time," Abels says.
But the knowledge flows in both directions. When UPS discovered a variant of the Zotob worm, the company notified its antivirus vendor. "We were the first ones to report it to them," Abels says. Zotob achieved notoriety in August 2005 when it hit CNN and The New York Times, among others.
"Typically, [security managers] try to reach out to see if anyone else is experiencing the same thing," notes Miracle, the global security practice leader at BearingPoint.
An alert that reaches full-blown incident status triggers an organization's response plan, assuming it has one. Security experts say large enterprises typically do maintain some type of formal response plan, though Lawson, director of global services at Acumen Solutions, says incident response varies widely. Some response plans, governed by extensive steps and checklists, become so choreographed that they are "almost restrictive," he says. The other extreme is no choreography at all, which Lawson says results in a "mad dance." He suggests a middle path.
Gatewood, the University of Georgia's chief information security officer, says his institution follows established incident-handling protocols based on documentation from the National Institute of Standards and Technology (NIST) and the SANS Institute.
NIST's Computer Security Resource Center publishes a range of security policy guidelines, some of which touch on incident response. The SANS Institute, in conjunction with the Center for Internet Security, offers the Security Consensus Operational Readiness Evaluation, which seeks to provide a minimum standard for information security procedures and checklists. ISO 17799, which provides guidelines for security management, also covers incident management.
"A number of different frameworks can be used," says Payne, the president and chief operating officer at iDefense Security Intelligence Services. "Good security policy is like religion; it's more important that you have one ... than believe in any particular one."
Tim Grance, chief of the System and Network Security Group at NIST, says organizations should adapt and modify incident response guidelines to suit their needs. But he adds that security groups should document what they are doing so successors will understand the approach.
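Grance's point about documentation can be made concrete with a minimal sketch in Python of a running incident log, where each response step is recorded along with who took it and why. The schema, field names and example entry below are illustrative assumptions, not a NIST-prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentLogEntry:
    """One documented action in an incident response (hypothetical schema)."""
    action: str     # e.g. "quarantined workstation", "notified AV vendor"
    responder: str  # who performed the action
    rationale: str  # why -- so successors understand the approach
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

incident_log: list[IncidentLogEntry] = []

def record(action: str, responder: str, rationale: str) -> IncidentLogEntry:
    """Append a documented step to the running incident log."""
    entry = IncidentLogEntry(action, responder, rationale)
    incident_log.append(entry)
    return entry

# Hypothetical example entry:
record("Quarantined infected workstation", "desktop team",
       "Worm variant spreading; contain before cleanup")
```

Even a lightweight record like this gives a successor the "what" and the "why" of each decision, which is the part that is hardest to reconstruct after the fact.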
Empower a Computer Incident Response Team (CIRT)
At some organizations, a computer incident response team (CIRT) puts the response plan into action. The corporate security chief generally heads the CIRT, but some companies prefer to tap an experienced outsider to manage response activity, so that one person doesn't wear two hats in a crisis.
"It's extremely difficult to coordinate all the activities that need to happen out of the security group and lead the CIRT at the same," says BearingPoint's Miracle, who has held security management positions at ADP and Charles Schwab.
He says outsourcing this responsibility is fairly common at big companies, but unusual at smaller firms.
The CIRT consists of I.T. security specialists, either internal or from the outside, and people with other areas of expertise. "The CIRT team has to cross a lot of disciplines," Miracle explains. "The security people can't make changes on the desktops or changes in production systems. CIRT has to drive those changes."
Miracle says CIRT usually includes desktop gurus, server managers and help-desk representatives. The CIRT members' responsibilities are determined in advance. "In real time, you can't have people arguing ... that you can't shut that server down," Miracle explains. He adds that some companies hire consultants to help establish roles and get different groups across the organization to buy into the plan.
While the CIRT may have broad influence, its physical reach may be limited. To address this issue, the University of Georgia's security group has deputized security liaisons in each of the institution's 14 colleges, Gatewood says. Each college has a different security perimeter, but through the use of institutional policies, standards and processes, the university has been able to set a security baseline, Gatewood says. A security liaison also represents the university's administrative users.
Gatewood's security group trains the liaisons to know how to react as an event unfolds. I.T. personnel or business managers may serve as liaisons. The university's triage team, responsible for coordinating incident response, calls the liaisons into action when an incident affects their school or user group.
Next, the CIRT, or those empowered by the group, takes steps to isolate the affected area and remediate the problem. This could mean anything from shutting a port on a switch to removing viruses from infected workstations.
Often, an organization will shut down an entire network rather than just shutting a port or throttling bandwidth. For example, Lawson says, a multinational organization experiencing a fast-moving network attack in Brazil or Romania might completely isolate that part of the network while it embarks on remediation.
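The containment choices described above, from shutting a single switch port to isolating an entire regional network, can be sketched as a simple decision rule. This is a hypothetical Python illustration; the thresholds and action names are assumptions, and real plans encode these decisions in their response checklists rather than in code:

```python
def containment_action(hosts_affected: int, spreading: bool) -> str:
    """Pick a containment scope (illustrative thresholds, not a standard).

    A single infected host can often be contained by shutting its switch
    port; a fast-moving attack across many hosts may justify isolating
    the whole network segment, as in the multinational example above.
    """
    if spreading and hosts_affected > 10:
        return "isolate network segment"
    if hosts_affected > 1:
        return "throttle bandwidth and block at the router"
    return "shut the switch port for the affected host"
```

The value of deciding these cutoffs in advance is the same point Miracle makes about roles: in real time, there is no room to argue over whether a server, or a segment, can be taken down.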
For malware cleanup, an organization may choose to reload a fresh software image rather than delete the offending code. Tillmann, vice president of security vendor Enira Technologies, says more companies choose such "brute-force methods" because they find it less arduous than potentially spending hours cleaning infected files from a system.
"Most corporations have standard computers that allow them to have a default hard drive configuration," Tillmann says. "All they need to do is wipe the hard drive and reinstall the image."
Consultant Lawson cites the example of a European bank that doesn't even wait for a specific incident to reload images; it periodically reloads the image on every front-facing server, assuming the servers will be hacked at some point.
Let the Attack Go, or Cut It Off?
Brute force or otherwise, cleanup comes to a halt when an incident calls for a forensics examination.
During an ongoing network attack, the organization must decide whether to let the incursion continue to aid its investigation or cut it off to minimize damage. Technology and business leaders must weigh whether "the investigative process outweighs the risk to the network," says Morrow, the chief security and privacy officer for Electronic Data Systems Corp.
Sometimes it's strictly a business decision. But criminal cases may involve external authorities such as the FBI, Morrow notes, and "they will weigh in with what they want to do."
Organizations may lack the specialized staff to investigate computer crime. Miracle says forensics is frequently outsourced.
Bank of New York handles most response tasks internally, but may call in a forensics specialist if an incident "looks like something that might lead to litigation," says Guerrino, the bank's head of information security. An event such as theft of service could spark a forensics investigation, but could also be treated as an employee matter if the theft occurs internally.
The bank has a retainer-like contract with a forensics services firm that gathers evidence and maintains the chain of custody, Guerrino adds.
While investigation and remediation activities continue, incident responders, ideally, keep lines of communication open with key constituencies. The CIRT, for instance, notifies line-of-business managers of a problem so they can inform their customers.