DoS/DDoS Attacks Grow in Complexity

By Samuel Greengard

Denial of service (DoS) attacks and distributed denial of service (DDoS) attacks are a vexing problem for organizations. What’s more, as GoDaddy and Bank of America recently discovered, they’re commonplace and increasingly sophisticated. Hackers use these techniques to take down sites and damage a company’s reputation or bottom line. Unfortunately, “Every site is a potential target,” observes Tal Beery, security researcher for Imperva.

Imperva’s September Hacker Intelligence report, “Denial of Service,” provides some insight into the current state of DoS and DDoS, which are increasingly used by groups such as Anonymous and LulzSec to support their goals and promote their messages. The report examines how hackers execute DDoS attacks, analyzing the technical tools and trends deployed during several recent hacking operations, and notes that the problem is growing.

Among other things, the report found that black-hat hackers rely on white-hat DDoS testing tools such as LOIC, SlowHTTPTest and Railgun to generate attacks. What’s more, many attack tools are freely available, a growing number of groups offer DoS as a service, and DoS is moving up the stack and into the application layer.

“Attackers view the ability to create an application layer DDoS attack as a valuable tool in their toolbox,” Beery explains. “They use DDoS tools for both hacktivist and commercial causes.”

Addressing these issues is a major undertaking. It’s increasingly crucial for business and IT executives to assess how important a Web application’s availability is to business continuity. “If your Website is just a digital brochure, then you may be able to sustain a few minutes of downtime every now and then,” Beery says, “but if your Web application is your main business, then every second it’s down means a loss to the financial bottom line.” However, he notes that any downtime could result in negative publicity and put a dent in the brand.

The report notes that when it comes to DoS, it’s critical to fix broken code without delay, use an external device such as a Web Application Firewall (WAF) and, in a best-practice scenario, combine these two approaches. However, when it comes to DDoS, broken code isn’t the only potential problem. It’s necessary to deploy anti-DDoS solutions in various locations, including the enterprise perimeter, and to offload services to the ISP or the cloud.
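To make the perimeter piece concrete, here is a minimal sketch of the kind of per-client rate limiting a WAF or perimeter anti-DDoS device applies. The thresholds and function names are illustrative assumptions, not rules taken from the Imperva report.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10            # illustrative sliding-window length
    MAX_REQUESTS_PER_WINDOW = 100  # illustrative per-client budget

    _request_log = defaultdict(deque)  # client IP -> timestamps of recent requests

    def allow_request(client_ip, now=None):
        """Return False when a client exceeds the per-window request budget."""
        now = time.time() if now is None else now
        window = _request_log[client_ip]
        # Drop timestamps that have aged out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False  # over budget: block, delay, or challenge the client
        window.append(now)
        return True

In practice this logic sits in front of the application (in the WAF, the ISP scrubbing center or the cloud provider), so flood traffic is dropped before it consumes application resources.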

What’s more, because DoS is moving up the stack and into the application layer, “Application owners should verify that their DoS protection is relevant against application layer DoS attacks.” The report specifically recommends verifying SSL decryption support and adopting HTTP parsing and rules-creation capabilities.

Ultimately, organizations can minimize risk by blocking known threats. “Most attack tools have some unique HTTP characteristics that can be extracted and provide a basis for detection,” Beery says. Another effective strategy is to acquire reputation data about attack sources, since many attacks originate from infected users and proxies. Finally, it’s crucial to stop automation (by detecting missing headers, for example) and put an anti-DoS rule engine in place to define rules that take repetition into account.
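As an illustration of the first and third points, the sketch below flags requests that lack headers a normal browser would send or whose User-Agent matches a known tool string. The header list and signatures are hypothetical examples, not signatures published in the report.

    # Hypothetical detection rules: flag requests missing common browser
    # headers, or whose User-Agent matches a known attack-tool signature.
    EXPECTED_HEADERS = {"user-agent", "accept", "accept-language"}  # illustrative
    KNOWN_TOOL_SIGNATURES = ("loic", "slowhttptest")                # illustrative

    def is_suspicious(headers):
        """Return True if the request looks automated or tool-generated."""
        normalized = {k.lower(): v for k, v in headers.items()}
        if not EXPECTED_HEADERS.issubset(normalized):
            return True  # missing headers suggest a script, not a browser
        user_agent = normalized.get("user-agent", "").lower()
        return any(sig in user_agent for sig in KNOWN_TOOL_SIGNATURES)

Reputation data about attack sources would typically be consulted alongside checks like these, since a request from a known infected host or open proxy deserves extra scrutiny even when its headers look normal.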

“Most HTTP requests appear to be benign individually,” Beery concludes. “Only by analyzing them in the context of the whole session is it possible to reveal the abnormal repetitions that constitute the attack.”
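A minimal sketch of that session-level view, assuming requests have already been grouped by session: it counts identical requests within a session and surfaces the abnormal repetition Beery describes. The threshold is an illustrative placeholder to be tuned per application.

    from collections import Counter

    REPETITION_THRESHOLD = 50  # illustrative; tune per application

    def abnormal_repetitions(session_requests):
        """Given (method, path) tuples for one session, return the requests
        repeated often enough to suggest an application-layer DoS."""
        counts = Counter(session_requests)
        return {req: n for req, n in counts.items() if n >= REPETITION_THRESHOLD}

For example, 200 identical GETs to an expensive search page within a single session would be flagged, while the same requests scattered across thousands of ordinary sessions would not.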