Built-in Security

 
 
By Ericka Chickowski  |  Posted 2008-10-30

In the business world, the general wisdom is to work smarter, not harder. Unfortunately, many IT security managers aren’t taking that advice these days. They continue to chase vulnerabilities and put out fires without really addressing the root of their problems—namely, insecure software.

Many managers blame software vendors for these issues, but the fact is that major vendors such as Microsoft and Oracle are seeing a decrease in the number and seriousness of vulnerabilities in their software. However, the overall number of application vulnerabilities and attacks worldwide continues to multiply.

How is that possible? As operating system vendors have eliminated many of their security problems, most hackers have figured out that enterprises don’t specialize in writing secure applications. So they’ve started to pound away at the application layer, including home-grown Web apps.

“We’re seeing hackers trend away from compromising Windows and moving to lower-hanging fruit,” says Michael Howard, principal security program manager at Microsoft and author of The Security Development Lifecycle and Writing Secure Code. “Unfortunately, there’s plenty of that.”

IBM Internet Security Systems X-Force’s 2008 Mid-Year Trend Statistics report illustrates this. X-Force researchers found that Web application vulnerabilities made up 51 percent of all reported vulnerabilities this year. “Over the past few years, the focus of endpoint exploitation has dramatically shifted from the operating system to the Web browser and multimedia applications,” the report stated.

Many security insiders believe that the only way organizations will be able to effectively meet the onslaught of attacks against Web and other home-spun applications is to bake security in from the get-go by implementing secure coding practices from the ground up.

Educating Programmers

This past June, the Web Application Security Consortium reported that after assessing a whopping 32,000 commercial Web sites, it found nearly 97 percent had a severe vulnerability. Many security experts believe that this problem is the result of a rampant lack of education in the typical IT department’s developer ranks.

“We have 17 million programmers in the world, and I doubt 1 percent of them have had any kind of formal or informal education in secure software development,” says Jeremiah Grossman, CTO and founder of WhiteHat Security. “There is a whole mess of code being generated every day by those who don’t know how to write securely.”

Even at the university level, computer science students are earning diplomas without ever having learned about programming with security in mind, according to Microsoft’s Howard. “I think that’s a travesty,” he says. “I see very few schools adopting what I would consider basic programming skills. There is nothing that makes security special, and yet for some reason, it is treated as this special thing. In reality, it is just part of shipping software.” 

Changing Processes

The difficulty most organizations have when implementing security in their development processes is that security is not just a single step that can be added like a special feature. It must be embedded into the entire programming cycle.

“Security is an emergent property, which means that there is no phase on the assembly line where somebody bolts the security on,” says Brian Chess, chief scientist for Fortify Software, which makes code analysis tools. “You need to look at all the different paths and processes that are involved in making software in order to make it secure.”

Doing so can take a lot of additional time, a commodity most programmers don’t have in spades. Even those senior coders who might understand security development principles will set them aside if they’re rushed by bosses who put a higher priority on features and deadlines than on shoring up the code.

“The agile development philosophy says to release early and release often, which is counterintuitive to the build-security-upfront kind of philosophy,” says WhiteHat’s Grossman.

According to Microsoft’s Howard, the only way an organization can get its programmers to write securely along the way is to make that process a priority starting at the top of the enterprise and then drive those principles down through the organization over an extended period. He speaks from experience: Howard was instrumental in developing Microsoft’s internal Security Development Lifecycle (SDL) and instituting it companywide. Much of the SDL focuses on three main principles: never trusting input, fuzz testing to verify that input really isn’t being trusted, and threat modeling.
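
The first of those principles is easy to picture in code. The sketch below, written in generic Java rather than any Microsoft-specific tooling, shows one common way to avoid trusting input: checking it against a strict allow-list before it reaches any business logic. The class name and the username policy are illustrative assumptions, not anything prescribed by the SDL.

import java.util.regex.Pattern;

// A minimal sketch of the "never trust input" principle: validate untrusted
// data against a strict allow-list before it is used anywhere else.
public class UsernameValidator {

    // Hypothetical policy: 3 to 20 characters, letters, digits and underscores only.
    private static final Pattern ALLOWED = Pattern.compile("[A-Za-z0-9_]{3,20}");

    public static String requireValidUsername(String input) {
        if (input == null || !ALLOWED.matcher(input).matches()) {
            // Reject outright; don't try to silently "clean up" suspicious input.
            throw new IllegalArgumentException("invalid username");
        }
        return input;
    }

    public static void main(String[] args) {
        System.out.println(requireValidUsername("alice_01"));    // passes the allow-list
        System.out.println(requireValidUsername("' OR 1=1 --")); // throws IllegalArgumentException
    }
}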

The years-long project of instituting these principles took a lot of developer hand-holding early on, he says, but that eventually changed as programmers put the practices into their daily routines.

“All of a sudden, people have grown up,” Howard remarks. “They recognize that security is part of the job—something you’ve got to do. There’s no longer the permanent requirement from our end to keep going back to the development group to make sure they’re doing the right thing.”

Microsoft believes so strongly in the power of SDL that Howard’s boss, Steve Lipner, senior director of security engineering strategy, and the rest of the SDL team recently announced that Microsoft would be making its internal threat-modeling tool available to the industry.

Business Advantages

Though sometimes difficult to quantify, the fruits of secure coding labors can make a meaningful impact on the bottom line. “I can’t give you hard numbers, but we have definitely noticed that testing is not as expensive as it was before,” Howard says, explaining that testing times have diminished because programmers are addressing problems earlier in the cycle and fixing them along the way.

And secure coding does more than cut down on testing time. It also makes for a higher-quality product, according to Fortify Software’s Chess. “A lot of times, catching security bugs early makes everything else you do more predictable, and that predictability can be a big advantage,” he says. “It also means that when you announce a ship date, you’re more likely to meet it because you’re more in control.”

Most importantly, though, organizations are mitigating potentially millions in losses by heading off security breaches. Microsoft’s Howard says he helped one colleague at a different company justify an extra $200,000 in resources for secure coding by bringing in the risk management department and showing them the value of the information the investment was intended to protect.

The expenditure that executives initially balked at seemed like chump change once the risk managers explained that it would mitigate the risk to $30 million in assets. By the time the presentation was over, Howard recalls, “They said, ‘Where do we sign?’”
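
Howard doesn’t detail the risk managers’ math, but a back-of-the-envelope calculation along the lines sketched below shows why the numbers were persuasive. Only the $200,000 program cost and the $30 million asset value come from his anecdote; the exposure factor and breach frequencies are made-up assumptions for illustration.

// Hypothetical annualized-loss-expectancy (ALE) comparison. Only the $200,000
// program cost and the $30 million asset value come from Howard's anecdote;
// the exposure factor and breach frequencies are illustrative assumptions.
public class SecureCodingBusinessCase {
    public static void main(String[] args) {
        double assetValue = 30_000_000;      // value of the information being protected
        double exposureFactor = 0.10;        // assumed share of that value lost per breach
        double breachesPerYearBefore = 0.5;  // assumed breach frequency without secure coding
        double breachesPerYearAfter = 0.1;   // assumed breach frequency with secure coding
        double programCost = 200_000;        // annual secure-coding investment

        double aleBefore = assetValue * exposureFactor * breachesPerYearBefore; // $1,500,000
        double aleAfter  = assetValue * exposureFactor * breachesPerYearAfter;  // $300,000
        double netBenefit = (aleBefore - aleAfter) - programCost;               // $1,000,000

        System.out.printf("Expected annual loss avoided: $%,.0f%n", aleBefore - aleAfter);
        System.out.printf("Net benefit after the $%,.0f investment: $%,.0f%n",
                programCost, netBenefit);
    }
}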

Slow Implementation

Unfortunately, it isn’t just a matter of spending money and rolling out a nifty new tool. It takes a thoughtful overhaul of procedures and practices to better design and verify code through the development lifecycle.

“Clearly, you need some sort of a governance process on top of it all,” Fortify’s Chess says, “to make sure that the right people are talking to each other and that they are coordinating appropriately, because this cuts across the organization.”

However, implementing a governance plan can take time.

“We don’t see anybody changing overnight,” says Greg Hanchin, principal at DirecSec, a value-added reseller that specializes in security. “You can’t just go in and establish a secure coding process around five or 10 years of code.

“Here’s what you can do: Under the hamster wheel of people, process and technologies, you can attempt to bring technology in and get your people and processes around that mission of making software better incrementally. It probably takes a couple of years in the development cycle process to actually make a meaningful change.”

In the meantime, there is at least one shortcut organizations can take to reduce their exposure.

“I recommend starting with some modern development frameworks,” WhiteHat’s Grossman says. “A lot of time, the security is already baked into new frameworks like .NET and J2EE [Java 2 Platform, Enterprise Edition]. If you use them properly, you can develop code really, really quickly—code that also happens to be secure.”
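
As a small illustration of what “using them properly” means, the sketch below contrasts hand-built SQL, which invites injection, with the parameterized queries that standard Java and JDBC have long supported. The users table and its columns are hypothetical, invented for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative only: the "users" table and its columns are hypothetical.
public class UserLookup {

    // Unsafe pattern: concatenating untrusted input into SQL invites injection.
    // String sql = "SELECT id FROM users WHERE email = '" + email + "'";

    // Safer pattern: let the platform bind the input as data via a parameterized query.
    public static Integer findUserId(Connection conn, String email) throws SQLException {
        String sql = "SELECT id FROM users WHERE email = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, email); // bound as data, never interpreted as SQL
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getInt("id") : null;
            }
        }
    }
}

The point is not the specific API but the habit: when a framework offers a facility that treats untrusted input as data (parameterized queries, built-in output encoding, validated form models), using it is usually both faster and safer than rolling your own.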

Getting Certified

“Too often, security is bolted on at the end of the software lifecycle as a response to a threat or after an exposure,” warns Howard Schmidt, president of the Information Security Forum and a board member of (ISC)2, a not-for-profit global organization that educates and certifies information security professionals. “New applications that lack basic security controls are being developed every day, and thousands of existing vulnerabilities are being ignored.”

(ISC)2 is determined to do something about this situation. It recently announced a new certification, the Certified Secure Software Lifecycle Professional (CSSLP), to “validate secure software development practices and expertise.” The goal is to establish best practices and validate a professional’s competency in addressing security issues throughout the software lifecycle.

The CSSLP is code-language-neutral and is applicable to anyone involved in the software lifecycle, such as analysts, developers, software engineers and architects, project managers, software quality assurance testers and programmers. Areas covered by the exam include lifecycle vulnerabilities, risk, information security fundamentals and compliance.

“The CSSLP ensures that people—our first line of defense in this war—have the tools and knowledge to implement and enforce security throughout the software lifecycle,” says W. Hord Tipton, executive director for (ISC)2.