Computer Security as a Business Enabler
By John McCormick | Posted 2007-07-07
The executive director of Purdue University's Center for Education and Research in Information Assurance and Security (Cerias) advises CIOs to view security not as a problem, but as a tool to protect jobs and help build business.
Last month, Eugene Spafford, one of the nation's foremost computer security experts, was given the President's Award from the Association for Computing Machinery, a technical association. The ACM cited his "extensive and continuing record of service to the computing community, including major companies and government agencies."
Spafford, a professor in both the computer science and the electrical and computer engineering departments at Purdue University, is the founder and executive director of Purdue's Center for Education and Research in Information Assurance and Security (Cerias). He's been an adviser to the National Science Foundation, the National Security Agency and the Federal Bureau of Investigation, among others. And his research into software debugging, intrusion detection and secure architectures has resulted in some commercial products, including the Tripwire intrusion detection system. He spoke recently with Baseline editor-in-chief John McCormick.
What do you think is the biggest threat right now to corporate computing centers?
Probably the biggest threat is people thinking that they can buy broken things and then put patches on afterward and make it secure.
Are you talking software and operating systems?
Applications, operating systems, fundamental business processes that have all been developed and adopted based on convenience or cost, without any real thought given to overall security and safety. And then assuming that if only they buy the right set of patches and add-ons, they'll make it secure. And that's just plain wrong.
It really has to be architected in from the beginning. Otherwise, it's just not going to work. It's like building your entire factory out of gasoline-filled balsa wood. "Gee, if only you buy the right doors, the place will be fireproof." It just doesn't work that way.
People have been talking about buffer overflows and these types of problems for years. Does it surprise you as well that we're still talking about these things?
Early in my career it surprised me, because we've been talking about these for 40 years and we have solutions. And it surprised me that nobody was using them. But as time has gone on, very little really surprises me anymore. It's just a continual sort of drumbeat of a very unfortunate behavior that's driven by a number of factors. And, again, I think cost and convenience have tended to trump any kind of prudent security or safety approaches.
It may not be a good example, but in at least one press account of an incident, somebody said, "Well, we had encryption in place." Well, that shows a fundamental lack of understanding: encryption doesn't do any good when somebody's inside your system, because they can read your keys or subvert the algorithms. Security is a total-picture issue, not a set of spot problems to patch.
We hear it again and again, but it really does come down to defense in depth.
That's right. It should be integrated with everything else, and it should be viewed actually as an issue of quality. As an enabler—rather than as sort of a dreaded expense, which is what too many people view it as.
But people say they have to move quickly. Isn't the competitive pressure to move faster and faster—to keep up with what somebody else might be doing—also a factor?
It is. And it's a factor for the producers of the software as well as the consumers. And actually, quite frankly, for our own government. When the view is focused on performance next quarter, rather than the long-term reliability and robustness of the enterprise, then bad decisions get made.
Which gets back to your point that it has to be architected in. But how do you go about making sure that it's designed in from the beginning?
It's a process that you have to continually revisit and iterate on. It is not a point solution ever. And we have too many companies, and too many consultants, who are selling it that way—saying, "I'll make your system secure, if only you do this." And that's simply wrong.
It's like leading a healthy lifestyle. You aren't going to be healthy the rest of your life just because you get a vaccination against measles. And it needs to be understood in that regard.
As far as the question, "How do you architect it in and keep going?" Part of that is having informed, empowered individuals who have the appropriate training and background to be making decisions about what goes in, and that those decisions are based on an adequate understanding of risk.
So, again, how do you stay healthy your whole life? Well, you don't do it by visiting whoever offers to keep you healthy at the lowest price—like the guy down at the hardware store. It's by finding a trained professional who understands the systems, and you have an ongoing commitment to observe the protocols.
So, if you are the CIO of a large organization, what are some of the things that you should do to ensure your information is secure?
Probably the biggest failing I've seen in that environment is that first of all, there is no knowledge of where there are risks and where there are valuable components to be protected. So, whoever is in charge needs to have a comprehensive view of those things that are resources needing protection and the risk involved with those assets.
And then how do they go about getting that view? Do you need outside help? Or can it be done internally?
Many small and medium-size businesses may be able to use a pre-packaged program. Then again, they may need to go to an outside firm, depending on the size, competence and time available to the parties involved.
Larger organizations may have the expertise in-house, but they're not empowered to do this. And so, very often what you see in larger organizations are some individuals who are aware of some assets and some threats. And they're in charge of protecting those assets. But they don't have a global view.
What happens is that you get blindsided by problems coming in from places that either people weren't aware of or that weren't adequately protected.
So, how do you get a handle on all this? What questions do you need to start asking?
How is it we know we're running what we should be running? The whole issue of configuration management and understanding what's in place: whether patches are in place; whether users have downloaded and installed other things; whether holes have been put in the firewall. I mean, those are all major issues in management.
Another question: Do I know that I'm running the right software, the right release, and not something that wasn't approved?
Another question: How do I know that the employees involved in critical positions have not only been appropriately trained, but are using what they've been trained with?
Yet another question to ask: Who's ultimately in charge of making the decisions about new acquisitions, the architecture, as things move forward? And are they adequately integrating the risk into those decisions?
If decisions to buy new software or new machines are being made at a department level, you can't control it. There has to be some kind of centralized view of how things fit together. And which things are risky.
Those are also cost and convenience questions. I believe a lot of the problems that we have now are the result of trying to make everything too homogenized—not only operating systems and architectures, but programming languages.
Everybody using C is a dangerous thing. We have other languages that don't have buffer overflows.
But what is the longer-term cost to us as an enterprise in increased vulnerability, increased need for add-on security services or whatever else is involved?
Those kinds of questions don't get asked often enough.
So, computer security really is a process problem and a process solution.
Oh, yeah. Thirty years ago, we had computer security problems. And it was a whole different set of technologies. And 30 years from now, we'll have computer security problems. And that's a whole different set of technologies.
Clearly, there is more to it than the technology. And we have to address it that way. It's not a matter of getting the next patch right or getting the best firewall or the best scanning tool. It's a much bigger issue.
Any other words of advice for CIOs?
Yes. A key concept is that security is an enabler, not a disabler. Too many people view security as a set of rules and constraints that say don't do this, don't do that, you can't conduct this transaction. And if it's presented that way, people are going to resist it. And it's really not correct. Security is an enabler. It says, you can go out and do business with confidence. You can accumulate information on customers without worrying about it being exposed. You can conduct business on the Internet without being charged back from the credit card companies because of fraud. And [it should be] presented in those terms—that security enables you to keep your job, security enables you to move into new markets, security enables you to have confidence in what you're doing.
Then that presents not only the business case, but the right approach, just as quality and customer service and attention to detail are enablers, because they enable you to do things that you wouldn't otherwise be able to do—and that your competitors [may not be] able to do.
That's a big shift in mind-set.
Yeah, huge. But those who've done that really have greater control over what they're doing.