Ira Winkler on How To Fight Pretexting

By John McCormick

A former Hewlett-Packard chief security strategist on how companies can combat the latest security and privacy threats.


Ira Winkler, one of the nation's leading computer security experts, is president of Internet Security Advisors Group, a security consultancy that specializes in vulnerability assessments and penetration testing services. He also sits on the board of advisers at Securify, a computer monitoring and security company, and is the author of a number of books, including Spies Among Us, which deals with business in the digital age.

Winkler is also a one-time intelligence and computer systems analyst at the National Security Agency, the former technology director at the International Computer Security Association, and the former chief security strategist at Hewlett-Packard, a post from which he resigned in 2004.

He spoke with Baseline's editor-in-chief John McCormick at the end of September.

The recent HP scandal involved pretexting, the practice of tricking someone into giving up personal information. Should this be a concern for technology executives?

This is and isn't a concern, because here's how I see what happened at HP. At HP, you have some people who [were] basically very ego-driven. And they were doing things that other people saw, but chose to ignore in one way or another.

The lesson to be learned, from a security perspective, is that when you allow your users to be ego-driven, you're going to have security problems. While it might seem like purely an ethics issue, the ethical lapses in this case are creating more damage than a major computer breach ever could.

You talk about this being ego-driven. But isn't culture usually also to blame? If you have a culture that respects privacy, that's concerned about security, you're probably in much better shape no matter what your technology or management practices are.

I would agree, because what I was describing as ego-driven behavior created a culture where managers' activities are in question.

You can talk about culture, which usually starts in the corner office. But in some respects, does it also come from the CIO, because the CIO is the one setting information management practices, policies and, in many respects, setting the standards by which information is shared?

Yes and no. The CIO can place controls inside the company to say who's allowed access to specific types of information. [The HP case] is a combination of policies being compromised, not necessarily technology being compromised. But if [something like this] were done in a wholesale way, then it's likely that it could have been detected by the CIO or some behavioral tools [such as] Securify's tools.

How could that tool help?

It looks for a variety of different types of behaviors on the part of users. And if you would see people from outside HR accessing HR records or employee records, that would typically be flagged, assuming that the policies were set up correctly.
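The kind of policy check Winkler describes can be sketched in a few lines. This is a minimal illustration, not Securify's actual product; the department mapping and log format are made-up assumptions.

```python
# Hypothetical access-log check: flag users outside HR who touch HR records.
# The department mapping and log format are illustrative, not a vendor schema.

USER_DEPARTMENT = {
    "alice": "hr",
    "bob": "engineering",
    "carol": "finance",
}

def flag_anomalies(access_log):
    """Return log entries where a non-HR user accessed an HR resource."""
    flagged = []
    for user, resource in access_log:
        if resource.startswith("hr/") and USER_DEPARTMENT.get(user) != "hr":
            flagged.append((user, resource))
    return flagged

log = [
    ("alice", "hr/employee_records"),  # HR user reading HR data: allowed
    ("bob", "hr/salary_data"),         # engineer reading HR data: flagged
    ("carol", "finance/ledger"),       # access within her own department
]
print(flag_anomalies(log))  # [('bob', 'hr/salary_data')]
```

A real deployment would pull the user-to-department mapping from a directory service and the access events from audit logs, but the rule being enforced is the same.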

You've been quoted in the past as saying that the specific technology, when it comes to security and privacy, really doesn't matter. It's the consistency of how those technologies are put in place.

Technology is not the be-all and end-all, because too many people focus on the technology without looking at the processes of implementing the technology.

Let me give you this story. One time, I was doing a firewall assessment, and part of the assessment is to look at the firewall policy. I got a form that said, basically, all services were allowed in, and all services were allowed out. And I'm sitting there thinking their firewall is basically nothing more than an Ethernet hub.

And they're like, "Oh, well, what happened was we used to have everything turned on. And we kept getting complaints. And then, when the CEO said he wanted to access his AOL account from his desktop, we just completely gave up and let everything in and out, and we haven't received a complaint since." [What] a great firewall they had in place, with all functionality, essentially, turned off.
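The allow-all policy in that story is the opposite of the standard default-deny stance, where anything not explicitly permitted is dropped. A toy sketch of default-deny rule evaluation (the rules, ports, and packet format are illustrative assumptions, not any particular firewall's syntax):

```python
# Toy firewall rule evaluation: the first matching allow rule wins,
# and anything unmatched is dropped (default deny). Rules are made up.

ALLOW_RULES = [
    {"dst_port": 443, "direction": "out"},  # outbound HTTPS
    {"dst_port": 25,  "direction": "out"},  # outbound mail
    {"dst_port": 80,  "direction": "in"},   # inbound web traffic
]

def is_allowed(packet):
    """Return True only if some allow rule matches; the default is deny."""
    for rule in ALLOW_RULES:
        if all(packet.get(key) == value for key, value in rule.items()):
            return True
    return False

print(is_allowed({"dst_port": 443, "direction": "out"}))  # True
print(is_allowed({"dst_port": 23,  "direction": "in"}))   # False: telnet dropped
```

The firewall in Winkler's story had effectively replaced `ALLOW_RULES` with "match everything," which is why he compares it to an Ethernet hub.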

However, the proper security tools can be a fail-safe. For example, bad passwords are a very common type of problem. Token-based authentication could be a fail-safe to bad passwords.
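Token-based authentication of the kind Winkler mentions typically derives a short-lived code from a shared secret and the current time, so a written-down password alone is useless. A minimal sketch along the lines of the HOTP/TOTP algorithms (later standardized in RFCs 4226 and 6238); the secret below is the RFC test secret, not a real credential:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC-SHA1 over the time-step counter."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields 94287082 (8 digits),
# so the 6-digit code is "287082".
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code changes every 30 seconds, a password scribbled on a sticky note stops being a single point of failure, which is exactly the fail-safe role Winkler describes.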

Likewise, abuse of privileges. That's another example where a Securify product could be of use, where you can see people behaving outside the norm for what they should be doing.

Other security products have the same sort of effect. Again, firewalls, if they're configured properly internally, could prevent somebody, say, from the janitorial department from getting into the R&D department. Properly setting up processes and properly setting up the technologies prevents a great deal of security problems.

There seem to be more tools coming out. And there seem to be more dangers. Is security getting more complicated as time goes on?

The more things change, the more they stay the same. From a computer security perspective, we haven't seen any major issues now that we haven't seen more than two decades ago.

But what we've seen [is] different crimes being brought into technology. What is spam? Spam is essentially direct marketing through the use of computers. So, we really haven't seen any revolutionary attacks that we haven't seen before; we're just seeing different applications of the same attacks.

Some surveys suggest organizations are getting better at protecting themselves.

I would not necessarily say there is a trend downward. All you have to do is look at the headlines. A day doesn't go by without records lost. If you look at anecdotal evidence—what my clients are experiencing—[some] are experiencing multibillion-dollar losses of intellectual property.

Before the HP story broke, pretexting was not likely something that was top of mind for most people. Are there other things out there that might be lying in wait for information-technology and businesspeople?

Well, pretexting is not out of the blue. I remember seeing [it mentioned] 20 years ago. The concepts of social engineering and reverse social engineering [go] back to the late 1980s.

But are there any other threats out there lying in wait?

I think what people are not really looking at is the proliferation of spyware and how that could affect people a lot more than it is right now. What we're seeing is thousands, if not millions, of computers compromised over the years—being taken over as zombie computers. That's how spam gets sent out. That's how all these phishing attacks get sent out.

These [problems] should be able to be detected by Internet service providers. In my opinion, ISPs have got to crack down. They should look for the system behaviors and notice that systems are acting maliciously by sending out tens of thousands of spam messages or phishing messages or other attacks. And they should be responsible for taking these systems offline.
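The detection Winkler has in mind is essentially a rate check: a home PC that suddenly opens tens of thousands of outbound mail connections is almost certainly a zombie. A minimal sketch; the threshold and log format are illustrative assumptions, not any ISP's real policy.

```python
# Sketch of the rate check an ISP might run: count outbound mail connections
# per source host over a window and flag hosts past a threshold.
# The threshold and the log format are made-up assumptions.

from collections import Counter

THRESHOLD = 1000  # outbound SMTP connections per hour before a host is flagged

def flag_spamming_hosts(connection_log):
    """connection_log: iterable of source-host strings, one per SMTP connection."""
    counts = Counter(connection_log)
    return sorted(host for host, count in counts.items() if count > THRESHOLD)

log = ["10.0.0.5"] * 50000 + ["10.0.0.9"] * 12
print(flag_spamming_hosts(log))  # ['10.0.0.5']
```

A flagged host would then be quarantined until cleaned, which is the "no IP address until you clean your system up" policy Winkler describes next.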

And why aren't they dealing with the problem?

Because nobody's making them.

The way you stop this traffic is stopping it at the source. They could detect that traffic and say [to the offending parties], "Look, you're not getting any [Internet Protocol] addresses until you clean your system up." [But] ISPs don't police their own networks. And Congress isn't doing anything about it. Just like they haven't done anything about pretexting.

And if you take this from the corporate I.T. angle, obviously there are certain things that everybody should be doing. You want to make sure that you've taken measures such as putting firewalls in place. But what three things would you recommend CIOs or CSOs do at this point that they might not have done?

I would probably recommend that I.T. people first make sure they focus their attention inward. Insiders are already their biggest threat.

And they have to be able to start putting in intrusion detection and misuse-and-abuse detection on their internal networks. That's also a fail-safe mechanism. Because even when outsiders are able to tunnel in, they appear to be insiders.

What's next?

Once you have that, you've really got to focus on the basics of security awareness. The types of things I get into are laughable. But the only reason they're laughable is because they defy common sense. To have common sense, the users have to have common knowledge, and it should be a fundamental goal of every security program to impart common security knowledge to all employees.

And I think that they should not just [talk about how] you behave internally. They should address how you behave externally as well, because if you expect people to behave internally one way and you don't care how they behave externally, [then] they're going to bring the bad behaviors from the outside into the company.

By "common knowledge," we're talking about making sure you change your passwords, that you encrypt your laptop and such?

Yes. Attacks are only enabled because of very basic simple flaws on the part of the user.

We wouldn't have compromised computers, for example, if people had antivirus software that was constantly updated.

No matter how advanced the attacks are, they would be prevented. Likewise, if people know that they shouldn't write down their user IDs and passwords, they won't be a victim of malicious insiders.

What else?

Make sure that the basics are taken care of. Make sure that the systems are hardened. Make sure that all the updates are turned on and enabled proactively. Make sure that all your antivirus software is turned on and updated proactively, and that all of your [other] software is updated.

Because the attacks aren't coming from geniuses. Probably, there are one or two geniuses out there in any attack that find a vulnerability and create a tool for that vulnerability. Then, at that point, many morons can take that attack script and run it against anybody.

But if companies are implementing the basics properly and proactively securing their systems, they will be ahead of the curve and prevent the crimes.
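The "make sure" list above amounts to a compliance baseline that can be checked mechanically. A toy sketch; the settings dictionary stands in for whatever inventory or endpoint-management data a real shop would collect, and the item names are invented for illustration.

```python
# Toy compliance check for the hardening basics described above.
# The baseline items and the host-settings format are illustrative.

BASELINE = {
    "auto_updates": True,
    "antivirus_enabled": True,
    "antivirus_signatures_current": True,
    "unused_services_disabled": True,
}

def compliance_gaps(host_settings):
    """Return the baseline items a host fails to meet (missing counts as failing)."""
    return sorted(key for key, value in BASELINE.items()
                  if host_settings.get(key) != value)

host = {"auto_updates": True, "antivirus_enabled": True,
        "antivirus_signatures_current": False}
print(compliance_gaps(host))
# ['antivirus_signatures_current', 'unused_services_disabled']
```

Running a check like this proactively, rather than after an incident, is the "ahead of the curve" posture Winkler recommends.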

This article was originally published on 2006-11-21
eWeek
