The Laws of Virtualization Security
Editor’s Note: Adoption of virtualization in the enterprise is increasing rapidly, giving rise to concerns about security risks and threats to virtual infrastructures. Burton Group analyst and noted security expert Pete Lindstrom lays out five laws of virtualization security and explains how to put them into practice.
Virtualization of clients and servers and its accompanying impact on networks and storage is a hot topic in IT today, and therefore a hot topic in security.
As with any new technology as broad and far-reaching as virtualization, these security concerns are of paramount importance. Moreover, the technical details, combined with a mix of marketing messages from vendors, can create a potent cocktail of ambiguity about the real impact of these new architectures on risk and security.
Amidst all of the superficial discussions of hypervisor-based rootkits and discovery techniques lie the very real issues of allocating information assets and the relative impact on threats and vulnerabilities. Indeed, virtualization comes with its own set of unique security considerations. The appropriate response to these inherent security characteristics is neither paranoid refusal to deploy nor delusional openness to everyone. The best response is a measured approach: careful consideration of the impact on the existing IT infrastructure, a factored analysis of threats, vulnerabilities, and consequences, and an understanding of the effect on existing security solutions.
Rules of the game
There are five immutable laws of virtualization security that must be understood and used to drive security decisions:
Law 1: Attacking a virtual combination of operating system and applications is exactly the same as attacking the physical system it replicates.
The beauty of a virtual machine is that it acts just like a physical system. In most environments, that means it can be attacked in the same way. Any data on the virtual machine may be stolen, and if the virtual machine has network access it may be used as a stepping stone to attack other systems.
Law 2: A virtual machine poses a higher security risk than an identically configured physical system running the same operating system and applications.
This corollary to Law #1 accounts for additional vulnerability of a virtual system’s controlling software, known as a hypervisor. Because the hypervisor monitors and responds to a virtual machine, it’s susceptible to attack itself. It’s important to recognize the risks inherent to the virtual environment and to offset that risk in other ways.
Law 3: Virtual machines can be made more secure than similar physical systems when they separate functionality and content.
When two processes share the same memory space, an attack against one process can impact the other. One way to benefit from virtualization is to separate functions and data into isolated operating environments. Such segregation helps reduce the risk added by the virtualization software in Law #2.
Law 4: A set of virtual machines aggregated on the same physical system can only be made more secure than separate physical systems by modifying the virtual machines’ configurations to offset hypervisor risk.
While separating resources reduces risk, combining resources initially increases risk (see Law #2). So at this level of aggregation, virtual machines must be reconfigured to attain the same level of risk achieved through Law #3. Turning off services, adding controls, and separating content can all help reduce overall risk.
Law 5: A system containing a trusted virtual machine on an untrusted host poses a greater risk than a system containing a trusted host with an untrusted virtual machine.
Attacks at lower levels have greater risk than those at higher levels since higher-level programs can be tricked into believing assertions about trust and authenticity. It’s important for deployments of trusted virtual machines in untrusted environments to consider the implications and harden the virtual machine image accordingly.
Putting the laws into practice
The answer to the question of security rarely has an absolute value. For most enterprises, the virtualization decision is about where and when to apply controls that are sufficient in the environment based on risk tolerance. Ultimately, whether virtualization is bane or boon for security depends on how the systems are configured, deployed, and managed.
To manage these new security concerns, it’s important to understand the underpinnings of today’s virtual systems.
The primary components of a virtual environment are:
· Virtual Machines (VMs) and their accompanying guest operating systems: These are the “core” components of the virtual architecture.
· Virtual Machine Monitor (VMM): The software component responsible for managing interactions between the VM and the physical system.
· Hypervisor and/or host operating systems: The software that handles kernel operations.
A virtualized environment consists of a VMM and one or more virtual machines. The VMs and VMM interact with either a hypervisor or a host operating system to access hardware, local I/O, and networking resources. In addition to these components, virtualization architectures leverage virtual networking, virtual storage, and terminal service capabilities to complete their architectures.
This minimum set of components can be combined into virtual environments in a few distinct ways:
· Type 1 virtual environments are considered “full virtualization” environments and have virtual machines running on a hypervisor that interacts with the hardware.
· Type 2 virtual environments are considered “full virtualization” as well, but work with a host operating system instead of hypervisor (though sometimes the VMM is called a hypervisor anyway).
· Paravirtualized environments make performance gains by eliminating some of the emulation that occurs in full-virtualization environments.
· Other designations include hybrid virtual machines (HVMs) and hardware-assisted techniques.
From a security perspective, a more significant risk profile exists in a Type 2 environment where a host operating system with user applications and interfaces is running outside of a virtual machine at a level lower than the other virtual machines. Because of the architecture, the Type 2 environment increases risk through its incorporation of potential attacks against the host operating system. For example, a laptop running VMware with a Linux virtual machine on a Windows XP system inherits the attack surface of both operating systems, plus the virtualization code of the VMM.
Security Benefits of Virtualization
There is growing confusion and debate about the net positive and negative security aspects of virtual environments. On one side is the notion of isolating resources into purpose-built virtual machines that limit the consequences of attacks. On the other side are researchers who, by exploiting the technology and abusing its functionality, demonstrate significant risks.
Shared content and resources are the bane of the security professional’s existence; most of a security professional’s time is spent collecting, logically categorizing, grouping, and then separating resources. Sometimes this grouping is done by business unit and sometimes by other means, such as the classification of the content.
A virtual environment can provide a means for separation of program resources and content that enhances security. Shared resources also share risk at the aggregate level. Separating resources and content allows for stronger protection of higher-risk resources and reduces the overall impact of a compromise. A number of valuable use cases might come out of this. For example:
· A single application or set of applications could be run in a virtual machine guest (or compartment) separate from all other applications.
· A consultant working for two different companies could do work for each client in a separate virtual machine.
· Someone working on a personal computer could use one virtual machine for business activities and another for personal finances and homework.
User behavior can vary widely across a spectrum from strong risk tolerance to strong risk aversion. This behavior can change in a matter of minutes. Obviously, this creates a problem whereby the risk-tolerant behavior impacts the risk-averse requirements. An isolated temporary environment can provide a way to allow risk-tolerant behavior without significantly impacting the risk-sensitive resources.
One technique for virtual environments involves creating a “sandbox” virtual machine and using it for risky activities. Assuming the content being created and the changes being made are insignificant in the long term, then a user can “turn back time” to a point where the virtual machine was known good—typically reverting to the standard image. The obvious use for such a configuration is for shared systems like training systems and kiosks to allow for maximum flexibility on the user side without creating any long-term damage.
The sandbox scenario also provides an obvious case where streamlined recoverability is useful. In fact, the more frequent the reversion to a known-good state, the lower the potential for harmful consequences.
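The revert-to-known-good step can be sketched in a few lines. This is a minimal illustration, assuming the sandbox is backed by a plain disk-image file; the paths and function name are hypothetical, and a real deployment would use the snapshot facilities of its virtualization platform instead.

```python
import shutil
from pathlib import Path

# Hypothetical paths for illustration only.
GOLDEN_IMAGE = Path("images/kiosk-golden.img")
WORKING_IMAGE = Path("images/kiosk-working.img")

def revert_to_known_good(golden: Path = GOLDEN_IMAGE,
                         working: Path = WORKING_IMAGE) -> None:
    """Discard all changes made in the sandbox by overwriting the
    working disk image with the known-good golden image."""
    shutil.copyfile(golden, working)
```

Because the reversion is a simple file copy, it can be scheduled after every user session on a kiosk or training system, which keeps the window for harmful consequences short.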
Virtual machines can also be multiplied and distributed in many different ways. This flexibility is a boon to disaster recovery specialists looking for ways to increase availability. Maintaining replicated environments that are physically separate and creating images that can be quickly recovered contributes to the overall availability of the resources.
Of course, a virtual system is not without its attack vectors. Rogue hypervisors and the virtual machine escape are two aspects of threats that should be fully evaluated.
In the past few years, much attention has been given to the use of virtualization in support of rootkits. Rootkits gain their effectiveness by staying hidden, and hypervisor rootkits (sometimes paradoxically called virtual machine-based rootkits) hide by launching a rogue hypervisor and porting the existing operating system into a virtual machine. The guest operating system within the virtual machine believes it is running as a traditional operating system, with the corresponding control over local hardware and networking resources afforded to such systems, even though it isn’t. The hypervisor actually has control and can manipulate the activities on the system in any number of ways.
In 2006, security researcher Joanna Rutkowska introduced what she called the “blue pill,” a hypervisor rootkit that inserts itself into memory, subordinates the real operating system to virtual machine status, and gains a level of invisibility by extension. To date, the rogue hypervisor is of greater concern to security researchers than to the enterprise. In fact, running on a virtual system can become a sort of protection in itself, since some malware checks for virtualization and declines to execute its payload inside a virtual machine.
Another security concern involves what is known as “escaping” the virtual machine. This ability to move malware outside the virtual machine and execute arbitrary code on the physical host is considered the Holy Grail of virtualization security. Given that the intent of virtualization is to be transparent to existing functionality, the hypervisor is the only new component that need be assessed.
So, the ability of the hypervisor to withstand attack and provide some level of isolation among virtual machines is at the root of how risk will fare in these environments. Since the hypervisor is, after all, a software program, it stands to reason that additional software initially increases the risk in any environment, simply because there is more code implemented with more complexity than with traditional IT environments.
Several researchers have demonstrated rudimentary virtual machine escape exploits, and as the popularity of virtual systems increases and the platform becomes a more lucrative attack target, the threat will continue to grow.
The Impact of Virtual Environments on Risk
Although the benefits of a virtual environment are clear, they are not always realized in every architected environment. The reality is that these various characteristics will be mixed and matched with other IT resources. Given that probable outcome, it is useful to review risk principles and apply them to a virtual environment. Burton Group defines risk as a function of threats, vulnerabilities and consequences such that an increase in any of these three elements increases overall risk.
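That definition can be captured in a toy model. The multiplicative form below is an illustrative assumption, not Burton Group’s actual formula; it simply preserves the stated property that increasing any one of the three factors increases overall risk.

```python
def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Toy multiplicative risk model. Each factor is normalized to [0, 1];
    raising any single factor raises the overall score, matching the
    definition of risk as a function of threats, vulnerabilities,
    and consequences."""
    for factor in (threat, vulnerability, consequence):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be in [0, 1]")
    return threat * vulnerability * consequence

# Adding a hypervisor raises vulnerability; isolating content into
# purpose-built virtual machines lowers consequence.
baseline = risk_score(threat=0.3, vulnerability=0.4, consequence=0.5)
virtualized = risk_score(threat=0.3, vulnerability=0.5, consequence=0.3)
```

The point of the sketch is only the direction of the arithmetic: the hypervisor pushes one factor up, so a deployment must push another factor down to hold net risk constant or reduce it.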
At this stage of virtualization maturation, the likelihood that malicious attackers will target virtual environments is relatively low. That said, as more people get trained for and learn about virtualization, attackers are bound to follow. Given the adoption rate of virtualization technology, it is reasonable to assume this threat is accelerating quickly.
The vulnerability of a system is a measure of its attack surface—the nature and extent of resources on a system that are exposed. Of course, if isolation mechanisms like firewalls or operating system access controls fail, the attack surface balloons to encompass the entire machine. The pertinent questions, then, are whether the attack surface of a system, or of an enterprise IT environment as a whole, increases or decreases through virtualization.
Attack surface increases with the availability of services on any IT resource. This means that the addition of a system to an enterprise environment increases attack surface, and at a more granular level, the starting of services, opening of TCP/UDP ports, and registering of remote procedure call (RPC) endpoints increases the attack surface as well. If more resources are consumed, more risk is incurred.
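As a rough way to make that granular view concrete, the sketch below probes a host for listening TCP ports, one crude proxy for the network-facing portion of attack surface. The function name is illustrative; a real inventory would also cover UDP services, RPC endpoints, and local interfaces.

```python
import socket

def listening_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept TCP connections on `host`.
    Each open port represents an exposed service and thus added
    attack surface."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Running such a probe against a host before and after adding a virtual machine gives a simple before/after view of how many new services the change exposed.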
Most virtual environments aim to make the virtualization transparent throughout the environment. However, something new is “behind the scenes” of the systems in place—the hypervisor and virtual machine monitor. The addition of the hypervisor resource increases risk just like any other additional service does.
If everything else remains constant, the vulnerability component of risk is increased in virtual environments. Everything else does not need to remain constant, however. To whatever extent other resources can be reduced, eliminated, or isolated so that they are no longer part of the attack surface, these actions will offset the increased attack surface and reduce overall vulnerability.
The final component of risk is the impact or consequences of a successful attack. In most IT environments, the value of information assets is increasing as organizations work to squeeze out more benefits from systems. As these functions take on more mission-critical capabilities, associated losses are increased as well.
But consequences are not necessarily correlated with an increased attack surface. Given the increased flexibility of virtual systems, one of the benefits is the ability to create purpose-built appliances to support various functions. If functions that were previously combined are separated, then it is clear that the consequences may be reduced using virtual machines, which also reduces risk.
Path to Virtualization Security
Security teams should take a number of steps to ensure improved protection of virtual environments, including:
- Use all existing security mechanisms: Since one of the primary goals of virtualization is transparency, all current host-based solutions should operate in exactly the same way with limited need for modifications. Existing solutions may not be optimal, but they’ll provide reasonable security.
- Get your administrative act together: The dynamic nature of the virtual machine lifecycle and the potential for virtual machine sprawl hint at an even more difficult asset-management environment in the virtual world. It is prudent to ensure that administrative procedures are ready for identifying and tracking virtual machines throughout the environment.
- Look for ways to move security out of the virtual machine: Enterprises should reduce or eliminate agents running inside virtual machines and create separate process spaces for user activities and security functions.
- Manage virtual machines like files and systems: The portability of virtual machines makes them vulnerable to file-style attacks, and therefore they must be protected in a similar fashion. The goal of file-oriented management is recognizing the file objects and providing cryptographic and access control protection for them.
- Encrypt network traffic where possible: Encrypted communications provide some protection against local sniffing threats that may come from other virtual machines or the hypervisor.
- Practice segregation of functions: Since multiple virtual machines can be run on the same machine, it may be possible to create separate compartments for security components. Strong candidates for segregation include logging events externally, maintaining separate keys for encryption, and separating policy and configuration from the image.
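The “manage virtual machines like files” step above can be sketched with a simple integrity check. This is a minimal illustration, assuming a VM image stored as an ordinary file; the function names are hypothetical, and production use would add digital signatures and access controls rather than bare hashes.

```python
import hashlib
from pathlib import Path

def image_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """SHA-256 digest of a (potentially large) VM image file,
    read in chunks to keep memory use bounded."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: Path, expected_digest: str) -> bool:
    """True if the on-disk image still matches its recorded digest,
    i.e., it has not been tampered with as a file object."""
    return image_digest(path) == expected_digest
```

Recording a digest when an image is created, and verifying it before the image is launched or copied, treats the virtual machine as the file object it is and flags file-style tampering before the system boots.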
Virtualized environments are poised to provide significant operational benefits to enterprises, but they are not without their risks. The introduction of a new layer of software—in the form of the hypervisor—and the new architectures that provide the benefits must be evaluated from a security perspective to understand the risk and the security impact.
Pete Lindstrom is a senior analyst at Burton Group specializing in security metrics, risk management, Web 2.0/SOA/Web services security, and securing new technologies.