Intelligent Infrastructure
By Vincent Biddlecombe | Posted 2009-07-15
Converting to a virtual infrastructure helps Transplace maintain world-class functionality, hosting, availability and reliability.
Transplace is a non-asset-based third-party logistics (3PL) provider that offers manufacturers and retailers logistics technology and transportation management services. From logistics management outsourcing to intelligent transportation management systems to supply chain network planning and design—including brokerage services—the company delivers rapid ROI and consistent value to its customer base, which includes many Fortune 500 companies.
In May 2007, Transplace considered converting its technological infrastructure to a virtual environment, reports Vincent Biddlecombe, the company’s CTO. The objectives were to make more efficient use of hardware, lower database licensing costs, streamline the process for creating disaster recovery servers, allow for the creation of additional environments on demand and solve the scalability challenge.
We first considered converting our technological infrastructure to a virtual environment in May 2007. The decision came in conjunction with Transplace’s plans to move the company’s production data center to a co-location facility. We took this opportunity to refresh our data center hardware.
Because we deliver services via the software-as-a-service (SaaS) model, we needed to maintain our 24/7 world-class functionality, hosting, availability, reliability and scalability. We also wanted to improve the reliability of our disaster recovery process while adding flexibility for deploying multiple environments, including test and development, customer user acceptance, sales demos and load testing.
We felt that a virtual infrastructure would provide all of these attributes at a more reasonable cost than conventional infrastructure. Transplace first thought of virtualization when considering a tiered storage strategy and data replication across multiple environments. We also realized that virtualization would make it easier to replicate our production environment, as well as leverage servers to support disaster recovery and other environments in our backup data center.
Because virtualization reduces the number of CPUs we need, it would also help us manage the cost of our CPU-based Oracle licenses. This was a critical factor, since licensing costs grow when creating a near-real-time disaster recovery center: when a near-real-time data copy is used for disaster recovery, Oracle requires the same licensing model on both sides.
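The licensing arithmetic here can be sketched in a few lines. The per-CPU price and server counts below are hypothetical placeholders for illustration, not Transplace's actual figures; the one structural fact taken from the text is that a mirrored near-real-time DR site must be licensed like production, which doubles the CPU count being licensed.

```python
# Illustrative sketch of per-CPU license cost with a mirrored DR site.
# LICENSE_PER_CPU and the CPU counts are assumed placeholder values.

LICENSE_PER_CPU = 40_000  # hypothetical list price per licensed CPU

def total_license_cost(production_cpus: int, mirrored_dr: bool) -> int:
    """A near-real-time DR copy must carry the same licensing as
    production, so a mirrored site doubles the licensed CPU count."""
    cpus = production_cpus * (2 if mirrored_dr else 1)
    return cpus * LICENSE_PER_CPU

# Consolidating from 16 physical CPUs to 8 halves the bill, and the
# saving is doubled again because the DR site mirrors production.
before = total_license_cost(16, True)
after = total_license_cost(8, True)
print(before - after)  # saving from halving the CPU count
```

The point of the sketch is that every CPU removed from production is removed twice from the license bill once a mirrored DR site exists.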
Lowering Costs, Increasing Capabilities
By reducing the overall number of CPUs we needed to purchase, the VMware virtual environment we deployed led to significant savings. Transplace also discovered that IBM Power6 servers running the AIX operating system would allow us to deploy fewer, faster processors at the database tier, further lowering our Oracle licensing costs.
Another cost-saving feature involved AIX and the IBM p570 servers we use at the database layer. The servers have logical partitions to which we can allocate as much memory and CPU power as we want, and we can share excess capacity across the partitions. By virtualizing the logical partitions, we can scale more easily. And with the databases running on shared storage, we can create extra partitions for our other production server to fail over to, if necessary.
Virtualization let us configure our disaster recovery environment so that while we run test and development environments, we can also run smaller Oracle partitions. If we want to conduct load testing, we just reconfigure the development partitions for true load testing. If we have to run a disaster recovery process, we can use the whole box to run production.
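One way to picture this repurposing is as a set of capacity profiles for the DR box, switched by mode. The partition names and CPU counts below are illustrative assumptions, not our actual configuration; the three modes mirror the ones described above.

```python
# Hypothetical capacity profiles for a DR server carved into partitions.
# Names and sizes are illustrative, not an actual Transplace configuration.

TOTAL_CPUS = 16  # assumed capacity of the DR database server

PROFILES = {
    # day to day: small Oracle partitions run alongside test and dev
    "normal": {"test": 4, "dev": 4, "oracle_small": 8},
    # reconfigure the dev partitions for true load testing
    "load_test": {"load_test": 12, "oracle_small": 4},
    # in a disaster, the whole box runs production
    "disaster_recovery": {"production": 16},
}

def activate(mode: str) -> dict:
    """Return the partition layout for a mode, checking it fits the box."""
    profile = PROFILES[mode]
    assert sum(profile.values()) <= TOTAL_CPUS, "profile over-commits the box"
    return profile
```

The same physical capacity serves three different purposes; only the active profile changes.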
For storage, we use NetApp devices configured as classic enterprise shared storage. We also use NetApp software to create extra copies of our database for test and development.
Previously, when we wanted to refresh the test-and-development database, we had to take a full copy of production, which took 12 to 24 hours and required a lot of storage. Now, we just create a snap copy of the disaster recovery copy, which is an up-to-date production copy. It takes very little space, and the process occurs within minutes.
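The reason a snap copy is near-instant and tiny is copy-on-write: the clone starts out sharing every block with its parent and stores only blocks written after the clone is taken. The toy model below illustrates that idea only; it is not NetApp's actual Snapshot/FlexClone implementation.

```python
# Toy copy-on-write model: a clone shares all blocks with its parent
# and stores only blocks written after the clone was taken.
# Illustrative only -- not NetApp's actual implementation.

class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent  # shared, treated as a read-only base image

    def read(self, addr):
        if addr in self.blocks:
            return self.blocks[addr]
        return self.parent.read(addr) if self.parent else None

    def write(self, addr, data):
        self.blocks[addr] = data  # only changed blocks consume new space

    def snap_copy(self):
        return Volume(parent=self)  # O(1): no data is copied up front

prod = Volume()
for i in range(1000):
    prod.write(i, f"block-{i}")

clone = prod.snap_copy()     # instant, owns zero blocks of its own
clone.write(0, "refreshed")  # now it stores exactly one block
print(len(clone.blocks))     # 1, versus 1000 for a full copy
```

A full copy would duplicate all 1,000 blocks up front; the clone pays only for what diverges, which is why the refresh dropped from 12 to 24 hours to minutes.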
At the application layer, most of our code is custom-written in Java running on BEA WebLogic application servers. For hardware, we use Dell 2950 servers with two quad-core processors and 32GB of RAM. We also added extra network interface cards (NICs) to the servers for the subnets that connect the servers to our storage. All the application layer servers are virtualized using VMware.
A key virtualization feature that comes into play at the application layer is VMware's VMotion capability, used by the Distributed Resource Scheduler (DRS). We create a cluster of three or four physical servers on which we run 20 to 40 virtual servers. DRS monitors how busy each physical server is and uses VMotion to automatically move a virtual server from one physical server to another if the original is too busy. This occurs in real time, without any interruption to the application.
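A minimal sketch of that balancing behavior: when a host exceeds a load threshold, migrate one of its virtual machines to the least-loaded host. The greedy policy, the 0.8 threshold and the cluster layout are illustrative assumptions, not VMware's actual algorithm.

```python
# DRS-style balancing sketch: move the cheapest VM off an overloaded host
# to the least-loaded host. Policy and numbers are illustrative only.

def rebalance(hosts: dict, threshold: float = 0.8) -> list:
    """hosts maps host name -> {vm name: cpu demand}, with demand as a
    fraction of one host's capacity. Returns the migrations performed."""
    migrations = []
    load = lambda h: sum(hosts[h].values())
    for host in list(hosts):
        while load(host) > threshold and len(hosts[host]) > 1:
            target = min(hosts, key=load)
            if target == host or load(target) >= threshold:
                break  # nowhere better to put the VM
            vm = min(hosts[host], key=hosts[host].get)  # cheapest to move
            hosts[target][vm] = hosts[host].pop(vm)     # "VMotion" the VM
            migrations.append((vm, host, target))
    return migrations

cluster = {
    "esx1": {"app1": 0.5, "app2": 0.3, "app3": 0.2},  # overloaded host
    "esx2": {"app4": 0.2},
    "esx3": {"app5": 0.1},
}
moves = rebalance(cluster)
```

Running this moves the lightest VM off the overloaded host and onto the idlest one, which is the shape of behavior the cluster gives us automatically.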
When assessing the cost of converting to a virtual environment, it's important to realize that virtualization requires additional network storage: each virtual machine needs roughly 20GB just to hold its operating system. You also need additional NICs for the separate network subnets between the virtual machines and the storage devices.
When you add the cost of the virtual operating system software and the extra memory you will require, your cost per server goes up. But when you consider that you can consolidate up to 10 virtual servers on one physical server, the savings are considerable—not just in up-front hardware costs, but also in terms of power and cooling costs, as well as data center space.
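The trade-off can be made concrete with back-of-the-envelope math. Every figure below is an assumed placeholder for illustration, not a quote from our budget; the 10:1 consolidation ratio and 20GB-per-VM figure come from the text.

```python
# Back-of-the-envelope consolidation math. All prices are assumed
# placeholder values; only the 10:1 ratio and 20GB/VM come from the text.

PHYSICAL_SERVER = 6_000        # assumed hardware cost per physical server
HYPERVISOR_LICENSE = 3_000     # assumed virtualization software per host
EXTRA_MEMORY_AND_NICS = 2_000  # assumed added RAM and NICs per virtual host
OS_IMAGE_GB = 20               # network storage per VM operating system

def cost(workloads: int, consolidation: int) -> int:
    """Cost to stand up `workloads` servers at a given consolidation
    ratio (workloads per physical host). Ratio 1 means no virtualization."""
    hosts = -(-workloads // consolidation)  # ceiling division
    per_host = PHYSICAL_SERVER + (0 if consolidation == 1
                                  else HYPERVISOR_LICENSE + EXTRA_MEMORY_AND_NICS)
    return hosts * per_host

physical = cost(40, consolidation=1)   # one workload per box
virtual = cost(40, consolidation=10)   # ten VMs per host
storage_gb = 40 * OS_IMAGE_GB          # extra shared storage for OS images
```

Each virtual host costs more than a plain server, but needing a tenth as many of them dominates, before even counting power, cooling and floor space.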
It’s also important to understand the licensing requirements of your software vendors. Most vendors offer CPU-based licensing, but they may factor in extra costs for virtual servers. Virtualization makes it easy to add new servers, but software vendors may want to charge extra for each server added. Dedicating a VMware cluster to a given product can help here, because it confines that software to a known, fixed set of licensed hosts.
Deploying a virtual infrastructure across most of our network generated several benefits for our company. Not only did we save on hardware costs by decreasing the number of CPUs we needed, but we also addressed the challenge of how to create databases when we need them. Virtualization also solved the question of how to copy to our disaster recovery server, and we can now create additional environments on demand—without having servers specifically reserved for each environment.
Our virtual machine clusters might run test virtual machines one day and load testing the next. Or they could run disaster recovery: We can choose which virtual machines we want running at any moment in time. This helps solve the challenge of scalability, since we can simply add new servers to the cluster whenever we want. Most importantly, virtualization has allowed us to maintain our 24/7 world-class functionality, hosting, availability and reliability for our customers.
Vincent Biddlecombe is the CTO of Transplace. Prior to this position, he was vice president of information technology at Ruan Transportation and director of system architecture at US Freightways. He has more than 15 years of experience in IT consulting.