By Victor Delgado
Texas A&M University (TAMU) is the sixth-largest university in the country with 50,000 students. Our Provost IT Office (PITO) maintains IT assets and services for approximately 40 different departments, including the university’s admissions, financial aid, registrar, career placement and study-abroad student services.
Because these departments provide many of the core student-facing services required to operate the university, we needed to ensure those services could continue with minimal disruption in the event of hardware or software failures, security incidents or routine maintenance. To meet this need, PITO turned to load-balancing Web services across Web server farms, which helps eliminate any single point of failure.
In the past, we had deployed Web server farms behind a vendor’s load-balancing product. While this product met some of our needs, several deficiencies hurt our efficiency, and other requirements simply were not covered by its feature set.
Another issue with our previous load-balancer was a Web interface that I found kludgy and slow, so even small reconfiguration tasks consumed minutes. Although a single task wasn’t a problem in itself, any major reconfiguration could take upward of an hour to complete. When a change requires 100 clicks, even a delay of one or two minutes quickly adds up.
Our hypervisor platform of choice also changed our requirements. Most virtual load-balancers on the market require VMware’s ESX hypervisor, but we had been pushing to move to Microsoft’s Hyper-V platform, so support for Hyper-V virtualization became a project requirement. Recent high-profile security concerns surrounding Sun’s Java platform narrowed the field of candidates even further, to offerings that didn’t rely on Java-based Web administration consoles.
As a result, we began searching for a price-conscious, load-balancer replacement that could deliver a speedy interface, operate on Microsoft’s Hyper-V hypervisor platform, and deliver high availability, security and simplicity without sacrificing performance. After evaluating multiple products, we chose Kemp Technologies’ Virtual LoadMaster (VLM) to manage our user traffic and deliver high availability, high performance and ease of management for our Web-based applications.
Since external security threats are a top concern for us, having an intrusion prevention system (IPS) on the load-balancing solution was a must-have. Although many load-balancing solutions offered an intrusion detection system (IDS) and/or an IPS, these features tended to be “black-box” functions that provided little or no customization and could be configured only in an “on” or “off” state.
The Kemp VLM solution offered us an industry-standard IPS compatible with the Snort rule format, so it could be customized by any administrator familiar with writing Snort rules. The IPS functionality could also be tuned to various enforcement levels, allowing our administrators to choose behavior ranging from highly aggressive, automatic threat-blocking down to a passive, log-and-alert-only setting.
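Because the IPS accepts rules in the standard Snort format, an administrator can supplement the shipped rule set with local rules. A minimal illustrative rule follows; the message text, variables and sid are hypothetical examples (local rules conventionally use sid values of 1000000 and above), not rules from our deployment:

```
alert tcp any any -> $HOME_NET 80 ( \
    msg:"LOCAL Possible SQL injection attempt in URI"; \
    flow:to_server,established; \
    content:"UNION SELECT"; nocase; http_uri; \
    classtype:web-application-attack; \
    sid:1000001; rev:1;)
```

A rule like this simply flags the traffic; whether a match results in an alert only or an automatic block depends on the enforcement level configured on the device.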
The solution also offered a simplified Secure Sockets Layer (SSL) certificate management process maintained at the load-balancer itself. This ended the need to maintain certificates on a vast array of Web servers and provided an easy, painless interface for installing and swapping a variety of certificate formats (.crt, .cer and .pfx) almost instantaneously. The result is headache-free SSL management with almost no client downtime.
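Those format names refer to standard X.509 packaging: .crt and .cer files typically hold a bare certificate, while a .pfx (PKCS#12) archive bundles the certificate with its private key. As a general illustration of moving between these formats (not a Kemp-specific procedure; all file names and the password are made up for the example), standard openssl commands can do the conversion:

```shell
# Generate a throwaway self-signed key and certificate for the demo
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
    -days 1 -nodes -subj "/CN=demo.example.edu"

# Bundle key + certificate into a .pfx (PKCS#12) archive, the format
# commonly exported from Windows/IIS servers
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -out demo.pfx -passout pass:changeit

# Extract the certificate back out of the .pfx as PEM text,
# the form many appliances accept for upload
openssl pkcs12 -in demo.pfx -passin pass:changeit -nokeys \
    -out demo.pem
```

Being able to accept any of these formats directly at the load-balancer is what spares administrators from running conversions like this by hand.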
The high-availability functionality of the load-balancer line provides redundancy, and our PITO staff finds it easy to set up and maintain. We have a single IP from which to maintain both load-balancers. So, during operation and maintenance in high-availability mode, there’s no need to log in to multiple load-balancers to determine which device is active. That’s another time-saving feature I appreciate.
Our staff found that these improvements were backed by a broader feature set: customizable email alerting; in-depth reporting views on server and load-balancer traffic and health statistics; flexible “last resort” redirection and complex URL redirection to support decommissioning or changing URLs; and single-click removal of a physical server from service, a valuable time-saver when troubleshooting hardware or running routine software patches.
Because of our many hosted services and applications, we needed a load-balancing solution that provided power, convenience and customization in one easy-to-use and secure solution. We did not want to trade power for convenience, nor could we sacrifice performance to save money.
With Kemp’s VLM, our TAMU Provost IT Office was able to correct the day-to-day operational issues we had experienced with previous load-balancers. The current load-balancer lets us seamlessly manage highly available and secure Web-based client services without unnecessary complexity, thus delivering time savings and increased productivity.
Victor Delgado is a systems administrator in the Provost Information Technology Office at Texas A&M University. He has more than 15 years of experience with large-scale systems architecting.