Finding the 'Sweet Spot' for Load Balancing
By Guest Author | Posted 2016-06-14
Application delivery controllers and server load balancing will help Binghamton University keep critical applications running at top performance for years.
By Tony Poole
Binghamton University, part of the State University of New York (SUNY) system, has nearly 17,000 undergraduate and graduate students and about 3,900 faculty and staff.
Binghamton’s IT Services (ITS) team, of which I’ve been a member for more than 35 years, supports the network, computing and educational technology requirements of the campus. Our mission is to create and support an enhanced computing and educational environment for our faculty and students, and we continually upgrade our network, computing and application resources to meet that goal.
Aside from the network itself and standard productivity tools, such as those in Microsoft Office, the two most important software systems ITS offers its users are Blackboard and Banner. Blackboard provides faculty with tools to build and manage virtual classrooms in support of online and distance education.
Ellucian's Banner is a student information system that handles student enrollment, course registration, degree advising and financial aid processing. Administrators also use it for student billing and payment processing, as well as overall financial management.
To meet demand during heavy usage periods, both services use load balancers to distribute traffic to several application servers. Blackboard used an older commercial solution that had reached end of life, and Banner used an open-source solution that had a few limitations.
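The core idea of load balancing—spreading incoming requests evenly across a pool of application servers—can be sketched with a simple round-robin rotation. The server names below are illustrative, not our actual hostnames:

```python
from itertools import cycle

# Hypothetical pool of application servers sitting behind the load balancer.
servers = ["app-server-1", "app-server-2", "app-server-3"]
rotation = cycle(servers)

def route_request():
    """Pick the next server in round-robin order."""
    return next(rotation)

# Ten incoming requests spread evenly across the three servers.
assignments = [route_request() for _ in range(10)]
print(assignments[:4])
# ['app-server-1', 'app-server-2', 'app-server-3', 'app-server-1']
```

Real ADCs offer more sophisticated policies (least connections, weighted distribution, session persistence), but round-robin captures the basic mechanics.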
To ensure the continued high availability and performance of these vital applications, we determined we would need a new load balancing solution that offered current, commercial support options.
Setting Up Virtual Appliance Editions
Our team began by short-listing several load balancers (also known as application delivery controllers, or ADCs) from well-known commercial vendors. Working with local resellers, we set up several virtual appliance editions from various vendors in our labs.
In retrospect, this is one of the best moves we made. By using the virtual appliance editions, we were able to gain extensive experience with the load balancers and really put them through their paces.
We used the free trial editions (which were all fully operational and not “crippled” in any way) to eliminate any up-front costs. However, we’d always planned to use dedicated ADC appliances for the final deployment so we would gain the performance and throughput advantages they provide.
The evaluation took several months because our team had to fit it in between other projects. In the end, we determined that while one of the products was very feature-rich, it also carried a very high price tag. Another was much less expensive, but it lacked some features that were important to us. Ultimately, we found that the Array Networks APV Series application delivery controllers hit the sweet spot: the right balance of the features we needed at a reasonable price.
Because we’d already installed the Array virtual appliance for our proof-of-concept testing, with a little help from our reseller, iSECURE, we were able to easily port the configurations over to two APV2600 physical appliances deployed in a high-availability cluster. We also took advantage of a personalized Webinar tour of the APV features conducted by one of Array’s sales engineers.
Overall, the installation went very smoothly. We were able to get the Banner system load balanced quickly and haven’t had to touch it since. Blackboard was implemented as well, with the same success.
We did have one operational hiccup a few months ago, but it was quickly resolved, and we've since set up load balancing for our Active Directory servers. We plan to set up load balancing for our Web servers in the near future.
Application delivery controllers and server load balancing may not be as commonly deployed as some other networking technologies, but in our network ecosystem, they have made a big difference. We’re able to do front-end maintenance on our application servers without noticeably affecting overall performance.
If a server should fail, the load balancer automatically redirects traffic to healthy servers. We’ll be able to add more front-end servers easily when demand increases. Downtime has been minimized as well.
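That failover behavior—pulling a failed server out of rotation so traffic only reaches healthy ones—can be sketched in a few lines. The server names and health-marking logic here are purely illustrative, not the APV Series' actual implementation:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Round-robin balancer that skips servers marked unhealthy."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._order = cycle(self.servers)

    def mark_down(self, server):
        # In a real ADC this would be triggered by a failed health probe.
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # Walk the rotation, skipping unhealthy servers.
        for _ in range(len(self.servers)):
            candidate = next(self._order)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
lb.mark_down("app2")              # simulate a server failure
picks = [lb.next_server() for _ in range(4)]
print(picks)                      # "app2" never appears
```

Adding capacity is just as simple in this model: new servers join the pool and the rotation, which mirrors how we'll be able to add more front-end servers as demand grows.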
Overall, I feel it’s been a win-win for Binghamton University. By taking the extra time to do a full, in-depth evaluation, we avoided many of the pitfalls that we might otherwise have encountered and found a solution that will help keep our critical applications running at top performance for years to come.
Tony Poole has worked in IT at Binghamton University for more than 35 years. His career has evolved from supporting a traditional data processing environment on an IBM mainframe computer running MVS and CICS, to VAX/VMS systems, Sun/Solaris servers, and now Oracle Linux running under VMware. He is currently assistant director for systems programming in IT Services.