No Time for Complacency
By David Strom | Posted 2009-04-09
Today’s applications deliver content across a wide swath of Internet and local network infrastructure. Yesterday’s network tools just don’t do the job anymore.
“We’ve been able to alleviate the need to upgrade our Internet/WAN connections and get significant bandwidth savings by deploying Cisco’s WAN optimization technology and using SolarWinds’ Orion package for monitoring,” says Jeremy Gill, the vice president of IT for Michael Baker in Moon Township, Pa. “In our first six months, we realized a nearly 11 percent overall traffic reduction over our WAN, and with one of our production applications, we saw a more than 50 percent traffic reduction. IT managers need to look beyond simple load balancing and look at truly optimizing which network layer makes the most sense for their applications’ performance.”
Part of this process is just the normal course of events: more powerful machines replace less capable ones and deliver better performance. But that progress can breed complacency.
“When people centralized servers, the increased latency and protocol inefficiencies caused response times to tank,” says Joe Skorupa, research vice president of Enterprise Network Services and Infrastructure at Gartner. “With the introduction of WAN optimization tools, they had a two- or three-year honeymoon because response times improved so much that no one cared about measuring utilization or deploying quality-of-service metrics. But then the situation gets worse, and the latest rich Internet applications and peer networks drive up response times and compete with existing production applications.”
Sometimes, an offense is the best defense. EvriChart, a company that manages health information for hospitals, clinics and physician offices, hosts large quantities of scanned medical records over the Internet. Originally, it used proprietary software built around remote procedure calls that “were extremely inefficient,” says Tony Maro, the CIO of the company, which is based in White Sulphur Springs, W.Va. “After we redesigned the system to use HTTP and NFS, we continued to grow exponentially with no performance problems.”
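To illustrate the kind of shift Maro describes, here is a minimal sketch of the HTTP side of such a redesign: instead of chatty per-field remote procedure calls, each scanned record becomes a single stateless HTTP GET against a file tree (which, in EvriChart’s case, would sit on an NFS mount). The directory layout and file name below are hypothetical, and the sketch uses Python’s standard-library HTTP server rather than whatever EvriChart actually deployed.

```python
import functools
import http.server
import tempfile
import threading
import urllib.request
from pathlib import Path

def serve_records(root: str) -> http.server.ThreadingHTTPServer:
    """Serve scanned-record files from `root` over plain HTTP.

    One stateless GET per record replaces the many per-field RPC
    round trips of the older design, so WAN latency is paid once
    per document rather than once per call.
    """
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=root)
    httpd = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd  # httpd.server_address[1] holds the bound port

# Demo: a throwaway directory stands in for the NFS mount.
root = tempfile.mkdtemp()
Path(root, "record-0001.tif").write_bytes(b"scanned page bytes")
httpd = serve_records(root)
port = httpd.server_address[1]
data = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/record-0001.tif").read()
httpd.shutdown()
```

Because every fetch is an ordinary HTTP GET, standard reverse proxies and caches can sit in front of the record store without any protocol-specific plumbing.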
Part of this redesign was Maro’s strategy of combining open-source tools, such as Nagios, Pound (a reverse proxy) and BandwidthD, to monitor network latency and other statistics. “Constant monitoring has helped us solve issues before they became problems,” he says.
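Nagios does latency checks like Maro’s through small external “check” plugins that report status via exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). The sketch below is a hypothetical plugin of that shape, not one of EvriChart’s actual checks; the URL and the warning/critical thresholds are illustrative assumptions.

```python
import sys
import time
import urllib.request

OK, WARNING, CRITICAL = 0, 1, 2  # standard Nagios plugin exit codes

def classify(latency_ms: float, warn_ms: float, crit_ms: float):
    """Map a measured latency onto a Nagios status code and label."""
    if latency_ms >= crit_ms:
        return CRITICAL, "CRITICAL"
    if latency_ms >= warn_ms:
        return WARNING, "WARNING"
    return OK, "OK"

def check_http_latency(url: str, warn_ms: float = 500.0,
                       crit_ms: float = 2000.0) -> int:
    """Time one HTTP fetch and print a one-line Nagios status report."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=crit_ms / 1000).read()
    latency_ms = (time.perf_counter() - start) * 1000
    code, label = classify(latency_ms, warn_ms, crit_ms)
    print(f"{label} - {latency_ms:.0f}ms response from {url}")
    return code

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage (hypothetical URL): check_http_latency.py http://records.example/
    sys.exit(check_http_latency(sys.argv[1]))
```

Run from Nagios on a schedule, a check like this turns a slow response into a warning before users notice, which is exactly the “solve issues before they became problems” posture Maro describes.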