In the name of increased IT agility, many organizations have implemented a cloud-first policy that requires as many application workloads as possible to be hosted in the cloud. The thinking is that it is faster to deploy and provision IT resources in the cloud.
While that’s true for most classes of workloads, latency-sensitive applications are staying home to run on premises.
Speaking at a recent Hybrid Cloud Summit in New York, Tom Koukourdelis, senior director for cloud architecture at Nasdaq, said there are still whole classes of high-performance applications that need to run in real time. Trying to access those applications across a wide area network (WAN) simply isn’t feasible. The fact of the matter, he added, is that there is no such thing as a one-size-fits-all cloud computing environment.
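The infeasibility of running real-time applications across a WAN comes down to simple physics. As a rough back-of-the-envelope illustration (the numbers here are general estimates, not from the talk): light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so distance alone imposes a hard floor on round-trip latency before any switching or queuing delay is counted.

```python
# Back-of-the-envelope fiber latency. Light in fiber propagates at
# roughly two-thirds the vacuum speed of light, ~200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(one_way_km: float) -> float:
    """Theoretical best-case round-trip time over a fiber path of the
    given one-way distance, ignoring switching and queuing delays."""
    return 2 * one_way_km / FIBER_KM_PER_MS

# A cloud region 1,000 km away adds at least 10 ms per round trip --
# an eternity for a trading system measured in microseconds.
print(min_round_trip_ms(1000))  # 10.0
```

No amount of network engineering can get below this floor, which is why such workloads stay physically close to their users.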
Nasdaq makes use of multiple cloud platforms managed by Verizon and Amazon Web Services (AWS). Koukourdelis said the biggest challenge the company’s IT department faces is coming to terms with the fact that everything is now code: IT infrastructure is a programmable asset just like any other piece of software. He advised IT organizations to, quite literally, think outside the box more often.
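The "infrastructure is code" idea can be sketched in a few lines: the desired environment is declared as data, and a reconciler computes the actions needed to move the current state toward it, the same pattern that tools such as Terraform or Kubernetes controllers apply at scale. The resource names and specs below are purely illustrative, not Nasdaq's actual tooling.

```python
# Minimal sketch of declarative infrastructure: desired state as data,
# plus a planner that diffs it against what currently exists.
desired = {
    "web": {"instances": 4, "size": "m5.large"},
    "db":  {"instances": 2, "size": "r5.xlarge"},
}

current = {
    "web": {"instances": 2, "size": "m5.large"},
}

def plan(desired, current):
    """Return the (action, name, spec) steps that reconcile current with desired."""
    actions = []
    for name, spec in desired.items():
        have = current.get(name)
        if have is None:
            actions.append(("create", name, spec))
        elif have != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name, None))
    return actions

for action in plan(desired, current):
    print(action)
```

Because the plan is just data, it can be reviewed, versioned, and tested before anything is provisioned, which is what makes infrastructure programmable in the first place.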
That new approach, Koukourdelis pointed out, also applies to the applications themselves. He urged IT organizations to continually challenge themselves to tear applications down and rebuild them in ways that make them more efficient.
As advances in software-defined infrastructure become more widely available, internal IT organizations are getting better at managing private clouds running on premises. That doesn’t mean the learning curve isn’t steep. But it does mean that internal IT groups are learning how to program IT infrastructure.
In the meantime, IT organizations looking to maintain control of mission-critical applications on premises can take comfort in the laws of physics, which will always be on their side.