Enterprise Cloud Adoption – the Boiling Frog Syndrome

[Guest Post] There is no doubt that the cloud is an important resource for the enterprise CIO to use when building or extending the datacenter. The cloud can be on-premises, within your colocation facility, or hosted by one of the growing number of public cloud vendors. Over the past few years I have been deeply involved in discussions about the great changes experienced by enterprise IT.

We came up with the idea for Ravello Systems three years ago, and since then I have continued to find myself in discussions about enterprise cloud adoption, including two recurring questions: when to move, and what to move first?

When to move to the cloud?

Over the last 30 years, traditional large-enterprise IT has evolved into a steady and mature organization. Yet the appearance of the cloud surprised the traditional CIO with new capabilities that some (unfortunately) still don’t get. There are two common scenarios: the datacenter build-out and the datacenter refresh.

Datacenter Build-Out

This is the case where the need for additional IT capacity drives the deployment of whole new environments, for example a new IT service/app or a new department (test, dev). This scenario demands expanding the current datacenter with additional hardware, power, and real estate. Large investments, such as building a new server farm, naturally encourage the CIO to think outside the IT box and check out the cloud options, especially public IaaS.

Datacenter Refresh

“The boiling frog story is a widespread anecdote describing a frog slowly being boiled alive. The premise is that if a frog is placed in boiling water, it will jump out, but if it is placed in cold water that is slowly heated, it will not perceive the danger and will be cooked to death. The story is often used as a metaphor for the inability or unwillingness of people to react to significant changes that occur gradually…” –Wikipedia

The datacenter refresh cycle is three to five years on average, including the ongoing maintenance of updating and replacing hardware components. Traditional IT leaders are used to working around infrastructure issues by using spares or gradually buying hardware capacity. Because each refresh carries low marginal costs, there is no decision-making barrier. As a result, people continue to treat changes in datacenter capacity on a “per-server basis” rather than as a continuous investment, and, like the frog in the slowly heated water, overlook the opportunity for cloud adoption that this gradual growth presents.

What to move first?

Let’s consider the scenario of a “purpose-built datacenter” hosting a specific mission-critical enterprise application. Years of investment in hardware, skills, and knowledge have already been made. The large enterprise IT environment comprises custom networking, storage, and compute. To run an enterprise-grade workload, organizations build a vertically integrated datacenter: it starts at the hosting level, builds up through the stack, and ends with continuous maintenance. Hundreds of man-hours go into building an enterprise datacenter capable of hosting such a mission-critical application.

To move it to the cloud, there is a need to translate the on-premises layers into software that can be hosted on this new “Software as Hardware” environment, the cloud. This can be done; however, the “cloud check-in” of custom-built application infrastructure requires a great investment. Trying to rewrite these types of applications carries the same enormous risks as traditional IT integration and migration projects. Decisions regarding where to start and what to move should be made with the utmost care and with real business clarity.

Steadiness, bursts and the ‘Proof Of Concept’ (POC)

Although cost savings are perceived as one of the drivers of cloud adoption, it is well known that leasing an instance in the public cloud around the clock costs much more than owning one. Hence there is no logic in moving a well-running enterprise application whose hardware investment is already in place and whose utilization is reasonable and steady.

The public cloud is an IT environment with (at least in theory) an infinite amount of resources that can be consumed on demand. Applications that can be turned off when idle should be considered cloud-fit. Applications with high cloud potential are those whose utilization of the underlying infrastructure can shrink and expand with service demand. Another case is the “bursty” environment, for jobs such as data crunching, analytics, and scratch environments. Basically, you should keep your steady, consistent workloads on-premises, as the back-of-envelope sketch below illustrates.
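To make the steady-vs-bursty argument concrete, here is a minimal back-of-envelope sketch in Python. Every price and utilization figure is an illustrative assumption, not real vendor pricing; the point is the shape of the comparison, not the exact numbers.

```python
# Back-of-envelope comparison: owned hardware vs. on-demand cloud instances.
# Every figure is an illustrative assumption, not real vendor pricing.

OWNED_MONTHLY_COST = 250.0   # assumed amortized hardware + power + colo, per server
ON_DEMAND_HOURLY = 0.50      # assumed on-demand price of a comparable instance
HOURS_PER_MONTH = 730        # average hours in a month

def cloud_monthly_cost(utilization: float) -> float:
    """On-demand cost when you pay only for the hours actually used."""
    return ON_DEMAND_HOURLY * HOURS_PER_MONTH * utilization

# Break-even utilization: below this, renting wins; above it, owning wins.
break_even = OWNED_MONTHLY_COST / (ON_DEMAND_HOURLY * HOURS_PER_MONTH)
print(f"break-even utilization: {break_even:.0%}")  # roughly 68% with these assumptions

for utilization in (1.0, 0.5, 0.2):
    cloud = cloud_monthly_cost(utilization)
    winner = "cloud" if cloud < OWNED_MONTHLY_COST else "on-premises"
    print(f"utilization {utilization:4.0%}: cloud ${cloud:6.2f} vs owned ${OWNED_MONTHLY_COST:.2f} -> {winner}")
```

With these assumed numbers, a server busy around the clock is cheaper to own, while a bursty job that runs 20% of the time is far cheaper to rent; swap in your own prices and the break-even point moves, but the logic stays the same.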

Why do they move?

Typical human behavior suffers from the bystander phenomenon: wait for the change, see someone else react, and only then (maybe) take action. The bystander phenomenon also shows up in the ongoing change of the datacenter. The common bystander case of cloud adoption is the IT leader struggling with the fact that the organization’s testers and developers use their own credit cards to enjoy the flexibility and speed of provisioning capacity from the Amazon cloud. The need to prove their agility and protect their position in the organization is one of the major driving factors for the IT guys to support, and sometimes even initiate, cloud adoption.

Today, enterprises find the public cloud an appealing option for their non-critical environments, such as test/dev for new applications. The most likely candidates for migration to the cloud are “start from scratch” environments, and moving them can help alleviate the pain created by rogue IT. A new application environment can be deployed in no time, and the flexibility of running a POC eliminates the traditional hardware investment risks.

Final words

There’s still a significant gap between the generic cloud environment and the custom enterprise datacenter. Building an enterprise workload involves deep vertical integration, and the resulting complexity is one of the great challenges of enterprise cloud adoption. Enterprise workloads also need to be analyzed to find the simple, distributed, bursty parts that can be moved to the cloud quickly. Some enterprise applications simply can’t be moved, and rewriting them from scratch might not offer any real business value.

The enterprise migration to the cloud should be gradual. Take each step with a spreadsheet in hand and make sure to prove an immediate ROI. Start with a POC, which is a perfect match for the cloud’s provisioning flexibility and pay-as-you-go pricing. By pushing “scratch environment” test/dev workloads to the cloud, you can realize an immediate ROI while beginning the natural, gradual evolution into a modern IT organization. A minimal version of that spreadsheet exercise is sketched below.
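As an illustration of the “spreadsheet” step, here is a small sketch comparing the cost of buying hardware for a test/dev scratch environment against renting it on demand only during working hours. Every figure (server count, prices, project length) is an assumption chosen for the example, not data from any real vendor or project.

```python
# Minimal ROI sketch for moving a test/dev "scratch environment" to the cloud.
# Every figure is an illustrative assumption, not a quote from any vendor.

SERVERS = 10                   # assumed size of the test/dev environment
HW_COST_PER_SERVER = 3000.0    # assumed up-front hardware cost per server
INSTANCE_HOURLY = 0.50         # assumed on-demand price per comparable instance
WORK_HOURS_PER_MONTH = 8 * 22  # instances run only during business hours
PROJECT_MONTHS = 6             # assumed length of the test/dev project

# Buying hardware: paid up front, then idle every night and weekend.
on_prem = SERVERS * HW_COST_PER_SERVER

# Pay-as-you-go: instances are turned off whenever nobody is working.
cloud = SERVERS * INSTANCE_HOURLY * WORK_HOURS_PER_MONTH * PROJECT_MONTHS

print(f"on-premises hardware: ${on_prem:,.2f}")
print(f"cloud, pay-as-you-go: ${cloud:,.2f}")
print(f"difference over the project: ${on_prem - cloud:,.2f}")
```

This deliberately ignores the hardware’s residual value after the project and any cloud storage or network charges; a real spreadsheet would include both. The point is the exercise itself: for an environment that can be switched off nights and weekends, pay-as-you-go makes the ROI visible immediately.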

Originally posted on the Ravello Systems blog


About the author


Rami Tamir has 15 years of experience managing multidisciplinary software development. In 2011 he co-founded Ravello Systems, where he serves as CEO. Earlier, Tamir was VP of Engineering at Red Hat, which he joined through the acquisition of Qumranet (the company that developed the KVM hypervisor, now the standard virtualization technology in Linux), where he was co-founder and president. Previously, Tamir held senior management positions at Cisco, which he joined through the acquisition of Pentacom, where he was co-founder and head of software.
