Why are enterprises moving away from public cloud?


We often hear of enterprises that move applications from their corporate data center to public cloud. This may come in the form of lift and shift. But then something happens that causes the enterprise to move it out of public cloud. This yo-yo effect and the related consequences create ongoing challenges that contribute to several of the items listed in Eight ways enterprises struggle with public cloud.

In order to better understand the problem, we need to work backwards to the root cause…and that often starts with the symptoms. For most, it starts with costs.


The number one reason enterprises pull workloads back out of cloud is economics. For public cloud, that cost arrives as a monthly bill for services consumed. In the post referenced above, I refer to a cost differential of 4x. That is to say, public cloud services cost 4x the corporate data center alternative for the same workloads. These calculations use fully-loaded total cost of ownership (TCO) numbers on both sides over a period of years to normalize capital costs.

4x is a startling number and seems to fly in the face of a generally held belief that cloud computing is less expensive than the equivalent on-premises corporate data center. Does this mean that public cloud is not less expensive? Yes and no.


In order to break down the 4x number, one has to understand that legacy thinking heavily influences it. While many view public cloud as less expensive, they often compare apples to oranges when weighing public cloud against corporate data centers. Many do not consider fully-loaded corporate data center costs, which include server, network, and storage…along with power, cooling, space, administrative overhead, management, real estate, etc. Unfortunately, many of these corporate data center costs are not exposed to the CIO and IT staff. For example, do you know how much power your data center consumes or what the real estate costs? Few IT folks do.
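To make the "fully-loaded" point concrete, here is a minimal sketch of the comparison. Every dollar figure below is a hypothetical assumption for illustration, not a number from this post; the shape of the calculation, not the inputs, is the point.

```python
# Illustrative TCO sketch: all dollar figures are hypothetical assumptions.
visible_costs = {            # what the CIO's budget usually shows (annual)
    "servers": 400_000,
    "network": 120_000,
    "storage": 180_000,
}
hidden_costs = {             # fully-loaded items often left out of the comparison
    "power_cooling": 150_000,
    "space_real_estate": 200_000,
    "admin_management": 350_000,
}

visible = sum(visible_costs.values())
fully_loaded = visible + sum(hidden_costs.values())

print(f"visible-only TCO:  ${visible:,}/yr")
print(f"fully-loaded TCO:  ${fully_loaded:,}/yr "
      f"({fully_loaded / visible:.1f}x the visible number)")
```

With these assumed inputs, the fully-loaded figure is double what the budget line items show, which is why a cloud bill compared only against the visible costs looks so unfavorable.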

There are five components that influence legacy thinking:

  1. 24×7 Availability: Most corporate data centers and systems are built around 24×7 availability. There is a significant amount of data center architecture that goes into the data center facility and systems to support this expectation.
  2. Peak Utilization: Corporate data center systems are sized for peak utilization whether that capacity is needed regularly or not. Outside of peak periods, the excess capacity sits idle.
  3. Redundancy: Corporate infrastructure from the power subsystems to power supplies to the disk drives is designed for redundancy. There is redundancy within each level of data center systems. If there is a hardware failure, the application ideally will not know it.
  4. Automation & Orchestration: Corporate applications are not designed with automation & orchestration in mind. Applications are often installed on specific infrastructure and left to run.
  5. Application Intelligence: Applications assume that availability is left to other systems to manage. Infrastructure manages the redundancy and architecture design manages the scale.

Now take a corporate application built with this legacy thinking and move it directly into public cloud. It will need peak resources in a redundant configuration running 24×7, because that is how it was designed. Yet public cloud benefits from a very different model. Running an application at peak capacity, in a redundant configuration, 24×7 leads to an average of 4x the cost of the traditional data center.

This is the equivalent of renting a car every day for a full year whether you need it or not. Used this way, the shared model comes at a premium.
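The arithmetic behind the always-on premium can be sketched as follows. The hourly rate, instance counts, and utilization below are assumptions chosen for illustration; the 4x figure cited earlier is an observed average, not derived from these exact inputs.

```python
# Hypothetical cost model: lift-and-shift vs. demand-shaped usage.
hourly_rate = 0.50       # $ per instance-hour (assumed)
peak_instances = 40      # capacity sized for peak load (assumed)
redundancy = 2           # idle duplicates for failover (legacy design)
hours_per_year = 24 * 365

# Lift-and-shift: peak capacity, fully redundant, running 24x7.
lift_and_shift = peak_instances * redundancy * hourly_rate * hours_per_year

# Demand-shaped: instances scale with actual load (assume 50% average
# utilization of peak; redundancy handled by the application rather than
# by idle duplicate infrastructure).
avg_utilization = 0.50
demand_shaped = peak_instances * avg_utilization * hourly_rate * hours_per_year

print(f"always-on, redundant: ${lift_and_shift:,.0f}/yr")
print(f"demand-shaped:        ${demand_shaped:,.0f}/yr")
print(f"cost multiple:        {lift_and_shift / demand_shaped:.0f}x")
```

Under these assumptions, paying for peak capacity twice over, around the clock, costs four times what the same workload costs when consumption tracks demand, which is the "rental car every day for a year" effect in numbers.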


Is this the best way to leverage public cloud services? Knowing the details of what to expect leads one to a different approach. Can public cloud benefit corporate enterprise applications? Yes. Does it need planning and refactoring? Yes.

By refactoring applications to leverage the benefits of public cloud rather than assume legacy thinking, public cloud has the potential to be less expensive than traditional approaches. Obviously, each application will have different requirements and therefore different outcomes.

The point is to shed legacy thinking and understand where public cloud fits best. Public cloud is not the right solution for every workload. For those applications that will benefit from public cloud, understand what changes are needed before making the move.


There are other reasons that enterprises exit public cloud services beyond just cost. Those may include:

  1. Scale: Whether driven by cost or by sheer scale, enterprises may find they are able to support applications more effectively on their own infrastructure.
  2. Regulatory/Compliance: Enterprises may use test data with applications in the cloud but then move the application back to corporate data centers when shifting into production with regulated data. Or compliance requirements may force data resources to remain local. Sovereignty issues also drive decisions in this space.
  3. Latency: There are situations where public cloud may be great on paper, but in real-life latency presents a significant challenge. Remote and time-sensitive applications are good examples.
  4. Use-case: The last catch-all covers applications with specific use-cases where public cloud is great in theory, but not the best solution in practice. Remember that public cloud is general-purpose infrastructure. For example, some application use-cases require fine-tuning that public cloud cannot support; others simply cannot run in public cloud in production.

The bottom line is to fully understand your requirements, think ahead and do your homework. Enterprises have successfully moved traditional corporate applications to public cloud…even those with significant regulatory & compliance requirements. The challenge is to shed legacy thinking and consider where and how best to leverage public cloud for each application.


  1. Good article Tim but you leave out the biggest expense I have seen in the public cloud, cruft. It is so easy to spin up a new machine that people do them willy-nilly and if you aren’t careful the AWS bill goes crazy. We had a subsidiary that moved completely to AWS and this is a huge factor. With physical machines or even VM’s in a private cloud this is much less of a factor.

    Personally I was running the “free” tier and had $150 in AWS credits when I took a week long AWS Loft security class (fantastic training BTW). I thought I shut everything down at the completion of the class but I was notified a month or so later that I had burned through the $150 in credits I had and I owed Amazon $35.


  2. Hi Tim,

    Great article. Does the cost structure take into consideration “reserved instances” or is it month by month? In reality private clouds tend to end up being more costly because of the cost of IT – few IT organizations are equipped to operate clouds as smoothly as AWS/Azure.

    Another important consideration is “lock in”. Once you are on clouds like AWS and start to use a plethora of their services – RDS, SNS etc it becomes really difficult to move away.

  3. Hi Tim, interesting article and I think you are spot on with your insights. Coming from a hardware infrastructure vendor, I don’t want to appear to be throwing stones and would state that there are some very clear and compelling use cases for public cloud. My cautions are the same as yours, looking at the fine print of the public cloud contract and comparing those to the SLA’s that you are being held accountable for, is key in deciding where to place your workloads.
    The new end state for me becomes managing workloads across hybrid IT and deciding which applications should be in the public cloud and which ones should remain on traditional IT or a private cloud. Although performance, cost and control issues are all relevant, having an overall strategic plan for how to manage your applications in a hybrid world is key. I would be interested in your thoughts.

  4. 1) Competent IT people make a big difference, but aren’t cheap. Neither is power, connectivity, etc. for a fully redundant and available private cloud. Simply setting up such a cloud is going to cost you a minimum of $150,000/year for the IT resource who’s going to set it up, configure it, and keep it running. The power bill could be $60,000/year or more. And not talking about the cost of the actual machines, switches, racks, routers, UPS’s, etc. If you look at your Amazon bill, the markup over implementing your own equivalent highly redundant and available private cloud is fairly minimal. Yes, you can save money, but there’s so many ifs involved — IF you can find the IT people needed to do it, IF you can get good pricing on equipment, IF you can get sufficient data center space with sufficient connectivity for a reasonable cost, etc. — that IMHO it’s not worth it for most applications with those kinds of availability requirements.

    2) That said, there are plenty of internal applications that don’t need the availability guarantees of the public cloud and can be done much cheaper with a private cloud that doesn’t need to meet those guarantees. For example, QA, test, and R&D infrastructures serving small development groups can tolerate outages, though the developers won’t be happy. Yes, it’s very expensive to create a cloud that is as reliable and redundant as a public cloud, but there’s plenty of consumers of computing resources who don’t need that level of reliability and redundancy, and it’s cheaper to set up an internal cloud for them than to fuss with commissioning a heterogeneous set of computing resources, which is the other thing these people would do if you didn’t give them a private cloud (since their budget simply won’t do all of that in the highly available public cloud).

    3) You don’t need that full-time person dedicated to your cloud infrastructure if you’re doing it for low-cost, low-volume internal applications with lower reliability requirements such as above. A consultant to set it up and expand it from time to time will cost considerably less than full-time IT resources. The whole point of cloud is self-service provisioning on the part of the actual end customers of computing resources, thereby reducing the number of IT resources that you need.

    In short, there’s a place for both public and private cloud even for relatively small businesses if the need for compute resources is non-trivial. In our case we have to test our application against a grid matrix of dozens of configurations of Microsoft OS and application, and the tests are continuous in our agile process. Spinning that up in the public cloud would cost far more than it is thus far costing to run in our private cloud that was configured with significantly less redundancy and availability, because we’re not paying for the unnecessary redundancy and availability. If our local cloud falls over in the middle of the night, not a big deal. If our meat and potatoes cloud application falls over in the middle of the night, by contrast, it is a BIG DEAL for our paying customers, who are paying for 24/7 service and are *not* going to tolerate the sort of extended outage that can be tolerated by a QA test group.
