Business · Cloud

Riverbed extends into the cloud


One of the most critical, yet often overlooked, components in a system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were colocated in the same data center, or spread across multiple data centers over dedicated network connections the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems the enterprise does not control. This lack of control presents a new challenge for network management.

Applications are not the only things on the move; data moves too. As applications and data spread beyond the bounds of the enterprise data center, so does the need to manage increasingly dispersed network performance. The question is: How do you address network performance management when you no longer control the underlying systems and network infrastructure components?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day’s Cloud Field Day 3, I had the chance to meet up with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that would sit at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to how they address the network performance management problem in the cloud. Riverbed gives the enterprise the option to run their software as either a ‘sidecar’ to the application or as part of the cloud-based container.

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets and cloud. The first is to extend the existing data center so that the cloud is viewed as simply another data center. The second is to change the perspective so that the constraints shrink to the application…or better yet, the service level. The latter is a construct typical of cloud-native applications.

Today, Riverbed has taken the former approach and views the cloud as another data center in your network. Riverbed’s SteelFusion product, for example, treats the cloud as just another data center in the network. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises take a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements. This shows up in how enterprises are embracing a multi-cloud strategy. Unfortunately, consolidating cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their Steelhead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed already supports multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure. On AWS, SteelConnect Manager can run within an AWS VPC.

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer shifts to the developer and, in some cases, the business user. These new personas are looking for quick access to resources and tools in an easy-to-consume way, much like how existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience is really what matters, not network performance. In Riverbed parlance, they call this End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider what serves as their North Star.

As enterprises embrace cloud-based architectures more fully, Riverbed will need to evolve the model that drives their product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise

Business · Cloud

Oracle works toward capturing enterprise Cloud IaaS demand


The enterprise cloud market still shows widely untapped potential. A significant portion of this potential comes from the demand generated by the legacy applications sitting in the myriad of corporate data centers. The footprint from these legacy workloads alone is staggering. Start adding in the workloads that sit in secondary data centers, which often do not get included in many metrics, and one can quickly see the opportunity.

ORACLE STARTS FROM THE GROUND UP

At Tech Field Day’s Cloud Field Day 3, I had the opportunity to meet with the team from Oracle Cloud Infrastructure to discuss their Infrastructure as a Service (IaaS) cloud portfolio. Oracle is trying to attract the current Oracle customer to their cloud-based offerings. Those offerings range from IaaS up through Software as a Service (SaaS) for their core back-office business applications.

The conversation with the Oracle team was pretty rough, as it was hard to determine what, exactly, they did in the IaaS space. A number of buzzwords and concepts were thrown around without covering what the Oracle IaaS portfolio actually offered. Eventually, during a demo, a configuration page made the true offerings clear: virtual machines and bare metal. That is a good start for Oracle, but unfortunate in how it was presented. Oracle’s offering is hosted infrastructure that is more similar to IBM’s SoftLayer (now called IBM Cloud) than to Microsoft Azure, Amazon AWS or Google Cloud.

ORACLE DATABASE AS A SERVICE

Beyond just the hardware, applications are one of the strengths of Oracle’s enterprise offerings. And a core piece of the puzzle has always been their database. One of the highlights from the conversation was their Database as a Service (DBaaS) offering. For enterprises that use Oracle DB, the database is often the sticking point that keeps their applications firmly planted in the corporate data center. With the Oracle DBaaS offering, enterprises can move workloads to a cloud-based infrastructure without losing fidelity in the Oracle DB offering.

Digging deeper into the details, there were a couple of interesting functions supported by Oracle’s DBaaS. A very cool feature was the ability to dynamically change the number of CPUs allocated to a database without taking an outage. This provides the ability to scale DB capacity up and down, as needed, without disrupting the application.
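
To make the scaling idea concrete, here is a minimal sketch of what driving that capability from automation code might look like, assuming the OCI Python SDK’s database client. The OCID and the specific update call are illustrative assumptions to show the pattern, not a transcript of Oracle’s demo; verify names against current Oracle documentation.

```python
# Illustrative only: online CPU scaling for an Oracle DB system via the OCI Python SDK.
# The OCID below is a placeholder; confirm the method and model names against the
# current SDK docs before relying on this sketch.
import oci

config = oci.config.from_file()                  # reads credentials from ~/.oci/config
db_client = oci.database.DatabaseClient(config)

DB_SYSTEM_ID = "ocid1.dbsystem.oc1..example"     # hypothetical placeholder OCID


def scale_db_cpus(cpu_core_count: int) -> None:
    """Request a new CPU core count; the change is applied while the DB stays online."""
    details = oci.database.models.UpdateDbSystemDetails(cpu_core_count=cpu_core_count)
    db_client.update_db_system(DB_SYSTEM_ID, details)


# Scale up ahead of a heavy batch window, then back down once it completes.
scale_db_cpus(16)
# ... run the batch workload ...
scale_db_cpus(4)
```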

Now, it should be noted that while the thought of a hosted Oracle DB sounds good on paper, the actual migration will be complicated for any enterprise. That is less a statement about Oracle and more to the point that enterprise application workloads are a complicated web of interconnects and integrations. Not surprisingly, Oracle mentioned that the most common use-case that is driving legacy footprints to Oracle Cloud is the DB. This shows how much pent-up demand there is to move even the most complicated workloads to cloud. Today, Oracle’s DB offering runs on Oracle Cloud Infrastructure (OCI). It was mentioned that the other Oracle Cloud offerings are moving to run on OCI as well.

Another use-case mentioned was that of High-Performance Computing (HPC). HPC environments need large scale and low latency. Both are positive factors for Oracle’s hardware designs.

While these are two good use-cases, Oracle will need to attract a broader base of use-cases moving forward.

THE CIO PERSPECTIVE

Overall, there seem to be some glimmers of light coming from the Oracle Cloud offering. However, it is hard to pin down the true differentiators. Granted, Oracle is playing a bit of catch-up compared with other, more mature cloud-based offerings.

The true value appears to be focused on existing Oracle customers looking to make a quick move to cloud. If that is the case, and the two fundamental use-cases are DBaaS and HPC, that is a fairly limited pool of customers given the significant potential still sitting in the corporate data center.

It will be interesting to see how Oracle evolves their IaaS messaging and portfolio to broaden the use-cases and provide fundamental services that other cloud solutions have offered for years. Oracle does have the resources to put a lot of effort toward making a bigger impact. Right now, however, it appears that the Oracle Cloud offering is mainly geared for existing Oracle customers with specific use-cases.

Business · Cloud · Data

Delphix smartly reduces the friction to access data


Today’s CIO is looking for ways to unlock the potential in their company’s data. We have heard the phrase that data is the new oil. Except that data, like oil, is just a raw material. It has to be refined into a finished good, which is where the value ultimately resides.

At the same time, enterprises are concerned with regulatory and compliance requirements to protect data. Recent data breaches at globally recognized companies have raised concern around data privacy. Historically, the financial services and healthcare industries were the ones to watch when it came to regulatory and compliance requirements. Today, the regulatory net is widening with the EU’s General Data Protection Regulation (GDPR), the US Government’s FedRAMP and the NY State DFS Cybersecurity Requirements.

Creating greater access to data while staying in compliance and protecting data sit at opposite ends of the privacy and cybersecurity spectrum. Add to this the interest in moving data to cloud-based solutions and one can quickly see why this is one of the core challenges for today’s CIO.

DELPHIX REDUCES THE FRICTION TO DATA ACCESS

At Tech Field Day’s Cloud Field Day 3, I had the opportunity to meet with the team from Delphix.

Fundamentally, Delphix is a cloud-based data management platform that helps enterprises reduce the friction to data access through automation of data management. Today, one-third of Fortune 500 companies use Delphix.

Going back to the core issue, users have a hunger for accessing data. However, regulatory and compliance requirements often hinder that process. Today’s methods to manage data are heavily manual and somewhat archaic compared with solutions like Delphix.

Delphix’s approach is to pack the data up into what they call a Data Pod. Unlike most approaches, which mask data when it is shared, Delphix masks the data during the intake process. The benefit of this approach is that it removes the risk of accidentally sharing protected data.

In terms of sharing data, one clever part of the Delphix Dynamic Data Platform is its ability to replicate data intelligently. Considering that Delphix works in the cloud, this is a key aspect of avoiding unnecessary costs. Otherwise, enterprises would see a significant uptick in data storage as masked data is replicated to the various users. Beyond structured, transactional data, Delphix is also able to manage (and mask) databases, along with unstructured data and files.
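
A small sketch helps show why masking at intake matters. This is not the Delphix API; every name below is hypothetical, and the point is simply that sensitive values are transformed once, before any copy exists, so an unmasked record can never reach a downstream consumer.

```python
# Conceptual sketch (hypothetical names, not the Delphix API): mask once at intake,
# then hand out views of the already-masked "golden" copy instead of re-masking per share.
import hashlib


def mask(value: str) -> str:
    """Deterministic, irreversible masking of a sensitive field."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]


def ingest(records: list[dict], sensitive_fields: set[str]) -> list[dict]:
    """Mask sensitive fields once, at intake, producing a masked golden copy."""
    return [
        {k: (mask(v) if k in sensitive_fields else v) for k, v in rec.items()}
        for rec in records
    ]


def provision_data_pod(golden: list[dict]) -> list[dict]:
    """Give a consumer a lightweight view of the masked golden copy.
    Nothing needs to be (or can be forgotten to be) masked at share time."""
    return golden  # a real platform would provision a virtual, space-efficient copy


source = [{"name": "Jane Doe", "ssn": "123-45-6789", "balance": 100}]
golden = ingest(source, sensitive_fields={"name", "ssn"})
dev_pod = provision_data_pod(golden)
print(dev_pod)
```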

THE CIO PERSPECTIVE

From the CIO perspective, Delphix appears to address an increasingly complicated space with a clever yet simple approach. The three key takeaways are: a) the ability to mask data (DB, unstructured, files) at intake versus when pulling copies, b) the ability to replicate data intelligently and c) the potential to manage data management policies. Lastly, this is not a solution that must run in the corporate data center. Delphix supports running in public cloud services including Microsoft Azure and Amazon AWS.

In summary, Delphix appears to have decreased the friction to data access by automating the data protection and management processes, all while supporting an enterprise’s move to cloud-based resources.

Business · Cloud · Data

Microsoft empowers the developer at Connect


This week at Microsoft Connect in New York City, Microsoft announced a number of products geared toward bringing intelligence and the computing edge closer together. The tools continue Microsoft’s support of a varied and growing ecosystem of evolving solutions. At the same time, Microsoft demonstrated their insatiable drive to woo the developer with a number of tools geared toward modern development and advanced technology.

EMBRACING THE ECOSYSTEM DIVERSITY

Microsoft has tried hard in the past several years to shed their persona of Microsoft-centricity in a .NET-and-Windows world. Similar to their very vocal support for inclusion and diversity in culture, Microsoft brings that same perspective to the tools, solutions and ecosystems they support. The reality is that the world is diverse, and it is this very diversity that makes us stronger. Technology is no different.

At the Connect conference, similar to their recent Build & Ignite conferences, .NET almost became a footnote as much of the discussion was around other tools and frameworks. In many ways, PHP, Java, Node and Python appeared to get mentioned more than .NET. Does this mean that .NET is being deprecated in favor of newer solutions? No. But it does show that Microsoft is moving beyond just words in their drive toward inclusivity.

EXPANDING THE DEVELOPER TOOLS

At Connect, Microsoft announced a number of tools aimed squarely at supporting the modern developer. This is not the developer of years past. Today’s developer works in a variety of tools, with different methods and potentially in separate locations. Yet they need the ability to collaborate in a meaningful way. Enter Visual Studio Live Share. What makes VS Live Share interesting is how it supports collaboration between developers in a more seamless way, without the cumbersome screen-sharing approach previously used. The level of sophistication that VS Live Share brings is impressive in that it allows each developer to walk through code in their own way while they debug and collaborate. While VS Live Share is only in preview, other recently announced tools have already seen adoption in the millions of downloads within a short period of time.

In the same vein of collaboration and integration, DevOps is of keen interest to most enterprise IT shops. Microsoft showed how Visual Studio Team Services embraces DevOps in a holistic way. While the demonstration was impressive, the question of scalability often comes into the picture for large, integrated teams. It was mentioned that VS Team Services is currently used by the Microsoft Windows development team and their whopping 25,000 developers.

Add to that scale the ability to build ‘safe code’ pipelines, with automation that creates triggers to evaluate code in-process, and one can quickly see how Microsoft is taking the modern, sophisticated development process to heart.

POWERING DATA AND AI IN THE CLOUD

In addition to developer tools, time was spent talking about Azure, data and Databricks. I had the chance to sit down with Databricks CEO Ali Ghodsi to talk about how Azure Databricks is bringing the myriad of data sources together for the enterprise. The combination of Databricks on Azure provides the scale and ecosystem that highlights the power of Databricks to integrate the varied data sources that every enterprise is trying to tap into.

MIND THE DEVELOPER GAP

Developing applications that leverage analytics and AI is incredibly important, but not a trivial task. It often requires a combination of skills and experience to fully appreciate the value that comes from AI. Unfortunately, developers often have neither the data science skills nor the business context needed in today’s world. I spoke with Microsoft’s Corey Sanders after his keynote about how Microsoft is bridging the gap for the developer. Both Sanders and Ghodsi agree that the gap is an issue. However, through the use of increasingly sophisticated tools such as Databricks and Visual Studio, they believe Microsoft is making a serious attempt at bridging it.

It is clear that Microsoft is getting back to its roots and considering the importance of the developer in an enterprise’s digital transformation journey. While there are still many gaps to fill, it is interesting to see how Microsoft is approaching the evolving landscape and complexity that is the enterprise reality.

CIO · Cloud

The difference between Hybrid and Multi-Cloud for the Enterprise

Cloud computing still presents the single biggest opportunity for enterprise companies today. Even though cloud-based solutions have been around for more than 10 years now, the concepts related to cloud continue to confuse many.

Of late, it seems that Hybrid Cloud and Multi-Cloud are the latest concepts creating confusion. To make matters worse, a number of folks (inappropriately) use these terms interchangeably. The reality is that they are very different.

The best way to think about the differences between Hybrid Cloud and Multi-Cloud is in terms of orientation. One addresses a continuum of different services vertically while the other looks at the horizontal aspect of cloud. There are pros and cons to each and they are not interchangeable.


Multi-Cloud: The horizontal aspect of cloud

Multi-Cloud is essentially the use of multiple cloud services within a single delivery tier. A common example is the use of multiple Public Cloud providers. Enterprises typically use a multi-cloud approach for one of three reasons:

  • Leverage: Enterprise IT organizations are generally risk-averse. There are many reasons for this, to be discussed in a later post. Fear of taking risks tends to inform a number of decisions, including choice of cloud provider. One aspect is the fear of lock-in to a single provider. I addressed my perspective on lock-in here. By using a multi-cloud approach, an enterprise can hedge their risk across multiple providers. The downside is that this approach creates complexities with integration, organizational skills and data transit.
  • Best of Breed: The second reason enterprises typically use a multi-cloud strategy is due to best of breed solutions. Not all solutions in a single delivery tier offer the same services. An enterprise may choose to use one provider’s solution for a specific function and a second provider’s solution for a different function. This approach, while advantageous in some respects, does create complexity in a number of ways including integration, data transit, organizational skills and sprawl.
  • Evaluation: The third reason enterprises leverage a multi-cloud strategy is relatively temporary and exists for evaluation purposes. This is actually very common among enterprises today. Essentially, it provides a means to evaluate different cloud providers in a single delivery tier when enterprises first start out. However, they eventually focus on a single provider and build expertise around that provider’s solution.

In the end, I find that the reason an enterprise chooses one of the three approaches above is often informed by its maturity and thinking around cloud in general. The question many ask is: Do the upsides of leverage or best of breed outweigh the downsides of complexity?

Hybrid Cloud: The vertical approach to cloud

Most, if not all, enterprises are using a form of hybrid cloud today. Hybrid cloud refers to the vertical use of cloud in multiple different delivery tiers. Most typically, enterprises are using a SaaS-based solution and Public Cloud today. Some may also use Private Cloud. Hybrid cloud does not require that a single application spans the different delivery tiers.

The CIO Perspective

The important takeaway from this is to understand how you leverage Multi-Cloud and/or Hybrid Cloud, rather than how you define the terms. Too often, we get hung up on defining terms more than understanding the benefits of leveraging the solution…or methodology. Even when discussing outcomes, we often still focus on technology.

These two approaches are not the same and come with their own set of pros and cons. The value from Multi-Cloud and Hybrid Cloud is that they both provide leverage for business transformation. The question is: How will you leverage them for business advantage?

Business · Cloud

One theory on Amazon interest in a second headquarters

Amazon announced that they are in search of a location for their second headquarters. The new headquarters facility is expected to create 50,000 jobs and bidders are welcome to submit their proposals to woo the Amazon opportunity. While that, in itself, sounds great, there may be more in the works than just a new headquarters. Let me share my theory on what this may indicate.

THE LOCATION SHORTLIST

First, companies like Amazon do not go into major decisions like this without already having a pretty good idea of how it will end. There is just too much risk at stake. In this specific case, that risk is the physical location of the second headquarters. Prior to making the announcement, I suspect Amazon had already done their due diligence and has an internal shortlist of potential locations they would accept.

Amazon’s two core businesses, Amazon.com and Amazon Web Services (AWS), both rely heavily on technology. Therefore, a headquarters location must have a strong technology ecosystem that can support their separate growth trajectories.

While just about any major city in the US could support a new headquarters, tech-centric locations on the shortlist may include Silicon Valley, Las Vegas, Phoenix, Austin, Atlanta, New York or Boston. One outlier may be Washington DC/ Virginia. Why? As Amazon continues their spectacular growth, innovation and acquisition of competitors, they will need stronger ties to government inner circles.

So, which location? My theory is that the process is more of a formality and the decision is between a couple of locations, ultimately coming down to local/ state tax incentives. If true, the shortlist is a few locations shorter than the one outlined above.

IS A SPLIT ON THE HORIZON?

It is not common for companies to establish a second ‘headquarters’ location. It does happen, but not often. There may be an undercurrent driving this move. Amazon has two core businesses: Amazon.com and AWS. Almost two years ago, Amazon announced that Andy Jassy would be promoted to CEO of AWS. This may be the first marker in a longer-term strategy for Amazon.

One challenge Amazon continues to face is the conflict between their core Amazon.com business and Amazon Web Services (AWS). Major customers of AWS continue to flee when Amazon.com moves into a competitive role; Walmart is just one of the latest customers to do so. Essentially, Amazon.com gains are negatively impacting AWS. In the enterprise space, prospective customers have expressed concern that AWS (historically) is not Amazon’s core business; the distribution business is. Of course, in the past few years, AWS has grown significantly. However, it still presents a challenge. Splitting Amazon into two companies, with Andy Jassy taking on the new AWS entity, could be the solution.

SPLIT DECISIONS

But there is a potential problem with splitting AWS from Amazon. When they operate as a combined company, Amazon is not required to disclose their significant AWS customers, as they are not material in revenue to the core business. However, if the two companies were to split, this disclosure could be required and would bring focus to who AWS’ material customers are…in a very public way.

Now, if none of AWS’ customers are material, or contribute a significant amount of revenue individually, this issue is not relevant. However, I suspect that Amazon.com is a major consumer of AWS’ services. And there may be a couple of other major customers.

If there are significant, material customers in the mix, it could present concerns among shareholders of AWS. Today, we don’t have clarity on this issue due to the economic halo effect of the core Amazon.com business. Splitting the companies brings this potential issue to light…and may be the reason Amazon has not split the two companies yet.

IMPACT TO SEATTLE ECOSYSTEM

The last driver may be the Seattle ecosystem itself. Seattle is a vibrant technology metropolis that supports several major technology companies like Microsoft and Amazon. In addition, major companies like Boeing and Costco consume a significant footprint too. Big companies bring great opportunities and economic growth to communities. However, they can have a downside too. Cost-of-living increases, the risk of losing a company and a limited pool of skilled people are all risks that offset the opportunities. One can look at the SF Bay Area/ Silicon Valley to see how this is playing out, how competitive it is for talent and how hard it is to relocate someone to the Bay Area.

It is probable that, with Amazon’s success and growth trajectory, they may feel the Seattle ecosystem is starting to become limiting or incapable of handling the entirety of a company like Amazon, today and moving forward. If this were the case, I suspect the shortlist of potential suitors may not include Silicon Valley, New York or Boston.

MY TAKE

All that being said, my theory is that there is an impending split on the horizon for Amazon. The move of Jassy to CEO, AWS’ continued growth and secondary factors point to this as a possible outcome. That, coupled with AWS having proved it can stand on its own without the core Amazon.com business, further supports this perspective.

I look forward to hearing what you think. Share your thoughts in the comments below!

CIO · Cloud · Data

Why are enterprises moving away from public cloud?


We often hear of enterprises that move applications from their corporate data center to public cloud, often in the form of lift and shift. But then something happens that causes the enterprise to move those applications back out of public cloud. This yo-yo effect and the related consequences create ongoing challenges that contribute to several of the items listed in Eight ways enterprises struggle with public cloud.

In order to better understand the problem, we need to work backwards to the root cause…and that often starts with the symptoms. For most, it starts with costs.

UNDERSTANDING THE ECONOMICS

The number one reason enterprises pull workloads back out of cloud has to do with economics. For public cloud, it comes in the form of the monthly bill for cloud services. In the post referenced above, I refer to a cost differential of 4x. That is to say, public cloud services cost 4x the corporate data center alternative for the same services. These calculations include fully loaded total cost of ownership (TCO) numbers on both sides over a period of years to normalize capital costs.

4x is a startling number and seems to fly in the face of a generally held belief that cloud computing is less expensive than the equivalent on-premises corporate data center. Does this mean that public cloud is not less expensive? Yes and no.

THE IMPACT OF LEGACY THINKING

In order to break down the 4x number, one has to understand that legacy thinking heavily influences it. While many view public cloud as less expensive, they often compare apples to oranges when comparing public cloud to corporate data centers. And many do not consider the fully loaded corporate data center costs that include server, network and storage…along with power, cooling, space, administrative overhead, management, real estate, etc. Unfortunately, many of these corporate data center costs are not exposed to the CIO and IT staff. For example, do you know how much power your data center consumes and what the real estate costs? Few IT folks do.

There are five components that influence legacy thinking:

  1. 24×7 Availability: Most corporate data centers and systems are built around 24×7 availability. There is a significant amount of data center architecture that goes into the data center facility and systems to support this expectation.
  2. Peak Utilization: Corporate data center systems are built for peak utilization whether they use it regularly or not. This unused capacity sits idle until needed and is only consumed at peak times.
  3. Redundancy: Corporate infrastructure from the power subsystems to power supplies to the disk drives is designed for redundancy. There is redundancy within each level of data center systems. If there is a hardware failure, the application ideally will not know it.
  4. Automation & Orchestration: Corporate applications are not designed with automation & orchestration in mind. Applications are often installed on specific infrastructure and left to run.
  5. Application Intelligence: Applications assume that availability is left to other systems to manage. Infrastructure manages the redundancy and architecture design manages the scale.

Now take a corporate application built with this legacy thinking and move it directly into public cloud. It will need peak resources, in a redundant configuration, running 24×7, because that is how it was designed. Yet public cloud benefits from a very different model. Running an application in a redundant configuration at peak capacity 24×7 leads to an average of 4x the cost of the traditional data center.

This is the equivalent of renting a car every day for a full year whether you need it or not. Used this way, the shared model comes at a premium.
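
A quick back-of-the-envelope calculation shows where a figure like 4x can come from. The hourly rate and instance counts below are illustrative assumptions, not measured data; the point is the ratio between an always-on, redundant, peak-sized deployment and one sized for actual average demand.

```python
# Illustrative arithmetic only: made-up rates and instance counts to show the ratio.
HOURLY_RATE = 0.50          # assumed cost per instance-hour
HOURS_PER_YEAR = 24 * 365

# Legacy-thinking deployment: sized for peak (8 instances), doubled for redundancy,
# running around the clock whether the capacity is needed or not.
peak_instances = 8
legacy_cost = peak_instances * 2 * HOURS_PER_YEAR * HOURLY_RATE

# Refactored deployment: averages 4 instances because it scales with actual demand
# and lets the platform, rather than idle hardware, provide resilience.
average_instances = 4
refactored_cost = average_instances * HOURS_PER_YEAR * HOURLY_RATE

print(f"Legacy-style in cloud: ${legacy_cost:,.0f}/year")
print(f"Refactored for cloud:  ${refactored_cost:,.0f}/year")
print(f"Ratio: {legacy_cost / refactored_cost:.1f}x")
```

With these assumptions, the legacy-style deployment pays for 16 instance-equivalents around the clock while the refactored one averages 4, which is exactly the kind of gap the 4x figure describes.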

THE SOLUTION IS IN PLANNING

Is this the best way to leverage public cloud services? Knowing the details of what to expect leads one to a different approach. Can public cloud benefit corporate enterprise applications? Yes. Does it need planning and refactoring? Yes.

By refactoring applications to leverage the benefits of public cloud rather than assume legacy thinking, public cloud has the potential to be less expensive than traditional approaches. Obviously, each application will have different requirements and therefore different outcomes.

The point is to shed legacy thinking and understand where public cloud fits best. Public cloud is not the right solution for every workload. For those applications that will benefit from public cloud, understand what changes are needed before making the move.

OTHER REASONS

There are other reasons that enterprises exit public cloud services beyond just cost. Those may include:

  1. Scale: Either due to cost or significant scale, enterprises may find that they are able to support applications within their own infrastructure.
  2. Regulatory/ Compliance: Enterprises may use test data with applications but then move the application back to corporate data centers when shifting into production with regulated data. Or compliance requirements may force the need to have data resources local to maintain compliance. Sovereignty issues also drive decisions in this space.
  3. Latency: There are situations where public cloud may be great on paper, but in real-life latency presents a significant challenge. Remote and time-sensitive applications are good examples.
  4. Use-case: The last catch-all is where applications have specific use-cases where public cloud is great in theory, but not the best solution in practice. Remember that public cloud is a general-purpose infrastructure. As an example, there are application use-cases that need fine-tuning that public cloud is not able to support. Other use-cases may not support public cloud in production either.

The bottom line is to fully understand your requirements, think ahead and do your homework. Enterprises have successfully moved traditional corporate applications to public cloud…even those with significant regulatory & compliance requirements. The challenge is to shed legacy thinking and consider where and how best to leverage public cloud for each application.