Business · Cloud

Riverbed extends into the cloud

One of the most critical, yet often overlooked, components in a system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were colocated in the same data center, or spread across multiple data centers over dedicated network connections that the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems that the enterprise does not control. This lack of control provides a new challenge to network management.

In addition to applications moving, so does the data. As applications and data move beyond the bounds of the enterprise data center, so does the need to address increasingly dispersed network performance requirements. The question is: How do you address network performance management when you no longer control the underlying systems and network infrastructure components?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day’s Cloud Field Day 3, I had the chance to meet up with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that would sit at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to how they address the network performance management problem in the cloud. Riverbed gives the enterprise the option to run their software as either a ‘sidecar’ to the application or as part of the cloud-based container.

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets along with cloud. The first is to look at extending the existing data center so that the cloud is viewed as simply another data center. The second approach is to change the perspective where the constraints are reduced to the application…or better yet service level. The latter is a construct that is typical in cloud-native applications.

Today, Riverbed has taken the former approach. They view the cloud as another data center in your network. To this point, Riverbed’s SteelFusion product works as if the cloud is another data center in the network. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises are taking a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements. This shows up in how enterprises are embracing a multi-cloud strategy. Unfortunately, consolidation of cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address the varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their SteelHead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed is already supporting multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure. On AWS, SteelConnect Manager can run within an AWS VPC.

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer changes to the developer, and possibly the business user in some cases. These new personas are looking for quick access to resources and tools in an easy-to-consume way, much like how existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience is really what matters, not network performance. In Riverbed parlance, they call this End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider what serves as their North Star.

As enterprises embrace cloud-based architectures more fully, so will the need for Riverbed to evolve the model that drives their product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise

CIO · Cloud · Data

Why are enterprises moving away from public cloud?

We often hear of enterprises that move applications from their corporate data center to public cloud. This may come in the form of lift and shift. But then something happens that causes the enterprise to move the application back out of public cloud. This yo-yo effect and the related consequences create ongoing challenges that contribute to several of the items listed in Eight ways enterprises struggle with public cloud.

In order to better understand the problem, we need to work backwards to the root cause…and that often starts with the symptoms. For most, it starts with costs.

UNDERSTANDING THE ECONOMICS

The number one reason why enterprises pull workloads back out of cloud has to do with economics. For public cloud, it comes in the form of a monthly bill for public cloud services. In the post referenced above, I refer to a cost differential of 4x. That is to say that public cloud services cost 4x the corporate data center alternative for the same services. These calculations include fully-loaded total cost of ownership (TCO) numbers on both sides over a period of years to normalize capital costs.

4x is a startling number and seems to fly in the face of a generally held belief that cloud computing is less expensive than the equivalent on-premises corporate data center. Does this mean that public cloud is not less expensive? Yes and no.

THE IMPACT OF LEGACY THINKING

In order to break down the 4x number, one has to understand that legacy thinking heavily influences it. While many view public cloud as less expensive, they often compare apples to oranges when comparing public cloud to corporate data centers. And many do not consider the fully-loaded corporate data center costs that include server, network and storage…along with power, cooling, space, administrative overhead, management, real estate, etc. Unfortunately, many of these corporate data center costs are not exposed to the CIO and IT staff. For example, do you know how much power your data center consumes or what its real estate costs? Few IT folks do.

There are five components that influence legacy thinking:

  1. 24×7 Availability: Most corporate data centers and systems are built around 24×7 availability. There is a significant amount of data center architecture that goes into the data center facility and systems to support this expectation.
  2. Peak Utilization: Corporate data center systems are built for peak utilization whether they use it regularly or not. This unused capacity sits idle and is only consumed at peak times.
  3. Redundancy: Corporate infrastructure from the power subsystems to power supplies to the disk drives is designed for redundancy. There is redundancy within each level of data center systems. If there is a hardware failure, the application ideally will not know it.
  4. Automation & Orchestration: Corporate applications are not designed with automation & orchestration in mind. Applications are often installed on specific infrastructure and left to run.
  5. Application Intelligence: Applications assume that availability is left to other systems to manage. Infrastructure manages the redundancy and architecture design manages the scale.

Now take a corporate application built with this legacy thinking and move it directly into public cloud. It will need peak resources in a redundant configuration running 24×7. That is how it was designed. Yet public cloud benefits from a very different model. Running an application in a redundant configuration at peak 24×7 leads to an average of 4x in costs over traditional data center costs.

This is the equivalent of renting a car every day for a full year whether you need it or not. Used this way, the shared model comes at a premium.
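To make that effect concrete, here is a back-of-the-envelope sketch. The instance counts, hourly rate and peak-hour estimate below are entirely hypothetical, chosen for illustration; the point is simply that cloud cost scales with instances × hours, so an always-on, peak-sized footprint pays for capacity it rarely uses.

```python
# Hypothetical illustration of the cost gap. All rates and counts are
# made up for the sketch -- not actual cloud or data center pricing.

HOURS_PER_MONTH = 730

def monthly_cost(instances, hourly_rate, hours):
    """Cost of running `instances` servers for `hours` hours each."""
    return instances * hourly_rate * hours

# Legacy pattern moved as-is to cloud: peak-sized, redundant, 24x7.
lift_and_shift = monthly_cost(8, 0.50, HOURS_PER_MONTH)

# Refactored pattern: a small always-on baseline plus burst capacity
# rented only during peak hours (~100 hours/month assumed here).
refactored = monthly_cost(1, 0.50, HOURS_PER_MONTH) + monthly_cost(7, 0.50, 100)

print(f"lift-and-shift: ${lift_and_shift:,.0f}/month")        # $2,920/month
print(f"refactored:     ${refactored:,.0f}/month")            # $715/month
print(f"ratio:          {lift_and_shift / refactored:.1f}x")  # ~4.1x
```

With these illustrative numbers, the unchanged application costs roughly four times the right-sized one, which mirrors the 4x differential discussed above.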

THE SOLUTION IS IN PLANNING

Is this the best way to leverage public cloud services? Knowing the details of what to expect leads one to a different approach. Can public cloud benefit corporate enterprise applications? Yes. Does it need planning and refactoring? Yes.

By refactoring applications to leverage the benefits of public cloud rather than assume legacy thinking, public cloud has the potential to be less expensive than traditional approaches. Obviously, each application will have different requirements and therefore different outcomes.

The point is to shed legacy thinking and understand where public cloud fits best. Public cloud is not the right solution for every workload. For those applications that will benefit from public cloud, understand what changes are needed before making the move.

OTHER REASONS

There are other reasons that enterprises exit public cloud services beyond just cost. Those may include:

  1. Scale: Either due to cost or significant scale, enterprises may find that they are able to support applications within their own infrastructure.
  2. Regulatory/Compliance: Enterprises may use test data with applications but then move the application back to corporate data centers when shifting into production with regulated data. Or compliance requirements may force the need to have data resources local to maintain compliance. Sovereignty issues also drive decisions in this space.
  3. Latency: There are situations where public cloud may be great on paper, but in real-life latency presents a significant challenge. Remote and time-sensitive applications are good examples.
  4. Use-case: The last catch-all is where applications have specific use-cases where public cloud is great in theory, but not the best solution in practice. Remember that public cloud is a general-purpose infrastructure. As an example, there are application use-cases that need fine-tuning that public cloud is not able to support. Other use-cases may not support public cloud in production either.

The bottom line is to fully understand your requirements, think ahead and do your homework. Enterprises have successfully moved traditional corporate applications to public cloud…even those with significant regulatory & compliance requirements. The challenge is to shed legacy thinking and consider where and how best to leverage public cloud for each application.

Business · Cloud · Data

Amazon drives cloud innovation toward the enterprise

 

Amazon continues to drive forward with innovation at a blistering pace. At their annual re:Invent confab, Amazon announced dozens of products to an audience of over 30,000 attendees. There are plenty of newsworthy posts outlining the specific announcements including Amazon’s own re:Invent website. However, there are several announcements that specifically address the growing enterprise demand for cloud computing resources.

INNOVATION AT A RAPID SCALE

One thing that stuck out at the conference was the rate at which Amazon is innovating. Amazon is innovating so fast it is often hard to keep up with the changes. On one hand, it helps Amazon check the boxes when compared against other products. On the other hand, new products like Amazon Rekognition, Polly and Lex demonstrate the level of sophistication that Amazon can bring to market beyond simple infrastructure services. Having leveraged their internal expertise in AI and machine learning, Amazon’s challenge is one of productizing the solutions.

The sheer number of new, innovative solutions is remarkable but makes it hard to keep track of the best solutions to use for different situations. In addition, it creates a bulging portfolio of services like that of its traditional corporate software competitors.

As an enterprise uses more of Amazon’s products, the fear of lock-in grows. Should this be a concern to either Amazon or potential enterprise customers? Read my post: Is the concept of enterprise lock in a red herring? Lock-in is a reality across cloud providers today, not just Amazon. Building solutions for one platform does not provide for easy migration to competing solutions. Innovation is a good thing, but it does come at a cost.

DRIVING TOWARD THE EDGE

There are two issues that challenge enterprises evaluating the potential of cloud computing. One challenge is the delivery mechanism. Not all applications are well suited for a centralized cloud-based delivery approach. There are use cases in just about every industry where computing is best suited at the edge. The concept of hybrid cloud computing is one way to address this. At re:Invent, Amazon announced Greengrass, which moves the computing capability of Amazon’s Lambda functions to a device. At the extreme, Greengrass enables the ability to embed cloud computing functions on a chip.
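Greengrass runs functions locally using the same handler-based programming model as the Lambda cloud service. As a rough sketch, the kind of function it pushes to a device looks like the following; the event shape (a sensor reading) and the threshold are hypothetical, while the `handler(event, context)` signature is the standard Lambda convention.

```python
# A minimal Lambda-style handler of the kind Greengrass can run on a
# device. The event payload here (a temperature reading) is a made-up
# example; only the handler(event, context) shape is the Lambda model.

def handler(event, context):
    """Filter a sensor reading locally; only flag anomalies."""
    reading = event.get("temperature_c")
    if reading is None:
        return {"status": "ignored"}
    if reading > 80:  # threshold chosen purely for illustration
        return {"status": "alert", "temperature_c": reading}
    return {"status": "ok"}

# Invoked locally at the edge -- no round trip to the cloud is needed
# to make this decision, which is the point of moving compute outward.
print(handler({"temperature_c": 95}, None))
```

Only the alerts would need to traverse the network, which is exactly the remote, bandwidth-constrained use case described above.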

Moving cloud functionality to the edge is one issue. A second perspective is that it signals Amazon’s acknowledgement that not all roads end with public cloud. The reality is that most industries have use cases where centralized cloud computing is simply not an option. One example, of many, is processing at a remote location. Backhauling data to the cloud for processing is not a viable option. In addition, Internet of Things (IoT) is presenting opportunities and challenges for cloud. The combination of Greengrass and, also announced, Snowball Edge extend Amazon’s reach to the edge of the computing landscape.

AS THE SNOWBALL ROLLS DOWNHILL…

As a snowball rolls downhill, it grows in size. Last year, Amazon announced the data storage onboarding appliance, Snowball. Since last year’s re:Invent, Amazon found customers were using Snowball in numbers exceeding expectations. In addition to the sheer number of Snowball devices, customers are moving larger quantities of data onto Amazon’s cloud. Keep in mind it is still faster to move large quantities of data via truck than over the wire. To address this increase in demand, Amazon drove an 18-wheeled semi-truck and trailer on stage to announce Amazon Snowmobile. While everyone thought it was a gimmick, it is quite real. Essentially, Snowmobile is a semi-trailer that houses a massive storage-focused data center. From an enterprise perspective, this addresses one of the core challenges to moving applications to cloud: how to move the data…and lots of it.

IS AMAZON READY FOR ENTERPRISE?

With the announcements made to date, is Amazon truly ready for enterprise demand? Amazon is the clear leader for public cloud services today. They squarely captured the webscale and startup markets. However, a much larger market is still relatively untapped: Enterprises. Unlike the webscale and startup markets, the enterprise market is both exponentially larger and incredibly more complex. Many of these issues are addressed in Eight ways enterprises struggle with public cloud. For any cloud provider, understanding the enterprise is the first of several steps. A second step is in providing products and services that help enterprises with the onboarding process. As an analogy: Building a beautiful highway is one thing. When you ask drivers to build their own onramps, it creates a significant hurdle to adoption. This is precisely the issue for enterprises when it comes to public cloud. Getting from here to there is not a trivial step.

To counter the enterprise challenges, Amazon is taking steps in the direction of the enterprise. First is the fundamental design of their data centers and network. Amazon understands that enterprises are looking for data center redundancy. One way they address this is by maintaining multiple data centers in each location. After learning about the thoughts and reasons behind some of their strategic decisions, it’s clear there is quite a bit of deep thinking behind them. That will bode well for enterprises. Second, Amazon announced their partnership with VMware. I addressed my thoughts on the partnership here: VMware and Amazon AWS Partnership: An Enterprise Perspective. A third step is Amazon’s AWS Migration Acceleration Program. This program is led by a former CIO and directly targets enterprises looking to adopt Amazon’s services. In addition to their internal migration organization, Amazon is building out their partner program to increase the number of partners helping enterprises migrate their applications to Amazon.

ALL ROADS DO NOT LEAD TO PUBLIC CLOUD

Even with all the work Amazon is doing to woo enterprise customers, significant challenges exist. Many assume that all roads lead to public cloud. This statement overstates the reality of how companies need to consume computing resources. There are several paths and outcomes supporting the reality of enterprise computing environments.

How Amazon addresses those concerns will directly impact their success in the enterprise market. Amazon is closing the gap, but so are competitors like Microsoft and others.

CIO · Cloud

Is public cloud more or less expensive than corporate data center options?

 

First, a shout out to both Steve Kaplan and Jeff Sussna for encouraging this post. This post is a continuation of Eight ways enterprises struggle with public cloud and delves into the reasons why public cloud can be 4x the cost of corporate data centers.

Enterprises often look toward public cloud as a cost savings measure. Cost savings is the first stage of the enterprise maturity model for leveraging cloud. The thinking is that public cloud is less expensive than corporate data center options, right? Yes and no. Enterprises are learning the hard way that public cloud is not automatically less expensive than corporate data center options. Why is that the case, and can anything be done to change the outcome?

AN ANALOG MODEL

First, it is important to understand the differences in usage behavior between enterprise applications leveraging corporate data centers versus public cloud. An analog is the difference between buying a car versus renting one. In this analogy, the car represents infrastructure. Which is better? Which is less expensive? To answer those questions, one first needs to understand usage behaviors.

Scenario A

Assume for a minute that you were accustomed to purchasing a large car. Whether you used it every day or not, it would sit, running, ready whenever needed. Some days you only need one seat, while other days, you need all five seats plus lots of luggage space. In this model, you pay for the large car whether used or not.

Scenario B

Now, imagine those assumptions, but rather than purchasing the large car, it is simply rented. The large car is rented full time, running and ready whether used or not.

In Scenario B, a premium is paid for the ability to rent the car. Yet, the advantages of 1) renting the car only when needed and 2) renting the size of car most appropriately needed are lost. The large car is rented whether needed or not.

Like the car model, enterprise applications are built around a model that assumes the large car is available 24×7, even though you may not use all of its capacity every day. Enterprise applications are accustomed to purchasing the car, not renting it. Purchasing makes sense for this behavior. Yet, when moving enterprise applications to public cloud without changing the behaviors, the advantages of shifting from owning to renting are lost.

BEHAVIOR AND ARCHITECTURE CHANGES OUTCOMES

Changing the behaviors of enterprise applications when moving to public cloud is the right answer to address this issue. However, that is easier said than done. Adding orchestration and automation within an application to leverage resources when needed and return them when done often requires a significant architectural change or a complete application re-write. Both options are significant undertakings and work against any potential cost savings from public cloud.

Adding another wrinkle to the mix is that enterprise applications are architected to assume that infrastructure is always available. That means that redundancy and resiliency are maintained in the infrastructure, not the application. Public cloud infrastructure is not built with this redundancy and resiliency. Public cloud requires the application to carry the intelligence to address infrastructure issues. Shifting intelligence into the application is yet another significant architectural challenge for enterprise applications.
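One common form of that application-carried intelligence is retrying failed calls with exponential backoff instead of assuming the infrastructure never fails. A minimal sketch follows; the function names and the use of `ConnectionError` as the transient failure are illustrative, not a specific library’s API.

```python
import random
import time

def with_retries(operation, attempts=5, base_delay=0.1):
    """Call `operation`, retrying transient failures with exponential
    backoff plus jitter -- resilience in the application, not the
    infrastructure."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            # Back off 0.1s, 0.2s, 0.4s, ... with jitter so many
            # instances do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Usage: a flaky dependency that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # prints "ok" after two transient failures
```

Patterns like this are exactly the kind of re-write work the paragraph above describes: modest in isolation, but significant when applied across an entire application portfolio.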

Note that these architectural changes bring added value beyond efficiently leveraging public cloud. Those changes include agility and responsiveness to an ever-changing business climate.

CAN ENTERPRISES STILL LEVERAGE PUBLIC CLOUD?

With these significant hurdles and those addressed in Eight ways enterprises struggle with public cloud, it is easy to see why enterprise public cloud adoption is relatively sluggish. Yet, CIOs still need to get out of the data center business and public cloud is a fine option for those applications that make sense. Between public cloud and corporate data centers, which model is ultimately better? It depends on the needs and behaviors of the applications along with the capability of the organization.

It is important to take a minute and think about the path to public cloud. It is also important to understand that not all roads lead to public cloud. Avoid the potholes, plan accordingly and leverage the benefits.

CIO · Cloud

Eight ways enterprises struggle with public cloud

The move to public cloud is not new, yet many enterprises still struggle to successfully leverage public cloud services. Public cloud services have existed for more than a decade. So, why is it that companies still struggle to effectively…and successfully…leverage public cloud? And, more importantly, what can be done, if anything, to address those challenges?

There is plenty of evidence showing the value of public cloud and its allure for the average enterprise. Most CIOs and IT leaders understand that there is potential with public cloud. That is not the fundamental problem. The issue is in how you get from here to there. Or, in IT parlance, how you migrate from current state to future state. For many CIOs, cloud plays a critical role in their digital transformation journey.

The steps a CIO must take are not as trivial as many make them out to be. The level of complexity and process is palpable and must be respected. Simply put, this is not a mindset, but rather reality. This is the very context missing from many conversations about how enterprises, and their CIOs, should leverage public cloud. Understanding and addressing the challenges provides a clearer path to success.

THE LIST OF CHALLENGES

Looking across a large cross-section of enterprises, several patterns start to appear. It seems that there are eight core reasons why enterprises struggle to successfully adopt and leverage public cloud.

  1. FUD: Fear, Uncertainty and Doubt still ranks high among the list of issues with public cloud…and cloud in general. For the enterprise, there is value, but also risk with public cloud. Industry-wide, there is plenty of noise and fluff that further confuses the issues and opportunities.
  2. % of Shovel Ready Apps: In the average enterprise, only 10-20% of an IT organization’s budget (and effort) is put toward new development. There are many reasons for this. However, it further limits the initial opportunity for public cloud experimentation.
  3. Cost: There is plenty of talk about how public cloud is less costly than traditional corporate data center infrastructure. However, the truth is that public cloud is 4x the cost of running the same application within the corporate data center. Yes, 4x…and that considers a fully-loaded corporate data center cost. Even so, the reasons in this list contribute to the 4x factor and therefore can be mitigated.
  4. Automation & Orchestration: Corporate enterprise applications were never designed to accommodate automation and orchestration. In many cases, the effort to change an application may range from requiring significant changes to a wholesale re-write of the application.
  5. Architectural Differences: In addition to a lack of automation & orchestration support, corporate enterprise applications are architected so that redundancy lies in the infrastructure tiers, not the application. The application assumes that the infrastructure is available 24×7, regardless of whether it is needed for 24 hours or 5 minutes. This model flies in the face of how public cloud works.
  6. Cultural impact: Culturally, many corporate IT folks work under the assumption that the application (and the infrastructure it runs on) is just down the hall in the corporate data center. Infrastructure teams are accustomed to managing the corporate data center and the infrastructure that supports the corporate enterprise applications. Moving to a public cloud infrastructure requires changes in how the CIO leads and how IT teams operate.
  7. Competing Priorities: Even if there is good reason and ROI to move an application or service to public cloud, it still must run the gauntlet of competing priorities. Many times, those priorities are set by others outside of the CIO’s organization. Remember that there is only a finite amount of budget and resources to go around.
  8. Directives: Probably one of the scariest things I have heard is a board of directors dictating that a CIO must move to cloud. Think about this for a minute. You have an executive board dictating technology direction. Even if it is the right direction to take, it highlights other issues in the executive leadership ranks.

Overall, one can see how each of these eight items is intertwined with the others. Start to work on one issue and it may address another.

UNDERSTANDING THE RAMIFICATIONS

The bottom line is that, as CIO, even if I agree that public cloud provides significant value, there are many challenges that must be addressed. Aside from FUD and the few IT leaders that still think cloud is a fad that will pass, most CIOs I know support leveraging cloud. Again, that is not the issue. The issue is how to connect the dots to get from current state to future state.

However, not addressing the issues up front from a proactive perspective can lead to several outcomes. These outcomes are already visible in the industry today and further hinder enterprise public cloud adoption.

  1. Public Cloud Yo-Yo: Enterprises move an application to public cloud only to run into issues and then pull it back out to a corporate data center. Most often, this is due to the very issues outlined above.
  2. Public Cloud Stigma: Due to the yo-yo effect, it creates a chilling effect where corporate enterprise organizations slow or stop public cloud adoption. The reasons range from hesitation to flat out lack of understanding.

Neither of these two issues is good for enterprise public cloud adoption. Regardless, the damage is done and, considering the other issues, it pushes public cloud adoption further down the priority list. Yet, both are addressable with a bit of forethought and planning.

GETTING ENTERPRISES STARTED WITH PUBLIC CLOUD

One must understand that the devil is in the details here. While this short list of things ‘to-do’ may seem straightforward, how they are done and addressed is the key.

  1. Experiment: Experiment, experiment, experiment. The corporate IT organization needs a culture of experimentation. Experiments are meant to fail…and be learned from. Too many times, the expectation is that experiments will succeed and when they don’t, the effort is abandoned.
  2. Understand: Take some time to fully understand public cloud and how it works. Bottom line: Public cloud does not work like corporate data center infrastructure. It is often best to try and forget what you know about your internal environment to avoid preconceived assumptions.
  3. Plan: Create a plan to experiment, test, observe, learn and feed that back into the process to improve. This statement goes beyond just technology. Consider the organizational, process and cultural impacts.

WRAPPING IT UP

There is a strong pull for CIOs to get out of the data center business and reduce their corporate data center footprint. Public cloud presents a significant opportunity for corporate enterprise organizations. But before jumping into the deep end, take some time to understand the issues and plan accordingly. The difference will impact the success of the organization, the speed of adoption and the opportunities for the larger business.

Further Reading…

The enterprise view of cloud, specifically public cloud, is confusing

The enterprise CIO is moving to a consumption-first paradigm

The three modes of enterprise cloud applications

CIO · Cloud

VMware and Amazon AWS Partnership: An Enterprise Perspective

 

Much has already been written about the VMware and Amazon AWS Partnership that was announced October 13th, 2016 and the potential opportunities for enterprises. Unfortunately, many of the core issues for enterprises are either being glossed over or simply not addressed.

FUNDAMENTALS OF THE PARTNERSHIP

In a nutshell, the partnership provides clients with the ability to run VMware-based VMs on Amazon’s AWS infrastructure. On the surface, this is a good thing for a number of reasons. Namely, it provides enterprises with the ability to move VMs from their internal corporate data centers (and infrastructure) to another provider. In essence, this moves the organization from CapEx to OpEx and gets the enterprise out of the data center business.

But if it were that simple, why isn’t everyone already doing it? Amazon’s offering is not the first of its kind. There are many other options for enterprises looking to get out of the data center business by moving their infrastructure and services to another provider. In addition to colocation options, companies like IBM and smaller, private firms have offered the ability to host VMware-based VMs for years now. So, is the AWS/VMware partnership different? And what is holding back progress?

UNDERSTANDING THE CHALLENGES

To answer that question, one needs to first understand the enterprise perspective. While moving a VM from one infrastructure to another may seem relatively simple on the surface, there are indeed many challenges in doing so. Many often gloss over these issues, but to the vast majority of enterprises, they are reality. It is not a reality that one can simply will away.

The fundamental issues left unaddressed fall into several categories:

  1. Pricing Model: One of the core issues enterprises have with VMware is their pricing. VMware is a fabulous solution that is mature and solid. But it is also very expensive. What is unclear is if the move to AWS will reduce costs, keep them the same or increase costs. Yes, all three of those are possible without fully understanding the ramifications.
  2. Ancillary Services: Most enterprise applications do not live in a silo. They interconnect with many other applications or services. Making those connections outside the confines of the corporate data center is not a trivial feat.
  3. 3rd Party Connections: One of the benefits of running your own corporate data center is that you can do things your own way. Moving to a shared infrastructure presents a number of new challenges for enterprise applications and processes.
  4. VM Management: Management of VMs is a core function within a VMware based infrastructure. This will change when you consider multiple locations (corporate data center vs AWS infrastructure). It is also unclear how much control the enterprise will maintain.
  5. Performance/ Bandwidth: Moving an enterprise application (VM or otherwise) to cloud is not trivial. In addition to the items listed above, bandwidth performance and cost will come into play, as will the performance of the infrastructure the VMs reside on. Within the corporate data center, an enterprise has far more control and visibility into infrastructure issues. Moving to an AWS-based model introduces many other variables.
  6. Security Constraints: Many enterprise applications are not built to live outside of the corporate data center or the protection of its security perimeter. Shifting that to Amazon brings into play a number of questions that have yet to be answered.
  7. AWS Halo Effect: Having access to public cloud infrastructure (AWS) in the same facility as your VM-based infrastructure offers a number of advantages to applications that integrate between the two worlds. The question is how many enterprise applications will take advantage of this.

 

HOW DOES THIS BENEFIT VMWARE?

In many ways, the partnership provides little benefit to VMware and its customers. There are still many questions that enterprises must answer before making the change. In addition, there are alternative approaches and offerings that may provide an advantage over the AWS offering for VM-based workloads.

HOW DOES THIS BENEFIT AMAZON?

The partnership benefits Amazon in several ways. At the most fundamental level, they now provide a solution to ingest the massive footprint of VMware-based workloads onto their infrastructure. That becomes valuable, not because of the ties to AWS, but rather because of the fundamental weight that the data brings. The switching costs to move from one infrastructure to another are huge, and as data sets grow, the problem will only get worse over time.

THE DEVIL IS IN THE DETAILS

In sum, is this a good or bad deal for enterprises? To answer that question, one has to do a bit of homework and understand one's situation clearly. The devil is in the details. It is not as simple as waving a wand and declaring the partnership a sure thing that is good for most enterprises. The world is not that simple. And neither are the enterprises that we’re talking about.

However, based on what we know today, the partnership appears to be okay on the surface. It is not something new and offers little to the vast majority of enterprises and their workloads. Compare that against the risks and outstanding questions…and the pendulum likely swings negative. As more details come out and issues are addressed, we may see the opportunity shift yet again.

In the meantime, I would expect that current VMware based enterprise workloads will continue to reside within the corporate data center. You can read more about how CIOs are getting out of the data center business in other posts I’ve written on this site.

CIO · Cloud

The opportunities for enterprise and SMB cloud open up

Companies are getting out of the data center business in droves. While much of this demand is headed toward colocation facilities, there is a growing movement to cloud-based providers. A company’s move to cloud is a journey that spans a number of different modes over a period of time.

[Figure: the data center to cloud spectrum]

The workloads of a company do not reside in only one state at a time. They are often spread across most, if not all, of the states at the same time. However, the general direction for workloads is a transition from left to right.

One of the bigger challenges in this journey was a missing piece: namely, Private Cloud. Historically, the only way to build a private cloud was to do it yourself. That was easier said than done. The level of complexity compared with the value was simply not there. Fast forward a few years and there are now two core solutions on the market that fill this gap. They are:

  • Blue Box Local (Blue Box/ IBM)
  • Microsoft Azure Stack

While there are some similarities at a contextual level, the underlying technology is quite different from a user perspective. Blue Box Local is based on OpenStack while Azure Stack is based on Azure. Even so, both solutions provide continuity between their private cloud versions and their respective larger public cloud offerings.

Now that demand has reached critical mass, the supply-side solutions (providers) are finally starting to mature with solutions that solve core needs for most enterprise and small-medium business (SMB) customers.

From the provider perspective, the lack of critical mass served as an inhibitor to investment. Data center and cloud-based solutions are expensive to build. Without sufficient demand, providers might be left with assets sitting relatively idle, a costly prospect for many considering a move into the cloud provider space.

Today, the economics are quite different. Not only are a large number of companies moving to colocation and cloud-based solutions, but they are also looking for solutions that support varied and ever-changing use cases. As such, the solutions need to provide the support and flexibility to adapt to this changing climate.

With the recent announcement of Azure Stack’s Technical Preview, a very interesting opportunity has opened for MSPs looking to offer a cloud-based solution to customers. At this point, there are really only two companies that truly provide hybrid cloud solutions for both on-premises and public cloud. Those are Blue Box/ IBM and Microsoft Azure.

THE MSP MARKET IS HUGE

When discussing cloud, much is said about the enterprise and startup/ web-scale markets, with little mentioned regarding the Small-Medium Business (SMB) players. Yet, the SMB players make up a significant portion of the market. Why the omission? The further down the scale, the harder those clients are to reach. In most cases, SMB clients will leverage Managed Service Providers (MSPs).

For MSPs, the offerings were equally challenging to leverage in a meaningful way. Those days are quickly changing.

FOUR CORE OFFERINGS WITH VARIATIONS

Today, there are four core ways that cloud is offered in the service provider market.

  • Bare Metal: In this case, the client is essentially leasing hardware that is managed by the service provider. It does allow the client to bring their own software and licenses, but it also brings an added level of management responsibility.
  • OpenStack: OpenStack provides a myriad of different options using an open source software platform. The challenge is in the client's ability to truly support OpenStack. Contrary to what some may think, open source does not equate to free. However, there are solutions (like Blue Box/ IBM) that provide a commercial version for hybrid environments.
  • Azure: Azure comes in two flavors today that provide flexibility between on-premises and public cloud requirements. The former is served by Azure Stack while the latter is core Azure.
  • VMware: VMware provides the greatest functionality for existing enterprise environments. In this model, companies are able to move their existing VMs to a provider's VMware-based platform. Many of the companies that provide solutions in this space come from the colocation world and include QTS and PhoenixNAP.

These four categories are a quick simplification of how the different solutions map out. There are many variations of each, which makes comparison within a single category, let alone across categories, difficult.

TWO DELIVERY MODELS

MSPs looking to deliver cloud-based solutions were relegated to two options: 1) roll your own solution, or 2) try to leverage an existing hosting offering and layer your services on top. Neither was particularly appealing to MSPs. Today, two core delivery options are shaping up:

  • Single Tenant: In this model, an MSP stands up a cloud solution specifically for a given client. Each client has their own instance that the MSP manages. In many ways, this is really just a simple hosted model of cloud.
  • Multi Tenant: In this model, there is a single instance of a cloud solution that the MSP manages. However, it is shared across many clients.

There are challenges and opportunities with both approaches, and the choice needs to match the capabilities of both the MSP and their clients.

As you start to map your cloud journey, the road and onramps are starting to shape up nicely. That is true for both MSPs and enterprise clients alike. There could not be a better time for enterprises to engage with cloud in a holistic way.