Business · Cloud

Riverbed extends into the cloud


One of the most critical, yet often overlooked, components of a system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were collocated in the same data center, or spread across multiple data centers connected by dedicated network links that the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems that the enterprise does not control. This lack of control presents a new challenge for network management.

It is not just applications that move; data moves as well. As applications and data move beyond the bounds of the enterprise data center, network performance requirements become increasingly dispersed. The question is: How do you manage network performance when you no longer control the underlying systems and network infrastructure?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day's Cloud Field Day 3, I had the chance to meet up with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that would sit at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this the fact that application services are becoming increasingly ephemeral, and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to network performance management in the cloud. The enterprise has the option to run Riverbed's software either as a 'sidecar' to the application or as part of the cloud-based container.
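
To make the sidecar idea concrete, here is a minimal, hypothetical sketch of the deployment pattern using the Kubernetes Python client. The image names (`example/my-app`, `example/netperf`) are placeholders, not Riverbed products; the point is simply that a monitoring agent runs as a second container alongside the application container, sharing the pod's network.

```python
# Minimal sketch of the sidecar deployment pattern (hypothetical images).
# The network-performance agent runs as a second container in the same pod,
# so it sees the application's traffic without changes to the app itself.
from kubernetes import client, config

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "app-with-netperf-sidecar"},
    "spec": {
        "containers": [
            {"name": "app", "image": "example/my-app:1.0"},            # the application
            {"name": "netperf-agent", "image": "example/netperf:1.0"}  # monitoring sidecar
        ]
    },
}

if __name__ == "__main__":
    config.load_kube_config()                      # use local kubeconfig credentials
    api = client.CoreV1Api()
    api.create_namespaced_pod(namespace="default", body=pod_manifest)
```

The design choice worth noting is that the agent rides along with each application instance, so it follows the workload wherever it runs rather than sitting at a fixed network egress point.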

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets along with cloud. The first is to extend the existing data center so that the cloud is viewed as simply another data center. The second is to shift the perspective so that the boundary is drawn at the application, or better yet, the service level. The latter is a construct that is typical in cloud-native applications.

Today, Riverbed has taken the former approach: they view the cloud as simply another data center in your network. Riverbed's SteelFusion product, for example, treats the cloud exactly this way. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises are taking a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements. This shows up in how enterprises are embracing a multi-cloud strategy. Unfortunately, consolidation of cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their SteelHead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed already supports multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure. On AWS, SteelConnect Manager can run within an AWS Virtual Private Cloud (VPC).

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer shifts to the developer and, in some cases, the business user. These new personas are looking for quick access to resources and tools in an easy-to-consume way, very similar to how existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience is really what matters, not network performance. In Riverbed parlance, they call this End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider what serves as their North Star.

As enterprises embrace cloud-based architectures more fully, Riverbed will need to evolve the model that drives its product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but still have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise

Business · Cloud

Oracle works toward capturing enterprise Cloud IaaS demand


The enterprise cloud market still shows widely untapped potential. A significant portion of this potential comes from the demand generated by the legacy applications sitting in the myriad of corporate data centers. The footprint from these legacy workloads alone is staggering. Start adding in the workloads that sit in secondary data centers, which often do not get included in many metrics, and one can quickly see the opportunity.

ORACLE STARTS FROM THE GROUND UP

At Tech Field Day's Cloud Field Day 3, I had the opportunity to meet with the team from Oracle Cloud Infrastructure to discuss their Infrastructure as a Service (IaaS) cloud portfolio. Oracle is trying to attract the current Oracle customer to their cloud-based offerings. Their offerings range from IaaS up through Software as a Service (SaaS) for their core back-office business applications.

The conversation with the Oracle team was pretty rough, as it was hard to determine what, exactly, they did in the IaaS space. A number of buzzwords and concepts were thrown around without covering what the Oracle IaaS portfolio actually offered. Eventually, during a demo, a configuration page made the true offerings clear: virtual machines and bare metal. That is a good start for Oracle, but unfortunate in how it was presented. Oracle's offering is hosted infrastructure that is more similar to IBM's SoftLayer (now called IBM Cloud) than to Microsoft Azure, Amazon AWS or Google Cloud.

ORACLE DATABASE AS A SERVICE

Beyond just the hardware, applications are one of the strengths of Oracle's enterprise offerings. And a core piece of the puzzle has always been their database. One of the highlights from the conversation was their Database as a Service (DBaaS) offering. For enterprises that use Oracle DB, that dependency is a core sticking point keeping their applications firmly planted in the corporate data center. With the Oracle DBaaS offering, enterprises have the ability to move workloads to a cloud-based infrastructure without losing fidelity in the Oracle DB offering.

Digging deeper into the details, there were a couple of interesting capabilities supported by Oracle's DBaaS. A very cool feature was the ability to dynamically change the number of CPUs allocated to a database without taking an outage. This provides the ability to scale DB capacity up and down, as needed, without impacting application performance.

Now, it should be noted that while the thought of a hosted Oracle DB sounds good on paper, the actual migration will be complicated for any enterprise. That is less a statement about Oracle than a reflection of the fact that enterprise application workloads are a complicated web of interconnects and integrations. Not surprisingly, Oracle mentioned that the most common use-case driving legacy footprints to Oracle Cloud is the DB. This shows how much pent-up demand there is to move even the most complicated workloads to cloud. Today, Oracle's DB offering runs on Oracle Cloud Infrastructure (OCI), and it was mentioned that the other Oracle Cloud offerings are moving to run on OCI as well.

Another use-case mentioned was that of High-Performance Computing (HPC). HPC environments need large scale and low latency. Both are positive factors for Oracle’s hardware designs.

While these are two good use-cases, Oracle will need to do things that attract a broader base of use-cases moving forward.

THE CIO PERSPECTIVE

Overall, there are some glimmers of light coming from the Oracle Cloud offering. However, it is hard to pin down the true differentiators. Granted, Oracle is playing a bit of catch-up compared with other, more mature cloud-based offerings.

The true value appears to be focused on existing Oracle customers that are looking to make a quick move to cloud. If true, and the two fundamental use-cases are DBaaS and HPC, that is a fairly limited pool of customers given the significant potential still sitting in the corporate data center.

It will be interesting to see how Oracle evolves their IaaS messaging and portfolio to broaden the use-cases and provide fundamental services that other cloud solutions have offered for years. Oracle does have the resources to put a lot of effort toward making a bigger impact. Right now, however, it appears that the Oracle Cloud offering is mainly geared for existing Oracle customers with specific use-cases.

Business · Cloud · Data

Delphix smartly reduces the friction to access data


Today's CIO is looking for ways to unlock the potential in their company's data. We have heard the phrase that data is the new oil. Except that data, like oil, is just a raw material. We need to refine it into a finished good, which is where the value ultimately resides.

At the same time, enterprises are concerned with regulatory and compliance requirements to protect data. Recent data breaches at globally-recognized companies have raised concern around data privacy. Historically, the financial services and healthcare industries were the ones to watch when it came to regulatory and compliance requirements. Today, the regulatory net is widening with the EU's General Data Protection Regulation (GDPR), the US Government's FedRAMP and the NY State DFS Cybersecurity Requirements.

Creating greater access to data and protecting data while staying in compliance sit at opposite ends of the privacy and cybersecurity spectrum. Add to this the interest in moving data to cloud-based solutions, and one can quickly see why this is one of the core challenges for today's CIO.

DELPHIX REDUCES THE FRICTION TO DATA ACCESS

At Tech Field Day’s Cloud Field Day 3, I had the opportunity to meet with the team from Delphix.

Fundamentally, Delphix is a cloud-based data management platform that helps enterprises reduce the friction to data access through automation of data management. Today, one-third of Fortune 500 companies use Delphix.

Going back to the core issue, users have a hunger for accessing data. However, regulatory and compliance requirements often hinder that process. Today’s methods to manage data are heavily manual and somewhat archaic compared with solutions like Delphix.

Delphix's approach is to package the data into what they call a Data Pod. Unlike most approaches, which mask data when it is shared, Delphix masks the data during the intake process. The benefit of this approach is that it removes the risk of accidentally sharing protected data.
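
As a thought experiment (not Delphix's actual implementation), the sketch below illustrates why masking at intake matters: once sensitive fields are tokenized on the way in, every downstream copy is already safe, whereas mask-on-share leaves raw values in the platform and depends on remembering to mask at every hand-off. The field names and masking rule are hypothetical.

```python
# Illustrative only: masking applied once at intake, so all downstream
# copies and replicas inherit the masked values. Not Delphix's API.
import hashlib

SENSITIVE_FIELDS = {"ssn", "email"}   # hypothetical masking policy

def mask_value(value: str) -> str:
    """Deterministically tokenize a sensitive value (preserves referential integrity)."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def ingest(record: dict) -> dict:
    """Mask sensitive fields at intake; raw values never enter the platform."""
    return {k: mask_value(v) if k in SENSITIVE_FIELDS else v for k, v in record.items()}

def share(record: dict) -> dict:
    """Sharing is now a plain copy; there is nothing sensitive left to leak."""
    return dict(record)

raw = {"id": 42, "email": "jane@example.com", "ssn": "123-45-6789", "region": "EU"}
pod = ingest(raw)      # masked once, at intake
print(share(pod))      # every downstream consumer sees only masked data
```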

In terms of sharing data, one clever part of the Delphix Dynamic Data Platform is its ability to replicate data intelligently. Considering that Delphix works in the cloud, this is a key aspect of avoiding unnecessary cost. Otherwise, enterprises would see a significant uptick in data storage as masked data is replicated to the various users. Beyond structured, transactional data, Delphix is also able to manage (and mask) databases, along with unstructured data and files.

THE CIO PERSPECTIVE

From the CIO perspective, Delphix appears to address an increasingly complicated space with a clever, yet simple approach. The three key takeaways are: a) the ability to mask data (databases, unstructured data, files) at intake rather than when pulling copies, b) the ability to replicate data intelligently and c) the potential to manage data management policies. Lastly, this is not a solution that must run in the corporate data center; Delphix supports running in public cloud services including Microsoft Azure and Amazon AWS.

In summary, Delphix appears to have decreased the friction to data access by automating data protection and management processes, all while supporting an enterprise's move to cloud-based resources.

Cloud

Four expectations for AWS re:Invent


This week brings Amazon Web Services' (AWS) annual re:Invent conference, where thousands will descend upon Las Vegas to learn about cloud and the latest in AWS innovations. Having attended the conference for several years now, I have noticed a number of trends that are common at an AWS event. One of those is the sheer number of products that AWS announces. Aside from that, there are a number of specific things I am looking for at this week's re:Invent conference.

ENTERPRISE ENGAGEMENT

AWS has done a stellar job of attracting the startup and web-scale markets to their platform. The enterprise market, however, has proven to be an elusive customer except for a (relatively) few case examples. This week, I am looking to see how things have changed for enterprise adoption of AWS. Has AWS found the secret sauce to engage the enterprise in earnest?

PORTFOLIO MANAGEMENT

Several years back, AWS made a big point of not being one of "those" companies with a very large portfolio of products and services. Yet, several years later, AWS has indeed become a behemoth with a portfolio of products and services a mile long. This is a great thing for customers, but it can have a few downsides too. Customers, especially enterprise customers, tend to make decisions that last longer than those of startup and web-scale customers. Therefore, service deprecation is a real concern with companies that a) do not have a major enterprise focus and b) have a very large portfolio. Unfortunately, this is where AWS is today. Yet, to date, AWS has not done much in the way of portfolio pruning.

HYBRID CLOUD SUPPORT

For the enterprise, hybrid is their reality. In the past, AWS has taken the position that hybrid is simply a way to onboard customers into AWS public cloud. Hybrid, a combination of on-premises and cloud-based resources, can indeed be a means of onboarding customers into public cloud. The question is: How is AWS evolving their thinking on hybrid cloud? In addition, how has their thinking evolved to encompass hybrid cloud from the perspective of the enterprise?

DEMOCRATIZATION OF AI & ML

Several of AWS' competitors have done a great job of democratizing artificial intelligence (AI) and machine learning (ML) tools as a means of making them more approachable. AWS was one of the first out of the gate with a strong showing of AI & ML tools a few years back. The question is: How have they evolved in the past year to make the tools more approachable for the common developer?

BONUS ROUND

As a bonus, it would be interesting if AWS announced the location of their 2nd headquarters. Will they announce it at re:Invent versus a financial analyst call? We shall see.

In summary, AWS never fails to put on a great conference with a good showing. This year should not disappoint.

Business · Cloud

One theory on Amazon interest in a second headquarters

Amazon announced that they are in search of a location for their second headquarters. The new headquarters facility is expected to create 50,000 jobs and bidders are welcome to submit their proposals to woo the Amazon opportunity. While that, in itself, sounds great, there may be more in the works than just a new headquarters. Let me share my theory on what this may indicate.

THE LOCATION SHORTLIST

First, companies like Amazon do not go into major decisions like this without already having a pretty good idea of how it will end. There is just too much at stake; in this specific case, the physical location of the second headquarters. Prior to making the announcement, I suspect Amazon had already done their due diligence and has an internal shortlist of potential locations they would accept.

When evaluating Amazon’s two core businesses, Amazon.com and Amazon Web Services (AWS), both rely heavily on technology. Therefore, a headquarters location must have a strong technology ecosystem that can support their separate growth trajectories.

While just about any major city in the US could support a new headquarters, tech-centric locations on the shortlist may include Silicon Valley, Las Vegas, Phoenix, Austin, Atlanta, New York or Boston. One outlier may be Washington DC/Virginia. Why? As Amazon continues its spectacular growth, innovation and acquisition of competitors, it will need stronger ties to government inner circles.

So, which location? My theory is that the process is more of a formality and the decision is between a couple of locations, coming down to local and state tax incentives. If true, the shortlist is a few locations shorter than the one outlined above.

IS A SPLIT ON THE HORIZON?

It is not common for companies to suggest a second 'headquarters' location. It does happen, but not often. There may be an undercurrent driving this move. Amazon has two core businesses: Amazon.com and AWS. Almost two years ago, Amazon announced that Andy Jassy would be promoted to CEO of AWS. This may be the first marker in a longer-term strategy for Amazon.

One challenge Amazon continues to face is the conflict between their core Amazon.com business and Amazon Web Services (AWS). Major customers of AWS continue to flee when Amazon.com moves into a competitive role; essentially, Amazon.com's gains are negatively impacting AWS. Walmart is just one of the latest customers to do so. In the enterprise space, prospective customers have expressed concern that AWS (historically) is not Amazon's core business; the distribution business is. Of course, in the past few years, AWS has grown significantly. However, it still presents a challenge. Splitting Amazon into two companies, with Andy Jassy taking the helm of the new AWS entity, could be the solution.

SPLIT DECISIONS

But there is a potential problem with splitting AWS from Amazon. When they operate as a combined company, Amazon is not required to disclose their significant AWS customers as they are not material in revenue to their core business. However, if the two companies were to split, this disclosure could be required and would bring focus to who AWS’ material customers are…in a very public way.

Now, if none of AWS' customers are material, or individually contribute a significant share of revenue, this issue is not relevant. However, I suspect that Amazon.com is a major consumer of AWS' services. And there may be a couple of other major customers.

If there are significant, material customers in the mix, it could present concerns among shareholders of AWS. Today, we do not have clarity on this issue due to the economic halo effect of the core Amazon.com business. Splitting the companies brings this potential issue to light…and may be the reason Amazon has not split the two companies yet.

IMPACT TO SEATTLE ECOSYSTEM

The last driver may be the Seattle ecosystem itself. Seattle is a vibrant technology metropolis that supports several major technology companies like Microsoft and Amazon. In addition, major companies like Boeing and Costco consume a significant footprint too. Big companies bring great opportunities and economic growth to communities. However, they can have a downside too. Increases in the cost of living, the risk of losing a company and a limited pool of skilled people are all risks that offset the opportunities. One can look to the SF Bay Area/Silicon Valley to see how this is playing out, how competitive it is for talent and how hard it is to relocate someone to the Bay Area.

It is probable that, with Amazon's success and growth trajectory, they may feel the Seattle ecosystem is starting to become limiting or incapable of handling the entirety of a company like Amazon, today and moving forward. If this were the case, I suspect the shortlist of potential suitors may not include Silicon Valley, New York or Boston.

MY TAKE

All that being said, my theory is that there is an impending split on the horizon for Amazon. The move of Jassy to CEO, AWS' continued growth and secondary factors point to this as a possible outcome. That, coupled with the fact that AWS has proven it can stand on its own without the core Amazon.com business, further supports the perspective.

I look forward to hearing what you think. Share your thoughts in the comments below!

CIO · Cloud · Data

Why are enterprises moving away from public cloud?


We often hear of enterprises that move applications from their corporate data center to public cloud. This may come in the form of lift and shift. But then something happens that causes the enterprise to move it out of public cloud. This yo-yo effect and the related consequences create ongoing challenges that contribute to several of the items listed in Eight ways enterprises struggle with public cloud.

In order to better understand the problem, we need to work backwards to the root cause…and that often starts with the symptoms. For most, it starts with costs.

UNDERSTANDING THE ECONOMICS

The number one reason why enterprises pull workloads back out of cloud has to do with economics. For public cloud, it comes in the form of a monthly bill for public cloud services. In the post referenced above, I refer to a cost differential of 4x. That is to say that public cloud services cost 4x the corporate data center alternative for the same services. These calculations include fully-loaded total cost of ownership (TCO) numbers on both sides over a period of years to normalize capital costs.

4x is a startling number and seems to fly in the face of a generally held belief that cloud computing is less expensive than the equivalent on-premises corporate data center. Does this mean that public cloud is not less expensive? Yes and no.

THE IMPACT OF LEGACY THINKING

In order to break down the 4x number, one has to understand how heavily legacy thinking influences it. While many view public cloud as less expensive, they often compare apples to oranges when comparing public cloud to corporate data centers. Many do not consider the fully-loaded corporate data center costs that include servers, network and storage, along with power, cooling, space, administrative overhead, management, real estate and more. Unfortunately, many of these corporate data center costs are not exposed to the CIO and IT staff. For example, do you know how much power your data center consumes or what your real estate costs are? Few IT folks do.

There are five components that influence legacy thinking:

  1. 24×7 Availability: Most corporate data centers and systems are built around 24×7 availability. There is a significant amount of data center architecture that goes into the data center facility and systems to support this expectation.
  2. Peak Utilization: Corporate data center systems are built for peak utilization whether they use it regularly or not. This capacity sits idle until it is needed at peak times.
  3. Redundancy: Corporate infrastructure from the power subsystems to power supplies to the disk drives is designed for redundancy. There is redundancy within each level of data center systems. If there is a hardware failure, the application ideally will not know it.
  4. Automation & Orchestration: Corporate applications are not designed with automation & orchestration in mind. Applications are often installed on specific infrastructure and left to run.
  5. Application Intelligence: Applications assume that availability is left to other systems to manage. Infrastructure manages the redundancy and architecture design manages the scale.

Now take a corporate application built with this legacy thinking and move it directly into public cloud. It will need peak resources in a redundant configuration running 24×7. That is how these applications are designed, yet public cloud is built around a very different model. Running an application in a redundant configuration, at peak capacity, 24×7 leads to an average of 4x in costs over traditional data center costs.

This is the equivalent of renting a car every day for a full year whether you need it or not. In this model, the convenience of the shared, rented resource comes at a premium.
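
To make the arithmetic concrete, here is a small illustrative calculation with made-up prices and utilization figures (they are hypothetical, not real provider pricing). It shows how an always-on, peak-sized, fully redundant deployment can end up at roughly the 4x multiple described above compared with a right-sized alternative.

```python
# Illustrative cost comparison with hypothetical numbers; not real pricing.
HOURS_PER_MONTH = 730
RATE_PER_INSTANCE_HOUR = 0.40           # hypothetical on-demand rate (USD)

# Legacy-thinking deployment: sized for peak, fully redundant, running 24x7.
peak_instances = 8
redundancy_factor = 2                   # duplicate stack for failover
legacy_cost = peak_instances * redundancy_factor * RATE_PER_INSTANCE_HOUR * HOURS_PER_MONTH

# Refactored deployment: autoscaled to follow demand (average ~50% of peak),
# with redundancy handled by the platform rather than a full duplicate stack.
average_utilization = 0.5
refactored_cost = peak_instances * average_utilization * RATE_PER_INSTANCE_HOUR * HOURS_PER_MONTH

print(f"Legacy-style monthly cost: ${legacy_cost:,.0f}")
print(f"Refactored monthly cost:   ${refactored_cost:,.0f}")
print(f"Cost multiple:             {legacy_cost / refactored_cost:.1f}x")
```

With these made-up numbers the legacy-style deployment comes out to roughly four times the refactored one; the exact multiple will vary by workload, but the mechanism (paying for peak, redundancy and 24×7 whether used or not) is the same.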

THE SOLUTION IS IN PLANNING

Is this the best way to leverage public cloud services? Knowing the details of what to expect leads one to a different approach. Can public cloud benefit corporate enterprise applications? Yes. Does it need planning and refactoring? Yes.

By refactoring applications to leverage the benefits of public cloud rather than assume legacy thinking, public cloud has the potential to be less expensive than traditional approaches. Obviously, each application will have different requirements and therefore different outcomes.

The point is to shed legacy thinking and understand where public cloud fits best. Public cloud is not the right solution for every workload. From those applications that will benefit from public cloud, understand what changes are needed before making the move.

OTHER REASONS

There are other reasons that enterprises exit public cloud services beyond just cost. Those may include:

  1. Scale: Either due to cost or significant scale, enterprises may find that they are able to support applications within their own infrastructure.
  2. Regulatory/Compliance: Enterprises may use test data with applications but then move the application back to corporate data centers when shifting into production with regulated data. Or compliance requirements may force data resources to stay local. Sovereignty issues also drive decisions in this space.
  3. Latency: There are situations where public cloud may be great on paper, but in real-life latency presents a significant challenge. Remote and time-sensitive applications are good examples.
  4. Use-case: The last catch-all is where applications have specific use-cases where public cloud is great in theory, but not the best solution in practice. Remember that public cloud is a general-purpose infrastructure. As an example, there are application use-cases that need fine-tuning that public cloud is not able to support. Other use-cases may not support public cloud in production either.

The bottom line is to fully understand your requirements, think ahead and do your homework. Enterprises have successfully moved traditional corporate applications to public cloud…even those with significant regulatory & compliance requirements. The challenge is to shed legacy thinking and consider where and how best to leverage public cloud for each application.

Business · Cloud · Data

Amazon drives cloud innovation toward the enterprise

 

Amazon continues to drive forward with innovation at a blistering pace. At their annual re:Invent confab, Amazon announced dozens of products to an audience of over 30,000 attendees. There are plenty of newsworthy posts outlining the specific announcements including Amazon’s own re:Invent website. However, there are several announcements that specifically address the growing enterprise demand for cloud computing resources.

INNOVATION AT A RAPID SCALE

One thing that stuck out at the conference was the rate at which Amazon is innovating. Amazon is innovating so fast it is often hard to keep up with the changes. On one hand, it helps Amazon check the boxes when compared against other products. On the other hand, new products like Amazon Rekognition, Polly and Lex demonstrate the level of sophistication that Amazon can bring to market beyond simple infrastructure services. Amazon is leveraging its internal expertise in AI and machine learning; the challenge now is productizing those capabilities.

The sheer number of new, innovative solutions is remarkable but makes it hard to keep track of the best solutions to use for different situations. In addition, it creates a bulging portfolio of services like that of its traditional corporate software competitors.

As an enterprise uses more of Amazon’s products, the fear of lock-in grows. Should this be a concern to either Amazon or potential enterprise customers? Read my post: Is the concept of enterprise lock in a red herring? Lock in is a reality across cloud providers today, not just Amazon. Building solutions for one platform does not provide for easy migration to competing solutions. Innovation is a good thing, but does come at a cost.

DRIVING TOWARD THE EDGE

There are two issues that challenge enterprises evaluating the potential of cloud computing. One challenge is the delivery mechanism. Not all applications are well suited for a centralized cloud-based delivery approach. There are use cases in just about every industry where computing is best suited at the edge. The concept of hybrid cloud computing is one way to address it. At re:Invent, Amazon announced Greengrass which moves the computing capability of Amazon’s Lambda function to a device. At the extreme, Greengrass enables the ability to embed cloud computing functions on a chip.
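
For context, Greengrass runs Lambda-style functions locally on a device. Below is a minimal sketch of the kind of function involved; the sensor payload and threshold are hypothetical, but the handler signature is the standard Lambda form, and the point is that the same event-driven model moves from the cloud onto the device.

```python
# Minimal sketch of a Lambda-style function that could run at the edge.
# The sensor payload and threshold are hypothetical; the idea is that the
# same event-driven handler model runs locally on the device.
import json

TEMP_THRESHOLD_C = 80.0

def handler(event, context):
    """Process a local sensor reading without backhauling raw data to the cloud."""
    reading = float(event.get("temperature_c", 0.0))
    alert = reading > TEMP_THRESHOLD_C
    # Only the small, summarized result would need to leave the device.
    return json.dumps({"temperature_c": reading, "alert": alert})

if __name__ == "__main__":
    # Local smoke test with a made-up event.
    print(handler({"temperature_c": 82.5}, None))
```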

Moving cloud functionality to the edge is one part of the story. A second perspective is that it signals Amazon's acknowledgement that not all roads end with public cloud. The reality is that most industries have use cases where centralized cloud computing is simply not an option. One example, of many, is processing at a remote location, where backhauling data to the cloud is not viable. In addition, the Internet of Things (IoT) is presenting both opportunities and challenges for cloud. The combination of Greengrass and the also-announced Snowball Edge extends Amazon's reach to the edge of the computing landscape.

AS THE SNOWBALL ROLLS DOWNHILL…

As a snowball rolls downhill, it grows in size. Last year, Amazon announced the data storage onboarding appliance, Snowball. Since last year’s re:Invent, Amazon found customers were using Snowball in numbers exceeding expectations. In addition to the sheer number of Snowball devices, customers are moving larger quantities of data onto Amazon’s cloud. Keep in mind it is still faster to move large quantities of data via truck than over the wire. To address this increase in demand, Amazon drove an 18-wheeled semi-truck and trailer on stage to announce Amazon Snowmobile. While everyone thought it was a gimmick, it is quite real. Essentially, Snowmobile is a semi-trailer that houses a massive storage-focused data center. From an enterprise perspective, this addresses one of the core challenges to moving applications to cloud: how to move the data…and lots of it.

IS AMAZON READY FOR ENTERPRISE?

With the announcements made to date, is Amazon truly ready for enterprise demand? Amazon is the clear leader for public cloud services today. They squarely captured the webscale and startup markets. However, a much larger market is still relatively untapped: Enterprises. Unlike the webscale and startup markets, the enterprise market is both exponentially larger and incredibly more complex. Many of these issues are addressed in Eight ways enterprises struggle with public cloud. For any cloud provider, understanding the enterprise is the first of several steps. A second step is in providing products and services that help enterprises with the onboarding process. As an analogy: Building a beautiful highway is one thing. When you ask drivers to build their own onramps, it creates a significant hurdle to adoption. This is precisely the issue for enterprises when it comes to public cloud. Getting from here to there is not a trivial step.


To counter the enterprise challenges, Amazon is taking steps in the direction of the enterprise. First is the fundamental design of their data centers and network. Amazon understands that enterprises are looking for data center redundancy; one way they address this is by maintaining multiple data centers in each location. After learning about the reasoning behind some of their strategic decisions, it is clear that quite a bit of deep thinking goes into them. That will bode well for enterprises. Second, Amazon announced their partnership with VMware. I addressed my thoughts on the partnership here: VMware and Amazon AWS Partnership: An Enterprise Perspective. A third step is Amazon's AWS Migration Acceleration Program. This program is led by a former CIO and directly targets enterprises looking to adopt Amazon's services. In addition to their internal migration organization, Amazon is building out their partner program to increase the number of partners helping enterprises migrate their applications to Amazon.

ALL ROADS DO NOT LEAD TO PUBLIC CLOUD

Even with all the work Amazon is doing to woo enterprise customers, significant challenges exist. Many assume that all roads lead to public cloud. This statement overstates the reality of how companies need to consume computing resources. There are several paths and outcomes supporting the reality of enterprise computing environments.

How Amazon addresses those concerns will directly impact their success in the enterprise market. Amazon is closing the gap, but so are competitors like Microsoft and others.