
Riverbed extends into the cloud


One of the most critical, but often overlooked, components of any system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were colocated in the same data center, or spread across multiple data centers connected by dedicated network links that the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems that the enterprise does not control. This lack of control presents a new challenge for network management.

And it is not just the applications that move; the data moves with them. As applications and data move beyond the bounds of the enterprise data center, network performance requirements become increasingly dispersed. The question is: How do you address network performance management when you no longer control the underlying systems and network infrastructure?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day’s Cloud Field Day 3, I had the chance to meet up with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that would sit at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors, and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to addressing the network performance management problem in the cloud. Riverbed gives the enterprise the option to run their software either as a ‘sidecar’ to the application or as part of the cloud-based container.

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets and cloud. The first is to extend the existing data center so that the cloud is viewed as simply another data center. The second is to change perspective so that the constraints are reduced to the application…or better yet, the service level. The latter construct is typical of cloud-native applications.

Today, Riverbed has taken the former approach: they view the cloud as another data center in your network, and their SteelFusion product works accordingly. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises take a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements. This shows up in how enterprises are embracing a multi-cloud strategy. Unfortunately, consolidation of cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address the varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their SteelHead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed is already supporting multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure. On AWS, SteelConnect Manager can run within an AWS VPC.

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer shifts to the developer and, in some cases, the business user. These new personas are looking for quick access to resources and tools in an easy-to-consume way, much like existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience, not network performance, is what really matters. In Riverbed parlance, this is End User Experience Management (EUEM). In a cloud-based world, this will serve the Riverbed team well as they consider their North Star.

As enterprises embrace cloud-based architectures more fully, so will Riverbed need to evolve the model that drives their product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise


Rubrik continues their quest to protect the enterprise


Data protection is all the rage right now. With data moving beyond the corporate data center to multiple locations, including cloud, the complexity has increased significantly. What is data protection? It generally covers a combination of backup, restore, disaster recovery (DR) and business continuity (BC). While none of this is new, most enterprises have struggled for decades to effectively back up their systems in a way that ensures that a) the data is protected, b) it can be restored if needed and c) it can be restored in a timely fashion. Put a different way: BC/DR is still one of the most poorly managed parts of an IT operation. Add cloud to this and one can see where the wheels start to fall off.

The irony is that, while the problem is not new, enterprises still struggle to balance the needs of DR/BC in a meaningful way. The reasons for this would take longer than this blog permits. This is an industry screaming for disruption. Enter Rubrik.

RUBRIK BRINGS A FRESH PERSPECTIVE TO AN OLD PROBLEM

A couple of weeks back, I caught up with the Rubrik team at Tech Field Day’s Cloud Field Day 3. Rubrik entered the market a few years ago and has continued their drive to solve this old, but growing, problem.

Unlike traditional solutions, Rubrik takes a modern approach to their architecture. Everything that Rubrik does calls an API. This API-centric architecture gives their approach modularity and flexibility. API-centric architectures are a must in a cloud-based world.
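To make that concrete, here is a minimal sketch of what driving an API-centric platform like Rubrik’s can look like from Python. The host, credentials and endpoint below are illustrative assumptions, not details confirmed during the session:

```python
# Hypothetical sketch: query a Rubrik-style REST API the same way the UI would.
# The cluster address, credentials and endpoint path are placeholders.
import requests

RUBRIK_HOST = "https://rubrik.example.com"  # hypothetical cluster address

session = requests.Session()
session.auth = ("admin", "example-password")  # placeholder credentials
session.verify = False  # appliances often ship self-signed certs; tighten in production

# Pull basic cluster details; anything the product does is reachable the same way.
resp = session.get(f"{RUBRIK_HOST}/api/v1/cluster/me", timeout=10)
resp.raise_for_status()
cluster = resp.json()
print(cluster.get("name"), cluster.get("version"))
```

Because every operation goes through the same API, the calls the UI makes can be scripted, scheduled and composed with other tooling.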

At Cloud Field Day, the Rubrik team went through their new SaaS-based solution called Polaris. With enterprise data increasingly spread across multiple data centers and cloud providers, enterprises need a cohesive way to visually manage that data. Polaris does just that, serving as the overarching management platform for the growing complexity.

COMPLEXITY DRIVES THE NEED FOR A NEW APPROACH

There are two dynamics driving these changes: 1) the explosion in data growth and 2) the need to effectively manage that data. As applications and their data move to a myriad of different solutions, so does the need to effectively manage the underlying data.

An increase in compliance and regulatory requirements is adding further complexity to data management. As the complexity grows, so does the need for systemic automation. No longer are we able to simply throw more resources at the problem. It is time to turn the problem on its head and leverage new approaches.

DATA PROTECTION IS NOT IMPORTANT…UNTIL IT IS

During the discussion, Rubrik’s Chief Technologist Chris Wahl made a key observation that everyone in IT painfully understands: data protection is not important…until it is. To many enterprises, data protection is seen as an insurance policy that you hope you will not need. However, in today’s world of increasingly regulated and highly complicated architectures, with data spreading out at scale, the risks are simply too great to ignore.

While data protection may have been less important in the past, today it is critical.

GOING BEYOND SIMPLE BACKUP AND RECOVERY

If the story about Rubrik stopped at backup and recovery, it would still be impressive. However, Rubrik is venturing into the complexity that comes with integration into other systems and processes. One of the first areas is their integration with ServiceNow: Rubrik ingests CMDB data into its system, providing a cohesive view of the underlying components that Rubrik has visibility into.
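As a rough illustration of the ServiceNow side of such an integration, the sketch below pulls server-class configuration items from a CMDB via ServiceNow’s standard Table API. The instance name and credentials are placeholders, and the actual Rubrik integration mechanics were not detailed in the session:

```python
# Hedged sketch: read configuration items (CIs) from a ServiceNow CMDB using the
# standard Table API. Instance, user and password below are placeholders.
import requests

INSTANCE = "https://example.service-now.com"  # hypothetical ServiceNow instance

resp = requests.get(
    f"{INSTANCE}/api/now/table/cmdb_ci_server",  # server-class CIs
    auth=("integration.user", "example-password"),
    headers={"Accept": "application/json"},
    params={"sysparm_limit": 10, "sysparm_fields": "name,ip_address,os"},
    timeout=10,
)
resp.raise_for_status()
for ci in resp.json()["result"]:
    print(ci["name"], ci.get("ip_address"), ci.get("os"))
```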

Looking into the crystal ball, one can start to see that Rubrik fully understands that backup and recovery is just the start. The real opportunity comes from full integration into business processes, and integrations like the one with ServiceNow are the prerequisite. Expect to see more as Rubrik continues their quest to provide a solid foundation for the enterprise when it needs it most.


3 ways enterprises can reduce their cybersecurity risk profile


If you are an executive (CIO, CISO, CEO) or board member, cybersecurity is top of mind. One of the comments I hear most often is: “I don’t want our company (to be) on the front page of the Wall Street Journal.” Ostensibly, the comment is made in the context of a breach. Yet many gaps still exist between that aspiration and reality. Just saying the words is not enough.

The recent Equifax breach has prompted many conversations with enterprises and executive teams about shoring up their security posture. The sad reality is that cybersecurity spending often happens immediately after a breach. Why is that? Let us delve into several of the common reasons and what can be done about them.

ENTERPRISE SECURITY CHALLENGES

There are a number of reasons why enterprises are challenged with cybersecurity issues. Much of it stems from how cybersecurity solutions are perceived: to many, the investment in cybersecurity teams and solutions is seen as an insurance policy. To better understand the complexities, let us dig into a few of the common issues.

Reactive versus Proactive

The first issue is how enterprises think about cybersecurity. Enterprises often want to be secure, but are unwilling or unable to provide the funding to match. That is, until a breach occurs. This has created a behavior within IT organizations where they leverage breaches to gain cybersecurity funding.

Funding for Cybersecurity Initiatives

Cybersecurity spending is often seen in a similar vein to insurance, and it comes back to risk mitigation. Many IT organizations are challenged to get adequate funding to appropriately protect the enterprise. It should be noted that no enterprise will ever be fully secured; attempting it would create a level of complexity and cost that would greatly impact the operations and bottom line of the enterprise. Therefore, a healthy balance is called for here. Any initiative should follow a risk mitigation approach, but also consider the business impact.

Shifting to Cybersecurity as part of the DNA

Enterprises often treat cybersecurity as an afterthought to a project or core application. The problem with this approach is that, as an afterthought, it arrives when the project or application is already well on its way to production. Any required changes end up bolted on and are rarely applied at a granular level. More mature organizations are making cybersecurity part of their core DNA. In such a culture, cybersecurity becomes part of the conversation early and often…and at each stage of development. By making it part of the DNA, everyone involved is encouraged to consider how to secure their part of the project.

Cybersecurity Threats are getting more Sophisticated

The sophistication of cybersecurity threats is growing astronomically. No longer are the traditional tools adequate to protect the enterprise. Enterprises are fighting adversaries that are gaining ground exponentially faster than they are. In essence, no single enterprise is able to adequately protect itself and must rely on the expertise of others that specialize in this space.

Traditional thinking need not apply. The level of complexity and skill required is growing at a blistering clip. If your organization is not willing or able to put the resources behind staying current and actively engaged, trouble is not far away.

THREE WAYS TO REDUCE CYBERSECURITY RISK

While the risks are increasing, there are steps that every enterprise, large and small, can take to reduce their risk profile. Sadly, many of these are well known, yet not as well enacted. The first step is to change your paradigm regarding cybersecurity: get proactive and do not assume you know everything.

Patch, Patch, Patch

Even though regular patching is a requirement for most applications and operating systems, enterprises are still challenged to keep up. There are two common reasons for this: 1) disruption to business operations and 2) the resources required to update the application or system. In both cases, the best advice is to get into a regular rhythm of patching systems. When you make something routine, it builds muscle memory into the organization that increases accuracy, lessens disruption and speeds up the effort.
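As a simple illustration of that rhythm, the sketch below reports packages awaiting updates so they can be scheduled rather than forgotten. It assumes a Debian/Ubuntu host with apt; the specific tooling will differ by platform, and the point is the routine, not the script:

```python
# Hedged sketch: report pending package updates on a Debian/Ubuntu host.
# Run it from cron (or any scheduler) to keep the patch backlog visible.
import subprocess

def pending_updates() -> list[str]:
    """Return the package lines apt reports as upgradable."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Package lines look like "name/suite version arch [upgradable from: ...]".
    return [line for line in out.splitlines() if "/" in line]

if __name__ == "__main__":
    updates = pending_updates()
    print(f"{len(updates)} packages awaiting patching")
    for line in updates:
        print(" ", line)
```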

Regular Validation from Outsiders

Over time, organizations get complacent with their operations, and cybersecurity is no different. A good way to avoid this is to bring in a trusted outside organization to spot-check and ‘tune up’ your cybersecurity efforts. Outsiders can more easily spot issues because they are not affected by your blind spots. Depending on your situation, you may choose to leverage a third party to provide cybersecurity services. Each enterprise will need to evaluate its specific situation to find the right approach.

Challenge Traditional Thinking

I still run into organizations that believe perimeter protections are the best defense, or that conducting security audits with some frequency is enough. Two words: game over. While both are still required, security threats today are constant and unrelenting. Constant, evolving approaches are required to match them.

As we move to a more complicated mix of IT services (SaaS, public cloud, private cloud, on premises, edge computing, mobile, etc.), the level of complexity grows. Now layer in the fact that the data we view as gold is spread across those services. The complexity is growing, and traditional thinking will not protect the enterprise. Leveraging outsiders is one way to infuse different methods to address this growing complexity.


Another option is to move to a cloud-based alternative. Most cloud-based services have methods to update their systems and applications without disrupting operations. This does not absolve the enterprise of responsibility, but it does offer a way to leverage more specialized expertise.

The bottom line is that our world is getting more complex, and cybersecurity is just one aspect. The complexity and sophistication of cybersecurity attacks is only growing, making it ever more challenging for enterprises to keep up. Change is needed, the risks are increasing and now is the time for action.


The difference between Hybrid and Multi-Cloud for the Enterprise

Cloud computing still presents the single biggest opportunity for enterprise companies today. Even though cloud-based solutions have been around for more than 10 years now, the concepts related to cloud continue to confuse many.

Of late, it seems that Hybrid Cloud and Multi-Cloud are the latest concepts creating confusion. To make matters worse, a number of folks (inappropriately) use these terms interchangeably. The reality is that they are very different.

The best way to think about the differences between Hybrid Cloud and Multi-Cloud is in terms of orientation. One addresses a continuum of different services vertically while the other looks at the horizontal aspect of cloud. There are pros and cons to each and they are not interchangeable.


Multi-Cloud: The horizontal aspect of cloud

Multi-Cloud is essentially the use of multiple cloud services within a single delivery tier. A common example is the use of multiple Public Cloud providers. Enterprises typically use a multi-cloud approach for one of three reasons:

  • Leverage: Enterprise IT organizations are generally risk-averse. There are many reasons for this, to be discussed in a later post. Fear of taking risks tends to inform a number of decisions, including choice of cloud provider. One aspect is the fear of lock-in to a single provider. I addressed my perspective on lock-in here. By using a multi-cloud approach, an enterprise can hedge its risk across multiple providers. The downside is that this approach creates complexity around integration, organizational skills and data transit.
  • Best of Breed: The second reason enterprises use a multi-cloud strategy is to get best-of-breed solutions. Not all providers in a single delivery tier offer the same services. An enterprise may choose one provider’s solution for a specific function and a second provider’s solution for a different function. This approach, while advantageous in some respects, does create complexity in a number of ways, including integration, data transit, organizational skills and sprawl.
  • Evaluation: The third reason enterprises leverage a multi-cloud strategy is temporary and exists for evaluation purposes. It is actually very common among enterprises today: a multi-cloud footprint provides a means to evaluate different cloud providers in a single delivery tier when first starting out. Eventually, these enterprises focus on a single provider and build expertise around that provider’s solution.

In the end, I find that the reason an enterprise chooses one of the three approaches above is often informed by its maturity and thinking around cloud in general. The question many ask is: Do the upsides of leverage or best of breed outweigh the downsides of complexity?

Hybrid Cloud: The vertical approach to cloud

Most, if not all, enterprises are using a form of hybrid cloud today. Hybrid cloud refers to the vertical use of cloud across multiple delivery tiers. Most typically, enterprises are using a SaaS-based solution and Public Cloud today. Some may also use Private Cloud. Hybrid cloud does not require that a single application span the different delivery tiers.

The CIO Perspective

The important takeaway is to understand how you leverage Multi-Cloud and/or Hybrid Cloud, and less about how to define the terms. Too often, we get hung up on defining terms more than understanding the benefits of leveraging the solution…or methodology. Even when discussing outcomes, we often still focus on technology.

These two approaches are not the same, and each comes with its own set of pros and cons. The value of Multi-Cloud and Hybrid Cloud is that both provide leverage for business transformation. The question is: How will you leverage them for business advantage?


Microsoft Azure Stack fills a major gap for enterprise hybrid cloud

Azure Stack is billed as a version of Azure that runs in your corporate data center. Originally announced on May 4, 2015, Microsoft Azure Stack presented a significant change to the enterprise cloud spectrum of options. Prior to Azure Stack, enterprises looking for a private cloud option were left to build their own. While possible, that is not a trivial feat for most enterprises.

Today, Microsoft announced availability of Technical Preview 1 (TP1), the first in a series of planned technical previews leading up to Azure Stack’s general availability later in 2016.


A PRIVATE CLOUD FOR ENTERPRISES

Azure Stack represents an on-premises version of Microsoft’s Azure public cloud, running in your corporate data center. If you are familiar with Azure, you are already familiar with Azure Stack.

Unlike many solutions that start small and scale up, Microsoft was challenged with the opposite problem: scaling Azure down. Azure Stack is essentially a scaled-down version of Azure, and the code between the two is remarkably similar. The further up the stack you go, the more similar the code base gets. For developers, this means a consistent experience between Azure and Azure Stack.

Many enterprise customers are hesitant to make, or incapable of making, the leap from on-premises workloads to full-on public cloud. The reasons range from cultural resistance to regulatory considerations. Azure Stack fills the gap between full-on public cloud (Azure) and the prospect of creating a private cloud from scratch. Moreover, because of the consistent experience, customers are able to develop applications on Azure Stack and then move them fairly seamlessly to Azure. Many of the services are similar between the solutions; however, there are some obvious differences inherent to public vs. private cloud.

For now, Microsoft has drawn the line with Azure at the Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) cloud tiers. That said, Microsoft continues to grow their SaaS-based solutions such as Microsoft Office 365, and stated that they are in the process of moving Office 365 onto Azure. It is anticipated that traditional enterprise core services such as Microsoft SQL Server, along with newer solutions like Internet of Things (IoT), will move to Azure in the form of a deployable ‘Template’.

It should not be minimized: the effort to move existing enterprise applications off their legacy footings is not trivial. This is true for applications moving to Azure Stack or Azure, as well as to cloud solutions including Amazon AWS, Blue Box and others.


THE KEY TO AZURE STACK: A LOCAL TARGET

The beauty of a local version of a public cloud solution is its ability to sidestep many of the challenges that public cloud presents. In the case of regulatory or data privacy issues, Azure Stack provides the ability to leverage the benefits of cloud while adhering to local regulatory requirements surrounding the location of data.

In the most simplistic form, one could consider Azure Stack another ‘Region’ in which to deploy applications. Microsoft’s management application, Azure Resource Manager (ARM), is able to deploy directly to Azure Stack as another target Region, just as one would deploy to West US or East US. In the case of Azure Stack, the Region is Local. Customers do have the option to deploy internal (Local) Regions in a single zone or in separate zones.

DEVELOPING ON AZURE

One of the core benefits of Azure Stack is the ability to build applications for Azure Stack (or Azure) and deploy them to either solution. Microsoft Visual Studio is already able to pull deployment locations in real time from both Azure and Azure Stack. The core of an Azure deployment comes in the form of a Template, and there are already a number of Templates on GitHub for immediate download:

Quick Start ARM templates that deploy on Azure Stack: http://aka.ms/AzureStackGitHub

The Software Development Kit (SDK) for Azure Stack supports both PowerShell and the Command Line Interface (CLI), just like Azure. In addition, deployment tools such as Chef and Puppet are supported via the ARM API to Azure Stack.
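To give a feel for what deploying through the ARM API looks like, here is a hedged sketch using the modern Python azure-mgmt-resource SDK (the TP1-era tooling differed). The subscription ID, resource names and the Azure Stack ARM endpoint are placeholders; for public Azure you would simply omit the base_url override:

```python
# Hedged sketch: deploy an ARM template programmatically. Names, IDs and the
# local ARM endpoint are placeholders, not confirmed Azure Stack values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(
    DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
    base_url="https://management.local.azurestack.external",  # hypothetical local ARM endpoint
)

# A minimal inline template: one storage account in the 'local' Region.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2021-04-01",  # Azure Stack may require an older apiVersion
        "name": "examplestorage001",
        "location": "local",  # Azure Stack's Region is Local
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

poller = client.deployments.begin_create_or_update(
    "example-rg",          # resource group
    "example-deployment",  # deployment name
    {"properties": {"template": template, "mode": "Incremental"}},
)
print(poller.result().properties.provisioning_state)
```

The same template, pointed at a public Azure endpoint, deploys to West US or East US unchanged, which is exactly the portability story Azure Stack is after.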

GETTING STARTED WITH AZURE STACK

While the download for Azure Stack TP1 will not be available until January 29th, there are a number of minimum requirements to get started. Keep in mind that this is the first Technical Preview of Azure Stack. As such, there is quite a bit of code to optimize for local use vs. the full Azure cloud. With Azure, the minimum configuration covered a full 20 racks! With Azure Stack, the minimum footprint has shrunk to a cluster of four systems with a maximum of 63 systems per cluster. Jeffrey Snover (Chief Architect, Microsoft Azure and Technical Fellow at Microsoft) outlined the minimum and recommended requirements in his blog post last month.

One may notice the Windows Server certification requirement. That is due to Azure Stack running on a base of Microsoft Windows Server. However, the Microsoft team believes that this will evolve over time. The memory requirements may also evolve. When running Azure Stack, the components take up approximately 24GB of RAM per system. While this may get optimized over time, additional components (such as clustering) may increase the memory consumption.

One may express concern at the very mention of a local cloud based on Windows Server, if only because of the patching process. Azure Stack is built to evacuate workloads off resources prior to patching. But Microsoft is looking at a wholly different approach to patching: instead of applying traditional Windows Server patches, Microsoft is looking to completely redeploy a new copy of Windows Server for the Azure Stack underpinnings. It will be interesting to see how this plays out.

There are two ways to get started with Azure Stack:

  1. Do It Yourself: Leverage reference architectures from Dell, HP and others that list the parts needed to support Azure Stack.
  2. Integrated Systems: Purchase a fully assembled, standardized solution.


IN SUMMARY

Azure Stack presents a significant game changer for Microsoft and the enterprise cloud spectrum by filling a long-standing gap. There are a number of other benefits that Azure Stack brings to both enterprises and Managed Service Providers (MSPs); we will leave those for a later post.


UPDATE: The download for Azure Stack TP1 is live. You can get it here.


Are the big 5 enterprise IT providers making a comeback?

Not long ago, many would have written off the big five enterprise IT firms as slow, lethargic, expensive and out of touch. Who are the big five? IBM (NYSE: IBM), HP (NYSE: HPQ), Microsoft (NASDAQ: MSFT), Oracle (NYSE: ORCL) and Cisco (NASDAQ: CSCO). Specifically, they are the companies that provide traditional enterprise IT software, hardware and services.

Today, most technology innovation is coming from startups, not the large enterprise providers. Over the course of 2015, however, we have seen two trends pick up momentum: 1) consolidation in the major categories (software, hardware and services) and 2) acquisitions by the big five. Each of them is making huge strides in different ways.

Here’s a quick rundown of the big five.

IBM guns for the developer

Knowing that the developer sits at the start of the development process, IBM is shifting gears toward solutions that address the new developer. Just look at the past 18 months alone.

  • February 2014: The Dev@Pulse conference showed a mix of COBOL developers alongside promotion of Bluemix. The attendees didn’t resemble those at your typical developer conference. More details here.
  • April 2014: The Impact conference celebrated 50 years of the mainframe. Impact also highlighted the SoftLayer acquisition and brought the integration of mobile and cloud.
  • October 2014: The Insight conference went further, bringing cloud, data and Bluemix into the fold.
  • February 2015: InterConnect combined a couple of previous conferences into one. IBM continued the drive with cloud, SoftLayer and Bluemix while adding their open source contributions, specifically around OpenStack.

SoftLayer (cloud), Watson (analytics) and Bluemix are strengths in the IBM portfolio. And with IBM’s recent acquisition of Blue Box and partnership with Box, it doesn’t appear they are letting up on the gas. Add their work with open source software and it creates an interesting mix.

There are still significant gaps for IBM to fill. However, the message from IBM supports their strengths in cloud, analytics and the developer. This is key for the enterprise both today and tomorrow.

HP’s cloudy outlook

HP has long had a diverse portfolio that addresses the needs of the enterprise today and into the future. Of all the big five providers, HP’s portfolio is among the best matched to those enterprise needs.

  • Infrastructure: HP’s portfolio of converged infrastructure and components is solid. Really solid. Much of it is geared for the traditional enterprise. One curious point is that their server components span both the enterprise and service provider markets, while their storage products squarely target the enterprise to the exclusion of service providers. You can read more here.
  • Software: I have long felt that HP’s software group has a good bead on industry trends. They have a strong portfolio of data analytics tools with Vertica, Autonomy and HAVEn (being rebranded). HP’s march to support the Idea Economy is backed up by the solutions they’re putting in place. You can read more here.
  • Cloud: I have said that HP’s cloud strategy is an enigma. Unfortunately, discussions with the HP Cloud team at Discover this month further cemented that perspective. There is quite a bit of hard work being done by the Helion team, but the results are less clear. HP’s cloud strategy is directly tied to OpenStack, and their contributions to the project support this move.

HP will need to move beyond operating in silos and support a more integrated approach that mirrors the needs of their customers. While HP Infrastructure and Software are humming along, Helion cloud will need a renewed focus to gain relevance and mass adoption.

Microsoft’s race to lose

Above all other players, Microsoft still has the broadest and deepest relationships across the enterprise market today. Granted, many of those relationships are built upon their productivity apps, desktop and server operating systems, and core applications (Exchange, SQL, etc.). There is no denying that Microsoft probably has relationships with more organizations than any of the others.

Since Microsoft Office 365 hit its stride, enterprises are starting to take a second look at Azure and Microsoft’s cloud-based offerings. This still leaves a number of gaps for Microsoft, specifically around data analytics and open standards. Moving to open standards will require a significant cultural shift for Microsoft; data analytics could come through the acquisition of a strong player in the space.

Oracle’s comprehensive cloud

Oracle has long been seen as a strong player in the enterprise space. Unlike many other players that provide the building blocks to support enterprise applications, Oracle provides the blocks and the business applications.

One of Oracle’s key challenges is that their solutions are heavy and costly. As enterprises moved to a consumption-based model by leveraging cloud, Oracle found itself flat-footed. Over the past year or so, Oracle has worked to change that position with their cloud-based offerings.

On Monday, Executive Chairman, CTO and Founder Larry Ellison presented Oracle’s latest update in their race for the enterprise cloud business. Oracle is now providing the cloud building blocks from top to bottom (SaaS, PaaS, IaaS). The message is strong: Oracle is out to support both the developer and the business user through their transformation.

Oracle’s strong move to go after the entire cloud stack should not go unnoticed. In Q4 alone, Oracle cloud cleared $426M. That is a massive number. Even if they did a poor job of delivering solutions, one cannot deny the sheer size of the opportunity, which overshadows that of others.

Cisco’s shift to software

Cisco has long been the darling of the IT infrastructure and operations world. Their challenge has been to create a separation between hardware and software while advancing their position beyond the infrastructure realm.

In general, networking technology is one of the least advanced areas when compared with compute and storage infrastructure. As cloud and speed become the new mantra, the emphasis on networking grows more important than ever.

As the industry moves to integrate both infrastructure and developers, Cisco will need to make a similar shift. Their work in SDN with ACI, along with their thought leadership, is making significant inroads with enterprises.

Summing it all up

Each is approaching the problem in their own way, with varying degrees of success. The bottom line is that each of them is making significant strides to remain relevant and support tomorrow’s enterprise. Equally important is how quickly they’re making the shift.

If you’re a startup, you will want to take note. No longer are these folks in your dust; they are also your potential exit strategy.

It will be interesting to watch how each evolves over the next 6-12 months. Yes, that is a very short timeframe, but it echoes the speed at which the industry is evolving.