Business · Cloud

Oracle works toward capturing enterprise Cloud IaaS demand


The enterprise cloud market still shows widely untapped potential. A significant portion of that potential comes from the demand generated by legacy applications sitting in myriad corporate data centers. The footprint from these legacy workloads alone is staggering. Add in the workloads that sit in secondary data centers, which often go uncounted in many metrics, and one can quickly see the opportunity.

ORACLE STARTS FROM THE GROUND UP

At Tech Field Day’s Cloud Field Day 3, I had the opportunity to meet with the team from Oracle Cloud Infrastructure to discuss their Infrastructure as a Service (IaaS) cloud portfolio. Oracle is trying to attract existing Oracle customers to their cloud-based offerings. Their offerings range from IaaS up through Software as a Service (SaaS) for their core back-office business applications.

The conversation with the Oracle team was pretty rough, as it was hard to determine what, exactly, they did in the IaaS space. A number of buzzwords and concepts were thrown around without covering what the Oracle IaaS portfolio actually offered. Eventually, during a demo, a configuration page made the true offerings clear: Virtual Machines and Bare Metal. That’s a good start for Oracle, but unfortunate in how it was presented. Oracle’s offering is hosted infrastructure that is more similar to IBM’s SoftLayer (now called IBM Cloud) than to Microsoft Azure, Amazon AWS or Google Cloud.

ORACLE DATABASE AS A SERVICE

Beyond just the hardware, applications are one of the strengths of Oracle’s enterprise offerings. And a core piece of the puzzle has always been their database. One of the highlights of the conversation was their Database as a Service (DBaaS) offering. For enterprises that use Oracle DB, the database is a core sticking point that keeps their applications firmly planted in the corporate data center. With the Oracle DBaaS offering, enterprises can move workloads to a cloud-based infrastructure without losing fidelity in the Oracle DB offering.

Digging deeper into the details, there were a couple of interesting functions supported by Oracle’s DBaaS. A very cool feature was the ability to dynamically change the number of CPUs allocated to a database without taking an outage. This provides the ability to scale DB capacity up and down, as needed, without impact to application performance.
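As a thought experiment, that scale-up/scale-down decision can be sketched as simple logic. This is purely illustrative: the function name, thresholds, and limits below are hypothetical and are not Oracle's API.

```python
# Hypothetical sketch of online CPU scaling for a DBaaS instance.
# scale_ocpus and its thresholds are illustrative, not Oracle's API.

def scale_ocpus(current: int, cpu_utilization: float,
                min_ocpus: int = 1, max_ocpus: int = 16) -> int:
    """Return a new CPU count based on utilization, clamped to service limits."""
    if cpu_utilization > 0.80:            # under pressure: double capacity
        target = current * 2
    elif cpu_utilization < 0.25:          # mostly idle: halve capacity
        target = current // 2
    else:                                 # steady state: no change
        target = current
    return max(min_ocpus, min(target, max_ocpus))

# Because the change is applied online, the application keeps running
# while the database picks up the new CPU allocation.
print(scale_ocpus(4, 0.92))   # scale up
print(scale_ocpus(8, 0.10))   # scale down
```

The interesting part is not the arithmetic but the fact that the adjustment happens without an outage, which is what removes the usual maintenance-window friction.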

Now, it should be noted that while the thought of a hosted Oracle DB sounds good on paper, the actual migration will be complicated for any enterprise. That is less a statement about Oracle and more a reflection of the fact that enterprise application workloads are a complicated web of interconnects and integrations. Not surprisingly, Oracle mentioned that the most common use-case driving legacy footprints to Oracle Cloud is the DB. This shows how much pent-up demand there is to move even the most complicated workloads to cloud. Today, Oracle’s DB offering runs on Oracle Cloud Infrastructure (OCI). It was mentioned that the other Oracle Cloud offerings are moving to run on OCI as well.

Another use-case mentioned was that of High-Performance Computing (HPC). HPC environments need large scale and low latency. Both are positive factors for Oracle’s hardware designs.

While these are two good use-cases, Oracle will need to attract a broader base of use-cases moving forward.

THE CIO PERSPECTIVE

Overall, there seem to be some glimmers of light coming from the Oracle Cloud offering. However, it is hard to pin down the true differentiators. Granted, Oracle is playing a bit of catch-up compared with other, more mature cloud-based offerings.

The true value appears to be focused on existing Oracle customers looking to make a quick move to cloud. If so, and the two fundamental use-cases are DBaaS and HPC, that is a fairly limited pool of customers when there is significant potential still sitting in the corporate data center.

It will be interesting to see how Oracle evolves their IaaS messaging and portfolio to broaden the use-cases and provide fundamental services that other cloud solutions have offered for years. Oracle does have the resources to put a lot of effort toward making a bigger impact. Right now, however, it appears that the Oracle Cloud offering is mainly geared for existing Oracle customers with specific use-cases.

Business · CIO · Cloud

IBM Interconnect expectations, a CIO’s perspective


This week is IBM’s annual cloud conference in Las Vegas. Quite a bit has changed in the past year for IBM, and at this year’s IBM Interconnect there are a few things I’m looking for. Each of them centers on the mainstream of enterprise demand. Here’s the quick rundown:

IBM CLOUD CURRENT STATE AND DIRECTION

Over the past several years, IBM made strategic acquisitions that feed directly into IBM’s core cloud strategy. Those include SoftLayer and Blue Box Cloud. Since last year’s Interconnect conference, I’m looking to hear how things have progressed and how that progress impacts their direction. Both acquisitions are key to enterprise engagement.

UNDERSTANDING THE IBM CUSTOMER

IBM is well known for catering to their existing customer base. As enterprises evolve, I’m looking for indications of how non-IBM enterprise customers are choosing to engage IBM. Is most of the demand still coming from existing IBM customers? Or have others started to gravitate toward IBM…and why?

In addition, how has the recent partnership announcement with Salesforce changed this engagement? Granted, the ink is still wet on the agreement, but there may be a few tidbits to glean here.

PORTFOLIO HALO EFFECTS

IBM’s Watson provides an interesting opportunity for enterprises looking to engage analytics, machine learning (ML) and artificial intelligence (AI). Watson, along with the strides IBM has made with the Internet of Things (IoT), provides some interesting opportunities for both existing and prospective IBM customers. I’m looking to see if these are creating a halo effect for IBM’s cloud business…and if so, how and where.

LEADERSHIP CHANGES

Finally, IBM is changing up the leadership team. Longtime IBM’er Robert LeBlanc has departed from leading the IBM Cloud division and changes are afoot in marketing. How will these changes impact how IBM approaches cloud and how IBM is perceived in the broader enterprise market?

 

Overall, IBM is clamoring to be a leader in the enterprise cloud space, but faces some stiff competition. Cloud has been a key element in IBM’s enterprise portfolio for some time. This week should provide greater insights on their current state and path moving forward.

Cloud · IoT

Understanding the five tiers of IoT core architecture

Internet of Things (IoT) is all the rage today. Just tagging something as belonging to the IoT family brings quite a bit of attention. However, this tagging has also created quite a bit of noise in the industry for organizations trying to sort through how best to leverage IoT. Call it IoT marketing overload. Or IoT-washing.

That being said, just about every single industry can leverage IoT in a meaningful way today. But where does one begin? There are many ways to consider where to start your IoT journey. The first is to understand the basic fundamentals of how IoT solutions are architected. The five tiers of IoT core architecture are: Applications, Analytics, Data, Gateway and Devices. Using this architecture, one can determine where any given IoT solution fits…and the adjacent components required to complete the solution.

THE FIVE TIERS OF IOT CORE ARCHITECTURE

  • DEVICE TIER

The device tier is the physical device that collects data. The device is a piece of hardware that collects telemetry (data) about a given situation. Devices can range from small sensors to wearables to large machines. The data itself may be presented in many forms from electrical signals to IP-data.

The device may also display information (see Application tier).

  • GATEWAY TIER

The sheer number of devices and interconnection options creates a web of complexity in connecting the different devices and their data streams. The streams may come in forms as diverse as mechanical signals and IP-based data. On the surface, these streams are completely incompatible. However, when correlating data, a common denominator is needed. Hence the need for a gateway to collect and homogenize the streams into manageable data.

  • DATA TIER

The data tier is where data from gateways is collected and managed. Depending on the type of data, different structures may be called for. The management, hygiene and physical storage of data is a whole discipline unto itself, simply due to the four V’s of data (Volume, Variety, Velocity, Veracity).

  • ANALYTICS TIER

Simply managing the sheer amount of data coming from IoT devices creates a significant hurdle when converting data into information. Analytics are used to automate the process for two reasons: manageability and speed. Together, these two provide insight into the varied and complex data coming from devices. As the number and type of devices grow and become increasingly complex, so will the demand for analytics.

  • APPLICATION TIER

Applications may come in multiple forms. In many cases, the application is the user interface that takes information coming from the analytics tier and presents it to the user in a meaningful way. In other cases, the application may be an automation routine that interfaces with other applications as part of a larger function.

Interestingly, the application may reside on the device itself (e.g., a wearable).

[Figure: IoT architecture]

 

Today, many IoT solutions cover one or more of the tiers outlined above. It is important to understand which tiers are covered by any given IoT solution.
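To make the tiers concrete, the flow from device to application can be sketched as a toy pipeline. Every name and value below is hypothetical; real platforms draw these boundaries differently.

```python
# Illustrative sketch of the five IoT tiers as a single data pipeline.
# All function names and data shapes here are hypothetical.

def device_read() -> dict:
    """Device tier: raw telemetry, possibly non-IP (here, a raw voltage)."""
    return {"sensor": "temp-01", "raw_mv": 742}

def gateway_normalize(reading: dict) -> dict:
    """Gateway tier: homogenize diverse signals into common units."""
    return {"sensor": reading["sensor"],
            "celsius": round(reading["raw_mv"] / 10.0, 1)}

def data_store(store: list, record: dict) -> None:
    """Data tier: collect and manage the normalized records."""
    store.append(record)

def analytics_mean(store: list) -> float:
    """Analytics tier: reduce raw records into information."""
    return sum(r["celsius"] for r in store) / len(store)

def application_view(mean_c: float) -> str:
    """Application tier: present the information meaningfully."""
    return f"Average temperature: {mean_c:.1f} °C"

store: list = []
data_store(store, gateway_normalize(device_read()))
print(application_view(analytics_mean(store)))
```

Mapping a vendor's product onto these five functions is a quick way to see which tiers it actually covers, and which adjacent pieces you still have to supply.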

CLOUD-BASED IOT SOLUTIONS

Several major cloud providers are developing IoT solutions that leverage their core cloud offerings. What is great about these solutions is that they help shorten IoT development time by providing fundamental offerings that cover many of the tiers outlined above. Most of the solutions focus on the upper tiers to manage the data coming from devices. Three such platforms are: Amazon AWS IoT, IBM Watson IoT, and Microsoft Azure IoT Suite. Each of these emphasizes a different suite of ancillary solutions. All three allow a developer to shorten the development time for an IoT solution by eliminating the need to develop for all five tiers.

THE SECURITY CONUNDRUM

One would be remiss to discuss IoT without mentioning security. The security of devices, data elements and data flows is an issue that needs greater attention today. Instead of a one-off project or add-on solution, security needs to be part of the DNA infused in each tier of a given solution. Based on the current crop of solutions, there is a long way to go on this front.

That being said, IoT has a promising and significant future.

Business · CIO · Cloud · Data

HPE clarifies their new role in the enterprise


Last week, Hewlett Packard Enterprise (HPE) held their annual US-based Discover conference in Las Vegas. HPE has seen quite a bit of change in the past year with the split of HP into HPE & HP Inc. They shut down their Helion Public Cloud offering and announced the divestiture of their Enterprise Services (ES) business to merge with CSC into a $26B business. With all of the changes and 10,000 people in attendance, HPE sought to clarify their strategy and position in the enterprise market.

WHAT IS IN AND WHAT IS OUT?

Many of the questions attendees were asking centered on the direction HPE is taking considering all of the changes in the past year alone. Two of the core changes (shutting down Helion Public Cloud and splitting off their ES business) have raised many eyebrows, wondering if HPE might be cutting off their future potential.

While HPE telegraphs that their strategy is to support customers with their ‘digital transformation’ journey, the statement might be a bit overreaching. That is not to say that HPE is not capable of providing value to enterprises. It is to say that there are specific aspects where they provide value, and yet a few significant gaps. We are talking about a traditional hardware-focused company shifting more and more toward software. Not a trivial task.

There are four pillars that support the core HPE offering for enterprises. Those include Infrastructure, Analytics, Cloud and Software.

INFRASTRUCTURE AT THE CORE

HPE’s strength continues to rest on their ability to innovate in the infrastructure space. I wrote about their Moonshot and CloudSystem offerings three years ago here. Last year, HPE introduced their Synergy technology that supports composability. Synergy, and the composable concept, is one of the best opportunities to address the evolving enterprise’s changing demands. I delve a bit deeper into the HPE composable opportunity here.

Yet, one thing is becoming painfully clear within the industry. The level of complexity for infrastructure is growing exponentially. For any provider to survive, there needs to be a demonstrable shift toward leveraging software that manages the increasingly complex infrastructure. HPE is heading in that direction with their OneView platform.

Not to be outdone in supporting the ever-changing software platform space, HPE also announced that servers will come ready to support Docker containers. This is another example of where HPE is trying to bridge the gap between traditional infrastructure and newer application architectures including cloud.

CLOUD GOES PRIVATE

Speaking of cloud, there is quite a bit of confusion about where cloud fits in the HPE portfolio of solutions. After a number of conversations with members of the HPE team, their solutions are focused on one aspect of cloud: Private Cloud. This makes sense considering HPE’s challenges in reaching escape velocity with their Helion Public Cloud offering and their core infrastructure background. Keep in mind that HPE’s private cloud solutions are heavily based on OpenStack. This will present a challenge for those considering a move from their legacy VMware footprint, but it does open the door to new application architectures that are specifically looking for an OpenStack-based Private Cloud. However, there is already competition in this space from companies like IBM (Blue Box) and Microsoft (Azure Stack). And unlike HPE, both IBM & Microsoft have established Public Cloud offerings that complement their Private Cloud solutions (Blue Box & Azure respectively).

One theme in many of the discussions was how HPE’s Technical Services (TS) are heavily involved in HPE Cloud deployments. At first, this may raise a red flag for many enterprises concerned with the level of consulting services required to deploy a solution. However, when considering that the underpinnings are OpenStack-based, it makes more sense. OpenStack, unlike traditional commercial software offerings, still requires a significant amount of support to get up and running. This could limit the broad appeal of HPE’s cloud solutions to those few that understand, and can justify, the value proposition.

It does seem that HPE’s cloud business is still in a state of flux and finding the best path to take. With the jettison of Helion Public Cloud and HPE’s support of composability, there is a great opportunity to appeal to the masses and leverage their partnership with Microsoft to support Azure & Azure Stack on a Synergy composable stack. Yet, the current emphasis still appears to be on OpenStack-based solutions. Note: HPE CloudSystem does support Synergy via the OneView APIs.

SOFTWARE

At the conference, HPE highlighted their security solutions with a few statistics. According to HPE, they “secure nine of the top 10 software companies, all 10 telcos and all major branches of the US Department of Defense (DoD).” While those are interesting statistics, one should delve a bit further to determine how extensively they apply.

Security sits alongside the software group’s Application Lifecycle Management (ALM), Operations and BigData software solutions. As time goes on, I would hope to see HPE mature the significance of their software business to meet the changing demands from enterprises.

THE GROWTH OF ANALYTICS

Increasingly, enterprise organizations are growing their dependence on data. A couple of years back, HP (prior to the HPE/HP Inc split) purchased Autonomy and Vertica. HPE continues to mature their combined Haven solution beyond addressing BigData into the realm of Machine Learning. To that end, HPE is now offering Haven On-Demand (http://www.HavenOnDemand.com) for free. Interestingly, the solution leverages HPE’s partnership with Microsoft and runs on Microsoft’s Azure platform.

IN SUMMARY

HPE is bringing into focus those aspects they believe they can do well. The core business is still focused on infrastructure, but also supporting software (mostly for IT focused functions), cloud (OpenStack focused) and data analytics. After the dust settles on the splits and shifts, the largest opportunities for HPE appear to come from infrastructure (and related software), and data analytics. The other aspects of the business, while valuable, support a smaller pool of prospective customers.

Ultimately, time will tell how this strategy plays out. I still believe there is an untapped potential in HPE’s Synergy composable platform that will appeal to the masses of enterprises, but is often missed. Their data analytics strategy appears to be gaining steam and moving forward. These two offerings are significant, but only provide for specific aspects of an enterprise’s digital transformation.

CIO · Cloud

The opportunities for enterprise and SMB cloud open up

Companies are getting out of the data center business in droves. While much of this demand is headed toward colocation facilities, there is a growing movement to cloud-based providers. A company’s move to cloud is a journey that spans a number of different modes over a period of time.

[Figure: the data center to cloud spectrum]

The workloads of a company do not reside in only one state at a time. They are often spread across most, if not all, of the states at the same time. However, the general direction for workloads moves from left to right.

One of the bigger challenges in this journey was a missing piece: namely, Private Cloud. Historically, the only way to build a private cloud was to do it yourself. That was easier said than done. The level of complexity compared with the value was simply not there. Fast forward a few years and there are now two core solutions on the market that fill this gap. They are:

  • IBM Blue Box Local
  • Microsoft Azure Stack

While there are some similarities at a contextual level, the underlying technology is quite different from a user perspective. Blue Box Local is based on OpenStack while Azure Stack is based on Azure. Even so, both solutions provide continuity between their private cloud versions and their respective larger public cloud offerings.

Now that demand has reached critical mass, the supply side solutions (providers) are finally starting to mature with solutions that solve core needs for most enterprise and small-medium business (SMB) customers. The reality is that most companies are

From the provider perspective, the lack of critical mass served as an inhibitor to investment. Data center and cloud-based solutions are expensive to build. Without sufficient demand, providers might be left with assets sitting relatively idle. A costly venture for many considering a move into the cloud provider space.

Today, the economics are quite different. Not only are a large number of companies moving to colocation and cloud-based solutions, but they are looking for solutions that support varied and ever-changing use-cases. As such, the solutions need to provide the support and flexibility to adapt to this changing climate.

With the recent announcement of Azure Stack’s Technical Preview, a very interesting opportunity opened for MSPs looking to offer a cloud-based solution to customers. At this point, there are really only two companies that truly provide hybrid cloud solutions for both on premise and public cloud. Those are Blue Box/ IBM and Microsoft Azure.

THE MSP MARKET IS HUGE

When discussing cloud, much is said about the enterprise and startup/web-scale markets, with little mentioned regarding the Small-Medium Business (SMB) players. Yet, the SMB players make up a significant portion of the market. Why the omission? The further down the scale, the harder it is to reach them. In most cases, SMB clients will leverage Managed Service Providers (MSPs).

For MSPs, the offerings were equally challenging to leverage in a meaningful way. Those days are quickly changing.

FOUR CORE OFFERINGS WITH VARIATIONS

Today, there are four core ways that cloud is offered in the service provider market.

  • Bare Metal: In this case, the client is essentially leasing hardware that is managed by the service provider. It does allow the client to bring their own software and licenses, but also brings an added level of management requirement.
  • OpenStack: OpenStack provides a myriad of different options using an Open Source software platform. The challenge is in the ability of the client to truly support OpenStack. Contrary to what some may think, Open Source does not equate to free. However, there are solutions (like Blue Box/ IBM) that provide a commercial version for hybrid environments.
  • Azure: Azure comes in two flavors today that provide flexibility between requirements for on premise and public cloud. The former is served by Azure Stack while the latter is core Azure.
  • VMware: VMware provides the greatest functionality for existing enterprise environments. In this model, companies are able to move their existing VMs to a provider’s VMware-based platform. Many of the companies that provide solutions in this space come from the colocation world and include QTS and PhoenixNAP.

These four categories are a quick simplification of how the different solutions map out. There are many variations of each of these solutions which makes comparison within a single category, let alone across categories, difficult.

TWO DELIVERY MODELS

MSPs looking to deliver cloud-based solutions were relegated to two options: 1) roll your own solution or 2) try to leverage an existing hosting offering and layer your services on top. Neither was particularly appealing to MSPs. Today, there are two core delivery options taking shape:

  • Single Tenant: In this model, an MSP stands up a cloud solution specifically for a given client. Each client has their own instance that the MSP manages. In many ways, this is really just a simple hosted model of cloud.
  • Multi Tenant: In this model, there is a single instance of a cloud solution that the MSP manages. However, it is shared across many clients.

There are challenges and opportunities to both approaches and they need to match the capabilities of both the MSP and their clients.
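The operational difference between the two delivery models comes down to how many instances the MSP must run and maintain. The classes below are a purely illustrative sketch; the names are invented for this example.

```python
# Hypothetical sketch contrasting the two MSP delivery models.
# Single tenant: one cloud instance per client.
# Multi tenant: one shared instance, partitioned per client.

class SingleTenant:
    """Each client gets a dedicated instance the MSP manages."""
    def __init__(self):
        self.instances = {}                    # client -> dedicated instance
    def provision(self, client: str) -> str:
        self.instances[client] = {"workloads": []}
        return f"instance-{client}"
    def instances_to_maintain(self) -> int:
        return len(self.instances)             # one per client

class MultiTenant:
    """One shared instance; clients are isolated into partitions."""
    def __init__(self):
        self.partitions = {}                   # client -> partition
    def provision(self, client: str) -> str:
        self.partitions[client] = {"workloads": []}
        return f"shared-instance/{client}"
    def instances_to_maintain(self) -> int:
        return 1                               # shared across all clients

st, mt = SingleTenant(), MultiTenant()
for c in ("acme", "globex"):
    st.provision(c)
    mt.provision(c)
print(st.instances_to_maintain(), mt.instances_to_maintain())  # 2 vs 1
```

The trade-off is visible even in this toy: single tenant multiplies the MSP's maintenance burden with each client but gives stronger isolation, while multi tenant concentrates maintenance in one place at the cost of shared-fate risk.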

As you start to map your cloud journey, the road and onramps are starting to shape up nicely. That is true for both MSPs and enterprise clients alike. There could not be a better time for enterprises to engage with cloud in a holistic way.

CIO · Cloud

Microsoft Azure Stack fills a major gap for enterprise hybrid cloud

Azure Stack is billed as a version of Azure that runs in your corporate data center. Originally announced on May 4, 2015, Microsoft Azure Stack presented a significant change to the enterprise cloud spectrum of options. Prior to Azure Stack, enterprises looking for a private cloud option were left to build their own. While possible, not a trivial feat for most enterprises.

Today, Microsoft announced availability of their Technical Preview 1 (TP1), the first in a series of planned technical previews leading up to Azure Stack’s General Availability later in 2016.

 

[Figure: Azure map]

A PRIVATE CLOUD FOR ENTERPRISES

Azure Stack represents an on premise version of Microsoft’s Azure public cloud that runs in your corporate data center. If you are familiar with Azure, you are already familiar with Azure Stack.

Unlike many solutions that start small and scale up, Microsoft was challenged with the opposite problem; to scale down Azure. Azure Stack is essentially a scaled down version of Azure and the code between the two versions of Azure is remarkably similar. The further up the stack, the more similar the code base gets. For developers, this means a more consistent experience between Azure and Azure Stack.

Many enterprise customers are hesitant or incapable of making the leap from workloads on premise to full-on public cloud. Those reasons range from cultural resistance to regulatory considerations. Azure Stack provides a solution that fills the gap between full-on public cloud (Azure) and the prospect of creating a private cloud from scratch. Moreover, because of the consistent experience, customers are able to develop applications on Azure Stack and then move them fairly seamlessly to Azure. Many of the services are similar between the solutions, however, there are some obvious differences inherent to public vs. private cloud.

For now, Microsoft has drawn the line with Azure Stack at the Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) cloud tiers. That may be true for Azure Stack; however, Microsoft continues to grow their SaaS-based solutions such as Microsoft Office 365. Microsoft stated that they are in the process of moving Office 365 to Azure. It is anticipated that traditional enterprise core services such as Microsoft SQL Server, in addition to newer solutions like Internet of Things (IoT), will arrive in the form of a deployable ‘Template’.

It should not be minimized: moving existing enterprise applications off their legacy footings is not a trivial effort. This is true for applications moving to Azure Stack or Azure, along with cloud solutions including Amazon AWS, Blue Box, and others.

 

[Figure: Enterprise cloud map]

THE KEY TO AZURE STACK: A LOCAL TARGET

The beauty of a local version of public cloud solutions is in its ability to sidestep many of the challenges that public cloud presents. In the case of regulatory or data privacy issues, Azure Stack provides the ability to leverage the benefits of cloud, while adhering to local regulatory issues surrounding location of data.

In the most simplistic form, one could consider Azure Stack another ‘Region’ in which to deploy applications. Microsoft’s management application, Azure Resource Manager (ARM) is able to deploy directly to Azure Stack as another target Region just as one would deploy to West US or East US. In the case of Azure Stack, the Region is Local. Customers do have the option to deploy internal (Local) Regions in a single zone or in separate zones.

DEVELOPING ON AZURE

One of the core benefits to Azure Stack is in the ability to build applications for Azure Stack (or Azure) and deploy them to either solution. Microsoft Visual Studio already has the ability to update locations in real-time from Azure and Azure Stack. The core of Azure deployments come in the form of a Template. There are already a number of Templates on GitHub for immediate download:

Quick Start ARM templates that deploy on Azure Stack: http://aka.ms/AzureStackGitHub

The Software Development Kit (SDK) for Azure Stack supports both PowerShell and Command Line Interface (CLI) just like Azure. In addition, deployment tools such as Chef and Puppet are supported via the ARM API to Azure Stack.
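For illustration, a deployment through the ARM API can be sketched as an HTTP PUT whose target is simply a different management endpoint. Only the general URL shape follows ARM conventions; the endpoint, subscription ID, api-version, and template below are made-up placeholders.

```python
# Illustrative construction of an ARM template deployment request.
# All identifiers here are hypothetical placeholders.

def arm_deployment_request(endpoint: str, sub_id: str, rg: str,
                           name: str, template: dict, api_version: str):
    """Build the URL and body for an ARM deployment (issued as an HTTP PUT)."""
    url = (f"{endpoint}/subscriptions/{sub_id}/resourcegroups/{rg}"
           f"/providers/Microsoft.Resources/deployments/{name}"
           f"?api-version={api_version}")
    body = {"properties": {"mode": "Incremental", "template": template}}
    return url, body

# Pointing the same request at an Azure Stack management endpoint instead
# of the public Azure one is what makes the deployment target 'Local'.
url, body = arm_deployment_request(
    "https://management.local.azurestack.example",   # hypothetical endpoint
    "00000000-0000-0000-0000-000000000000",
    "demo-rg", "demo-deploy", {"resources": []}, "2016-02-01")
print(url)
```

This is why tools that already speak ARM (Visual Studio, PowerShell, the CLI, Chef, Puppet) carry over: the request is the same shape, only the Region/endpoint changes.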

GETTING STARTED WITH AZURE STACK

While the download for Azure Stack TP1 will not be available until January 29th, there are a number of minimum requirements to get started. Keep in mind that this is the first Technical Preview of Azure Stack. As such, there is quite a bit of code to optimize for local use vs. the full Azure cloud. With Azure, the minimum configuration covered a full 20 racks! With Azure Stack, the minimum footprint has shrunk to a cluster of four systems with a maximum of 63 systems per cluster. Jeffrey Snover (Chief Architect, Microsoft Azure and Technical Fellow at Microsoft) outlined the minimum and recommended requirements in his blog post last month.

One may notice the Windows Server certification requirement. That is due to Azure Stack running on a base of Microsoft Windows Server. However, the Microsoft team believes that this will evolve over time. The memory requirements may also evolve. When running Azure Stack, the components take up approximately 24GB of RAM per system. While this may get optimized over time, additional components (such as clustering) may increase the memory consumption.

One may express concern at the very mention of a local cloud based on Windows Server, if only because of the patching process. Azure Stack is built to evacuate workloads off resources prior to patching. But Microsoft is looking at a wholly different approach: instead of applying traditional Windows Server patches, Microsoft is looking to completely redeploy a fresh copy of Windows Server for the Azure Stack underpinnings. It will be interesting to see how this plays out.

There are two ways to get started with Azure Stack:

  1. Do It Yourself: Leverage reference architectures from Dell, HP and others that list the parts needed to support Azure Stack.
  2. Integrated Systems: Purchase a fully assembled, standardized solution.

 

IN SUMMARY

Azure Stack presents a significant game changer for Microsoft and the enterprise cloud spectrum by filling a long-standing gap. There are a number of other benefits that Azure Stack brings to both enterprises and Managed Service Providers (MSPs). We will leave those for a later post.

 

UPDATE: The download for Azure Stack TP1 is live. You can get it here.

Business · CIO · Cloud · Data

Are the big 5 enterprise IT providers making a comeback?

Not long ago, many would have written off the big five enterprise IT firms as slow, lethargic, expensive and out of touch. Who are the big five? IBM (NYSE: IBM), HP (NYSE: HPQ), Microsoft (NASDAQ: MSFT), Oracle (NYSE: ORCL) and Cisco (NASDAQ: CSCO). Specifically, they are companies that provide traditional enterprise IT software, hardware and services.

Today, most of the technology innovation is coming from startups, not the large enterprise providers. Over the course of 2015, we have seen two trends pick up momentum: 1) consolidation in the major categories (software, hardware, and services) and 2) acquisitions by the big five. Each of them is making huge strides in different ways.

Here’s a quick rundown of the big five.

IBM guns for the developer

Knowing that the developer is the start of the development process, IBM is shifting gears toward solutions that address the new developer. Just look at the past 18 months alone.

  • February 2014: The Dev@Pulse conference showed a mix of COBOL developers alongside promotion of Bluemix. The attendees didn’t resemble your typical developer conference crowd. More details here.
  • April 2014: Impact conference celebrated 50 years of the mainframe. Impact also highlighted the SoftLayer acquisition and brought the integration of mobile and cloud.
  • October 2014: Insight conference goes further to bring cloud, data and Bluemix into the fold.
  • February 2015: InterConnect combines a couple of previous conferences into one. IBM continues the drive with cloud, SoftLayer and Bluemix while adding their Open Source contributions specifically around OpenStack.

SoftLayer (cloud), Watson (analytics) and Bluemix are strengths in the IBM portfolio. And now with IBM’s recent acquisition of BlueBox and partnership with Box, it doesn’t appear they are letting up on the gas. Add their work with Open Source software and it creates an interesting mix.

There are still significant gaps for IBM to fill. However, the message from IBM supports their strengths in cloud, analytics and the developer. This is key for the enterprise both today and tomorrow.

HP’s cloudy outlook

HP has long had a diverse portfolio that addresses the needs of the enterprise today and into the future. Of the big five providers, HP’s portfolio is among the best matched to those needs.

  • Infrastructure: HP’s portfolio of converged infrastructure and components is solid. Really solid. Much of it is geared for the traditional enterprise. One curious point is that their server components span the enterprise and service provider markets, while their storage products squarely target the enterprise to the exclusion of service providers. You can read more here.
  • Software: I have long felt that HP’s software group has a good bead on industry trends. They have a strong portfolio of data analytics tools with Vertica, Autonomy and HAVEn (being rebranded). HP’s march to support the Idea Economy is backed up by the solutions they’re putting in place. You can read more here.
  • Cloud: I have said that HP’s cloud strategy is an enigma. Unfortunately, discussions with the HP Cloud team at Discover this month further cemented that perspective. There is quite a bit of hard work being done by the Helion team, but the results are less clear. HP’s cloud strategy is directly tied to OpenStack, and their contributions to the project support this move.

HP will need to move beyond operating in silos and support a more integrated approach that mirrors the needs of their customers. While HP Infrastructure and Software are humming along, Helion cloud will need a renewed focus to gain relevance and mass adoption.

Microsoft’s race to lose

Above all other players, Microsoft still has the broadest and deepest relationships across the enterprise market today. Granted, many of those relationships are built upon their productivity apps, desktop and server operating systems, and core applications (Exchange, SQL, etc.). There is no denying that Microsoft probably has relationships with more organizations than any of the others.

Since Microsoft Office 365 hit its stride, enterprises are starting to take a second look at Azure and Microsoft’s cloud-based offerings. This still leaves a number of gaps for Microsoft, specifically around data analytics and open standards. Moving to open standards will require a significant cultural shift for Microsoft. Data analytics could come through the acquisition of a strong player in the space.

Oracle’s comprehensive cloud

Oracle has long been seen as a strong player in the enterprise space. Unlike many other players that provide the building blocks to support enterprise applications, Oracle provides the blocks and the business applications.

One of Oracle’s key challenges is that the solutions are heavy and costly. As enterprises move to a consumption-based model by leveraging cloud, Oracle found itself flat-footed. Over the past year or so, Oracle has worked to change that position with their cloud-based offerings.

On Monday, Executive Chairman, CTO and Founder Larry Ellison presented Oracle’s latest update in their race for the enterprise cloud business. Oracle is now providing the cloud building blocks from top to bottom (SaaS, PaaS, IaaS). The message is strong: Oracle is out to support both the developer and the business user through their transformation.

Oracle’s strong message to go after the entire cloud stack should not go unnoticed. In Q4 alone, Oracle cloud cleared $426M. That is a massive number. Even if they did a poor job of delivering solutions, one cannot deny the sheer size of the opportunity in front of them, which overshadows others.

Cisco’s shift to software

Cisco has long been the darling of the IT infrastructure and operations world. Their challenge has been to create a separation between hardware and software while advancing their position beyond the infrastructure realm.

In general, networking has advanced the least when compared with compute and storage infrastructure. As cloud and speed become the new mantra, the emphasis on networking becomes more important than ever.

As the industry moves to integrate both infrastructure and developers, Cisco will need to make a similar shift. Their work on SDN with ACI, along with their thought-leadership efforts, is making significant inroads with enterprises.

Summing it all up

Each of the big five is approaching the problem in its own way, with varying degrees of success. The bottom line is that each of them is making significant strides to remain relevant and support tomorrow’s enterprise. Equally important is how quickly they’re making the shift.

If you’re a startup, you will want to take note. These firms are no longer in your dust; they are also your potential exit strategy.

It will be interesting to watch how each evolves over the next 6-12 months. Yes, that is a very short timeframe, but it echoes the speed at which the industry is evolving.