Are the big 5 enterprise IT providers making a comeback?

Not long ago, many would have written off the big five enterprise IT firms as slow, lethargic, expensive and out of touch. Who are the big five? IBM (NYSE: IBM), HP (NYSE: HPQ), Microsoft (NASDAQ: MSFT), Oracle (NYSE: ORCL) and Cisco (NASDAQ: CSCO). Specifically, they are the companies that provide traditional enterprise IT software, hardware and services.

Today, most of the technology innovation is coming from startups, not the large enterprise providers. Over the course of 2015, we have seen two trends pick up momentum: 1) consolidation in the major categories (software, hardware and services) and 2) acquisitions by the big five. Each of the five is making huge strides in different ways.

Here’s a quick rundown of the big five.

IBM guns for the developer

Knowing that the developer sits at the start of the development process, IBM is shifting gears toward solutions that address the new developer. Just look at the past 18 months alone.

  • February 2014: The Dev@Pulse conference showed a mix of COBOL developers alongside promotion of Bluemix. The crowd didn’t resemble your typical developer conference audience.
  • April 2014: The Impact conference celebrated 50 years of the mainframe. Impact also highlighted the SoftLayer acquisition and the integration of mobile and cloud.
  • October 2014: The Insight conference went further, bringing cloud, data and Bluemix into the fold.
  • February 2015: InterConnect combined a couple of previous conferences into one. IBM continued the drive with cloud, SoftLayer and Bluemix while adding their open-source contributions, specifically around OpenStack.

SoftLayer (cloud), Watson (analytics) and Bluemix are strengths in the IBM portfolio. And now, with IBM’s recent acquisition of BlueBox and partnership with Box, it doesn’t appear they are easing off the gas. Add their work with open-source software and it creates an interesting mix.

There are still significant gaps for IBM to fill. However, the message from IBM supports their strengths in cloud, analytics and the developer. This is key for the enterprise both today and tomorrow.

HP’s cloudy outlook

HP has long had a diverse portfolio that addresses the needs of the enterprise today and into the future. Of the big five providers, HP’s portfolio is one of the best matched to those enterprise needs, both now and down the road.

  • Infrastructure: HP’s portfolio of converged infrastructure and components is solid. Really solid. Much of it is geared toward the traditional enterprise. One curious point is that their server components span the enterprise and service provider markets, while their storage products squarely target the enterprise to the exclusion of the service providers.
  • Software: I have long felt that HP’s software group has a good bead on industry trends. They have a strong portfolio of data analytics tools with Vertica, Autonomy and HAVEn (being rebranded). HP’s march to support the Idea Economy is backed up by the solutions they’re putting in place.
  • Cloud: I have said that HP’s cloud strategy is an enigma. Unfortunately, discussions with the HP Cloud team at Discover this month further cemented that perspective. There is quite a bit of hard work being done by the Helion team, but the results are less clear. HP’s cloud strategy is directly tied to OpenStack, and their contributions to the project support this move.

HP will need to move beyond operating in silos and support a more integrated approach that mirrors the needs of their customers. While HP’s infrastructure and software groups are humming along, Helion cloud will need a renewed focus to gain relevance and mass adoption.

Microsoft’s race to lose

Above all other players, Microsoft still has the broadest and deepest relationships across the enterprise market today. Granted, many of those relationships are built upon their productivity apps, desktop and server operating systems, and core applications (Exchange, SQL, etc.). There is no denying that Microsoft probably has relationships with more organizations than any of the others.

Since Microsoft Office 365 hit its stride, enterprises are starting to take a second look at Azure and Microsoft’s cloud-based offerings. This still leaves a number of gaps for Microsoft, specifically around data analytics and open standards. Moving to open standards will require a significant cultural shift for Microsoft. Data analytics could come through the acquisition of a strong player in the space.

Oracle’s comprehensive cloud

Oracle has long been seen as a strong player in the enterprise space. Unlike many other players that provide only the building blocks to support enterprise applications, Oracle provides both the blocks and the business applications.

One of Oracle’s key challenges is that the solutions are heavy and costly. As enterprises moved to a consumption-based model by leveraging cloud, Oracle found itself flat-footed. Over the past year or so, Oracle has worked to change that position with their cloud-based offerings.

On Monday, Executive Chairman, CTO and Founder Larry Ellison presented Oracle’s latest update in their race for the enterprise cloud business. Oracle is now providing the cloud building blocks from top to bottom (SaaS, PaaS, IaaS). The message is strong: Oracle is out to support both the developer and the business user through their transformation.

Oracle’s strong message that it is going after the entire cloud stack should not go unnoticed. In Q4 alone, Oracle cloud cleared $426M. That is a massive number. Even if they did a poor job of delivering solutions, one cannot deny the sheer size of an opportunity that overshadows the others.

Cisco’s shift to software

Cisco has long been the darling of the IT infrastructure and operations world. Their challenge has been to create a separation between hardware and software while advancing their position beyond the infrastructure realm.

In general, networking technology is one of the least advanced areas when compared with compute and storage infrastructure. As cloud and speed become the new mantra, networking becomes more important than ever.

As the industry moves to integrate infrastructure and developers, Cisco will need to make a similar shift. Their work in SDN with ACI, along with their thought-leadership pieces, is making significant inroads with enterprises.

Summing it all up

Each of the big five is approaching the problem in its own way with varying degrees of success. The bottom line is that each of them is making significant strides to remain relevant and support tomorrow’s enterprise. Equally important is how quickly they’re making the shift.

If you’re a startup, take note. These folks are no longer in your dust, and they are your potential exit strategy.

It will be interesting to watch how each evolves over the next 6-12 months. Yes, that is a very short timeframe, but it echoes the speed at which the industry is evolving.

What to watch for at HP Discover this week

This week marks HP’s annual Discover conference in Las Vegas. HP has come a long way in the past couple of years, and this year should prove interesting in a number of ways. Here is a list of items to watch over the coming couple of days:

Announcements: There are a couple of significant announcements planned this week. While the announcements themselves are interesting, the long-term impact on HP’s post-split strategy should prove the more interesting opportunity. Watch the keynotes Tuesday and Wednesday for more details.

Split Update: The HP split into two companies is not news. Look for more details on the progress of the split and what it means for each of the two entities. On the surface, and through a number of ‘hallway conversations’ I’ve had, it seems that the split is bringing greater focus to the enterprise teams. This is good for HP and for customers.

Software: The HP Software team is a large and diverse bunch. The areas I’m particularly interested in are the progress around HAVEn, Vertica and Autonomy. Initial conversations point to some really interesting progress for customers. As big data, analytics and data in general become front and center for organizations, look for this area to explode. We have only scratched the surface, and more opportunities abound. I’m looking at ways HP is educating customers on the value opportunities in a way they can consume. While there are common themes, we are moving to a ‘market of one’.

Cloud: The HP Helion cloud team has a number of things happening at the conference. I’m particularly interested in the progress they’ve made around commercial offerings of OpenStack and private cloud. Overall, cloud adoption is still very anemic (not just for HP). I’m looking for ways HP is creating onramps to cloud that reduce apprehension and increase adoption rates. Many of the challenges span more than the technology itself. Look for ways HP is engaging customers in new and different ways. In addition, watch for changes in how the solutions are shifting from supporting enterprises directly to supporting service providers. Bridging the gap here is key, and the needs are very different.

Infrastructure: Many enterprise customers still maintain a large infrastructure presence. Even if their strategy is to shift toward a cloud-first methodology, there are reasons to support internal infrastructure. Look for ways HP is evolving their infrastructure offerings to support today’s enterprise along with its evolution to a cloud-first model. As the sophistication of data increases, storage solutions will have to evolve to meet the ever-changing requirements. Similarly, it will be interesting to watch how solutions like Software Defined Networking (SDN) address the complexity of networking.

Wild Cards: There are a number of wild cards to watch for as well. The first is DevOps. DevOps is critical to the IT organization today and moving forward, and it applies differently to different orgs. Watch for the subject to be addressed in the keynotes. The second wild card is an update from HP Labs. HP Labs has a number of really interesting…and innovative solutions in the works. Look for an update on where things stand and how HP sees innovation changing.

Finally, I have a number of video interviews scheduled over the next couple of days where I dive deeper into each of these areas. Plus, I will cover an update on the state of the CIO. Look for links to those either via the #HPDiscover hashtag or on the blog after the show.

As always, feel free to comment and join the conversation on Twitter. The hashtag to follow is: #HPDiscover

The enterprise view of cloud, specifically Private Cloud, is confusing

Enterprise organizations are actively looking for ways to leverage cloud computing. Cloud presents the single largest opportunity for CIOs and the organizations they lead. The move to cloud is often part of a larger strategy for the CIO moving to a consumption-first paradigm. As the CIO charts a path along the cloud spectrum, private cloud provides a significant opportunity.

Adoption of private cloud infrastructure is anemic at best. Looking deeper into the problem, the reason becomes painfully clear: the marketplace is heavily fractured and quite confusing, even to the sophisticated enterprise buyer. After reading this post, one could question the feasibility of private cloud. The purpose of this post is not to present a case to avoid private cloud, but rather to expose the challenges to adoption and help build awareness toward solving them.

Problem statement

Most enterprises have a varied strategy for cloud adoption. Generally, there are two categories of applications and services:

  1. Existing enterprise applications: These may include legacy and custom applications. The vast majority were never designed for virtualization, let alone cloud. Even if there is an interest in moving to cloud, the cost and risk of moving (read: re-writing) these applications to cloud are extreme.
  2. Greenfield development: New applications, or those modified to support cloud-based architectures. Within the enterprise, greenfield development represents a small percentage compared with existing applications. On the other hand, web-scale and startup organizations are able to leverage almost 100% greenfield development.

 

Private Cloud Market Mismatch

The disconnect is that most cloud solutions in the market today suit greenfield development, but not existing enterprise applications. Ironically, most of the marketing buzz today is also geared toward solutions that service greenfield development, leaving existing enterprise applications in the dust.

Driving focus to private cloud

The average enterprise organization is faced with a cloud conundrum. Cloud, theoretically, is a major opportunity for enterprise applications. Yet the private cloud solutions are a mismatched potpourri of offerings, which makes them difficult to compare. In addition, private cloud may take different forms.

 

Private Cloud Models

Keep in mind that within the overall cloud spectrum, this is only private cloud. At the edges of private cloud, colocation and public cloud present a whole new set of criteria to consider.

Within the private cloud models, it would be easy if the only criteria were compute, storage and network requirements. The reality is that a myriad of other factors are the true differentiators.

The hypervisor and OpenStack phenomenon

The de facto hypervisor in enterprises today is VMware, yet not every provider supports it. Private cloud providers may support VMware along with other hypervisors such as Hyper-V, KVM and Xen. Yes, it is possible to move enterprise workloads from one hypervisor to another. That is not the problem. The problem is the amount of work required to address the intricacies of the existing environment. Unwinding the ball of yarn is not a trivial task and presents yet another hurdle. On the flip side, there are advantages to leveraging other hypervisors plus OpenStack.

Looking beyond the surface of selection criteria

There are about a dozen different criteria that often show up when evaluating providers. Of those, hypervisor, architecture, location, ecosystem and pricing models are just some of the top-line criteria.

In order to truly evaluate providers, one must delve further into the details to understand the nuances of each component. It is those details that can make the difference between success and failure. And each nuance is unique to the specific provider. As someone recently stated, “Each provider is like a snowflake.” No two are alike.

The large company problem

Compounding the problem is a wide field of providers trying to capture a slice of the overall pie. Even large, incumbent companies are failing miserably to deliver private cloud solutions. There are a number of reasons companies are failing.

Time to go!

Given all of these reasons, one may choose to hold off on considering private cloud solutions. That would be a mistake. Sure, there are a number of challenges to adopting private cloud solutions today. Yes, the marketplace is highly fractured and confusing. However, with work comes reward.

The more enterprise applications and services move to private cloud solutions, the more opportunities open for the CIO. The move to private cloud does not circumvent alternatives from public cloud and SaaS-based solutions. It does, however, help provide greater agility and focus for the IT organization compared to traditional infrastructure solutions.

Originally published @ Gigaom Research 2/16/2015

http://research.gigaom.com/2015/02/the-enterprise-view-of-cloud-specifically-private-cloud-is-confusing/

Time’s up! Changing core IT principles

There is a theme gaining ground within IT organizations, and a number of examples support it. This very theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:

Just because you (IT) can do something does not mean you should.

Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. Well, times have changed and so must the principles that govern the IT organization.

Apply it to the customization of applications and you get this:

Just because IT can customize applications to the nth degree does not mean they necessarily should.

A great example of this is in the configuration and customization of applications. Just because IT could customize the heck out of an application, should they have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. However, the reality is that it comes at a cost, sometimes a very significant and real one.

Making it real

Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but rather because they could be. Fast-forward a year or two. Now it is time to upgrade XYZ. The costs are significantly higher due to the customizations, requiring more planning, more testing and more work all around. Were those costs justified by the benefit of the customizations? Typically not.

Now it is time to evaluate alternatives for XYZ. ACME builds a requirements document based on XYZ (including the myriad of customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting alternatives.

These examples are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson here is not that customizations should be avoided. The lesson is to limit customizations to only those that are necessary and provide significant value.

And the lesson goes beyond configurations to understanding IT’s true value based on what it should and should not do.

Leveraging alternative approaches

Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?

Every good strategy starts with a firm understanding of the ecosystem of the business; that is, how the company operates and its interactions. A good target that many are finding success with sits furthest away from the core company operations, where it is hardest to explain true business value…in business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.

Preparing for the future today

Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers from a mobile platform in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many of the core systems and infrastructure are nowhere near ready to take on the changing requirements.

Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Is a DevOps methodology engaged? What about container-based architectures?

These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.

Originally posted @ Gigaom Research 1/26/2015

http://research.gigaom.com/2015/01/times-up-changing-core-it-principles/

HP charts a course for the enterprise CIO from the inside out

Last week, HP (NYSE: HPQ) held their Discover conference in Barcelona, Spain, the first since announcing their split into two major technology companies. Post-split, HP Enterprise, the half focused on enterprise-class solutions, will need to demonstrate a strong leadership position to remain relevant in the dynamic and ever-changing enterprise space. No small order for an incumbent as large as HP. The split, however, brings into focus a renewed vigor to go after the enterprise CIO.

Looking inside to look outside

Over the past two years, HP assembled a powerhouse of CIO talent. The talent is not an advisory council, but rather executive leadership within the HP machine. In August 2012, HP went outside to hire Ramon Baez as their Global CIO. Previously, Baez was Vice President and CIO at Kimberly Clark. Then, in July 2014, HP made two other significant CIO hires. Former Clorox SVP & CIO Ralph Loura joined HP as CIO of HP’s Enterprise Group. At the same time, HP hired Paul Chapman as CIO of HP Software. Paul was formerly VP of Global Infrastructure & Cloud Operations at VMware. All three are highly respected among both their CIO peers and fellow executive colleagues. And one only needs to spend a few minutes with each to see how their thinking aligns with HP’s vision of the New Style of IT.

In their former roles, all three individuals accomplished many of the very activities that HP is helping their customers with today. For HP as a provider of products, solutions and services, it only needs to look internally to gain insight on which direction to take. Think of it as having the inside track on the transformational CIO.

On day one of the conference, I had the opportunity to join Paul Chapman and Paul Muller, VP of Strategic Marketing, HP Software to discuss The Evolving CIO.

Emphasis on cloud and big data

At Discover Barcelona, HP’s Helion cloud solutions and Haven data solutions were front and center in each exhibit hall.


HP’s Helion cloud division continued their march toward an OpenStack-based ecosystem. The group, soon to be led by former Eucalyptus CEO Marten Mickos, is putting a strong showing behind the OpenStack platform with solutions that address enterprise challenges from private cloud to public cloud.

Even so, there is still quite a bit of work to be done by both HP and their customers. Enterprises are still, in large part, working out how best to leverage cloud-based solutions. In addition, OpenStack has its own set of challenges to become a viable product for the masses. HP’s intent is to bridge the gap between what the enterprise needs and the current state of the technology. Mickos’ new position heading up the Helion division is already starting to turn a battleship in great need of a significant course correction.

On the big data front, HP made a splash in June 2013 with their HAVEn set of core technologies. The idea was to bring together the best of both worlds from their acquisitions of Vertica and Autonomy. Since the announcement, the products were perceived as a grouping of parts rather than a cohesive solution. At Discover Barcelona, HP unveiled the updated Haven branding, which signifies the integration of the products into a more comprehensive solution.

While the marketing is coming together, it is unclear whether the broader appeal of Haven, beyond that of each component, is resonating with customers. Haven is, however, moving to a Helion application offered in the cloud or on-premises, which could appeal more broadly to enterprise CIOs.

Infrastructure incredibly important

At the conference, HP made it clear that infrastructure remains incredibly important. And from the size of the crowds around their Converged Systems areas, it would seem customers share that view. Anecdotally, the hardware areas were the most crowded sections of the exhibit floor.


Packed within the Converged Systems group is HP’s OneView management platform. Today, OneView provides management for the broader infrastructure portfolio. However, the real value will come from the ecosystem HP is building around the platform.

A comprehensive management platform is one area that will become increasingly important for the CIO facing a potpourri of different vendors, providers and solutions.

Devil in the details

Ultimately, for HP, the devil is in the details. For the enterprise CIO, however, HP presents some interesting potential in their portfolio. They do have some formidable challenges ahead as they split in two and bring focus to the enterprise of tomorrow. Neither task is easy, but it will be interesting to see how HP fares moving forward.

Originally posted @ Gigaom Research 12/8/2014

http://research.gigaom.com/2014/12/hp-charts-a-course-for-the-enterprise-cio-from-the-inside-out/

Think you are ready to build a cloud? Think again.

There is plenty of banter about what it truly takes to play in cloud. Do you think you are ready to jump into the deep end? Is your team ready? What about your processes? And finally, is the technology ready to take the leap? Read on before you answer.

Two months ago, I wrote “8 Reasons Not to Move to Cloud” to address the common reasons why organizations are hesitant to move to cloud. This post addresses the level at which one must play to build one’s own cloud. Having built cloud services myself, I can say from experience that it is not for the faint of heart.

Automation and agility

Traditional corporate infrastructure is typically neither automated nor agile. Meaning, a change in requirements or demand may constitute a change in architecture, which in turn requires a manual change to configurations. All of this takes time, which works against the real-time expectations of cloud provisioning.

Cloud-based solutions must have some form of automation and agility to address the changing demands coming from customers. Customers expect real-time provisioning of their resources. Speed is key here and only possible with automation. And a prerequisite for automation is standardization.
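To make that concrete, here is a minimal sketch of what real-time, automated provisioning can look like from the customer’s side. The endpoint, payload and token handling are hypothetical, not any specific provider’s API:

```python
import os

import requests  # third-party HTTP client (pip install requests)

# Hypothetical provider endpoint and token; no real API is implied here.
CLOUD_API = "https://cloud.example.com/api/v1"
API_TOKEN = os.environ.get("CLOUD_API_TOKEN", "dev-token")

def provision_server(name: str, flavor: str, image: str) -> str:
    """Request a new instance and return its ID, with no human in the loop."""
    resp = requests.post(
        f"{CLOUD_API}/servers",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"name": name, "flavor": flavor, "image": image},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    # A self-service portal would call this directly; capacity appears in
    # minutes because no ticket, approval or manual re-cabling is involved.
    print(provision_server("web-01", "m1.small", "ubuntu-14.04"))
```

None of this works unless every node the automation touches looks the same, which is exactly where standardization comes in.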

Standardization is key

The need for standardization is key when building cloud-based solutions. In order to truly enable automation, there must be a level of consistency around hardware configurations, architecture and the logical network. Even relatively small things such as BIOS version, NIC model and patch level can wreak havoc on cloud automation. From a corporate perspective, even the same model of server hardware could carry different BIOS versions, NICs and patch levels.
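To make the point concrete, here is a minimal drift-check sketch. The golden values and hard-coded inventory are purely illustrative; a real version would pull its inventory from a CMDB or an out-of-band management interface:

```python
# Illustrative "golden" configuration every node must match before
# the automation is allowed to treat it as interchangeable capacity.
GOLDEN = {"bios": "P89 v2.30", "nic": "Intel X540", "patch_level": "2015.06"}

inventory = [
    {"host": "node-01", "bios": "P89 v2.30", "nic": "Intel X540", "patch_level": "2015.06"},
    {"host": "node-02", "bios": "P89 v2.10", "nic": "Intel X540", "patch_level": "2015.04"},
]

for node in inventory:
    drift = {key: node[key] for key in GOLDEN if node[key] != GOLDEN[key]}
    if drift:
        # Any drift is flagged before the node joins the automation pool.
        print(f"{node['host']} out of spec: {drift}")
    else:
        print(f"{node['host']} matches the golden configuration")
```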

Add logical configurations such as network topology, and the complexities start to mount. Where are the switches? How is the topology configured? Which protocols are in play for which sections of the network? One can see how a very small hiccup can throw things out of whack pretty quickly.

For the average corporate environment, managing physical and logical configurations at this level is challenging. Even for those operating at scale, it is a challenge. This is one reason why those at scale build their own systems; so they can control the details.

The scale problem

At scale, however, the challenge is more than just numbers. Managing at scale requires a different mode of thinking. In a traditional corporate environment, when a server fails, an alert goes off and someone is dispatched to replace the failed component. In parallel, the impacted workload is moved or changed to limit the impact on users. These issues can range from a small fire to a three-alarm inferno.

At scale, those processes simply collapse under the stress. Manual intervention for every issue is not an option.

The operations math problem

First, the cloud architecture must endure multiple hardware failures. Single points of failure must come out of the equation as much as possible. I wrote about this back in 2011 in my post “Clouds, Failure and Other Things That Go Bump in the Night.” This is where we turn to probability and statistics. There will be hardware failures. Even entire data centers will fail. The challenge is to shift our operational thinking toward assuming failure. I detail this a bit further in my post “Is the cloud unstable and what can we do about it?”
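As a back-of-the-envelope illustration, assume a 4% annual server failure rate (an assumed figure for this sketch, not a vendor statistic). The math shows why dispatching a human for every failure collapses at scale:

```python
p_fail = 0.04  # assumed annual failure rate per server (illustrative only)

for n in (40, 1_000, 50_000):
    # Probability that at least one server fails this year: 1 - (1 - p)^n
    p_any = 1 - (1 - p_fail) ** n
    expected = n * p_fail  # expected number of failed servers per year
    print(f"{n:>6} servers: P(any failure) = {p_any:.3f}, "
          f"expected failures/year ~ {expected:.0f}")
```

At 40 servers a failure is an event; at 50,000 servers, failure is a constant background condition, roughly 2,000 dead machines a year under these assumptions, and the system has to route around them automatically.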

Discipline, discipline, discipline

All of this leads to a required change in discipline. No longer is one able to simply run into the data center to fix something. No longer are humans able to fix everything manually. In fact, the level of discipline goes way up with cloud. Even the smallest mistake can have cataclysmic consequences. Refer to the November Microsoft Azure outage that was caused by a ‘performance update’. Process, operations, configurations and architectures must all rise to a higher level of discipline.

Consider the consequences

Going full-circle, the question is: Should an enterprise or corporate entity consider building private clouds? For the vast majority of organizations, the answer should be no. But there are exceptions. Refer to my post way back in 2009 on the “Importance of Private Clouds.” Internal private clouds may present challenges for some, but hosted private clouds provide an elegant alternative.

In the end, building clouds is hard and complicated. Is it plausible for an enterprise to build its own cloud? Yes. Typically, it comes as a specific solution to a specific problem. But the hurdle is pretty high…and getting higher every day. Consider the consequences before making the leap.

 

Originally posted @ Gigaom Research 12/1/2014

http://research.gigaom.com/2014/12/think-you-are-ready-to-build-a-cloud-think-again/

CIOs are getting out of the Data Center business

More than three years ago, a proclamation was made (by me, Mark Thiele and Jan Wiersma) that the data center was dead. Ironically, all three of us come from an IT background of running data centers within IT organizations.

The proclamation came at an event in London, England, where the attendees were utterly dumbfounded by such a statement. Keep in mind that this was a data center-specific event. To many, the statement was an act of heresy.

But the statement held truth, and the start of a movement was already afoot. Ironically, companies in leading roles were already starting down this path. It was just going to take some time before the concept became common thinking. Even today, it still is not common thinking, but the movement is starting to gain momentum. Across the spectrum of industries, from healthcare to financial services, CIOs and their contemporaries are generally making a move away from data centers. Specifically, they are moving away from managing their own, dedicated corporate data centers.

Enterprise data center assets

And by data center, we are talking about the physical critical facility and equipment, not the servers, storage or network equipment within the facility. These data center assets (the facilities) are moving into a new phase of maturity. While still needed, companies are realizing that they no longer need to manage their own critical facilities in order to provide the level of service required.

Moving along the spectrum

As companies look at alternatives to operating their own data centers, there are a number of options available today. For many years, colocation, or the renting of data center space, was the only viable option for most. Today, the options for colocation vary widely as do alternatives in the form of cloud computing. Companies are moving along a spectrum from traditional corporate data centers to public cloud infrastructure.

 

The Data Center to Cloud Spectrum

It is important to note that companies will not move entire data centers (en masse) from one mode to another. The specific applications, or workloads, will spread across the spectrum with some number leveraging most if not all of the modes. Over time, the distribution of workloads shifts toward the right and away from corporate data centers.

In addition, even those moving across the spectrum may find that they are not able to completely reduce their corporate data center footprint to zero for some time. There may be a number of reasons for this, but none should preclude an effort to reduce the corporate data center footprint.

Additional cloud models

For clarity’s sake, Platform as a Service (PaaS) and Software as a Service (SaaS) are intentionally omitted. Yet both ultimately play a role in the complexity of planning a holistic cloud strategy. As with any strategy discussion, one needs to consider many factors beyond simply the technology, or in this case the facility, when making critical decisions.

Starting with colocation

Last year, I wrote a blog post titled “Time to get on the colocation train before it is too late.” The premise foretold the impending move from corporate data centers to colocation facilities. While colocation facilities are seeing an uptick in interest, the momentum is only now starting to build.

For many IT organizations, the first step along the spectrum is moving from their corporate data center to colocation. Moving infrastructure services from one data center to another is not a trivial step, but it is a very important one. Moving a data center will test an IT organization in many ways and highlight opportunities for improvement in its quest to ultimately leverage cloud computing. One of those is the ability to fully operate ‘lights out’, without the ability to physically enter the data center. Unlike the corporate data center down the hall, a colocation facility may physically be 1,000 miles away or more!

Where to go from here

Plan, plan and plan. Moving a data center can take months to years, even with aggressive planning. Start by thinking about what is strategic, differentiating and supportive of the corporate strategy. Consider the options that exist in both the colocation and cloud marketplaces. You might be surprised how far the colocation marketplace has evolved in just the past few years! And that is just the start.

The opportunities for CIOs and their IT organizations are plentiful today. Getting out of the data center business is just one of the first moves that more and more CIOs are starting to make. Move where it makes sense and seize the opportunity. For those already down this path, the results can be quite liberating!

 

Originally posted @ Gigaom Research 11/10/14

http://research.gigaom.com/2014/11/cios-are-getting-out-of-the-data-center-business/

There comes a point when it is not just about storage space

Is the difference between cloud storage providers about free space? In a word, no. I wrote about the cloud storage wars and the potential bubble here:

The cloud storage wars heat up

http://avoa.com/2014/04/29/the-cloud-storage-wars-heat-up/

4 reasons cloud storage is not a bubble about to pop

http://avoa.com/2014/03/24/4-reasons-cloud-storage-is-not-a-bubble-about-to-pop/

Each of the providers is doing their part to drive value into their respective solutions. To some, value includes the amount of ‘free’ disk space included. Just today, Microsoft upped the ante by offering unlimited free space for their OneDrive and OneDrive for Business solutions.

Is there value in the amount of free space? Maybe, but only to a point. Once providers offer an amount above normal needs (or unlimited), the value becomes null. I do not have statistics, but I would hazard a guess that ‘unlimited’ is more marketing leverage, as most users consume less than 50GB each.

Looking beyond free space

Once a provider offers unlimited storage, one needs to look at the features and functionality of the solution. Not all solutions are built the same, nor do they offer similar levels of functionality. Enterprise features, integration, ease of use and mobile access are just a few of the differentiators. Even with unlimited storage, if the solution does not offer the features you need, the storage value is greatly diminished.

The big picture

For most, cloud storage is about replacing a current solution. On the surface, the amount of free storage is a quick pickup. However, the real issue is compatibility and value beyond just the amount of free storage. Does the solution integrate with existing solutions? How broad is the ecosystem? What about Single Sign-On (SSO) support? How much work will it take to implement and train users? These are just a few of the factors that must be considered.

 

Originally posted @ Gigaom Research 10/27/14

http://research.gigaom.com/2014/10/there-comes-a-point-when-it-is-not-just-about-storage-space/

Is the cloud unstable and what can we do about it?

Originally posted @ Gigaom Research 9/29/2014

http://research.gigaom.com/2014/09/is-the-cloud-instable-and-what-can-we-do-about-it/

 

The recent major reboots of cloud-based infrastructure by Amazon and Rackspace have resurfaced the question of cloud instability. Days before the reboots, both Amazon and Rackspace noted that they were due to a vulnerability in Xen. Barb Darrow of Gigaom covered this in detail. Ironically, all of this came less than a week before the action took place, leaving many flat-footed.

Outages are not new

First, let us admit that outages (and reboots) are not unique to cloud-based infrastructure. Traditional corporate data centers face unplanned outages and regular system reboots. For Microsoft-based infrastructure, reboots may happen monthly due to security patch updates. Back in April 2011, I wrote a piece titled Amazon Outage Concerns are Overblown. Amazon had just endured another outage of their Virginia data center that very day. In response, customers and observers took shots at Amazon. But is Amazon’s outage really the problem? In the piece, I suggested that customers were misunderstanding the problem when thinking about cloud-based infrastructure services.

Cloud expectations are misguided

As with the piece back in 2011, the expectations of cloud-based infrastructure have not changed much for enterprise customers. The expectation has been (and still is) that cloud-based infrastructure is resilient, just like that within the corporate data center. The truth is very different. There are exceptions, but the majority of cloud-based infrastructure is not built for hardware resiliency. That is by design. Service providers expect that application and service resiliency rests further up the stack when you move to cloud. That is very different from traditional application architectures found in the corporate data center, where infrastructure provides the resiliency.

Time to expect failure in the cloud

Like the web-scale applications using cloud-based infrastructure today, enterprise applications need to rethink their architecture. If the assumption is that infrastructure will fail, how will that impact architectural decisions? When leveraging cloud-based infrastructure services from Amazon or Rackspace, this paradigm plays out well. If you lose the infrastructure, the application keeps humming away. Take out a data center, and users are still not impacted. Are we there yet? Nowhere close. But that is the direction we must take.
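At the application level, this shift can start small. Below is a minimal sketch of one such pattern, retrying a flaky call with exponential backoff; the flaky_read function is a stand-in for a network call, not any particular provider’s SDK:

```python
import random
import time

def call_with_retries(fn, attempts=5, base_delay=0.5):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            # Back off 0.5s, 1s, 2s, ... plus jitter to avoid retry storms.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

def flaky_read():
    # Stand-in for a call to infrastructure that sometimes fails.
    if random.random() < 0.3:
        raise ConnectionError("transient infrastructure failure")
    return "payload"

print(call_with_retries(flaky_read))
```

In a real system, a retry would also fail over to a replica in another zone or region, so that losing one data center does not take down the application.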

Getting from here to there

Hypothetically, if an application were built with the expectation of infrastructure failure, the recent failures would not have impacted delivery to the user. Going further, imagine if the application could withstand a full data center outage and/or a cut in a core intercontinental undersea fiber. If the expectation were complete infrastructure failure, the results would be quite different. Unfortunately, the reality is just not there…yet.

The vast majority of enterprise applications were never designed for cloud. Therefore, they need to be tweaked, re-architected or, worse, completely rewritten. There is a real cost to do so! Just because an application could be moved to cloud does not mean the economics are there to support it. Each application needs to be evaluated individually.

Building the counterargument

Some may say that this whole argument is hogwash, so let us take a look at the alternative. Building cloud-based infrastructure to be resilient like its corporate brethren would result in a very expensive venture, at a minimum. Infrastructure is expensive. Back in the 1970s, a company called Tandem Computers addressed this with their NonStop system. In the 1990s, the Tandem NonStop Himalayan-class systems were all the rage…if you could afford them. NonStop was particularly interesting for financial services organizations that 1) could not afford the downtime and 2) had the money to afford the system. Tandem was eventually acquired by Compaq, which in turn was acquired by HP; NonStop lives on as part of HP’s Integrity NonStop products. Yet even with all of that infrastructure redundancy, many are still just one data center outage away from impacting an application. The bottom line: it is impossible to build a 100% resilient infrastructure, both because it is cost prohibitive and because it becomes a statistical probability problem. For many, the decision comes down to the statistical probability of an outage compared with the cost of the protections taken.
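A quick worked example makes the cost-versus-probability tradeoff visible. The availability figures are assumed, purely for illustration:

```python
single = 0.99  # assumed availability of one component (illustrative only)

serial = single ** 3               # three components in series: all must be up
redundant = 1 - (1 - single) ** 2  # two in parallel: only one must be up

print(f"Three components in series: {serial:.4%}")   # ~97.0299%
print(f"Two redundant components:   {redundant:.4%}")  # 99.9900%
```

Each additional “nine” of availability demands another full copy of the infrastructure, which is exactly why chasing 100% resiliency at the hardware layer becomes a cost and probability problem rather than an engineering one.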

Making the move

Over the past five years or so, companies have looked at the economics to build redundancy (and resiliency) at the infrastructure layer. The net result is a renewed focus on moving away from infrastructure resiliency and toward low-cost hardware. The thinking is: infrastructure is expensive and resiliency needs to move up the stack. The challenge is changing the paradigm of how application redundancy is handled by developers of corporate applications.

Seven Things the CIO should consider when adopting a holistic cloud strategy

Originally posted @ Gigaom Research 8/25/14

http://research.gigaom.com/2014/08/seven-things-the-cio-should-consider-when-adopting-a-holistic-cloud-strategy/

 

As conversations about cloud computing continue to focus on IT’s inability to adopt it holistically, organizations outside of IT continue their cloud adoption trek outside the purview of IT. While many of these efforts are considered Shadow IT and are frowned upon by the IT organization, they are simply a response to a wider problem.

The IT organization needs to adopt a holistic cloud strategy. However, are CIOs really ready for this approach? Michael Keithley, Creative Artists Agency’s CIO, just returned from CIO Magazine’s CIO 100 Symposium, which brings together the industry’s best IT leaders. In his blog post, he notes that “(he) was shocked to find that even among this elite group of CIOs there were still a significant amount of CIOs who were resisting cloud.” While that perspective is widely shared, it does not represent all CIOs. There is still a good number of CIOs who have moved to a holistic cloud strategy. The problem is that most organizations are still in a much earlier state of adoption.

In order to develop a holistic cloud strategy, it is important to follow a well-defined process. The four steps are straightforward and fit just about any organization:

  1. Assess: Provide a holistic assessment of the entire IT organization, applications and services that is business focused, not technology focused. The CIO is a business leader who happens to have responsibility for technology. Understand what is differentiating and what is not.
  2. Roadmap: Use the options and recommendations from the assessment to provide a roadmap. The roadmap outlines priority and valuations that ultimately drive the alignment of IT.
  3. Execute: This is where the rubber hits the road. IT organizations will learn more about themselves through action. For many, it is important to start small (read: lower risk) and ramp up quickly.
  4. Re-Assess & Adjust: As the IT organization starts down the path of execution, lessons are learned and adjustments are needed. Those adjustments will span technology, organization, process and governance. Continual improvement is key to staying in tune with the changing demands.

For many, following this process alone is not enough to develop a holistic cloud strategy. In order to successfully leverage a cloud-based solution, several things need to change that may contradict current norms. Today, cloud is leveraged in many ways from Software as a Service (SaaS) to Infrastructure as a Service (IaaS). However, it is most often a very fractured and disjointed approach to leveraging cloud. Yet, the very applications and services in play require that organizations consider a holistic approach in order to work most effectively.

When considering a holistic cloud strategy, there are a number of things the CIO needs to consider, including these seven:

  1. Challenge the Status Quo: This is one of the hardest changes, as the culture within IT developed over decades. One example: changing the mindset that critical systems may not reside outside your own data center is not trivial. On the other hand, leading CIOs are already “getting out of the data center business.” Do not get trapped by cultural norms and the status quo.
  2. Differentiation: Consider which applications and services are true differentiators for your company. Focus on the applications and services that provide strategic value and shift more common functions (e.g., email) to alternative solutions like Microsoft Office 365 or Google Apps.
  3. Align with Business Strategy: Determine how IT can best enable and catapult the company’s business strategy. If IT is interested in making a technology shift, consider if it will bring direct positive value to the business strategy. If it does not, one should ask a number of additional questions determining the true value of the change. With so much demand on IT, focus should be on those changes that bring the highest value and align with the business strategy.
  4. Internal Changes: Moving to cloud changes how organizations, processes and governance models behave. A simple example is how business continuity and disaster recovery processes will need to change in order to accommodate the introduction of cloud-based services. For organizations, cloud presents both an excitement of something new and a fear from loss of control and possible job loss. CIOs need to ensure that this area is well thought out before proceeding.
  5. Vendor Management: Managing a cloud provider is not like every other existing vendor relationship. Vendor management comes into sharp focus with the cloud provider that spans far more than just the terms of the Service Level Agreement (SLA).
  6. Exit Strategy: Think about the end before getting started. Exiting a cloud service can happen for good or bad reasons. Understand what the exit terms are and in what form your data will exist. Exporting to a flat file could present a challenge if the data lives in a structured database, and that may be the extent of the provider’s responsibility (see the sketch after this list). When considering alternative providers, recognize that shifting workloads across providers is not necessarily as trivial as it might sound. It is important to think this through before engaging.
  7. Innovation: Actively seek out ways to adopt new solutions and methodologies. For example, understand the value of DevOps, OpenStack, containers and converged infrastructure. Each of these may challenge traditional thinking, which is OK.
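On the exit-strategy point above, a provider’s ‘flat file export’ often amounts to something like the following sketch. The database, table and column names are hypothetical, and an in-memory database with one sample row stands in for the real system:

```python
import csv
import sqlite3

# Hypothetical exit-strategy export: pull a structured table into a flat CSV.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'ACME Co', 'it@acme.example')")

rows = conn.execute("SELECT id, name, email FROM customers")

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "email"])  # header row
    writer.writerows(rows)

conn.close()
# The flat file itself is portable; what does not survive are the
# relationships, permissions and audit history in the original system.
# That gap is why switching providers is rarely as trivial as it sounds.
```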

Those are seven of the top issues that often come up in the process of setting a holistic cloud strategy. Cloud offers the CIO, the IT organization and the company as a whole one of the greatest opportunities today. Cloud is significant, but it is only the tip of the iceberg. For the CIO and their organization, many more opportunities beyond cloud are already in the works.