Seven Things the CIO Should Consider When Adopting a Holistic Cloud Strategy

Originally posted @ Gigaom Research 8/25/14

http://research.gigaom.com/2014/08/seven-things-the-cio-should-consider-when-adopting-a-holistic-cloud-strategy/

 

As conversations about cloud computing continue to focus on IT’s inability to adopt it holistically, organizations outside of IT continue their cloud adoption trek outside the purview of IT. While many of these efforts are labeled Shadow IT and frowned upon by the IT organization, they are simply a response to a wider problem.

The IT organization needs to adopt a holistic cloud strategy. However, are CIOs really ready for this approach? Michael Keithley, Creative Artists Agency’s CIO, just returned from CIO Magazine’s CIO 100 Symposium, which brings together the industry’s best IT leaders. In his blog post, he notes that “(he) was shocked to find that even among this elite group of CIOs there were still a significant amount of CIOs who were resisting cloud.” While that perspective is widely shared, it does not represent all CIOs. A good number of CIOs have already moved to a holistic cloud strategy. The problem is that most organizations are still at a much earlier stage of adoption.

In order to develop a holistic cloud strategy, it is important to follow a well-defined process. The four steps are straightforward and fit just about any organization:

  1. Assess: Provide a holistic assessment of the entire IT organization, applications and services that is business focused, not technology focused. The CIO is a business leader who happens to have responsibility for technology. Understand what is differentiating and what is not.
  2. Roadmap: Use the options and recommendations from the assessment to build a roadmap. The roadmap outlines priorities and valuations that ultimately drive the alignment of IT.
  3. Execute: This is where the rubber hits the road. IT organizations will learn more about themselves through action. For many, it is important to start small (read: lower risk) and ramp up quickly.
  4. Re-Assess & Adjust: As the IT organization starts down the path of execution, lessons are learned and adjustments are needed. Those adjustments will span technology, organization, process and governance. Continual improvement is a hallmark of staying in tune with changing demands.

For many, following this process alone is not enough to develop a holistic cloud strategy. To successfully leverage a cloud-based solution, several things need to change in ways that may contradict current norms. Today, cloud is leveraged in many ways, from Software as a Service (SaaS) to Infrastructure as a Service (IaaS). However, the approach is most often fractured and disjointed. Yet the very applications and services in play require that organizations take a holistic approach in order to work most effectively.

When considering a holistic cloud strategy, there are a number of things the CIO needs to consider, including these seven:

  1. Challenge the Status Quo: This is one of the hardest changes, as the culture within IT developed over decades. Changing the mindset that critical systems may not reside outside your own data center, for example, is not trivial. On the other hand, leading CIOs are already “getting out of the data center business.” Do not get trapped by cultural norms and the status quo.
  2. Differentiation: Consider which applications and services are true differentiators for your company. Focus on the applications and services that provide strategic value and shift more common functions (e.g., email) to alternative solutions like Microsoft Office 365 or Google Apps.
  3. Align with Business Strategy: Determine how IT can best enable and catapult the company’s business strategy. If IT is interested in making a technology shift, consider whether it will bring direct positive value to the business strategy. If it does not, ask additional questions to determine the true value of the change. With so much demand on IT, focus should be on the changes that bring the highest value and align with the business strategy.
  4. Internal Changes: Moving to cloud changes how organizations, processes and governance models behave. A simple example is how business continuity and disaster recovery processes will need to change to accommodate the introduction of cloud-based services. For organizations, cloud presents both the excitement of something new and the fear of lost control and possible job loss. CIOs need to ensure that this area is well thought out before proceeding.
  5. Vendor Management: Managing a cloud provider is not like managing any other existing vendor relationship. Vendor management comes into sharp focus with a cloud provider and spans far more than just the terms of the Service Level Agreement (SLA).
  6. Exit Strategy: Think about the end before getting started. Exiting a cloud service can happen for good or bad reasons. Understand what the exit terms are and in what form your data will exist. Exporting a flat file could present a challenge if the data is in a structured database (see the sketch after this list). However, that may be the extent of the provider’s responsibility. When considering alternative providers, recognize that shifting workloads across providers is not as trivial as it might sound. It is important to think this through before engaging.
  7. Innovation: Actively seek out ways to adopt new solutions and methodologies. For example, understand the value of DevOps, OpenStack, containers and converged infrastructure. Each of these may challenge traditional thinking, and that is OK.
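
On the exit strategy point, here is a minimal sketch of what a flat-file export of a structured database can look like (the schema and file names are hypothetical). Each table lands in its own CSV file; relationships between tables survive only as raw ID columns:

```python
# Minimal sketch: dumping a structured database to flat CSV files.
# The schema and file names are hypothetical illustrations.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Acme Corp');
    INSERT INTO orders VALUES (100, 1, 250.0);
""")

def export_table(table: str) -> None:
    """Dump one table to <table>.csv with a header row."""
    cursor = conn.execute(f"SELECT * FROM {table}")
    with open(f"{table}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])
        writer.writerows(cursor)

# Each table becomes an isolated flat file; the foreign-key
# relationship is now just a bare customer_id column in orders.csv.
for table in ("customers", "orders"):
    export_table(table)
```

The provider’s obligation may end at handing over files like these; rebuilding the schema, constraints and application integrations elsewhere is part of the exit plan.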

Those are seven of the top issues that often come up in the process of setting a holistic cloud strategy. Cloud offers the CIO, the IT organization and the company as a whole one of the greatest opportunities today. Cloud is significant, but only the tip of the iceberg. For the CIO and their organization, there are many more opportunities beyond cloud today that are already in the works.

The Number of 9’s Don’t Matter, but Business Metrics Do

Originally posted @ Gigaom Research 8/11/14

http://research.gigaom.com/2014/08/the-number-of-9s-dont-matter-but-business-metrics-do/

Information Technology (IT) organizations across the globe use a number of metrics to measure their success, failure and standing. One of the more popular is the ‘number of 9’s’ as a measure of system uptime. Why use 9’s? Because it is relatively easy for technology organizations to measure system performance. Unfortunately, the metric means little outside of IT.

What Are 9’s?

The number of 9’s refers to the percentage of system uptime. Typically, we hear about three 9’s, four 9’s or five 9’s. Three 9’s refers to 99.9% uptime, or 0.1% downtime, whereas five 9’s refers to an ever-elusive 99.999% uptime, or a mere 0.001% downtime.

These metrics have been used for a very long time, from internal IT organizations reporting status to Service Level Agreements (SLAs) from service providers. The number of 9’s is used as a metric to set performance targets…and measure progress toward them. The problem is, they are technology focused. Looking at the inverse as a function of downtime, the levels equate to the following table:

Uptime               Maximum downtime per year
99.9% (three 9’s)    8.76 hours
99.99% (four 9’s)    52.56 minutes
99.999% (five 9’s)   5.26 minutes

Even at four 9’s, that equates to a maximum of only 52.56 minutes of downtime per year. Unfortunately, this means very little if the company is in retail and those 52 minutes of downtime came during Black Friday or Cyber Monday. In addition, the number may be artificially low, as other factors may not be included in the calculation.
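
The arithmetic behind the table is simple enough to sketch in a few lines of Python:

```python
# Convert an availability percentage into allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Maximum downtime per year at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three 9's", 99.9), ("four 9's", 99.99), ("five 9's", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):,.2f} minutes per year")
```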

The Fallacy of Planned vs. Unplanned Downtime

First, it is important to differentiate between scheduled downtime and unplanned downtime (outages). Most measure their system performance based on the amount of unplanned downtime and exclude any scheduled downtime from the calculations. There has been an ongoing debate for years over whether to include scheduled downtime.

Arguably, if a system is down (planned or unplanned), it is still down and unavailable. In today’s world of 24×7, 100% uptime expectations, planned downtime must be considered. Including planned downtime causes uptime figures to drop, and may force a rethinking of how applications and services are architected.
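
To make the point concrete, here is a small sketch with hypothetical numbers: a system that achieves four 9’s on unplanned outages alone, but also takes a four-hour planned maintenance window each month:

```python
# Hypothetical numbers: the effect of counting planned downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

unplanned_hours = 52.56 / 60  # four 9's of unplanned downtime (~0.88 hours)
planned_hours = 12 * 4        # assumed 4-hour maintenance window each month

unplanned_only = 100 * (1 - unplanned_hours / HOURS_PER_YEAR)
combined = 100 * (1 - (unplanned_hours + planned_hours) / HOURS_PER_YEAR)

print(f"Excluding planned downtime: {unplanned_only:.3f}%")  # 99.990%
print(f"Including planned downtime: {combined:.3f}%")        # 99.442%
```

Counting the maintenance windows drops the same system from four 9’s to well below three, which is exactly the kind of figure that prompts a rethink of how applications are architected.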

Technology Metrics

In today’s world, do these metrics even make sense anymore? They are not business metrics…unless you are a service provider whose business is uptime. For the majority of IT organizations, these are just ‘technology’ metrics that have little to no relevance to the business at hand. Just ask a line-of-business owner what five 9’s means to their line of business. For IT, it is hard to connect the dots between percentage uptime and true business impact. And business impact here means impact measured in dollars.

Business Metrics

If not 9’s, what business metrics should IT focus on? Most companies use a common set of metrics to gauge business progress. Those may include Cost to Acquire a Customer (CAC), Lifetime Value of Customer (LVC) and Gross Margin. Customer engagement is a key area of focus that includes customer acquisition, retention and churn. To IT, these metrics may seem foreign. To the company, however, they are very real. Increasingly, IT must connect the dots between a new technology and the value it brings to business metrics. As IT evolves into a business-focused organization, so should its metrics of success.
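
For IT leaders unfamiliar with these metrics, here is a rough sketch of how two of them are commonly computed (all figures are hypothetical):

```python
# Hypothetical figures: computing two of the business metrics above.
sales_marketing_spend = 2_000_000  # assumed annual sales + marketing spend
new_customers = 4_000              # assumed customers acquired that year

cac = sales_marketing_spend / new_customers  # Cost to Acquire a Customer

avg_revenue_per_year = 1_200  # assumed revenue per customer per year
gross_margin = 0.70           # assumed 70% gross margin
annual_churn = 0.15           # assumed 15% of customers lost per year

# A simple LVC model: margin per year times average customer lifetime.
avg_lifetime_years = 1 / annual_churn
lvc = avg_revenue_per_year * gross_margin * avg_lifetime_years

print(f"CAC: ${cac:,.0f}")                  # $500
print(f"LVC: ${lvc:,.0f}")                  # $5,600
print(f"LVC-to-CAC ratio: {lvc / cac:.1f}") # 11.2
```

The point is not the particular model, but that IT investments can be expressed in the same units the business already uses.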

The Role of the CIO

The CIO, above all others, is best positioned to take the lead in this transformation. Instead of looking for ways to express technological impact, look for ways to express business impact. It may seem like a subtle change in nomenclature, but the impact is huge. Business metrics provide a single view that all parts of a company can directly work toward improving.

A good starting point is to understand how the company makes money. Start by reading the income statement, balance sheet and cash flow statement. Are there any hotspots that IT can contribute to? And what (business) metrics should IT use to measure its progress?

Not only will this shift IT thinking to be business focused, it will also foster better alignment with other business leaders across the company.

Death of the Data Center

Back in 2011, Mark Thiele (@mthiele10), Jan Wiersma (@jmwiersma) and I shared the stage at a conference in London, England for a panel discussion on the future of data centers. The three of us are founding board members of Data Center Pulse, an industry association of data center owners and operators with over 6,000 members spanning the globe.

Our common theme for the panel: Death of the Data Center. Our message was clear and poignant. After decades of data center growth, a significant change was both needed and on the horizon. And this change was about to turn the entire industry on its head. The days of building and operating data centers of all shapes, sizes and types throughout the world were about to end. The way data centers are consumed has changed.

Fast-forward the clock to 2014, a different conference (ECF/DCE) and a different city (Monte Carlo, Monaco). The three of us shared the stage once again to touch on a variety of subjects ranging from SMAC to DCIM to the future of data centers. During my opening keynote presentation on the first day, I referred back to our statement from three years earlier professing “Death of the Data Center.”

Of course, making this statement at a cloud and data center conference might have bordered on heresy. But the point still needed to be made, and it was more important than ever. The tectonic shift we discussed three years earlier in London was already starting to play out. Yet the industry as a whole was still trying to ignore the fact that evolution was taking over. And by industry I mean both internal IT organizations and data center and service providers. How we look at data centers was changing, and neither side was ready to admit change was afoot.

The Tectonic Data Center Evolution

During the economic downturn in 2008 and 2009, a shift in IT spending took place. At the same time, cloud computing was truly making its entrance. Companies of all sizes (and their IT organizations) were pulling back their spending and rethinking what ‘strategic spending’ really meant. Coming into focus were the significant costs associated with owning and operating data centers. The common question: Do we really still need our own data center?

This is a tough question for those who always believed that data, applications and systems needed to be in their own data center in order to be 1) manageable and 2) secure. Neither of those holds true today. In fact, by many accounts, the typical enterprise data center is less secure than the alternatives (colocation or cloud).

The reality is: this shift has already started, but we are still in the early days. Colocation is not new, but the options and the maturity of the alternatives are getting more and more impressive. The cloud solutions that are part of a data center’s ecosystem are equally impressive.

Data Center Demand

Today, there is plenty of data center capacity. However, not much new capacity is being built by data center providers, due to the fear of overcapacity and idle resources. The problem comes when the demand from enterprises starts to ramp up: it takes years to bring a new data center facility online. We know the demand is coming, but not when. And when it does, it will create a constraint on data center capacity until new capacity is built. I wrote about this in my post Time to get on the Colocation Train Before it is Too Late.

Are Data Centers Dying?

Are data centers going away? In a word, no. However, if you are an enterprise running your own data center, expect a significant shift. At a minimum, your existing data center is shrinking, if not completely going away. And if you are in an industry with regulatory or compliance requirements, the changes still apply. I have worked with companies in some of the most regulated and sensitive industries, including healthcare, financial services and government intelligence communities. All of them are considering some form of colocation and cloud today.

Our point was not to outline a general demise of data centers, but to communicate an impending shift in how data centers are consumed. For some, a demise of data centers was indeed coming. For others, it would generate significant opportunity. The question is: where are you in this equation, and are you prepared for the impending shift?

HP Launches Helion to Address Enterprise Cloud Adoption

Today, HP takes a huge step forward to address the broad and evolving enterprise cloud demand with their HP Helion announcement. HP Helion presents HP’s strategy to provide a comprehensive cloud portfolio. As HP’s CEO Meg Whitman put it, “HP is in it to win.” HP is investing over $1 billion in its cloud-based solutions. It is clear that HP is working hard to win the new enterprise game.

Traditional IT demand is not going away, but the demand for cloud is increasing. Most enterprises struggle to leverage traditional IT while adopting transformational IT. Providers such as HP need to address this complex, hybrid approach. With Helion, HP ups the ante in addressing this demand.

Today, HP launches their Helion brand, encompassing their entire cloud portfolio. The solution formerly known as HP Cloud is now part of the Helion branding. But the key change isn’t the branding; it’s the end-to-end products that address an enterprise’s needs regardless of its state of cloud adoption.

Open Source Software Part of HP’s Strategy

HP’s commitment to OpenStack is not new. They hold two seats on the OpenStack Foundation board. And their further commitment to embrace OpenStack as part of their core cloud offerings advances both HP and the OpenStack movement as a whole. OpenStack is a key opportunity for enterprises and service providers alike. However, open source software, and specifically OpenStack, has presented significant challenges for enterprise adoption.

One of the first solutions from HP is their OpenStack Community Edition (OCE). OCE is intended for entry-level use up to 30 nodes. OCE is an approachable way for enterprises interested in OpenStack to get started. For enterprises interested in going beyond 30 nodes, HP’s commercial solution bridges the gap.

OCE is not only open source but also supported by HP. It is also one of the first distributions based on the OpenStack Icehouse release. HP intends to ship updates every six weeks, which will keep the distribution fresh. HP OCE is available today as a free download.

Also announced today was HP’s commitment to Cloud Foundry. Cloud Foundry presents an additional opportunity for enterprises to embrace cloud through PaaS. For many enterprises, PaaS fills the gap between core infrastructure solutions and SaaS solutions. Plus, PaaS provides portability for applications built on the platform.

In Summary

HP Helion presents one of the most comprehensive end-to-end cloud solutions for enterprises today. OpenStack is very interesting to enterprises, but difficult to consume. Helion lowers the barrier to entry and gives enterprises the options they’ve been clamoring for.

First Impressions of EMC World

EMC World, EMC’s core annual conference, is this week in Las Vegas, and there are a number of key things to watch for. EMC’s presence in the enterprise space is legendary. However, the enterprise space is gaining momentum in its IT evolution. The question is: is EMC in a position to support these changes and continue to provide the leadership they’re known for? Bottom line: companies are moving to the cloud. On the surface, this could spell disaster for EMC. The key will be EMC’s ability to shift and help customers embrace the cloud.

Importance of Storage

Storage has grown up. Gone are the days when storage was just a place to store data and files. Storage is now key to the success of any given application. EMC clearly understands this and needs to evolve with the change. The shift is new, but it presents a radical opportunity for companies like EMC. Look for EMC to make the connection between applications and storage.

Partnerships & Ecosystem Development

EMC provides leadership that enables IT to deliver greater business value. The key is to evolve quickly and provide solutions that are needed both today and moving forward.

One could argue that no one company can (or should) be everything to everyone. Even very large enterprise providers such as EMC need to embrace this shift. One example of EMC’s recent shift is their partnership with SAP. Frankly, this is a great sign of maturity on the part of EMC. Similarly, HP recently started providing their ‘Shark’ solutions for SAP HANA. Look for EMC to build on this relationship and pursue other key relationships with major enterprise players.

Open Source Software Integration

It is clear that open source software (like OpenStack) is changing the way enterprise solutions are built and consumed within a completely new economic model. The more mature enterprise-class providers will acknowledge this shift and embrace it. Look for EMC to provide greater integration with open source solutions.

Enterprise to Service Provider Shifts

Historically, enterprise-class providers create solutions specifically for enterprises…not service providers. Service provider requirements are quite different from those of their enterprise counterparts. At the same time, the shift in demand from enterprise to service provider happens over time, not all at once. Look for EMC to acknowledge this shift in terms of integration between solutions and changes in their management tools. The rise of general-purpose storage solutions also changes the paradigm for EMC. EMC needs to demonstrate value beyond the underlying physical hardware.

The VMware and Pivotal Impact

A constant question for EMC is how VMware and Pivotal play a role in EMC’s future. Both companies provide solutions that support the evolving changes within the enterprise, but they potentially create tension with EMC’s openness. Can EMC embrace the changes and innovation from both VMware and Pivotal while still maintaining flexibility in their open approach to alternative solutions? Look for indications of this through their partnerships and reference architectures.

Timing is Everything

EMC provides core storage solutions for key enterprise applications. In many ways, these are the very applications that are both sensitive for enterprises and harder to move. In both cases, this translates to risk. Enterprise customers have been hesitant to make the shift from traditional storage solutions to alternative approaches. That attitude is changing. Change is no longer an option; it is a requirement. How is EMC taking a leadership role to help existing enterprise customers make this shift? Look for EMC to provide examples of flexibility beyond the traditional enterprise constraints.

In Summary

This year, more than any in the past, is a watershed year for EMC. The stars are aligning: customers are open to change, looking for help and ready to get started. The traditional enterprise sacred cows are up for grabs. Now is the time for EMC to demonstrate how they can make this shift and continue to provide leadership to the enterprise customer.

Initial Impressions from IBM Impact

This week is IBM’s Impact Conference in Las Vegas. In past years, IBM conveyed components of different strategies around Mobile and Cloud. However, they have since moved to an integrated approach. This integrated approach is great, but offers a few challenges for an incumbent such as IBM. Here are some things to watch for this week:

Hardware is King

Many of the conversations at Impact have mentioned IBM’s heritage and leadership in the hardware space. This year, IBM celebrates 50 years of the mainframe. And there is plenty of innovative work IBM is doing in the hardware space.

The question is not about IBM’s leadership in hardware. It is more about their longer-term vision. IBM is challenged with keeping existing customers engaged (many of which are hardware customers) while telling an even stronger software and services story. The general-purpose processor that supports a myriad of applications is becoming less important than specific infrastructure geared toward highly specialized workloads that run at scale.

The Shift in Enterprise Demand

Enterprises are still buying hardware today. But the demand for hardware is shifting from enterprises to service providers. As such, providers like IBM must evolve their software, management and tools to support the change in customers. This impacts usability for enterprises and service providers alike. And vendors like IBM need to both acknowledge these shifts…and have an answer to the demand.

The Converged Story

Many want to talk about mobile and cloud in specific silos. IBM has been no different in the past. However, at Impact this week, IBM is telling a converged story around both mobile and cloud. This is a key shift in thinking that mirrors the holistic approach any enterprise should take.

The SoftLayer Parlay

IBM’s acquisition of SoftLayer presented a brilliant opportunity to build a platform for the future. IBM needs to continue innovating on and leveraging the SoftLayer platform in a myriad of ways that accommodate the varied requirements of customers (both current and potential).

OpenPOWER Foundation

This week, IBM is promoting their OpenPOWER Foundation pretty heavily. While this is a great move in the right direction, the branding might be off-putting for potential new customers looking for an ecosystem that is less tied to IBM’s hardware heritage. Look for further distinctions to be made in this space as IBM evolves.

Hybrid & Holistic

Finally, moving away from a silo approach, look for IBM to take a holistic approach to embracing both hybrid cloud and mobile strategies. Again, this mirrors where enterprises need to go, not necessarily where they are today. That gap provides an opportunity for IBM to take a leadership position in the industry.

4 Reasons Cloud Storage is Not a Bubble About to Pop

With the recent S-1 filing by Box for their Initial Public Offering (IPO), the question of a cloud storage bubble is raised once again. But is it really a bubble? And should enterprise customers take note and run for the hills? There is more at stake than what appears on the surface.

Box Files Form S-1 IPO

By filing their S-1, Box put their financials on display for all to scrutinize. Within those figures, we learn that their 34,000+ paying customers contributed $124M in revenue, against operating costs that produced a $169M loss last fiscal year. Over the past four years of reporting, Box’s losses have trended upward. But is this enough to signal impending doom?

Cloud Storage Startup Landscape

In 2013, Nirvanix (another cloud storage startup) closed up shop and sent its customers scrambling. Dropbox, one of Box’s closest competitors, has announced its intent to IPO as well. Could Box and Dropbox be following in Nirvanix’s footsteps? Enterprise storage is expensive. Yes, there are economies of scale and tricks you can play to maximize efficiency, but storage infrastructure is expensive.

So, let’s take a look at some potential hypotheses on what may be occurring:

Hypothesis One: There is a minimum amount of capital required to achieve profitability.

Nirvanix only took on $70M, while Box and Dropbox took on $414M and $607M respectively. Consider that enterprises need stability in their cloud storage provider, a substantial number of enterprise features (e.g., authentication, security) and a solid ecosystem for integration. It is probable that $70M is not enough to reach ‘escape velocity’ in this space. It is possible that $400-600M may not be enough either. It is also likely that scale plays a significant role. It will be interesting to see Dropbox’s figures when they file their S-1.

Hypothesis Two: The real value for cloud storage is not in unstructured file storage.

Sure, the ability to store, share and collaborate on files online is valuable. However, is there greater value in the metadata that comes from understanding the behaviors of those files? Plus, similar to the problem email systems and enterprise storage vendors addressed years ago with data de-duplication, there is value in managing files at scale. Not to mention that the metadata around that data could be repurposed for other functions.
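
For readers unfamiliar with the de-duplication technique referenced above, here is a minimal sketch of the idea: identical chunks of data are stored once and shared by reference (the chunk size and in-memory store are illustrative assumptions):

```python
# Minimal sketch of content-based de-duplication: identical chunks
# are stored once and shared by reference. Chunk size and the
# in-memory store are illustrative assumptions.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks, an arbitrary choice

chunk_store: dict[str, bytes] = {}  # content hash -> chunk bytes

def store_file(data: bytes) -> list[str]:
    """Split a file into chunks, store each unique chunk once,
    and return the manifest of hashes that reconstructs it."""
    manifest = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # stored only if new
        manifest.append(digest)
    return manifest

def load_file(manifest: list[str]) -> bytes:
    """Reassemble a file from its chunk manifest."""
    return b"".join(chunk_store[d] for d in manifest)
```

At the scale of a Box or Dropbox, where the same file may be shared and re-uploaded many times over, the savings from this kind of sharing can be substantial.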

Hypothesis Three: Unstructured file storage is simply a loss leader.

There are many directions a company like Box or Dropbox could take based on their current service offerings, but that is a discussion for another time.

Hypothesis Four: The shifting enterprise storage paradigm will not allow cloud storage failure.

It is simple enough to treat all storage the same, but in reality it is not that easy. Traditional methods for storing files on internal storage subsystems are cumbersome at best as we move into a SMAC (social, mobile, analytics, cloud) based world. Enterprises are already shifting toward cloud-based storage to alleviate the pressure and shift their paradigm. Having to move back to traditional methods would break many apps and services. In the end, enterprises really need to move forward and are not able to go back.

Consider the Options

On the surface, it may appear that Box (and ostensibly Dropbox) is losing money today, but there is much more at stake. Enterprises know they need to make the shift to a SMAC-based world too. The cards appear to point favorably toward additional options beyond the current cloud storage portfolio. I would look more toward the future opportunities of the space through one of the four hypotheses and less toward an impending implosion.

The Shark of HP Converged Systems

The story of Converged Infrastructure (CI) continues to gain steam within the Information Technology (IT) industry…and for good reason. Converged solutions present a relatively easy way to manage complex infrastructure. While some providers focus on CI as an opportunity to bundle solutions into a single SKU, companies such as Nutanix and HP have spent the past couple of years producing solutions that go much further with true integration.

As enterprise IT customers shift their focus away from infrastructure and toward platforms, application and data, expect the CI space to heat up. Part of this shift includes platforms geared toward specific applications. This is especially true for those operating applications at scale.

Last week, HP announced their ‘Shark’ approach of hardware solutions geared toward specific applications. One of the first targets is the SAP HANA application, using the HP ConvergedSystem 500 as part of a co-innovation project between HP and SAP. It is interesting to see HP partner with SAP on HANA, with so much emphasis on data analytics today. In addition, specialized solutions are becoming increasingly important in this space.

Enterprise IT organizations need the ability to start small and grow accordingly. Even service providers may consider a start-small and grow approach. Michael Krigsman (@mkrigsman) recently wrote a post outlining how IT projects are getting smaller and still looking for relief. HP expressed their intent to provide scalable solutions that start small and include forthcoming ‘Project Kraken’ solutions later this year. Only time will tell how seamless this transition becomes.

Additional Reading:

HP CS Blog Entry:

http://h30507.www3.hp.com/t5/Converged-Infrastructure/HP-ConvergedSystem-for-SAP-HANA-meet-the-industry-s-most/ba-p/157176#.UynDsdy0bfM

HP Discover Barcelona: What to Watch For

Today kicks off HP’s Discover conference in Barcelona, Spain, with a bevy of information on tap. Looking over the event guide, it is clear that HP is targeting the enterprise customer with an emphasis on cloud computing, data (including Big Data) and converged infrastructure. HP’s definition of ‘converged infrastructure’ spans many of their core infrastructure components.

With its emphasis on cloud and data, HP is targeting the future direction of technology, not just traditional IT. HP is a large company, and it can take a bit of work to evolve the thinking from traditional IT to transformational IT. It is good to see the changes.

Of note is the expansion of data beyond just Big Data. For many, the focus continues to rest on Big Data. Yet for many enterprises, data expands well beyond it. Look for more information, beyond the existing NASCAR example, on both breadth and depth. In addition, there are sessions that provide a deep dive specifically for HAVEn partners. It is good to see HP recognize the importance of their partner program.

Core areas of both printing and mobility are making an appearance here at Discover. However, their presence pales in comparison with the big three.

So, what to look for… With cloud and data, the keys for HP will rest with how well they enable adoption. How easy do they make it for customers to adopt new technologies? Adoption is key to success. With converged infrastructure, has the story of integration moved beyond a reference architecture and single-SKU approach? Look for more details on how far HP has come in developing their portfolio, along with execution of the integration between the different solutions. This integration and execution are key.

Time to get on the Colocation Train Before it is Too Late

The data center industry is heading toward an inflection point that has significant impact on enterprises. It seems many aren’t looking far enough ahead, but the timeline appears to be 12-18 months, which is not that far out! The issue is a classic supply chain problem of supply, demand and timelines.

A CHANGE IN THE WINDS

First, let’s start with a bit of background… The advent of cloud computing and newer technologies is driving an increase in the number of enterprises looking to ‘get out of the data center business.’ I, along with others, have presented many times about ‘Death of the Data Center.’ The data center, which used to serve as a strategic weapon in an enterprise IT org’s arsenal, is still very much critical, but fundamentally becoming a commodity. That’s not to say that overall data center services are becoming a commodity, but the facility is. Other factors, such as geographic footprint, network and ecosystem, are becoming the real differentiators. And enterprises ‘in the know’ realize they can’t compete at the same level as today’s commercial data center facility providers.

THE TWO FLAVORS OF COLOCATION

Commercial data center providers offer two basic models of data center services: wholesale and retail. Digital Realty and DuPont Fabros are examples of major wholesale data center space providers, while Equinix, Switch, IO, Savvis and QTS are examples of major retail colocation providers. It should be noted that some providers offer both wholesale and retail services. While there is a huge difference between wholesale and retail colocation space, I will leave the details of why an enterprise might consider one over the other for another post.

DATA CENTER SUPPLY, DEMAND AND TIMELINES

The problem is the same for both types of data center space: there is a bit of surplus today, but there won’t be enough capacity in the near term. Data center providers are adding capacity around the globe, but they’re caught in a conundrum of how much capacity to build. It typically takes anywhere from two to four years to build a new data center and bring it online. And the demand isn’t there to support significant growth yet.

But if you read the tea leaves, the demand is getting ready to pop. Many folks are only now starting to consider their options with cloud and other services. So why are data center providers not building data centers now in preparation for the pop? There are two reasons. On the supply side, it costs a significant amount of capital to build a data center today, and an idle data center burns significant operational expense too. On the demand side, enterprises are just starting to evaluate colocation options. Evaluating is different from being ready to commit spend on colocation services.

Complicating matters further, even for the most aggressive enterprises, the preparation can take months and the migration years. Moving a data center is not a trivial exercise and is often peppered with significant risk. There are applications, legacy requirements, 3rd party providers, connections, depreciation schedules, architectures, organization, process and governance changes to consider…just to name a few. In addition to the technical challenges, organizations and applications are typically not geared up to handle multi-day outages and moves of this nature. Ponder this: When was the last time your IT team moved a critical business application from one location to another? What about multiple applications? The reality is: it just doesn’t happen often…if at all.

But just because it’s hard does not mean it should not be done. In this case, it needs to be done. At this point, every organization on the planet should have a plan for colocation and/or cloud. Of course there are exceptions and corner cases, but today they are few and shrinking.

COMPLIANCE AND REGULATORY CONCERNS

Those with compliance and regulatory requirements are moving too…and not just non-production or Disaster Recovery systems. Financial services organizations are already moving their core banking systems into colocation, while healthcare organizations are moving their Electronic Health Record (EHR) and Electronic Medical Record (EMR) systems into colocation…and in some cases, the cloud. This is in addition to any core legacy and greenfield applications. Compliance and regulatory requirements are an additional component to consider, not a reason to stop moving.

TIME CHANGES DATA CENTER THINKING

Just five years ago, a discussion of moving to colocation or cloud would have been far more challenging. Today, we are starting to see this migration happen. However, it is only happening among a very small number of IT organizations around the globe. We need to significantly increase the number of folks planning and migrating.

DATA CENTER ELASTICITY

On the downside, even if an enterprise started building its data center strategy and roadmap today, it is unclear whether adequate capacity will exist to supply the demand once it is ready to move. Now, that’s not to say the sky is falling. But it does suggest that enterprises (en masse) need to get on the ball and start planning for the death of the data center (their own). At a minimum, it would provide data center providers with greater visibility into the impending demand and timeline. In the best scenario, it produces a healthy ecosystem in the supply/demand equation without creating a rubber-band effect where supply and demand each fluctuate toward equilibrium.

BUILDING A ROADMAP

The process starts with a vision and understanding of what is truly strategic. Recall that vitally important and strategic can be two different things. Power is vitally important to data centers, but data center providers are not building power plants next to each one.

The next step is building a roadmap that supports the vision. The roadmap includes more than just technological advancements. The biggest initial hurdles will come in the form of organization and process. In addition, a strong visionary and leader will provide the right combination of skills to lead the effort and ask the right questions to achieve success.

Part of the roadmap will inevitably include an evaluation of colocation providers. Before you head down this path, it is important to understand the differences between wholesale and retail colocation providers, what they offer and what your responsibilities are. That last item is often lost in the evaluation process.

Truly understand what your requirements are. Space, power and bandwidth only scratch the surface. Take a holistic view of your environment and portfolio. Understand what will change, and how, when moving to colocation. This is as much a clear snapshot of your current situation as it is a view of where you’re headed over time.

TIME TO GET MOVING

Moving into colocation is a great first step for many enterprises. It gets them ‘out of the data center business’ while keeping their existing portfolio intact. Colocation also provides a great way to move the maturity of an organization (and its portfolio) toward cloud.

The evaluation process for colocation services is much different today than it was just five years ago. Today, some of the key differentiators are geographic coverage, network and ecosystem. But a stern warning: the criteria for each enterprise will be different and unique. What applies to one does not necessarily apply to the next. It’s important to clearly understand this and how each provider matches up against the requirements.

The process takes time and effort. For this and a number of other reasons, it may take months or even years for the most aggressive movers. As such, it is best to get started sooner rather than later…before the train leaves the station.

Further Reading:

Applying Cloud Computing in the Enterprise

Cloud Application Matrix

A Workload is Not a Workload, is Not a Workload