Top 5 Posts of 2013

Over the course of 2013, I wrote a number of posts about CIOs, Cloud Computing, Big Data, Data Centers and IT in general. Here are the top-5 most popular posts in 2013:

5. Time to get on the Colocation Train Before it is Too Late

In the number 5 spot is a post addressing the forthcoming challenges to the data center colocation market and how the ripple effect hits IT.

4. A Workload is Not a Workload, is Not a Workload

Number 4 is a post written in 2012 about the discrepancy between cloud computing case studies. Not all workloads are the same and many of the examples used do not represent the masses.

3. The IT Role in Value Creation is Not a Technology

The number 3 spot goes to a post that addresses the direction of IT organizations within the business and how it is evolving. It is this very evolution that is both very difficult and very exciting at the same time.

2. Motivation And Work Ethics: Passion Fuels the Engine

Another post from 2012 takes the number 2 spot, which shows that some subjects (like the importance of passion) have staying power. This post addresses important characteristics for a leader to consider: the intersection of passion, work ethic and motivation.

1. What is Your Cloud Exit Strategy?

Probably one of the most controversial titles takes the number 1 spot. This post addresses the challenges organizations face with cloud when they don’t think about their end-state and evolution.

Honorable Mention: So Which Is It? Airplane Mode or Turn Devices Completely Off?

Back in April 2012, I was traveling and noticed that many didn’t turn off their devices even though they were instructed to…which prompted the post. Even though the FAA has since changed its rules in the US, this post still gets quite a bit of attention.

CIO Predictions for 2014

This year, I thought I would shift the focus from cloud-specific to the broader agenda for CIOs. But before I jump into my predictions for 2014, let’s take a trip down memory lane and review how I did on Cloud Predictions for 2013.

How did I do?

  1. Rise of the Cloud Verticals: We have seen an uptick in ‘cloud brokers’ but very little in the way of cloud verticals targeting specific industries or suites of services. There has been a feeble attempt at integration between solutions, but even that was lukewarm at best in 2013. (1/2pt)
  2. Widespread Planning of IaaS Migrations: Spot on with this one! Over 2013, the number of IT organizations planning IaaS migrations stepped up in a big way. That’s great news for the IT organization, the business units and the industry as a whole. It demonstrates progress along the maturity continuum. (1pt)
  3. CIOs Look to Cloud to Catapult IT Transformation: This has been a mixed bag. Many have leveraged cloud because they were forced into it rather than seeing it as one of the most significant opportunities of our time. There are exceptions to this, but they are not as prominent yet. (1/2pt)
  4. Mobile Increases Intensity of Cloud Adoption: Mobile is taking off like wildfire. And cloud is enabling the progress, as traditional methods would simply be too challenging and slow. (1pt)
  5. Cloud Innovation Shifts from New Solutions to Integration & Consolidation: Over 2013, the number of new solutions progressed at a fever pitch. The good indicator is that new solutions are taking into account the requirement to integrate with other solutions in their ecosystem. While consolidation among cloud providers started to pick up in the 2nd half of 2013, I would expect it to increase into 2014. (1pt)

Total Score: 4/5

Overall, slower general adoption of cloud paired with strong adoption of specific cloud solutions led to 2013’s progress. I had hoped to see us further along…but alas, 2014 is shaping up to be a very interesting year.

What to look for in 2014?

  1. Cloud Consolidation: Look for plenty of M&A activity as larger incumbents gobble cloud point solutions up. Also look for incumbents to flesh out their ecosystem more fully.
  2. CIOs Focus on Data: Conversations move beyond the next bell or whistle and onto items that really change the economic landscape for a company: data. Look for the CIO to shift focus to data and away from infrastructure.
  3. Colocation is in Vogue: As the CIO moves up the maturity model toward higher-value functions, look for IT organizations to move to colocation in droves. The challenge will be moving before it’s too late.
  4. CIO, CMO + Other Execs Become Best Friends: We’ve talked for some time about how the CIO strives for a ‘seat at the table’. The challenge is in how to be a relevant participant at the table. As the CIO role shifts from support org to business driver, look for the relationships to change too.
  5. One Size Does NOT Fit All: As we talk about newer technologies, CIOs, IT organizations, vendors and service providers get realistic about where their products/services fit best…and where they don’t. OpenStack and HP Moonshot are great examples of awesome solutions that fit this statement.

As I’ve said before, this has got to be the best time to work in Information Technology. How will you embrace and leverage change? Here’s to an awesome 2014!

Time to get on the Colocation Train Before it is Too Late

The data center industry is heading toward an inflection point that has significant impact on enterprises. It seems many aren’t looking far enough ahead, but the timeline appears to be 12-18 months, which is not that far out! The issue is a classic supply chain problem of supply, demand and timelines.

A CHANGE IN THE WINDS

First, let’s start with a bit of background… The advent of Cloud Computing and newer technologies is driving an increase in the number of enterprises looking to ‘get out of the data center business.’ I, along with others, have presented many times about ‘Death of the Data Center.’ The data center, which used to serve as a strategic weapon in an enterprise IT org’s arsenal, is still very much critical, but is fundamentally becoming a commodity. That’s not to say that overall data center services are becoming a commodity, but the facility is. Other factors, such as geographic footprint, network and ecosystem, are becoming the real differentiators. And enterprises ‘in the know’ realize they can’t compete at the same level as today’s commercial data center facility providers.

THE TWO FLAVORS OF COLOCATION

Commercial data center providers offer two basic models of data center services: wholesale and retail. Digital Realty and DuPont Fabros are examples of major wholesale data center providers, and Equinix, Switch, IO, Savvis and QTS are examples of major retail colocation providers. It should be noted that some providers offer both wholesale and retail services. While there is a huge difference between wholesale and retail colocation space, I will leave the details on why an enterprise might consider one over the other for another post.

DATA CENTER SUPPLY, DEMAND AND TIMELINES

The problem is still the same for both types of data center space: there is a bit of surplus today, but there won’t be enough capacity in the near term. Data center providers are adding capacity around the globe, but they’re caught in a conundrum of how much capacity to build. It typically takes anywhere from 2-4 years to build a new data center and bring it online. And the demand isn’t there to support significant growth yet.

But if you read the tea leaves, the demand is getting ready to pop. Many folks are only now starting to consider their options with cloud and other services. So, why are data center providers not building data centers now in preparation for the pop? There are two reasons: On the supply side, it costs a significant amount of capital to build a data center today, and an idle data center burns significant operational expenses too. On the demand side, enterprises are just starting to evaluate colocation options. Evaluating is different from being ready to commit to spending on colocation services.

Complicating matters further, even for the most aggressive enterprises, the preparation can take months and the migrations years. Moving a data center is not a trivial exercise and is often peppered with significant risk. There are applications, legacy requirements, 3rd-party providers, connections, depreciation schedules, architectures, and organization, process and governance changes to consider…just to name a few. In addition to the technical challenges, organizations and applications are typically not geared up to handle multi-day outages and moves of this nature. Ponder this: When was the last time your IT team moved a critical business application from one location to another? What about multiple applications? The reality is: it just doesn’t happen often…if at all.

But just because it’s hard does not mean it should not be done. In this case, it needs to be done. At this point, every organization on the planet should have a plan for colocation and/or cloud. Of course there are exceptions and corner cases, but today they are few and shrinking.

COMPLIANCE AND REGULATORY CONCERNS

Those with compliance and regulatory requirements are moving too…and not just non-production or Disaster Recovery systems. Financial Services organizations are already moving their core banking systems into colocation, while Healthcare organizations are moving their Electronic Health Record (EHR) and Electronic Medical Record (EMR) systems into colocation…and in some cases, the cloud. This is in addition to any core legacy and greenfield applications. The compliance and regulatory requirements are an additional component to consider, not a reason to stop moving.

TIME CHANGES DATA CENTER THINKING

Just five years ago, a discussion of moving to colocation or cloud would have been far more challenging. Today, we are starting to see this migration happening. However, it is only happening in a very small number of IT organizations around the globe. We need to significantly increase the number of folks planning and migrating.

DATA CENTER ELASTICITY

On the downside, even if an enterprise started to build its data center strategy and roadmap today, it is unclear whether adequate capacity to supply the demand will exist once it’s ready to move. Now, that’s not to say the sky is falling. But it does suggest that enterprises (en masse) need to get on the ball and start planning for the death of the data center (their own). At a minimum, it would provide data center providers with greater visibility into the impending demand and timeline. In the best scenario, it provides a healthy ecosystem in the supply/demand equation without creating a rubber-band effect where supply and demand each fluctuate toward equilibrium.

BUILDING A ROADMAP

The process starts with a vision and understanding of what is truly strategic. Recall that vitally important and strategic can be two different things. Power is vitally important to data centers, but data center providers are not building power plants next to each one.

The next step is building a roadmap that supports the vision. The roadmap includes more than just technological advancements. The biggest initial hurdles will come in the form of organization and process. In addition, a strong visionary and leader will provide the right combination of skills to lead the effort and ask the right questions to achieve success.

Part of the roadmap will inevitably include an evaluation of colocation providers. Before you get started down this path, it is important to understand the differences between wholesale and retail colocation providers, what they offer and what your responsibilities are. That last step is often lost as part of the evaluation process.

Truly understand what your requirements are. Space, power and bandwidth are just scratching the surface. Take a holistic view of your environment and portfolio. Understand what will change when moving to colocation, and how. This is as much a clear snapshot of your current situation as it is where you’re headed over time.

TIME TO GET MOVING

Moving into colocation is a great first step for many enterprises. It gets them ‘out of the data center business’ while keeping their existing portfolio intact. Colocation also provides a great way to move the maturity of an organization (and portfolio) toward cloud.

The evaluation process for colocation services is much different today from just 5 years ago. Today, some of the key differentiators are geographic coverage, network and ecosystem. But a stern warning: The criteria for each enterprise will be different and unique. What applies to one does not necessarily apply to the next. It’s important to clearly understand this and how each provider matches against the requirements.

The process takes time and effort. For this and a number of other reasons, it may take months to years even for the most aggressive movers. As such, it is best to get started sooner rather than later, before the train leaves the station.

Further Reading:

Applying Cloud Computing in the Enterprise

Cloud Application Matrix

A Workload is Not a Workload, is Not a Workload

HP Converged Cloud Tech Day

Last week, I attended HP’s Converged Cloud Tech Day in Puerto Rico. Colleagues attended from North, Latin and South America. The purpose of the event was to 1) take a deep dive into HP’s cloud offerings and 2) visit HP’s Aguadilla location, which houses manufacturing and an HP Labs presence. What makes the story interesting is that HP is a hardware manufacturer, a software provider and a provider of cloud services. Overall, I was very impressed by what HP is doing…but read on for the reasons why…and the surprises.

HP Puerto Rico

HP, like many other technology companies, has a significant presence in Puerto Rico. Martin Castillo, HP’s Caribbean Region Country Manager, provided an overview for the group that left many in awe. HP exports a whopping $11.5b from Puerto Rico, or roughly 10% of HP’s global revenue. In the Caribbean, HP holds more than 70% of the server market. Surprisingly, much of the influence to use HP cloud services in Puerto Rico comes from APAC and EMEA, not North America. To that end, 90% of HP’s Caribbean customers are already starting the first stage of moving to private clouds. Like others, HP is seeing customers move from traditional data centers to private clouds to managed clouds to public clouds.

Moving to the Cloud

Not surprisingly, HP is going through a transition by presenting the company from a solutions perspective rather than a product perspective. Shane Pearson, HP’s VP of Portfolio & Product Management, explained that “At the end of the day, it’s all about applications and workloads. Everyone sees the importance of cloud, but everyone is trying to figure out how to leverage it.” By 2015, the projected markets are: Traditional $1.4b, Private Cloud $47b, Managed Cloud $55b and Public Cloud $30b, for a cloud total of $132b. In addition, HP confirmed the hybrid cloud as the approach of choice.

While customers are still focused on cost savings as the primary motivation to move to cloud, the tide is shifting to business process improvement. Put another way, cloud is allowing users to do things they could not do before. I was pleased to hear HP offer that it’s hard to take advantage of cloud if you don’t leverage automation. Automation and Orchestration are essential to cloud deployments.

HP CloudSystem Matrix

HP’s Nigel Cook was up next to talk about HP’s CloudSystem Matrix. Essentially, HP is (and has been) providing cloud services across the gamut of potential needs. Internally, HP is using OpenStack as the foundation for their cloud service offering. But CloudSystem Matrix provides a cohesive solution to manage across both internal and external cloud services. To the earlier point about automation, HP is focusing on automation and self-service as part of their cloud offering. Having a solution that helps customers manage the complexity that Hybrid Clouds presents could prove interesting. Admittedly, I have not kicked the tires of CloudSystem Matrix yet, but on the surface, it is very impressive.

Reference Architecture

During the visit to Aguadilla, we joined a Halo session with HP’s Christian Verstraete to discuss architecture. Christian and team have built an impressive cloud functional reference architecture. As impressive as it is, one challenge is how to best leverage such a comprehensive model for the everyday IT organization. It’s quite a bit to bite off. Very large enterprises can consume the level of detail contained within the model. Others will need a way to consume it in chunks. Christian goes into much greater depth in a series of blog entries on HP’s Cloud Source Blog.

HP Labs: Data Center in a Box

One treat on the trip was the visit to HP Labs. If you ever get the opportunity to visit HP Labs, it’s well worth the time to see what innovative solutions the folks are cooking up. HP demonstrated the results from their Thermal Zone Mapping (TZM) tool (US Patent 8,249,841) along with CFD modeling tools and monitoring to determine details around airflow/cooling efficiency. While I’ve seen many different modeling tools, HP’s TZM was pretty impressive.

In addition to the TZM, HP shared a new prototype that I called Data Center in a Box. The solution is an encapsulated system that supports 1-8 fully enclosed racks. The only requirements are power and chilled water. The PUE numbers were impressive, but didn’t take into account every metric (e.g., the cost of chilled water). Regardless, I thought the solution was pretty interesting. The HP folks kept mentioning that they planned to target the solution to Small-Medium Business (SMB) clients. While that may have been interesting to the SMB market a few years ago, today the SMB market is moving more to services (i.e., cloud services). That doesn’t mean the solution is DOA. I do think it could be marketed as a modular approach to data center build-outs that provides a smaller increment than container solutions. Today, the solution is still just a prototype and not commercially available. It will be interesting to see where HP ultimately takes this.

In Summary

I was quite impressed by HP’s perspective on how customers can…and should leverage cloud. I felt they have a healthy perspective on the market, customer engagement and opportunity. However, I was left with one question: Why are HP’s cloud solutions not more visible? Arguably, I am smack in the middle of the ‘cloud stream’ of information. Sure, I am aware that HP has a cloud offering. However, when folks talk about different cloud solutions, HP is noticeably absent. From what I learned last week, this needs to change.

HP’s CloudSystem Matrix is definitely worth a look regardless of the state of your cloud strategy. And for data center providers and service providers, keep an eye out for their Data Center in a Box…or whatever they ultimately call it.

How to Leverage the Cloud for Disasters like Hurricane Sandy

Between natural disasters like Hurricanes Sandy and Irene or man-made disasters like the recent data center outages, disasters happen. The question isn’t whether they will happen. The question is: What can be done to avoid the next one? Cloud computing provides a significant advantage to avoid disaster. However, simply leveraging cloud-based services is not enough. First, a tiered approach in leveraging cloud-based services is needed. Second, a new architectural paradigm is needed. Third, organizations need to consider the holistic range of issues they will contend with.

Technology Clouds Help Natural Clouds

If used correctly, cloud computing can significantly limit or completely avoid outages. Cloud offers a physical abstraction layer and allows applications to be located outside of disaster zones where services, staff and recovery efforts do not conflict.

  1. Leverage commercial data centers and Infrastructure as a Service (IaaS). Commercial data centers are designed to be more robust and resilient. Prior to a disaster, IaaS provides the ability to move applications to alternative facilities out of harm’s way.
  2. Leverage core application and platform services. This may come in the form of PaaS or SaaS. These service providers often architect solutions that are able to withstand single data center outages. That is not true in every case, but by leveraging this in addition to other changes, the risks are mitigated.

In all cases, it is important to ‘trust but verify’ when evaluating providers. Neither tier provides a silver bullet. The key is: Take a multi-faceted approach that architects services with the assumption for failure.

Changes in Application Resiliency

Historically, application resiliency relied heavily on redundant infrastructure. Judging from the responses to Amazon’s recent outages, users still make this assumption. The paradigm needs to change. Applications need to take more responsibility for resiliency. By doing so, applications ensure service availability in times of infrastructure failure.
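The paradigm shift can be illustrated with a minimal sketch (all region names and fetch functions here are hypothetical, not any particular provider's API): instead of assuming the infrastructure beneath one location stays up, the application itself retries and fails over across locations.

```python
# Minimal sketch of application-level resiliency: the application, not the
# infrastructure, owns retry and failover. All names are hypothetical.
from typing import Callable, Sequence

def fetch_with_failover(regions: Sequence[str],
                        fetch: Callable[[str], str],
                        retries_per_region: int = 2) -> str:
    """Try each region in order, retrying a few times before failing over."""
    last_error = None
    for region in regions:
        for _ in range(retries_per_region):
            try:
                return fetch(region)
            except Exception as err:  # real code would catch narrower errors
                last_error = err
    raise RuntimeError("all regions failed") from last_error

# Usage: 'us-east' is down (as during Sandy); the app serves from 'us-west'.
def fake_fetch(region: str) -> str:
    if region == "us-east":
        raise ConnectionError("region offline")
    return f"response from {region}"

print(fetch_with_failover(["us-east", "us-west"], fake_fetch))
```

The point is architectural rather than this particular code: when availability logic lives in the application layer, an infrastructure failure degrades into a routing decision instead of an outage.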

In a recent blog post, I discussed how cloud computing relates to greenfield and legacy applications. Legacy applications present a challenge to move into cloud-based services. They can (and eventually should) be moved into the cloud. However, it will require a bit of work to take advantage of what cloud offers.

Greenfield applications, on the other hand, present a unique opportunity to fully take advantage of cloud-based services…if used correctly. With Hurricane Sandy, we saw greenfield applications still using the old paradigm of relying heavily on redundant infrastructure. And the consequence was significant application outages due to infrastructure failures. Conversely, greenfield applications that rely on the new paradigm (e.g., Netflix) experienced no downtime due to Sandy. Netflix not only avoided disaster, but saw a 20% increase in streaming viewers.

Moving Beyond Technology

Leveraging cloud-based services requires more than a technology change. Organizational impact, process changes and governance are just a few of the things to consider. Organizations need to consider the changes to access, skill sets and roles. Is staff in other regions able to assist if local staff is impacted by the disaster? Fundamental processes, from change management to application design, will change too. And at what point are services preemptively moved to avoid disaster? Lastly, how do governance models change if the core players are out of pocket due to disaster? Without considering these changes, the risks increase exponentially.

Start Here

So, where do you get started? First, determine where you are today. All good maps start with a “You Are Here” label. Consider how to best leverage cloud services and build a plan. Take into account your disaster recovery and business continuity planning. Then put the plan in motion. Test your disaster scenarios to improve your ability to withstand outages. Hopefully, by the time the next disaster hits (and it will), you will be in a better place to weather the storm.

The Real Problem with Data Center Efficiency

In the past week, The New York Times (and Greenpeace previously) called attention to the inefficiency of data centers. These stories bring light to a serious issue, but are a bit misguided and do not tell the whole story. Generally speaking, are data centers inefficient? Absolutely! Read on to fully understand the significance of the situation, the reasons why they are inefficient and the opportunities that lie ahead.

Background

Data centers are large consumers of power. According to a 2007 U.S. EPA report, data centers in 2006 accounted for a full 1.5% of US energy consumption. That number was expected to double to 3% by 2011. At the time, 38% was attributable to the nation’s largest data centers. However, these numbers do not represent the entire footprint of data centers. Smaller facilities, closets and lab spaces were not included in the study. From experience, these represent a significant aggregate footprint.

Organizations like The New York Times and Greenpeace have called out the issues around inefficiency in data centers. Good points are made, but the focus is misdirected and backfiring. The companies called out in their reports are operating some of the most efficient data centers on the planet. So, what’s the problem? The vast majority of data centers operated by everyone else.

Size Matters

In my post The Future Data Center Is… Part II, I break down the importance of understanding the differences between SMB, Mid-Tier, Enterprise and Very Large Enterprise data centers. There is a very significant difference between the tiers in their requirements and their ability to run an efficient data center. My good friend and fellow Data Center Pulse board member Mark Thiele provided additional color in his SwitchScribe post Measuring the Size of a Data Center – Yes, it Matters. Unfortunately, the majority of articles focus on the Very Large Enterprise data centers. These facilities have very specific requirements and do not represent the vast majority of data centers in use today.

Sadly, the SMB, mid-tier and (to a smaller degree) enterprise data centers are some of the most inefficient operations on the planet. And that doesn’t account for the closets and rooms that house IT equipment. Is there something to learn from the larger enterprises? Absolutely. Should (or can) the rest of data center operators mimic them? No. The very large enterprises are getting more and more efficient every day. The SMB, mid-tier and enterprise data centers are simply not able to keep up.

Different Purposes of Data Centers

In my post A Workload is not a Workload, is not a Workload, I delve into the differences that drive data center architecture and operation. The fundamental premise behind the story is that the larger providers have two things that differentiate them from the rest.

  1. Monolithic Applications: Organizations like eBay, Zynga, Google and Facebook run very specialized applications. These apps (and the infrastructure they run on) are highly tuned. In most other organizations, the workload is very mixed and challenging (if not impossible) to tune effectively.
  2. Scale: The same companies run their monolithic applications at web-scale, which is much larger than typical enterprise applications. The very nature of the scale requires and allows specialization in the tuning of the application and related infrastructure.

These two factors lead to a much different situation than exists in the typical enterprise, mid-tier or SMB environment.

Organizational Impact

Beyond the technical specifics, the organizational complexities cannot go unmentioned. The above-mentioned companies have teams of people with specific jobs dedicated to supporting the data center and its operation. This dedication allows specialization that drives further efficiency in the facility and operations. The staff understands the PUE of the data center and how each specific tweak impacts the number. They often run their facilities as close to maximum efficiency and capacity as possible. Why? They clearly understand the business impact.
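For reference, the number those teams watch is simple to compute: per The Green Grid's definition, PUE is total facility energy divided by IT equipment energy. A quick sketch with entirely hypothetical facility figures shows how a single cooling tweak moves it:

```python
# Illustrative PUE calculation per The Green Grid definition.
# All facility figures below are hypothetical, not measured data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: lower is better; 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# 1,000 kW of IT load plus 800 kW of cooling, distribution losses and lighting.
before = pue(total_facility_kw=1800, it_equipment_kw=1000)  # 1.8

# The same facility after a cooling tweak (e.g. raising supply-air temperature)
# trims 150 kW of overhead without touching the IT load.
after = pue(total_facility_kw=1650, it_equipment_kw=1000)   # 1.65
print(before, after)
```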

The typical enterprise, mid-tier or SMB presents a much different situation. In this case, the responsibility of the data center is often one of many requirements in a person’s job description. That’s on top of everything else. These organizations simply don’t have the scale to justify specialization of data center operations.

Sustainability

There is no question that data centers are huge energy consumers. That will not change. The opportunity is to run more efficient facilities that leverage renewable energy sources. Some larger (and newer) facilities are being located near renewable power sources. Yahoo, VMware and others recently built data centers in Wenatchee, WA near the Grand Coulee dam’s hydroelectric power source. In other cases, wind and solar farms are being built near larger data centers.

It should be noted that these decisions do not always make good business sense. Renewable energy sources are not available at reasonable costs everywhere. And moving data centers near renewable power sources is not always feasible either. Staffing, backbone network connectivity and a host of other factors influence the decision. Regardless of the interest in social and environmental responsibility, a data center is an expensive business asset that requires analysis of many factors.

Challenges

Other issues present challenges for data centers, including costs, knowledge, legacy applications, governance and security. Data centers are complex ecosystems that require attention, understanding and specialized management. Over time, data centers are getting more complex…not simpler. There needs to be an appreciation and acceptance of these issues.

Shared Knowledge

In summary, the very large data center operators are running some of the most efficient facilities and operations. The data center (and broader IT) industry needs to learn from their examples. However, because of articles in the NYT and Greenpeace calling out their flaws, there is growing hesitation to share what they’re doing for fear of bad PR. Can you blame them? But leaders like Dean Nelson (VP at eBay and fellow Data Center Pulse board member) are still fighting the trend in order to benefit the industry. The data center and IT industry needs an environment where ideas and experiences can be freely shared without concern of misguided criticism in the mass media.

Opportunity/Solution

Data centers are huge consumers of energy. Their demand is increasing and not expected to shrink. So, what can be done to address the issues? There are several immediate changes that are needed.

  1. Strategic Differentiation: Organizations need to take a hard look at their own focus. Is the company in the data center business? Is the company willing to invest in the data center to truly run it efficiently? Is the data center a strategic differentiator? For the vast majority of companies currently running data centers, the clear answer will be no.
  2. Efficient Facilities: The very large enterprises are already running efficient facilities and driving hard toward greater efficiency. Let’s encourage them to continue to do so! Those not able to run at this level of efficiency need to stop running their own facilities. SMB, mid-tier and some enterprise organizations need to develop plans to eliminate their own data centers and leverage more efficient ones. Consider colocation, hosted infrastructure and other options.
  3. Efficiency Programs: Several power utilities have offered programs with incentives to offset the costs of implementing energy-efficient solutions. The problem is that the programs are not consistent across power utilities, and some utilities do not offer them at all. The industry needs a consistent program similar to the National Data Center Power Reduction Incentive Program proposed by Data Center Pulse in 2009.
  4. Virtualization: Virtualization is not new; it is a very mature technology. However, adoption rates are still very anemic. Depending on which analyst organization you believe, the numbers average roughly 30-40%. This isn’t the whole story though. From experience, organizations sit at opposite ends of the spectrum, being either heavily virtualized or virtualizing only a few servers. The excess, unused capacity is simply wasteful. There are a number of reasons for this, including cost, legacy applications, knowledge, fear, and an already overwhelming plate of issues to address.
  5. Cloud Computing: The cloud services market is maturing very quickly. Even highly regulated industries with compliance requirements are leveraging cloud computing for their most sensitive applications. Cloud-based services are a good solution to leverage where possible.
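The waste behind the virtualization point above can be made concrete with a back-of-the-envelope consolidation estimate; every figure here is hypothetical:

```python
# Back-of-the-envelope consolidation estimate; all figures are hypothetical.
import math

physical_servers = 200     # one application per dedicated physical server
avg_utilization = 0.10     # typical ~10% utilization on dedicated servers
target_utilization = 0.60  # conservative ceiling for a virtualized host

# Total useful work, expressed in "fully busy server" equivalents.
useful_work = physical_servers * avg_utilization

# Hosts needed once workloads are consolidated onto virtualized hosts.
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts")
```

Even with a conservative consolidation target, roughly five of every six servers in this sketch exist only to host idle capacity, which is exactly the waste the anemic adoption numbers imply.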

In summary, let’s commend those that are trying hard to help our industry move forward. Let’s bring the focus to the real issues preventing the efficient operation of data centers. There are a number of viable and immediate solutions available today to help the larger contingent of organizations.

Further Reading:

The Future Data Center Is… Part II

The Future Data Center Is…

The New Data Center Park Trend

Could Data Centers Become Black Sheep?

A Workload is not a Workload, is not a Workload

Reference:

  1. Glanz, James. “Power, Pollution and the Internet.” The New York Times. 22 Sep 2012. <http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html>
  2. Glanz, James. “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle.” The New York Times. 23 Sep 2012. <http://www.nytimes.com/2012/09/24/technology/data-centers-in-rural-washington-state-gobble-power.html>
  3. Fehrenbacher, Katie. “NYT’s data center power reports like taking a time machine back to 2006.” GigaOM. 24 Sep 2012. <http://gigaom.com/cleantech/nyts-data-center-power-article-reports-from-a-time-machine-back-to-2006/>
  4. Weinman, Joe. “The Power of IT (it’s not all in energy consumption).” GigaOM. 26 Sep 2012. <http://gigaom.com/cloud/the-power-of-it-its-not-all-in-energy-consumption/>
  5. “How Clean is Your Cloud?” Greenpeace. 17 Apr 2012. <http://www.greenpeace.org/international/en/publications/Campaign-reports/Climate-Reports/How-Clean-is-Your-Cloud/>
  6. “National Data Center Power Reduction Incentive Program.” Data Center Pulse. 6 May 2009. <http://datacenterpulse.org/incentive_program>
  7. “The Green Grid Power Efficiency Metrics: PUE & DCiE.” The Green Grid. 23 Oct 2007. <http://www.thegreengrid.org/Global/Content/white-papers/The-Green-Grid-Data-Center-Power-Efficiency-Metrics-PUE-and-DCiE>
  8. “Report to Congress on Server and Data Center Energy Efficiency.” U.S. Environmental Protection Agency. 2 Aug 2007. <http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf>
  9. Thiele, Mark. “Measuring the Size of a Data Center – Yes, it Matters.” SwitchScribe. 30 Jan 2012. <http://www.switchscribe.com/?p=100>

The Future Data Center Is… Part II

Last month, I wrote The Future Data Center Is… and alluded to a shift in demand for data centers. Just to be clear, I don’t believe data center demand is decreasing. Quite the contrary, I believe demand is exploding! But how is demand for data centers going to change? What does the mapping of organizations to services look like?

First, why should you care? Today, the average PUE (Power Usage Effectiveness: total facility power divided by IT equipment power) of a data center is 1.8. And that's just the average. That's atrocious! Very Large Enterprises are able to drive that to near 1.1-1.3. The excess is a waste of energy resources. At a time when Corporate Social Responsibility and carbon footprint are becoming more in vogue in the corporate arena, data centers are becoming a large target. So efficiency matters!
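To make that gap concrete, here is a minimal sketch of the overhead implied by a PUE of 1.8 versus 1.2. The PUE values come from the averages above; the 1 MW IT load is an assumed figure for illustration only.

```python
# PUE = total facility power / IT equipment power, so the overhead power
# (cooling, power distribution losses, lighting) is (PUE - 1) * IT load.
# The 1 MW IT load below is purely illustrative.

def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Facility power consumed beyond the IT equipment itself."""
    return (pue - 1.0) * it_load_kw

it_load_kw = 1000.0  # hypothetical 1 MW of IT equipment

typical = overhead_kw(1.8, it_load_kw)  # average facility
tuned = overhead_kw(1.2, it_load_kw)    # well-run large facility

print(round(typical))          # 800 kW of overhead
print(round(tuned))            # 200 kW of overhead
print(round(typical - tuned))  # 600 kW wasted relative to best practice
```

In other words, at the same IT load an average facility burns roughly four times the overhead of a well-tuned one, before a single server is made more efficient.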

Yesterday, I presented a slide depicting the breakdown of types of organizations and (respectively) the shift in demand.

It is important to understand the details behind this. To start, let’s take a look at the boundary situations.

SMB/Mid-Tier Organizations

Data center demand from SMB and Mid-Tier organizations starts to shift to service providers. Typically, their needs are straightforward and small in scale. In most cases, they use a basic data center (sometimes just a closet) supporting a mixed workload running on common off-the-shelf hardware. Unfortunately, the data centers in use by these organizations are highly inefficient due to their small scale and lack of sophistication. That’s not the fault of the organization. It just further supports the point that others can manage data centers more effectively than they can. Their best solution would be to move to a colocation agreement or IaaS provider and leverage SaaS where possible. That takes the burden off those organizations and allows them to focus on higher value functions.

Very Large Enterprises (VLE)

At the other end of the spectrum, Very Large Enterprises will continue to build custom solutions for their web-scale, highly tuned, very specific applications. This is different from their internal IT demand. See my post A Workload is Not a Workload, is Not a Workload, where I outline this in more detail. Because of the scale of their custom applications, they're able to carry the data center requirements of their internal IT demand at a similar level of efficiency. If they only supported their internal IT demand, their scale would pale in comparison and, arguably, so would their efficiency.

Enterprises

In some ways, a VLE without the web-scale custom application is simply a typical Enterprise with a mixed workload. Enterprises sit in the middle. Depending on workload scale and characterization, and on organizational sophistication, enterprises may leverage internal data centers or external ones. It's very likely they will leverage a combination of both for a number of reasons (compliance, geography, technical, etc.). The key is to take an objective view of the demand and the alternatives.

The question is: can you manage a data center more effectively and efficiently than the alternatives? Also, is managing a data center strategic to your IT initiatives, and does it align with business objectives? If not, then it's probably time to make the shift.

Related Articles:

Mark Thiele: Measuring the Size of a Data Center – Yes, it Matters

The Green Grid: Metrics and Measurements

The Future Data Center Is…

Many folks want to look into a crystal ball and magically profess what the future looks like. In the land of technology, it's not that easy. Or is it? Sure, we have the ability to control our destiny. We are limited only by our own boundaries, artificially set or not. This may seem fairly straightforward, but it's not. Businesses are looking for technology organizations to evolve and change, even if that means the business shifts how it uses services and applications on its own. Hence shadow IT.

Over the course of my career, I've seen many data centers in various countries. Even today, the level of sophistication varies greatly, with Switch's primary Las Vegas data center at one end of the spectrum and a 20-year-old data center from a top data center/cloud provider at the other. I'll leave the latter unnamed to avoid any potential embarrassment. To contrast, I've toured newer data centers in their portfolio that are much more innovative.

The advent of cloud computing has flipped the way computing resources are used on its head. How data centers are used is changing quickly. And what's inside is becoming more relevant to those who manage data centers, but less relevant to those who use them. Let me explain.

Operating a data center is complex. It is no longer just four walls with additional power and cooling requirements. To add complexity, the line between facilities and IT has blurred greatly. How does an organization deal with this growing complexity on top of what they’re already dealing with? Furthermore, as the complexity of the applications and services increases, so do the expertise requirements within the organization. How is every company that currently operates a data center expected to meet these growing requirements? In reality, they can’t.

Only those that operate applications and services at scale will warrant the continued operation of their own facilities. General purpose IT services (core applications, custom applications and the like) will move to alternative solutions. Sure, cloud is one option. Colocation is another. There are many clever solutions to this growing challenge. Are there exception cases? Yes. However, it is important to take an unbiased view of the maturing marketplace and how best to leverage the limited resources available internally.

In summary, unless you are 1) operating applications or services at scale or 2) have a specific use-case, possibly due to regulatory or compliance requirements, or 3) do not, realistically, have a viable alternative… then you should consider moving away from operating your own data center. The future data center for many is an empty one.

The New Data Center Park Trend

Building data centers in specific areas is nothing new. Data centers are large consumers of power. That’s not news either. Typically, data centers are located near sources of low-cost (and hopefully renewable) energy. Energy is a large portion of the overall data center operational costs.

But power isn’t everything. Two other major considerations are connectivity to a variety of major backbone providers and people. Yes, people. How many skilled workers are willing to take the risk and relocate to a rural area? If the job doesn’t work out, where do they go? There is a premium to relocate people, which factors against the power savings.

Two ways to address the people issue are 1) locate the data center in close proximity to other data centers and 2) architect for a truly lights-out operation to limit staffing requirements. It seems that both are not only possible today, but also being encouraged.

Major companies such as VMware, Intuit, Microsoft, Yahoo and Dell, along with commercial providers, have built data centers in the Wenatchee/Quincy area of Central Washington State. The combined data centers comprise more than two million square feet of data center space. That's quite a large footprint for such a rural area. More recently, Facebook has located, and Apple is locating, data centers in the Prineville, Oregon area.

If your company does not have the scale for large data centers, there are still options. Commercial data center providers are locating data centers in the Wenatchee/Quincy area. There is also a growing trend toward the creation of data center "parks": locations specifically built to take advantage of power, cooling, tax implications and connectivity options. In addition, they're close enough to metro areas to attract the talent required for operations.

Reno, Nevada

http://www.datacenterknowledge.com/archives/2010/11/15/large-reno-project-to-generate-its-own-power/

Colorado

http://www.datacenterknowledge.com/archives/2012/03/09/energy-park-proposed-at-nexus-of-fiber-power/

I would expect to see an increase in data centers popping up in these data center parks and away from metropolitan areas, where rent and power are expensive. In addition, cloud computing will only accelerate the movement of data center functions away from traditional approaches and toward commercial offerings in remote areas.

Could Data Centers Become Black Sheep?

Could they? Could it be out of vogue to operate your own data center? Current developments in Corporate Social Responsibility and a maturing data center marketplace are starting to drive these changes.

For many, this could be a discussion about the pink elephant in the room. Data centers have been, and continue to be, a requirement for businesses around the world. We rely more heavily every day on systems and the applications they run. Those applications run on servers and use storage subsystems, all of which are connected with networking devices. Collectively, we call this the "IT load" of a data center.

The root question is not whether a data center is required. The obvious answer is: yes! The real question is: do I need to operate my own data center? We will get to that question in a minute.

Data Center Energy Consumption

Data centers are consuming a larger percentage of the world’s energy every day. Our growing appetite will continue to take a toll on natural resources. In 2007, the EPA issued (for some) an eye-opening report on data center consumption and potential areas of efficiency.

http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf

While the report is a bit dated (2007), the core data still holds true today. The majority of the report is focused on projections and potential areas of efficiency. In 2011, Jonathan Koomey issued an updated report on the findings.

http://www.analyticspress.com/datacenters.html/

In his report, he noted that data center power consumption did not grow as strongly as the EPA projected. By 2010, data centers accounted for about 1.3% of global electricity use, and about 2% in the US. Those are still very significant numbers.

What is Missing?

To add more fuel to the figures, a significant number of "facilities" are missing. Most notably absent from these findings is the energy consumed by the myriad of smaller "data centers". While many would not call them data centers, they serve the same purpose of housing servers, storage and networking equipment. These are the smaller closets, rooms and labs, ranging from a server and switch under a desk, to a rack or two of gear in a closet, to a 1,000 sqft room. It is much harder to pin down the power consumed by each of these smaller locations. But if you consider that these are the common solution for Small and Medium Businesses (SMB), the aggregate consumption is significant.
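A back-of-the-envelope sketch shows why the aggregate matters. Every number here is a hypothetical assumption chosen for illustration, not a measured figure:

```python
# Fermi-style estimate of the aggregate draw of small, unaccounted-for
# server closets. Both inputs below are assumptions, not measured data.

assumed_sites = 500_000    # hypothetical count of SMB closets/rooms
assumed_kw_per_site = 3.0  # hypothetical average draw: a few servers plus a switch

aggregate_mw = assumed_sites * assumed_kw_per_site / 1000.0
print(aggregate_mw)  # 1500.0 MW in aggregate, despite each site being tiny
```

Even with conservative per-site assumptions, hundreds of thousands of tiny installations add up to power-plant-scale demand that never appears in facility-level surveys.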

Potential Impact

Increasing the efficiency of the physical data center is a great start. There are many opportunities to improve the efficiency of power and cooling systems, and people have focused on exactly that for years. Many of the solutions are simple to implement and make a significant impact, while others take quite a bit of work, expertise and money. And there are many brilliant minds around the world currently working on this very challenge.

However, the largest potential impact may come from the IT load itself. For the majority of IT loads, the equipment is not used efficiently. Server, storage and network utilization figures are much lower than they could be. Servers are designed (from an energy perspective) for high utilization; one look at a server power supply's efficiency curve supports this. Processor utilization commonly peaks at 20-30%, with average utilization in the 5-10% range. In addition, current implementation rates for virtualization are still relatively low. The latest figures suggest that as many as 50% of servers are virtualized; anecdotally, even that figure seems high. Regardless, pushing virtualization adoption to 80%+ would significantly reduce overall power consumption…for the same IT workload.
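The consolidation math behind that claim can be sketched as follows. The server count, wattages and VM-to-host ratio are all assumed values for illustration only:

```python
# Illustrative comparison (all figures assumed): power drawn by many
# lightly utilized physical servers vs. a few well-utilized hosts
# running the same workload as virtual machines.

def consolidation_power(servers: int, idle_w: int, busy_w: int, ratio: int):
    """Return (watts before, watts after) when consolidating at a given
    VM-to-host ratio, assuming hosts run near full load afterward."""
    before = servers * idle_w      # mostly idle, but still drawing power
    hosts = -(-servers // ratio)   # ceiling division: hosts needed
    after = hosts * busy_w         # fewer hosts, each well utilized
    return before, after

before, after = consolidation_power(100, idle_w=250, busy_w=400, ratio=10)
print(before, after)  # 25000 4000
```

Under these assumed figures, consolidating 100 lightly loaded servers onto 10 busy hosts cuts the draw from 25 kW to 4 kW for the same workload, which is why virtualization rates matter so much to the aggregate numbers.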

Imagine reducing the US power consumption by a full 1%. The impact could be that significant.

Strategic and World-Class Expertise

Now back to the root question: Do you need to have your own data center? Before answering, two other questions will shed light on the answer. First, is your organization in a position to operate your data center (100,000 sqft facility, 5,000 sqft room, closet, lab, etc.) at a world-class level? Asked a different way: is your organization willing to invest in a team of people whose whole job, not just a line in the job description, is operating a world-class facility? Second, is operating a data center strategic to your organization? We already covered that data centers are vitally important. So is electricity. Are you willing to make the investment in operating a data center that is unique and provides an advantage over your competition? Or are there alternatives that better fit the strategic direction of the organization?

The Solution

If you set personal beliefs, cultural norms and inertia aside, for most the answer to these questions is no. There are viable alternatives today that offer the required economics, flexibility and responsiveness. And the alternative data center providers do employ dedicated teams to ensure their facilities are world-class. Only the few with large-scale requirements, or the uncommon corner case, will still need to operate their own data center.

Cloud computing is just one of many ways to accomplish these objectives. Startups and others are already heading down this path unencumbered by cultural norms and inertia. The challenge for established organizations is how to effectively turn the corner.

Bottom Line: Most organizations are not in a position to efficiently operate a world-class data center and should look at alternative solutions. The data center provider market is mature and competitors are already heading down this path.