The Real Problem with Data Center Efficiency

In the past week, The New York Times (and Greenpeace before it) called attention to inefficiency in data centers. These stories shine a light on a serious issue, but they are a bit misguided and do not tell the whole story. Generally speaking, are data centers inefficient? Absolutely! Read on to understand the significance of the situation, the reasons data centers are inefficient and the opportunities that lie ahead.


Data centers are large consumers of power. According to a 2007 U.S. EPA report, data centers accounted for a full 1.5% of U.S. energy consumption in 2006, a number expected to double to 3% by 2011. At the time, 38% of that consumption was attributable to the nation’s largest data centers. However, these numbers do not represent the entire footprint of data centers: smaller facilities, closets and lab spaces were not included in the study. From experience, these represent a significant aggregate footprint.

Organizations like The New York Times and Greenpeace have called out the issues around inefficiency in data centers. Good points are made, but the focus is misdirected and backfiring. The companies called out in these reports operate some of the most efficient data centers on the planet. So, what’s the problem? The vast majority of data centers: the ones operated by everyone else.

Size Matters

In my post The Future Data Center Is… Part II, I break down the importance of understanding the differences between SMB, Mid-Tier, Enterprise and Very Large Enterprise data centers. There are very significant differences between the tiers in their requirements and in their ability to run an efficient data center. My good friend and fellow Data Center Pulse board member Mark Thiele provided additional color in his SwitchScribe post Measuring the Size of a Data Center – Yes, it Matters. Unfortunately, the majority of articles focus on the Very Large Enterprise data centers. These facilities have very specific requirements and do not represent the vast majority of data centers in use today.

Sadly, SMB, mid-tier and (to a lesser degree) enterprise data centers are some of the most inefficient operations on the planet. And that doesn’t account for the closets and rooms that house IT equipment. Is there something to learn from the larger enterprises? Absolutely. Should (or can) the rest of data center operators mimic them? No. The very large enterprises are getting more efficient every day. The SMB, mid-tier and enterprise data centers are simply not able to keep up.

Different Purposes of Data Centers

In my post A Workload is not a Workload, is not a Workload, I delve into the differences that drive data center architecture and operation. The fundamental premise behind the story is that the larger providers have two things that differentiate them from the rest.

  1. Monolithic Applications: Organizations like eBay, Zynga, Google and Facebook run very specialized applications. These apps (and the infrastructure they run on) are highly tuned. In most other organizations, the workload is very mixed, which makes it challenging (if not impossible) to tune effectively.
  2. Scale: The same companies run their monolithic applications at web-scale, which is much larger than typical enterprise applications. The very nature of the scale requires and allows specialization in the tuning of the application and related infrastructure.

These two factors lead to a much different situation than exists in the typical enterprise, mid-tier or SMB environment.

Organizational Impact

Beyond the technical specifics, the organizational complexities cannot go unmentioned. The above-mentioned companies have teams of people with specific jobs dedicated to supporting the data center and its operation. This dedication allows specialization that drives further efficiency in the facility and operations. The staff understands the PUE of the data center and how each specific tweak impacts the number. They often run their facilities as close to maximum efficiency and capacity as possible. Why? They clearly understand the business impact.
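Since PUE figures prominently in how these dedicated teams measure efficiency, a minimal sketch of the calculation may help. The function and the example loads below are illustrative, not figures from any specific operator:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 is the theoretical ideal (every watt reaches IT equipment).
    Everything above 1.0 is overhead: cooling, power distribution losses,
    lighting and so on. Lower is better.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative example: a facility drawing 1,800 kW in total to deliver
# 1,000 kW to IT equipment is spending 800 kW on overhead.
print(pue(1800, 1000))  # 1.8
```

Each "tweak" the staff makes (raising cold-aisle temperatures, improving airflow, upgrading power distribution) shows up directly as a change in this ratio, which is why the number is watched so closely.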

The typical enterprise, mid-tier or SMB presents a much different situation. In this case, responsibility for the data center is often just one of many requirements in a person’s job description, on top of everything else. These organizations simply don’t have the scale to justify specialization of data center operations.


There is no question that data centers are huge energy consumers. That will not change. The opportunity is to run more efficient facilities that leverage renewable energy sources. Some larger (and newer) facilities are being located near renewable power sources. Yahoo, VMware and others recently built data centers in Wenatchee, WA near the Grand Coulee dam’s hydroelectric power source. In other cases, wind and solar farms are being built near larger data centers.

It should be noted that these decisions do not always make good business sense. Renewable energy sources are not available at reasonable costs everywhere. And moving data centers near renewable power sources is not always feasible either. Staffing, backbone network connectivity and a host of other factors influence the decision. Regardless of the interest in social and environmental responsibility, a data center is an expensive business asset that requires analysis of many factors.


Other issues present challenges for data centers, including costs, knowledge, legacy applications, governance and security. Data centers are complex ecosystems that require attention, understanding and specialized management. Over time, data centers are getting more complex…not simpler. There needs to be an appreciation and acceptance of these issues.

Shared Knowledge

In summary, the very large data center operators are running some of the most efficient facilities and operations. The data center (and broader IT) industry needs to learn from their examples. However, because articles from the NYT and Greenpeace called out their flaws, there is growing hesitation to share what they’re doing for fear of bad PR. Can you blame them? But leaders like Dean Nelson (VP at eBay and fellow Data Center Pulse board member) are still fighting the trend in order to benefit the industry. The data center and IT industry needs an environment where ideas and experiences can be freely shared without concern of misguided criticism in the mass media.

Opportunity/Solution

Data centers are huge consumers of energy. Their demand is increasing and not expected to shrink. So, what can be done to address the issues? There are several immediate changes that are needed.

  1. Strategic Differentiation: Organizations need to take a hard look at their own focus. Is the company in the data center business? Is the company willing to invest in the data center to truly run it efficiently? Is the data center a strategic differentiator? For the vast majority of companies currently running data centers, the clear answer will be no.
  2. Efficient Facilities: The very large enterprises are already running efficient facilities and driving hard toward greater efficiency. Let’s encourage them to continue to do so! Those not able to run at this level of efficiency need to stop running their own facilities. SMB, mid-tier and some enterprise organizations need to develop plans to eliminate their own data centers and leverage more efficient ones. Consider colocation, hosted infrastructure and other options.
  3. Efficiency Programs: Several power utilities have offered programs with incentives to offset the costs of implementing energy efficient solutions. The problem is that these programs are not consistent across power utilities, and some utilities do not offer them at all. The industry needs a consistent program similar to the National Data Center Power Reduction Incentive Program proposed by Data Center Pulse in 2009.
  4. Virtualization: Virtualization is not new; it is a very mature technology. However, adoption rates are still anemic. Depending on which analyst organization you believe, the numbers average roughly 30-40%. This isn’t the whole story, though. From experience, organizations sit at opposite ends of the spectrum: either heavily virtualized or virtualizing only a few servers. The excess, unused capacity is simply wasteful. There are a number of reasons for this, including cost, legacy applications, knowledge, fear, and an already overwhelming plate of issues to address.
  5. Cloud Computing: The cloud services market is maturing very quickly. Even highly regulated industries with compliance requirements are leveraging cloud computing for their most sensitive applications. Cloud-based services are a good option to leverage where possible.
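The virtualization point above can be made concrete with a back-of-the-envelope consolidation estimate. The model and the numbers below are a hypothetical sketch, not a capacity-planning method from the post:

```python
import math

def consolidation_estimate(physical_servers: int, avg_utilization: float,
                           target_utilization: float = 0.60) -> int:
    """Rough count of virtualized hosts needed to carry the same total work.

    Simplified model: workloads are packed onto hosts until each host
    reaches target_utilization. Real capacity planning must also account
    for peak loads, HA failover headroom, memory limits and licensing.
    """
    total_work = physical_servers * avg_utilization
    return math.ceil(total_work / target_utilization)

# Illustrative: 100 physical servers averaging 10% utilization could, in
# this simplified model, be consolidated onto about 17 virtualized hosts,
# eliminating the power draw of over 80 mostly idle machines.
print(consolidation_estimate(100, 0.10))
```

Even with generous headroom assumptions, this kind of arithmetic shows why low virtualization adoption translates directly into the wasted capacity (and wasted power) described above.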

In summary, let’s commend those that are trying hard to help our industry move forward. Let’s bring the focus to the real issues preventing the efficient operation of data centers. There are a number of viable and immediate solutions available today to help the larger contingent of organizations.

Further Reading:

The Future Data Center Is… Part II

The Future Data Center Is…

The New Data Center Park Trend

Cloud Data Centers Become Black Sheep?

A Workload is not a Workload, is not a Workload


References:

  1. Glanz, James. “Power, Pollution and the Internet.” The New York Times. 22 Sep 2012.
  2. Glanz, James. “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle.” The New York Times. 23 Sep 2012.
  3. Fehrenbacher, Katie. “NYT’s data center power reports like taking a time machine back to 2006.” GigaOM. 24 Sep 2012.
  4. Weinman, Joe. “The Power of IT (it’s not all in energy consumption).” GigaOM. 26 Sep 2012.
  5. “How Clean is Your Cloud?” Greenpeace. 17 Apr 2012.
  6. “National Data Center Power Reduction Incentive Program.” Data Center Pulse. 6 May 2009.
  7. “The Green Grid Power Efficiency Metrics: PUE & DCiE.” The Green Grid. 23 Oct 2007.
  8. “Report to Congress on Server and Data Center Energy Efficiency.” U.S. Environmental Protection Agency. 2 Aug 2007.
  9. Thiele, Mark. “Measuring the Size of a Data Center – Yes, it Matters.” SwitchScribe. 30 Jan 2012.

Cloud Application Matrix

Several years in, there is still quite a bit of confusion around the value of cloud computing. What is it? How can I use it? What value will it provide? There are several perspectives on how to approach cloud computing value. Interestingly, that very question elicits several possible responses. This missive specifically targets how applications map against a cloud value matrix. From the application perspective, scale, along with whether an application is legacy or greenfield, governs the direction of value.

Scale (y-axis)

As scale increases, so does the potential value from cloud computing. That is not to say that traditional methods are not valuable. It has more to do with the direction and velocity of an application’s scale. Greenfield applications provide a different perspective from legacy applications. Rewriting legacy applications simply to use the cloud brings questionable value. There may be extenuating circumstances to consider, but those are not common.

Legacy vs. Greenfield (x-axis)

The x-axis represents the spectrum of applications from legacy to greenfield. Greenfield applications may include either brand-new applications or rewritten legacy applications. Core, off-the-shelf applications may fall into either category. The current maturity of the cloud marketplace suggests that any new or greenfield application should consider cloud computing. That includes both PaaS and SaaS approaches.


The first step is to map the portfolio of applications against the grid. Each application type and scale is represented in relation to the others. This is a good exercise to 1) identify the complete portfolio of applications, 2) understand the current state and lifecycle and 3) develop a roadmap for application lifecycles. The roadmap can then become the playbook to support a cloud strategy.
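As a sketch of the mapping exercise, the portfolio can be represented as a simple data structure. The application names and scores below are hypothetical, and the 0.5 quadrant cutoffs are an assumption made for illustration:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str           # hypothetical application names below
    greenfield: float   # x-axis: 0.0 = pure legacy, 1.0 = pure greenfield
    scale: float        # y-axis: 0.0 = small scale, 1.0 = web-scale

def quadrant(app: App) -> str:
    """Place an application in one of the four matrix quadrants."""
    x = "right" if app.greenfield >= 0.5 else "left"
    y = "upper" if app.scale >= 0.5 else "lower"
    return f"{y}-{x}"

# A hypothetical portfolio mapped against the grid.
portfolio = [
    App("payroll (legacy ERP)", 0.1, 0.3),
    App("customer portal (rewrite)", 0.8, 0.4),
    App("public web application", 0.9, 0.9),
]

for app in portfolio:
    print(f"{app.name}: {quadrant(app)}")
```

Re-scoring each application at every lifecycle review turns this snapshot into the roadmap described above: movement to the right (rewrites) or upward (scale growth) signals when cloud options deserve a fresh look.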

Upper-Right Quadrant

The value cloud computing brings increases as application requirements move toward the upper-right quadrant. In most cases, applications will move horizontally to the right rather than vertically upward. The clear exception is web-scale applications. Most of those start in the lower-right quadrant and move vertically upward.


The matrix is intended as a general guideline that characterizes the majority of applications and situations, but not all. For example, legacy applications may be encapsulated to support cloud-based services as an alternative to rewriting.

Dreamforce 2012 Trip Report


Last week’s Salesforce Dreamforce event had to be the largest conference I have seen at San Francisco’s Moscone Center. It covered Moscone North, South and West plus several hotels. And if that was not enough, Howard Street was turned into a lawn area complete with concert stage, outdoor lounge area and exhibits. Dreamforce presented a great opportunity to learn more about the Salesforce community…and a number of missed opportunities.

Walking the expo floor, one thing becomes clear very quickly: Salesforce is the largest exhibitor. Taking up 25-30% of the expo floor, the Salesforce area maintained focal points around sales, marketing and service. Surrounding the Salesforce area were partners in their ecosystem, some built on the Salesforce platform, others with their own platforms. There were solutions for all types of needs. Unfortunately, the different subject matter was intertwined throughout the floor (Sales next to Service next to Marketing). Salesforce is a broad platform, and if you were interested in a specific aspect of Salesforce-based solutions, it was hard to find the related offerings. Interestingly, consulting firms held some of the largest booths outside of Salesforce.

Moscone West held the Developer Zone with less structured community areas for folks with similar interests to gather. Multiple presentations were taking place in the Developer Zone non-stop. In addition to the Unconference area, there was plenty of space for folks with common interests to gather around tables complete with power and Wi-Fi.

The 750+ sessions provided a wide range of presentations, from how-tos to case studies. In addition, there was a good mix of detailed to high-level sessions depending on your particular interest.


Dreamforce is a good example of the maturity of Salesforce’s ecosystem. However, the prominence of consulting firms adds some contrast to that statement. Just walking around the expo floor, one could get the impression that there is a solution to every problem imaginable. Not true: several of the basics are still woefully absent. Many of the solutions are excellent point solutions that address specific pain points.

Unfortunately, two aspects are missing: Integration and Accessibility. Earlier this year, I wrote about the importance of onramps. At the expo, I randomly sampled several folks walking the show floor to get their thoughts. The theme was consistent: great solutions, but everyone was looking for an integrated solution. And it was not clear how to get from the current state to a future state leveraging the innovative solutions. The prominence of consulting firms could serve as both a solution and further validation. Consulting firms provide a good short-term answer to the integration and onramp problem. However, both issues need to be baked into the ecosystem’s solutions to sustain the ecosystem long-term.


Are conferences like Salesforce’s Dreamforce valuable to attend? In a nutshell…yes! If you knew very little about Salesforce before last week, Dreamforce presented a great opportunity to get an overview of the possibilities, dig further into specific details and network with peers. If you are already an established customer, there is plenty of innovation still coming from the ecosystem.