Cap and Trade Impact on Data Centers

Cap and trade is a relatively new threat to data centers. However, the concept itself is not new; it has been around for decades.

http://en.wikipedia.org/wiki/Emissions_trading

The US Environmental Protection Agency (EPA) has a website dedicated to its programs.

http://www.epa.gov/captrade/

The cap and trade concept is fairly simple. Emissions from a facility are given a “cap,” or maximum allowed amount. Facilities that emit less than their cap hold surplus credits, which can be traded to facilities exceeding their cap; hence the “trade.” Interestingly, this creates a market opportunity for trading credits. For data centers, it puts a spotlight on the type, and therefore the cost, of energy. Green or clean energy sources are preferred but cost more per kWh.
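To make the arithmetic concrete, here is a minimal Python sketch of the surplus/deficit calculation. All figures (the cap, the emissions, and the credit price) are hypothetical, purely for illustration:

```python
# Minimal sketch of cap-and-trade arithmetic for a single facility.
# All figures (cap, emissions, credit price) are hypothetical.

CREDIT_PRICE_USD = 15.0  # assumed market price per tonne of CO2

def trade_position(cap_tonnes, actual_tonnes, price=CREDIT_PRICE_USD):
    """Return (surplus_tonnes, cash_flow_usd).

    A positive cash flow means the facility can sell surplus credits;
    a negative one means it must buy credits to cover the overage.
    """
    surplus = cap_tonnes - actual_tonnes
    return surplus, surplus * price

# A facility capped at 10,000 tonnes that emits 8,500 can sell 1,500
# credits; one that emits 12,000 must buy 2,000.
print(trade_position(10000, 8500))   # (1500, 22500.0)
print(trade_position(10000, 12000))  # (-2000, -30000.0)
```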

The question is not whether increased data center efficiency is needed. It most definitely is needed! The question is how to incentivize data centers to get more efficient. Cap and trade could be viewed as a “big stick” approach. Some local power utilities offer rebate programs. But they are often hard to leverage and not offered in many areas. One incentive-based approach to drive data center efficiency has been proposed by Data Center Pulse (DCP).

http://www.datacenterpulse.org/

Ideally, data centers will see the value in becoming as efficient as possible without the need for the big stick. Incentive-based programs can provide a catalyst for data center efficiency without the pressure of a program such as cap and trade.

Strategic Impact of Acquisitions: Oracle Acquisition of Virtual Iron Example

This week’s announcement of Oracle’s (NYSE: ORCL) acquisition of Virtual Iron caused a bit of a stir. Oracle’s press release is vague, and financial terms were not disclosed:

http://www.oracle.com/us/corporate/press/018535

The acquisition acknowledges an architectural shift toward a collapsed stack. Today’s stack includes layers for applications, platforms, and infrastructure. A future stack would eliminate the middle layer: platform components move into the application and infrastructure layers. Applications become more “infrastructure aware,” infrastructure becomes more accessible to applications, and the traditional abstraction layer goes away.
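One way to picture the shift is the Python sketch below, which contrasts the two models. The class and method names are hypothetical, invented purely to show where the resource-sizing decision lives in each case:

```python
# Rough sketch of the traditional vs. collapsed stack. All class and
# method names here are hypothetical, for illustration only.

class Infrastructure:
    def provision_vm(self, cpus, ram_gb):
        print("provisioning VM: %d CPUs, %d GB RAM" % (cpus, ram_gb))

class Platform:
    """Traditional middle layer: abstracts infrastructure away from
    the application, which never sees sizing decisions."""
    def __init__(self, infra):
        self.infra = infra
    def deploy(self, app_name):
        self.infra.provision_vm(cpus=2, ram_gb=4)  # one-size-fits-all
        print("deployed %s via platform layer" % app_name)

class InfrastructureAwareApp:
    """Collapsed stack: the application talks to infrastructure
    directly and sizes its own resources based on expected load."""
    def __init__(self, infra):
        self.infra = infra
    def deploy(self, expected_requests_per_sec):
        cpus = 2 if expected_requests_per_sec < 1000 else 8
        self.infra.provision_vm(cpus=cpus, ram_gb=cpus * 2)
        print("deployed with load-aware sizing")

infra = Infrastructure()
Platform(infra).deploy("legacy-app")
InfrastructureAwareApp(infra).deploy(expected_requests_per_sec=5000)
```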

What significance does this have for strategy? It is important for a number of reasons. First, it signals the maturity of the model and a readiness to evolve to the next version. Second, it provides a streamlined model for developers to build against. Third (and related to #2), a collapsed stack affords tighter integration between the remaining two layers. All three of these reasons allow a faster development cycle through streamlined operations.

Moving forward, we should expect to see more consolidation of the platform providers by the larger software and infrastructure players. VMware (NYSE: VMW) is noted as a likely target; however, with a market cap of $10.58B, it becomes harder to swallow. Companies like Rackspace (NYSE: RAX), with a market cap of $1.1B, could be targets. Larger organizations such as IBM (NYSE: IBM) or HP (NYSE: HPQ) could acquire a good infrastructure player. Larger software companies such as Microsoft (NASDAQ: MSFT) and Oracle are good candidates to acquire players on the software end of the spectrum.

With markets down, expect to see an increase in consolidation over the next 6-12 months. For consumers of these services, it becomes even more critical to understand the potential business opportunities. Provider evaluations will continue, and you should hedge your bets on the services they provide.

Importance of Private Clouds

First, a quick primer on private vs. public clouds: private clouds are, in many respects, similar to their public big brothers. However, a private cloud is essentially a pooling of resources within a single organization, so its economies of scale are limited by the size of that organization. The cloud itself, though, is owned and operated locally.

Private clouds provide benefits to the cloud computing movement. First, they give organizations the ability to get familiar with cloud computing concepts, which is important both technically and organizationally. Second, private clouds provide a migration path to public clouds, acting as a potential stepping-stone from a traditional infrastructure to a public cloud infrastructure. Movement to private clouds also offers the potential for better utilization of internal resources, thereby increasing efficiency.

Not everyone is ready to support private clouds. Microsoft stated that its Azure offering would not be available to run locally.

http://blogs.zdnet.com/microsoft/?p=2340

It is possible that Microsoft is simply focused on the public offering for now and will return to a private offering at a later date. However, it could also be a ploy to force Microsoft customers to move directly from traditional infrastructure to a public cloud infrastructure.

Financially, moving from traditional infrastructure to private clouds improves the efficiency of capital investments. Moving further, from private clouds (or traditional infrastructure) to public clouds, shifts spending from CapEx to OpEx. For many companies, this offers greater financial flexibility in times when capital is tough to come by.
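A back-of-the-envelope Python sketch shows why the shift matters for cash flow. The dollar figures and the time horizon are purely hypothetical:

```python
# Hypothetical comparison of CapEx vs. OpEx spending profiles for
# equivalent capacity. Figures are illustrative only.

CAPEX_PURCHASE_USD = 500000.0  # owned hardware, paid up front
ANNUAL_OPEX_USD    = 120000.0  # public-cloud subscription, paid yearly
YEARS              = 5

capex_total = CAPEX_PURCHASE_USD       # all capital committed in year 0
opex_total  = ANNUAL_OPEX_USD * YEARS  # spending spread over the term

print("CapEx model: $%.0f committed up front" % capex_total)
print("OpEx model:  $%.0f spread over %d years ($%.0f/yr)"
      % (opex_total, YEARS, ANNUAL_OPEX_USD))
# Even when the multi-year totals are comparable, the OpEx model frees
# capital in year 0, which is the financial flexibility described above.
```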

Should Google Outage Cause Change in Cloud Computing Strategy?

Google suffered an outage this week that left 14% of its users routed to sites in China rather than the US. As a result, a traffic jam between the US and Asia ensued. Google’s SVP of Operations posted an update to the company blog:

http://googleblog.blogspot.com/2009/05/this-is-your-pilot-speaking-now-about.html

The larger question (and one many are asking) is: should this cause a change to your cloud strategy? The short answer should be no. However, if you’re asking this question, then the immediate answer is probably yes. Let me explain…

Many view cloud services as a panacea that frees them from the binds of traditional methods. While there is truth to that, there are differences. One difference is how your providers deliver services versus traditional methods. In your own facility, you would plan redundant options for critical services. Leveraging cloud-based services is no different; however, the redundancy needs to move up the stack. Instead of simply looking at redundancy at the infrastructure layer, redundancy needs to happen at the provider layer. This is a change from the traditional model, where you are the provider.
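As an illustration, here is a minimal Python sketch of failover at the provider layer. The provider names and the fetch helper are hypothetical placeholders, not any real provider’s API:

```python
# Minimal sketch of provider-level redundancy: try each cloud provider
# in order and fail over on an outage. Provider names and the fetch
# helper are hypothetical placeholders, not a real API.

class ProviderUnavailable(Exception):
    pass

def fetch_from(provider, resource):
    """Stand-in for a real provider API call."""
    if provider["healthy"]:
        return "%s served by %s" % (resource, provider["name"])
    raise ProviderUnavailable(provider["name"])

def fetch_redundant(providers, resource):
    """Redundancy at the provider layer: fail over across providers,
    not just across machines inside a single provider's facility."""
    for provider in providers:
        try:
            return fetch_from(provider, resource)
        except ProviderUnavailable as down:
            print("provider %s unavailable, failing over" % down)
    raise RuntimeError("all providers are down")

providers = [
    {"name": "primary-cloud", "healthy": False},   # simulate an outage
    {"name": "secondary-cloud", "healthy": True},
]
print(fetch_redundant(providers, "user-profile-service"))
```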

In summary, if your cloud strategy takes into account redundancy across providers, then your strategy does not need to change. An outage at one provider does not significantly impact services to users. However, if services are delivered from a single provider, then yes, your strategy should probably change.

Cloud Computing Goes Mainstream

For years, in-flight magazines have played a significant role in the direction of technology. Many an executive has returned from a flight brimming with interest in an article read in the airline’s complimentary magazine.

For years, the business world has used the Big Mac Index as a financial indicator. Published by The Economist, the Big Mac Index seeks to simplify exchange-rate theory into terms the mainstream can understand.

http://www.economist.com/markets/bigmac/

Much as the Big Mac Index serves as an economic indicator, technology articles in airline magazines signal a technology’s acceptance into the mainstream.

The May issue of Continental (Continental Airlines’ in-flight magazine) contains a cover story on Cloud Computing.

http://magazine.continental.com/200905-home

What is interesting is that it’s not a buried story, but rather a cover story right on the front of the publication. Hewlett-Packard’s Russ Daniels explains the cloud and how it can be leveraged.

For those who have not been asked about cloud computing already, this mass-market exposure is sure to cause a new wave of interest.