The Future Data Center Is… Part II

Last month, I wrote The Future Data Center Is… and alluded to a shift in demand for data centers. Just to be clear, I don’t believe data center demand is decreasing. Quite the contrary, I believe demand is exploding! But how is demand for data centers going to change? What does the mapping of organizations to services look like?

First, why should you care? Today, the average PUE (Power Usage Effectiveness) of a data center is 1.8, and that’s just the average. That’s atrocious! Very Large Enterprises are able to drive that down to roughly 1.1-1.3. The excess is wasted energy. At a time when Corporate Social Responsibility and carbon footprint are increasingly in vogue in the corporate arena, data centers are becoming a large target. So efficiency matters!
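
As a reminder, PUE is simply total facility energy divided by the energy delivered to IT equipment. A minimal sketch of the arithmetic, using purely hypothetical numbers, shows why the gap between 1.8 and 1.2 matters:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: the same 1,000 kW of IT load in two facilities.
it_load_kw = 1000.0
average_dc = 1800.0    # PUE 1.8 -> 800 kW of overhead (cooling, power distribution, lighting)
efficient_dc = 1200.0  # PUE 1.2 -> only 200 kW of overhead for the same IT load

print(pue(average_dc, it_load_kw), pue(efficient_dc, it_load_kw))
```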

Yesterday, I presented a slide depicting the breakdown of organization types and, for each, the shift in demand.

It is important to understand the details behind this. To start, let’s take a look at the boundary situations.

SMB/Mid-Tier Organizations

Data center demand from SMB and Mid-Tier organizations starts to shift to service providers. Typically, their needs are straightforward and small in scale. In most cases, they use a basic data center (sometimes just a closet) supporting a mixed workload running on common off-the-shelf hardware. Unfortunately, the data centers in use by these organizations are highly inefficient due to their small scale and lack of sophistication. That’s not the fault of the organization. It just further supports the point that others can manage data centers more effectively than they can. Their best solution would be to move to a colocation agreement or IaaS provider and leverage SaaS where possible. That takes the burden off those organizations and allows them to focus on higher value functions.

Very Large Enterprises (VLE)

At the other end of the spectrum, Very Large Enterprises will continue to build custom solutions for their web-scale, highly tuned, very specific applications. This is different from their internal IT demand. See my post A Workload is Not a Workload, is Not a Workload where I outline this in more detail. Because of the scale of their custom applications, they are able to carry the data center requirements of their internal IT demand at a similar level of efficiency. If they only supported their internal IT demand, their scale would pale in comparison and, arguably, so would their efficiency.

Enterprises

In some ways, the VLE without the web-scale custom application is a typical Enterprise with a mixed workload. Enterprises sit in the middle. Depending on the scale and characterization of the workloads, and on the organization and its sophistication, enterprises may leverage internal data centers or external ones. It’s very likely they will leverage a combination of both for a number of reasons (compliance, geography, technical, etc.). The key is to take an objective view of the demand and the alternatives.

The question is, can you manage a data center more effectively and efficiently than the alternatives? And is managing a data center strategic to your IT initiatives and aligned with business objectives? If not, then it’s probably time to make the shift.

Related Articles:

Mark Thiele: Measuring the Size of a Data Center – Yes, it Matters

The Green Grid: Metrics and Measurements

Tracelytics Heats Up Cloud-based APM

Gaining visibility into application performance is key. Application Performance Management (APM) solutions are not new and provide insight into the tiers within an application stack. With the rise of cloud-based computing in the past couple of years, the APM world got a bit more complex.

APM is mature enough to consider cloud-based providers in the application stack. In the classic model, an application has three layers in the stack: 1) the database layer, 2) the application layer, and 3) the web layer. Depending on the complexity of the application, it may have five or more layers in the mix. Today, a cloud service provider may serve one or more of these layers.

Several solutions exist that support cloud-based APM. New Relic, OPNET, and CA are just a few examples. At the Under the Radar conference, Tracelytics presented their approach to APM. Tracelytics was started two years ago by a small team of three to address a growing problem they observed in research from Brown University. I met with Spiros Eliopoulos, Co-Founder and CTO, to discuss how Tracelytics’ approach differs from the competition.

So, what’s different? Bottom line: it comes down to the flexibility of the solution. As the application stack gets increasingly complex, so does its management. The number of providers and shared resources is growing exponentially. According to Spiros, their solution “looks at each layer individually, then ties together the different layers to provide a complete view.” Tracelytics provides APM visibility through “drilldown performance across layers.” Their clever approach uses heat maps to visually find problem spots. Managing APM within layers and up and down the entire stack is key to providing the clear visibility needed to correct problem areas quickly.
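
To make the cross-layer idea concrete, here is a minimal, hypothetical sketch (not Tracelytics’ actual implementation) of grouping per-layer timings under a shared trace ID so a heat map could highlight the slow layer:

```python
from collections import defaultdict

# Hypothetical trace records: (trace_id, layer, duration_ms)
spans = [
    ("t1", "web", 12.0), ("t1", "app", 48.0), ("t1", "db", 140.0),
    ("t2", "web", 10.0), ("t2", "app", 45.0), ("t2", "db", 30.0),
]

# Aggregate durations per layer; these averages are what a heat map would color.
by_layer = defaultdict(list)
for trace_id, layer, duration in spans:
    by_layer[layer].append(duration)

for layer, durations in by_layer.items():
    avg = sum(durations) / len(durations)
    print(f"{layer}: avg {avg:.1f} ms over {len(durations)} traces")
```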

Many providers struggle with pricing strategies in today’s cloud and virtualized world. In the traditional computing world, it was easy to license solutions. Tracelytics’ approach continues to provide flexibility by pricing on trace volume rather than hosts or layers. The entire stack of an application is considered one application. So, whether one application reports 10x per hour or 10 applications report once per hour, the cost is the same. This is true regardless of the number of layers within the application stack. Nice!
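
A rough illustration of volume-based pricing; the per-trace rate below is made up purely for the arithmetic:

```python
PRICE_PER_TRACE = 0.001  # hypothetical rate, chosen only for illustration

def hourly_cost(apps: int, traces_per_app_per_hour: int) -> float:
    # Cost depends only on total trace volume, not on the number of apps, hosts, or layers.
    return apps * traces_per_app_per_hour * PRICE_PER_TRACE

# One app tracing 10x per hour costs the same as 10 apps tracing once per hour.
assert hourly_cost(1, 10) == hourly_cost(10, 1)
```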

Shadow IT is a Good Thing for IT Organizations

Shadow IT is a good thing for IT organizations…and here’s why…

It is important to first understand what Shadow IT is and why it happens. Shadow IT commonly refers to non-IT organizations delving into the delivery of technology solutions without IT’s involvement. It happens for a number of reasons, but the most common is demand for a technology solution combined with a belief (right or wrong) that IT is not able to assist or deliver it. This could be due to timing, availability, experience, bureaucracy, or a number of other factors. The bottom line is that the non-IT organization believes it can address a need better than the IT organization can.

In general, is Shadow IT a bad thing? Yes, but it has the opportunity to evolve into a very good thing. Shadow IT (as it is often implemented today) is a reaction to a problem with a solution that is not ideal. The solution is a non-IT organization trying to provide IT services. Unfortunately, this is often not their core competency and, furthermore, it distracts from their core mission.

So, why is this new? In the past, it was hard for non-IT organizations to leverage technology without the assistance of IT. People were also not as familiar with technology. In the cloud-based world, leveraging technology is far easier. In addition, knowledge workers today are more familiar with technology than past generations were. For those that build shadow IT organizations, the belief is that building it themselves is the path of least resistance compared with leveraging IT. While not an ideal situation, it is often the only choice they see.

At the Forrester CIO Forum yesterday, it was reported that 79% of business decision makers rely on technology to innovate in the business, 42% say IT is too bureaucratic, and 11% of those decision makers are bypassing IT.

The move to shadow IT is a good thing for IT. Why? It is a wake-up call. It sends a clear message that IT is not meeting the requirements of the business. IT leaders need to rethink how to transform the IT organization to better serve the business and get ahead of the requirements. There is a significant opportunity for IT to play a leading role in business today. However, it goes beyond just the nuts and bolts of support and technology. It requires IT to get more involved in understanding how business units operate and to proactively seek opportunities to advance their objectives. It requires IT to reach beyond the cultural norms that have been built over the past 10, 20, 30 years.

A new type of IT organization is required. A fresh coat of paint won’t cut it. Change is hard, but the opportunities are significant. This is a story about IT moving from a reactive state to a proactive state. For many, it requires a significant change in the way IT operates, both internally within the IT organization and externally with the non-IT organizations. The opportunities can radically transform the value IT brings to driving the business forward.

Shadow IT is a turning point for IT. Embrace it and leverage the best that it can deliver while transforming how technology solutions are delivered. Look for ways to embrace the magnitude of change in technology, process, and organization, and to transform IT to better serve the business. Cloud is a significant opportunity to leverage for this change. Shed the ways of old and adopt the new. Opportunity awaits.

Analyzing Cloud Utilization and Optimization with Cloudyn

Moving workloads into cloud computing environments is on everyone’s task list. As one evaluates the choices between public and private cloud, the sizing of an environment quickly comes into view. How large or small should an environment be? Once you get started, how do you “rightsize” your cloud environment? As cloud-based environments start to grow, sizing them correctly will ensure that performance and financial objectives are kept in check.

Last week at the Under The Radar conference, I had a chance to meet with one company that addresses this need: Cloudyn. I met with its Founder and CEO, Sharon Wagner. Cloudyn’s approach is to evaluate cloud details and provide a set of recommendations. But that is just the start. Cloudyn ingests a number of variables via provider APIs, from cost information to performance characteristics. The solution is able to do this automatically even if negotiated pricing is in play with public cloud providers. The engine ingests cost elements from both public and private clouds. According to Sharon, the SaaS-based solution uses “a predefined algorithm that the user can modify to produce actionable recommendations. The recommendations provide specific details on the action to take and why”. Understanding the reasoning behind a recommendation puts users in a better position to make informed decisions. Armed with this information, users can size cloud environments more accurately and manage costs. Cloudyn’s solution takes it a step further by tying business metrics to technical metrics to derive measures like ‘cost per transaction’.
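
As an illustration of the ‘cost per transaction’ idea, here is a generic sketch (not Cloudyn’s algorithm, and with invented numbers) that joins an hourly billing feed with an application’s transaction count:

```python
# Hypothetical hourly data, as if pulled from a provider billing API and an app metrics API.
hourly_cost_usd = {"web-tier": 4.20, "app-tier": 6.10, "db-tier": 9.80}
hourly_transactions = 125_000

total_cost = sum(hourly_cost_usd.values())
cost_per_transaction = total_cost / hourly_transactions
print(f"${cost_per_transaction:.5f} per transaction")

# A rightsizing signal might combine this with utilization: if utilization is low
# and cost per transaction is rising, recommend smaller instances or fewer nodes.
```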

Taking it in a different direction, users can feed the recommended actions into the orchestration layer of their cloud. While this step may be a bit too automated for some, those with a clear understanding of their workloads who are capable of setting boundaries might enjoy this valuable perk.

DataSift Takes Big Data Analytics of Social Networks to the Next Level

We all know data is growing at an astronomical pace. By many accounts, the sheer amount of data coming at us is overwhelming. According to one survey, enterprises today create only 8% of the entire data set they consume. That leaves quite a bit of external data to collect and process. Increasingly, much of this data comes from social media. The concept of Big Data does address these large, growing datasets. However, challenges still await in the social media data landscape.

Enter DataSift, a UK-based startup launched in November 2011. Since then, they’ve covered quite a bit of ground and just secured an additional $7.2M this week. I had the opportunity to sit down with Rob Bailey, CEO of DataSift, at the Under The Radar conference last week to discuss the company and their value proposition. Rob confirmed that “data is exploding” and that, while many companies are able to track the sentiment of social networks, DataSift is able to provide the “metadata of social data platforms through the aggregation of social data platforms.” Simply screen-scraping data from social networks provides one level of value to users. By applying “augmentations of the data using geo, sentiment, meaning and context”, DataSift is able to provide a much richer context to clients. It is this very metadata, the relationships between data over time, that starts to get interesting.
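
For a sense of what augmentation means in practice, here is a generic, hypothetical sketch of enriching a raw social post with geo and sentiment metadata; it is illustrative only and is not DataSift’s API or data model:

```python
# A raw post as it might arrive from a social network (fields are made up).
raw_post = {"id": "123", "text": "Loving the new release!", "user_location": "London, UK"}

def augment(post: dict) -> dict:
    """Return a copy of the post enriched with metadata fields."""
    enriched = dict(post)
    # Placeholder enrichments; a real pipeline would call geo, sentiment, and NLP services.
    enriched["geo"] = {"city": "London", "country": "GB"}
    enriched["sentiment"] = "positive"
    enriched["language"] = "en"
    return enriched

print(augment(raw_post))
```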

According to Rob, the challenge facing DataSift is getting people to understand the space. Bringing together different data elements and considering the metadata of social networks is an exercise in thought. Integrating massive data streams and adding context to the data presents an interesting challenge. These challenges could be addressed over time with adequate resources. In the meantime, providing trending information from a variety of social media data sources, with context, in real time is valuable.

Analysis of data performed by data scientists is one approach. For everyone else, gaining ready access to valuable data sources and making sense of them in real time is an area to watch very closely.