The Shark of HP Converged Systems

The story of Converged Infrastructure (CI) continues to gain steam within the Information Technology (IT) industry…and for good reason. Converged solutions present a relatively easy way to manage complex infrastructure. While some providers treat CI as an opportunity to bundle solutions into a single SKU, companies such as Nutanix and HP have spent the past couple of years producing solutions that go much further with true integration.

As enterprise IT customers shift their focus away from infrastructure and toward platforms, applications and data, expect the CI space to heat up. Part of this shift includes platforms geared toward specific applications, which is especially true for those operating applications at scale.

Last week, HP announced their ‘shark’ approach: hardware solutions geared toward specific applications. One of the first targets is SAP HANA, running on the HP ConvergedSystem 500 as part of a co-innovation project between HP and SAP. It is interesting to see HP partner with SAP on HANA given the emphasis on data analytics today. Specialized solutions are becoming increasingly important in this space.

Enterprise IT organizations need the ability to start small and grow accordingly. Even service providers may consider a start-small-and-grow approach. Michael Krigsman (@mkrigsman) recently wrote a post outlining how IT projects are getting smaller yet still looking for relief. HP expressed their intent to provide scalable solutions that start small, with the forthcoming ‘Project Kraken’ solutions arriving later this year. Only time will tell how seamless this transition becomes.

Additional Reading:

HP CS Blog Entry:

http://h30507.www3.hp.com/t5/Converged-Infrastructure/HP-ConvergedSystem-for-SAP-HANA-meet-the-industry-s-most/ba-p/157176#.UynDsdy0bfM

HP Aims for the Stars with CloudSystem and Moonshot

Over the past few months, I’ve had a chance to spend time with the HP product teams. Doing so has really opened my eyes to a new HP with a number of solid offerings. Two solutions (CloudSystem and Moonshot) really caught my attention.

HP CloudSystem

HP’s CloudSystem Matrix is a management solution that spans internal and external resources across multiple cloud providers. The heart of the CloudSystem platform is its extensible architecture, which provides the glue many enterprises look to leverage for bridging the gap between internal and external resources. On the surface, HP CloudSystem looks pretty compelling for enterprises considering the move to cloud (internal, external, public or private). For those thinking that CloudSystem only works with OpenStack solutions, think again. CloudSystem’s architecture is designed to work across both OpenStack and non-OpenStack infrastructures.

However, the one question I do have is why CloudSystem doesn’t get the airplay it deserves. While it may not be the right solution for everyone, it should be in the mix when considering the move to cloud-based solutions (public or private).

HP Moonshot

Probably one of the most interesting solutions recently announced is HP’s Moonshot. On the surface, it may appear to be a replacement for traditional blades or general-purpose servers. Not true. The real opportunity comes from its ability to tune infrastructure for a specific IT workload.

Traditionally, IT workloads are mixed. An enterprise’s data center runs a variety of applications with mixed requirements. In sum, a mixed workload looks like a melting pot of technology. One application may be chatty, while another is processor intensive and yet another is disk intensive. The downside to the mixed workload is the inability to tune the infrastructure (and platforms) to run the workload most efficiently.

All Workloads Are Not Created Equal

As the world increasingly embraces cloud computing and a services-based approach, we are starting to see workloads coalesce into groupings. Instead of running a variety of workloads on general-purpose servers, we group applications together with service providers. For example, one service provider might offer a Microsoft Exchange email solution. Their entire workload is Exchange, and they’re able to tune their infrastructure to support Exchange most efficiently. This also leads to a level of specialization not possible in the typical enterprise.

That’s where Moonshot comes in. Moonshot provides a platform that is highly scalable and tunable for specific workloads. Don’t think of Moonshot as a high-performance general-purpose server. That’s like taking an Indy car and trying to haul the kids to soccer practice. You can do it, but would you? Moonshot was purpose-built and is not geared for the typical enterprise data center or workload. The sweet spot for Moonshot is the Service Provider market, where you typically find both the scale and the focus on specific workloads. HP also considered the common challenges Service Providers would face with Moonshot at scale; as an example, the management software offers the ability to update CPUs and instances in bulk.

Two downsides to Moonshot are side effects of the change in architecture. One is bandwidth: Moonshot is very power efficient, but requires quite a bit of bandwidth. The other challenge is traditional software licensing. This problem is not new and seems to rear its ugly head with each wave of innovation; we saw it with both virtualization and cloud. Potential users of Moonshot need to consider how best to address these issues, and industry-standard software licensing will need to evolve to support newer infrastructure methods. HP (along with users) needs to lobby software providers to evolve their practices.

OpenStack at the Core

HP is one of the core OpenStack open-source contributors. OpenStack, while a very powerful solution, is a hard sell for the enterprise market, and that will only get harder over time. Service Providers, on the other hand, are a unique match for the challenges and opportunities that OpenStack presents. HP is leveraging OpenStack as part of the Moonshot offering, and pairing the two is a match made in heaven. The combination, when leveraged by Service Providers, provides a stronger platform for their offerings than the alternatives.

When considering the combination of CloudSystem along with Moonshot and OpenStack, HP has raised the stakes for what a single provider can deliver. The solutions provide a bridge from current traditional environments to Service Provider solutions.

I am pleased to see a traditional hardware/software provider acknowledging how the technology industry is evolving and providing solutions that span the varied requirements. I, for one, will be interested to see how successful HP is in continuing down this path.

HP Converged Cloud Tech Day

Last week, I attended HP’s Converged Cloud Tech Day in Puerto Rico. Colleagues attended from North, Latin and South America. The purpose of the event was to 1) take a deep dive into HP’s cloud offerings and 2) visit HP’s Aguadilla location, which houses manufacturing and an HP Labs presence. What makes the story interesting is that HP is a hardware manufacturer, a software provider and a provider of cloud services. Overall, I was very impressed by what HP is doing…but read on for the reasons why…and the surprises.

HP Puerto Rico

HP, like many other technology companies, has a significant presence in Puerto Rico. Martin Castillo, HP’s Caribbean Region Country Manager, provided an overview for the group that left many in awe. HP exports a whopping $11.5b from Puerto Rico, roughly 10% of HP’s global revenue. In the Caribbean, HP holds more than 70% of the server market. Surprisingly, much of the influence to use HP cloud services in Puerto Rico comes from APAC and EMEA, not North America. To that end, 90% of HP’s Caribbean customers are already starting the first stage of moving to private clouds. Like others, HP is seeing customers move from traditional data centers to private clouds, then to managed clouds and on to public clouds.

Moving to the Cloud

Not surprisingly, HP is going through a transition, presenting the company from a solutions perspective rather than a product perspective. Shane Pearson, HP’s VP of Portfolio & Product Management, explained that “At the end of the day, it’s all about applications and workloads. Everyone sees the importance of cloud, but everyone is trying to figure out how to leverage it.” The projected markets for 2015 are: Traditional $1.4b, Private Cloud $47b, Managed Cloud $55b and Public Cloud $30b, for a cloud total of $132b. In addition, HP confirmed the hybrid cloud approach as the approach of choice.

While customers are still focused on cost savings as the primary motivation to move to cloud, the tide is shifting to business process improvement. Put another way, cloud is allowing users to do things they could not do before. I was pleased to hear HP offer that it’s hard to take advantage of cloud if you don’t leverage automation. Automation and Orchestration are essential to cloud deployments.

HP CloudSystem Matrix

HP’s Nigel Cook was up next to talk about HP’s CloudSystem Matrix. Essentially, HP is (and has been) providing cloud services across the gamut of potential needs. Internally, HP is using OpenStack as the foundation for their cloud service offering, while CloudSystem Matrix provides a cohesive way to manage across both internal and external cloud services. To the earlier point about automation, HP is focusing on automation and self-service as part of their cloud offering. Having a solution that helps customers manage the complexity that hybrid clouds present could prove interesting. Admittedly, I have not kicked the tires of CloudSystem Matrix yet, but on the surface, it is very impressive.

Reference Architecture

During the visit to Aguadilla, we joined a Halo session with HP’s Christian Verstraete to discuss architecture. Christian and team have built an impressive cloud functional reference architecture. As impressive as it is, one challenge is how the everyday IT organization can best leverage such a comprehensive model. It’s quite a bit to bite off. Very large enterprises can consume the level of detail contained within the model; others will need a way to consume it in chunks. Christian goes into much greater depth in a series of blog entries on HP’s Cloud Source Blog.

HP Labs: Data Center in a Box

One treat on the trip was the visit to HP Labs. If you ever get the opportunity to visit HP Labs, it’s well worth the time to see what innovative solutions the folks there are cooking up. HP demonstrated the results from their Thermal Zone Mapping (TZM) tool (US Patent 8,249,841) along with CFD modeling tools and monitoring to determine details around airflow and cooling efficiency. While I’ve seen many different modeling tools, HP’s TZM was pretty impressive.

In addition to TZM, HP shared a new prototype that I call Data Center in a Box. The solution is an encapsulated system that supports one to eight fully enclosed racks; the only requirements are power and chilled water. The PUE numbers were impressive, but didn’t take every metric into account (i.e., the cost of chilled water). Regardless, I thought the solution was pretty interesting. The HP folks kept mentioning that they planned to target the solution to Small-Medium Business (SMB) clients. While that may have been interesting to the SMB market a few years ago, today the SMB market is moving more to services (i.e., cloud services). That doesn’t mean the solution is DOA. I do think it could be marketed as a modular approach to data center build-outs that provides a smaller increment than container solutions. Today, the solution is still just a prototype and not commercially available. It will be interesting to see where HP ultimately takes this.

In Summary

I was quite impressed by HP’s perspective on how customers can…and should leverage cloud. I felt they have a healthy perspective on the market, customer engagement and opportunity. However, I was left with one question: Why are HP’s cloud solutions not more visible? Arguably, I am smack in the middle of the ‘cloud stream’ of information. Sure, I am aware that HP has a cloud offering. However, when folks talk about different cloud solutions, HP is noticeably absent. From what I learned last week, this needs to change.

HP’s CloudSystem Matrix is definitely worth a look regardless of the state of your cloud strategy. And for data center providers and service providers, keep an eye out for their Data Center in a Box…or whatever they ultimately call it.

How to Leverage the Cloud for Disasters like Hurricane Sandy

Between natural disasters like Hurricanes Sandy and Irene and man-made disasters like the recent data center outages, disasters happen. The question isn’t whether they will happen. The question is: What can be done to avoid the next one? Cloud computing provides a significant advantage in avoiding disaster, but simply leveraging cloud-based services is not enough. First, a tiered approach to leveraging cloud-based services is needed. Second, a new architectural paradigm is needed. Third, organizations need to consider the holistic range of issues they will contend with.

Technology Clouds Help Natural Clouds

If used correctly, cloud computing can significantly limit or completely avoid outages. Cloud offers a physical abstraction layer and allows applications to be located outside of disaster zones where services, staff and recovery efforts do not conflict.

  1. Leverage commercial data centers and Infrastructure as a Service (IaaS). Commercial data centers are designed to be more robust and resilient. Prior to a disaster, IaaS provides the ability to move applications to alternative facilities out of harm’s way.
  2. Leverage core application and platform services. This may come in the form of PaaS or SaaS. These service providers often architect solutions that are able to withstand single data center outages. That is not true in every case, but by leveraging this in addition to other changes, the risks are mitigated.

In all cases, it is important to ‘trust but verify’ when evaluating providers. Neither tier provides a silver bullet. The key is to take a multi-faceted approach that architects services with the assumption of failure.

Changes in Application Resiliency

Historically, application resiliency relied heavily on redundant infrastructure. Judging from the responses to Amazon’s recent outages, users still make this assumption. The paradigm needs to change: applications need to take more responsibility for resiliency. By doing so, applications ensure service availability in times of infrastructure failure.
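
To make the idea concrete, here is a minimal sketch of application-level resiliency: the application retries a primary region and then fails over to a secondary one, rather than assuming the infrastructure underneath is redundant. The endpoint URLs, timeouts and retry counts are illustrative assumptions, not any particular provider’s API.

```python
# Minimal sketch: retry a primary region, then fall back to a secondary one.
# The endpoints and retry policy are hypothetical placeholders.
import time
import urllib.request

ENDPOINTS = [
    "https://us-east.example.com/api",   # primary region (hypothetical)
    "https://us-west.example.com/api",   # fallback region (hypothetical)
]

def fetch_with_failover(path="/health", retries=2, backoff=1.0):
    """Try each region in order; retry transient failures with backoff."""
    last_error = None
    for endpoint in ENDPOINTS:
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(endpoint + path, timeout=5) as resp:
                    return resp.read()
            except OSError as err:          # network or HTTP failure
                last_error = err
                time.sleep(backoff * (attempt + 1))
    raise RuntimeError("all regions unavailable") from last_error
```

The point is that the failover decision lives in the application, so a single data center (or region) outage degrades into a retry rather than an outage.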

In a recent blog post, I discussed how cloud computing relates to greenfield and legacy applications. Legacy applications present a challenge to move into cloud-based services. They can (and eventually should) be moved into cloud, but it will require a bit of work to take advantage of what cloud offers.

Greenfield applications, on the other hand, present a unique opportunity to take full advantage of cloud-based services…if used correctly. With Hurricane Sandy, we saw greenfield applications still using the old paradigm of relying heavily on redundant infrastructure, and the consequence was significant application outages due to infrastructure failures. Conversely, greenfield applications that rely on the new paradigm (e.g., Netflix) experienced no downtime due to Sandy. Netflix not only avoided disaster, but saw a 20% increase in streaming viewers.

Moving Beyond Technology

Leveraging cloud-based services requires more than a technology change. Organizational impact, process changes and governance are just a few of the things to consider. Organizations need to consider the changes to access, skill sets and roles. Is staff in other regions able to assist if local staff is impacted by the disaster? Fundamental processes, from change management to application design, will change too. At what point are services preemptively moved to avoid disaster? Lastly, how do governance models change if the core players are unreachable due to the disaster? Without considering these changes, the risks increase exponentially.

Start Here

So, where do you get started? First, determine where you are today. All good maps start with a “You Are Here” label. Consider how to best leverage cloud services and build a plan. Take into account your disaster recovery and business continuity planning. Then put the plan in motion. Test your disaster scenarios to improve your ability to withstand outages. Hopefully, by the time the next disaster hits (and it will), you will be in a better place to weather the storm.

How Important are Ecosystems? Ecosystems are Everything

The IT industry is in a state of significant flux. Paradigms are changing and so are the underlying technologies. Along with these changes comes a change in the way we think about solutions. Over time, IT organizations have amassed a phenomenal number of solutions, vendors, complex configurations and experience. That ever-expanding model is starting to show cracks; trying to sustain it is just not possible…nor should it be. It is time for a change. Consolidation, integration, efficiency and value creation are the current focal points, and those changes create a significant shift in how we function as IT organizations and providers.

Changes in Buying Habits

In order to truly understand the value of an ecosystem, one first needs to understand the change in buying habits. IT organizations are making a significant shift from buying point solutions to buying ecosystems. In some ways, this is nothing new. IT organizations have bought into the solutions from major providers for decades. The change is in the composition of the ecosystem. Instead of buying into an ecosystem from a single provider, buyers are looking for comprehensive ecosystems that span multiple providers. This lowers the risk for the buyer and creates a broader offering while providing an integrated solution.

Creating the Cloud Supply Chain

Cloud computing is a great use case for the importance of building a supply chain within the ecosystem. Think about it. The applications, services and solutions that an IT organization provides to users are not single-purpose, non-integrated solutions. At least they shouldn’t be. Good applications and services are integrated with other offerings. When buyers choose a component, that component needs to connect to another component. In addition, alternatives are needed, as one solution does not fit all. In many ways, this is no different from a traditional manufacturing supply chain. The change is to apply those fundamentals to the cloud ecosystem.

Integration

In concert with the supply chain, each component needs solid integration with the next. Today, many point solutions require the buyer to figure out how to integrate solutions. This often becomes a barrier to adoption and introduces risk into the process. One could go crazy coming up with the permutations of different solutions that connect. However, if each solution considered the top 3-4 commonly connected components, the integration requirements become more manageable. And they are left to the folks that understand the solutions best…the providers.

Cloud Verticals

As cloud-based ecosystems start to mature, the natural progression is to develop cloud verticals: ecosystems with components for a specific vertical or industry. In healthcare, an ecosystem might include a choice of EHR solutions, billing systems, claims systems and patient portals. For SMB or mid-tier businesses, it might be an accounting system, email, file storage and a website. Remember that the ecosystem is not just a brokerage selling the solutions as a package. It is a comprehensive solution that is already integrated.

Bottom Line: Buyers are moving to buying ecosystems, especially with cloud services. The value of your solution comes from the value of your ecosystem.

Cloud Application Matrix

Several years in, there is still quite a bit of confusion around the value of cloud computing. What is it? How can I use it? What value will it provide? There are several perspectives on how to approach cloud computing value; interestingly, that very question elicits several possible responses. This missive specifically targets how applications map against a cloud value matrix. From the application perspective, scale, along with where an application sits on the legacy-to-greenfield spectrum, governs the direction of value.

Scale (y-axis)

As scale increases, so does the potential value from cloud computing. That is not to say that traditional methods are not valuable; it has more to do with the direction and velocity of an application’s scale. Greenfield applications provide a different perspective from legacy applications. Rewriting legacy applications simply to use cloud brings questionable value. There may be extenuating circumstances to consider, but those are not common.

Legacy vs. Greenfield (x-axis)

The x-axis represents the spectrum of applications from legacy to greenfield. Greenfield applications may include either brand-new applications or rewritten legacy applications. Core, off-the-shelf applications may fall into either category. The current state of cloud marketplace maturity suggests that any new or greenfield application should consider cloud computing, including both PaaS and SaaS approaches.

Mapping

The first step is to map the portfolio of applications against the grid. Each application’s type and scale is represented in relation to the others. This is a good exercise to 1) identify the complete portfolio of applications, 2) understand the current state and lifecycle and 3) develop a roadmap for application lifecycles. The roadmap can then become the playbook to support a cloud strategy.
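
As a minimal sketch of the mapping exercise, the snippet below scores each application on the two axes (legacy-to-greenfield and scale) and places it in a quadrant. The application names, scores and threshold are illustrative assumptions, not part of the original matrix.

```python
# Hypothetical mapping of an application portfolio onto the matrix.
portfolio = {
    # name: (greenfield_score 0-10, scale_score 0-10) -- illustrative values
    "ERP":             (2, 4),
    "Customer portal": (8, 7),
    "Reporting tool":  (3, 2),
    "Mobile API":      (9, 9),
}

def quadrant(greenfield, scale, threshold=5):
    horiz = "right" if greenfield >= threshold else "left"
    vert = "upper" if scale >= threshold else "lower"
    return f"{vert}-{horiz}"

for app, (g, s) in portfolio.items():
    print(f"{app:15s} -> {quadrant(g, s)} quadrant")
```

Plotting or printing the quadrants gives the starting point for the roadmap discussed above.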

Upper-Right Quadrant

The value cloud computing brings increases as application requirements move toward the upper-right quadrant. In most cases, applications will move horizontally to the right rather than vertically upward. The clear exception is web-scale applications; most of those start in the lower-right quadrant and move vertically upward.

Exceptions

The matrix is intended to be a general guideline that characterizes the majority of applications and situations, but not all of them. As one example, legacy applications may be encapsulated to support cloud-based services as an alternative to rewriting.

Dreamforce 2012 Trip Report

Overview

Last week’s Salesforce Dreamforce event had to be the largest conference I have seen at San Francisco’s Moscone Center. It covered Moscone North, South and West plus several hotels. And if that was not enough, Howard Street was turned into a lawn area complete with concert stage, outdoor lounge area and exhibits. Dreamforce presented a great opportunity to learn more about the Salesforce community…and a number of missed opportunities.

Walking the expo floor, one thing becomes clear very quickly: Salesforce is the largest exhibitor. Taking up 25-30% of the expo floor, the Salesforce area maintained focal points around sales, marketing and service. Surrounding the Salesforce area were partners in their ecosystem, some based on the Force.com platform and others with their own platforms. There were solutions for all types of needs. Unfortunately, the different subject matter was intertwined throughout the floor (Sales next to Service next to Marketing). Salesforce is a broad platform, and if you were interested in a specific aspect of Salesforce-based solutions, it was hard to find the related offerings. Interestingly, consulting firms held some of the largest booths outside of Salesforce.

Moscone West held the Developer Zone with less structured community areas for folks with similar interests to gather. Multiple presentations were taking place in the Developer Zone non-stop. In addition to the Unconference area, there was plenty of space for folks with common interests to gather around tables complete with power and Wi-Fi.

The 750+ sessions provided a wide range of presentations, from how-tos to case studies. In addition, there was a good mix of detailed to high-level sessions depending on your particular interest.

Ecosystem

Dreamforce is a good example of the maturity of Salesforce’s ecosystem, though the prominence of consulting firms provides a bit of contrast to that statement. Just walking around the expo floor, one could get the impression that there is a solution to every problem imaginable. Not true; several of the basics are still woefully absent. Many of the solutions are excellent point solutions that address specific pain points.

Unfortunately, two aspects are missing: integration and accessibility. Earlier this year, I wrote about the importance of onramps. At the expo, I randomly sampled several folks walking the show floor to get their thoughts. The theme was consistent: great solutions, but each attendee was looking for an integrated solution, and it was not clear how to get from their current state to a future state leveraging the innovative solutions. The prominence of consulting firms could serve as both a solution and further validation. Consulting firms provide a good short-term answer to the integration and onramp problem. However, both issues need to be baked into the ecosystem’s solutions to sustain the ecosystem long-term.

Summary

Are conferences like Salesforce’s Dreamforce valuable to attend? In a nutshell…yes! If you knew very little about Salesforce before last week, Dreamforce presented a great opportunity to get an overview of the opportunities, dig further into specific details and network with peers. If you were already an established customer, there is plenty of innovation still coming from the ecosystem.

Tracelytics Heats Up Cloud-based APM

Gaining visibility into application performance is key. Application Performance Management (APM) solutions are not new and provide insight into the tiers within an application stack. With the entry of cloud computing over the past couple of years, the APM world got a bit more complex.

APM is mature enough to consider cloud-based providers in the application stack. In the classic model, an application has three layers in the stack: 1) database layer, 2) application layer and 3) web layer. Depending on the complexity of the application, it may have five or more layers in the mix. Today, a cloud service provider may serve one or more of these layers.

Several solutions exist that support cloud-based APM; New Relic, OPNET and CA are just a few examples. At the Under the Radar conference, Tracelytics presented their approach to APM. Tracelytics was started two years ago by a small team of three to address a growing problem they observed in research at Brown University. I met with Spiros Eliopoulos, Co-Founder and CTO, to discuss how Tracelytics’ approach differs from the competition.

So, what’s different? Bottom line: it has to do with the flexibility of the solution. As the application stack gets increasingly complex, so does its management, and the number of providers and shared resources is growing exponentially. According to Spiros, their solution “looks at each layer individually, then ties together the different layers to provide a complete view.” Tracelytics allows APM visibility through “drilldown performance across layers.” Their clever approach uses heat maps to visually find problem spots. Managing APM within layers and up/down the entire stack is key to providing the clear visibility needed to correct problem areas quickly.
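
To illustrate the general idea of per-layer visibility (not Tracelytics’ actual instrumentation API), here is a toy sketch that times a request as it passes through the classic web/app/db layers and rolls the results into coarse “heat” buckets. The layer names, simulated delays and thresholds are assumptions for the example.

```python
# Toy per-layer tracing with a heat-map-style rollup (illustrative only).
import time
from collections import defaultdict

timings = defaultdict(list)  # layer name -> list of latencies in seconds

class trace_layer:
    """Context manager that records how long a request spends in one layer."""
    def __init__(self, layer):
        self.layer = layer
    def __enter__(self):
        self.start = time.perf_counter()
    def __exit__(self, *exc):
        timings[self.layer].append(time.perf_counter() - self.start)

# Simulated request passing through the three classic layers.
with trace_layer("web"):
    time.sleep(0.01)
with trace_layer("app"):
    time.sleep(0.03)
with trace_layer("db"):
    time.sleep(0.02)

# Roll per-layer timings up into coarse "heat" buckets.
for layer, samples in timings.items():
    avg = sum(samples) / len(samples)
    heat = "hot" if avg > 0.025 else "warm" if avg > 0.015 else "cool"
    print(f"{layer:4s} avg={avg * 1000:.1f}ms [{heat}]")
```

The value comes from seeing each layer individually and then across the stack, which is exactly the drilldown view described above.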

Many providers struggle with pricing strategies in today’s cloud and virtualized world; in the traditional computing world, it was easy to license solutions. Tracelytics’ approach continues to provide flexibility by focusing on trace volume rather than hosts or layers. The entire stack of an application is considered one application. So, whether you have one application reporting 10x per hour or 10 applications reporting once per hour, the cost is the same. This is true regardless of the number of layers within the application stack. Nice!
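
A quick worked example makes the volume-based point clear; the per-trace rate below is a made-up number purely for illustration, not Tracelytics’ actual pricing.

```python
# Trace-volume pricing: cost depends only on total traces, not on how many
# applications, hosts or layers produce them. RATE_PER_TRACE is hypothetical.
RATE_PER_TRACE = 0.001

def monthly_cost(apps, traces_per_app_per_hour, hours=730):
    return apps * traces_per_app_per_hour * hours * RATE_PER_TRACE

print(monthly_cost(apps=1, traces_per_app_per_hour=10))   # one busy app
print(monthly_cost(apps=10, traces_per_app_per_hour=1))   # ten quiet apps -> same cost
```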

Sneak Peek of the Under The Radar Conference

The Under The Radar (UTR) Conference (http://www.undertheradarblog.com/) is tomorrow, April 26, 2012. UTR is the intersection of hot up-and-coming startups, investors and judges. If the reception tonight was any indication, the conference and presentations should be very interesting. Here’s a sneak peek of my take on the hot areas and companies to watch:

Application Development Solutions

A few companies are presenting their solutions in the mobile and security space. In the era of cloud computing, these are two hot buttons that enterprises and service providers alike need to be keenly aware of. The move of the information worker from a stationary device to a mobile device is underway, and CoIT and BYOD are both serious factors in the movement. Likewise, using traditional security paradigms in the new model runs into serious complications. Tools are needed to help organizations make this move while managing and securing their environments.

Companies: BitzerMobile, Cabana, Duo Security, Fabric Engine, Framehawk, StackMob

Platforms and Infrastructure

Building applications on top of infrastructure is nothing new. In the cloud era, the architecture…and options open up quite a bit. The cloud market is starting to mature and value is moving from core infrastructure to platforms and on to applications. Leveraging hosted platforms does require a different paradigm to succeed. In addition, when considering apps at scale, automation and orchestration become even more important. This is a very broad area with quite a bit of specialization. Moving forward, integration in the space will be the key to success…along with some consolidation.

Companies: Appfog, AppHarbor, CloudBees, CloudScaling, Drawn to Scale, ionGrid, Iron.io, MemSQL, MongoLab, Nodejitsu, NuoDB, Piston Cloud, Puppet Labs, Sauce Labs, ScaleArc, Zadara Storage

Monitoring and Analytics

One of the most interesting areas is how data is used and analyzed, and then how action is taken based on the information gleaned from the data. Players in this space range from aggregating data to understanding and analyzing it. Value increases as the data moves into analytics and, ultimately, business actions taken based on the intelligence. While there is quite a bit of specialization in this area at different levels (application monitoring/performance management to analytics and intelligence), added value will come when these can be tied together to drive business decisions.

Companies: Chart.io, Cloudability, Cloudyn, Datadog, DataSift, Infochimps, Metamarkets, Nodeable, Sumo Logic, Tracelytics

Interesting Areas to Watch

In today’s marketplace, there are the future-state solutions and concepts, and then there are the real-world solutions that solve today’s problems. Both states need to be understood, and the ball needs to be moved forward…and fast! The proliferation of mobile devices along with cloud computing brings applications at scale to the forefront. Orchestration and automation are becoming hallmarks of success, up-leveling the conversation and the value IT brings to organizations. Ultimately, the play will be with data and analytics, but today there are more fundamental issues on the table.

Of course, that’s just a cursory review of the upcoming presentations from the UTR conference. Look for more details in the UTR Twitter stream (#UTRconf) and posts after the conference.

A Workload is Not a Workload, is Not a Workload

Over the past year, I’ve observed a concerning trend around workloads. It seems that with the advent of cloud computing, the idea of a workload has become a bit confused. The fundamental concern is a misguided view that all workloads are the same or similar. Specifically, I’ve heard general IT professionals making decisions around cloud computing by following those of Netflix, Zynga, Facebook and Google. This makes some very large and flawed assumptions that are fundamentally based on a misunderstanding of the business drivers and workload requirements.

What is a Workload?

First, let’s start with what a workload is. A workload is a characterization of the work that applications perform, including the applications, systems, storage and network infrastructure. It’s a holistic view of the type of “work” being performed by the entire system. The nature of that work is the load being placed on the infrastructure, and it is governed by the applications, systems, configurations and the specific use of the applications or services. At a macro level, this is fairly unique to each company. There are exceptions, which I will discuss in a minute…read on.
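
One way to make the characterization concrete is to capture it as a simple profile per workload. The dimensions and sample values below are assumptions chosen for illustration, not a standard taxonomy.

```python
# Illustrative sketch: a workload captured as a profile of its demands.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    cpu_intensity: float       # 0-1, share of time spent CPU-bound
    io_intensity: float        # 0-1, share of time spent disk-bound
    network_chattiness: float  # 0-1, relative volume of small requests
    peak_requests_per_sec: int

email = WorkloadProfile("email", cpu_intensity=0.2, io_intensity=0.5,
                        network_chattiness=0.7, peak_requests_per_sec=300)
search = WorkloadProfile("search", cpu_intensity=0.8, io_intensity=0.3,
                         network_chattiness=0.4, peak_requests_per_sec=50000)
print(email, search, sep="\n")
```

Two companies running “the same” application can still end up with very different profiles, which is why the characterization is largely unique to each company.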

Workload Modeling

For well over 20 years, organizations have modeled their workloads to better understand the performance characteristics of their systems. Others may refer to this as web testing, software testing, load testing and the like. When I was at InfoWorld in the early 90’s, I participated with BAPCO to model system performance based on the 10 most popular applications at the time. We used scripts to perform functions in each app similar to popular actions taken by typical users. It was very cool for the time. The idea was to create a “typical” load by characterizing typical application use on systems, storage and networking devices. Today, the level of sophistication of workload modeling has increased significantly, and many benchmarks, like those from the TPC, target a specific application or service. I’ve listed a number of the more popular ones in the references section.
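
In the same scripted spirit, here is a minimal load-test sketch: replay a “typical” action against a service and record response times. The target URL, request count and concurrency are illustrative assumptions; real tools like those in the references do far more.

```python
# Minimal scripted load test (illustrative): hammer one endpoint and
# report average latency. TARGET is a hypothetical local service.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/checkout"  # hypothetical endpoint
REQUESTS = 100
CONCURRENCY = 10

def one_request(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET, timeout=5).read()
    except OSError:
        return None  # failed request; could be counted separately
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = [t for t in pool.map(one_request, range(REQUESTS)) if t is not None]

if latencies:
    avg_ms = sum(latencies) / len(latencies) * 1000
    print(f"successful requests: {len(latencies)}, avg latency: {avg_ms:.1f} ms")
```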

Two Fundamental Types of Workloads

At a high level, when you consider the different types of workloads, there are two fundamental categories: the monolithic application/service and the more generalized, mixed workload. These are very different.

Monolithic Applications

The monolithic application is often a single-purpose, custom-built application (or application suite) that runs at scale. In addition, it’s commonly a dedicated application separate from general business IT functions. Examples would be Zynga’s gaming platform or Google’s search platform; both environments also run at extreme scale. Because of the scale, it’s even more important to understand nuances around workload characterization that are less critical (and harder to pin down) with mixed workloads. For example, Google can fine-tune the different aspects of their search platform to decrease the time to present results. They can also create custom infrastructure components, architectures and configurations. Why? Because they clearly understand the myriad of possible tweaks to the application and their impact. In addition, applications at scale bring a whole host of unique challenges of their own. This is a very different environment from the internal core business applications that run the company. It is also very uncommon for most businesses to have this type of workload; the exceptions are companies like those mentioned above, or possibly a Line of Business (LOB) application. Arguably, one might consider Google, Facebook, Netflix or Zynga’s apps to be the company’s LOB application.

Mixed Workloads

The second type of workload is a mixed workload that combines a variety of core business applications. Internal core business applications are great examples of a mixed workload (email, ERP, HR, financials, custom applications, etc.). Each company will have a different combination of applications, which may mix off-the-shelf and custom applications, and each application does not typically run at very large scale. These are classic IT workloads, found in just about every organization. The amount of effort to characterize and tweak this workload at a granular level versus the value gained is often hard to justify.

Comparing Apples and Oranges

It’s important to clearly understand the type of workload you are comparing. Comparing what Zynga does with your own decisions is not the wisest of choices; the demands and specifics of a monolithic workload are very different from those of a mixed workload. In addition, this does not take into account the business factors that each type of workload brings to the forefront. All of these should be considered in the decision-making process.

Following, Learning and Thinking

So, simply following Zynga, Google or Facebook’s decisions with cloud computing should not happen without further consideration. Unfortunately, it does. Yet even Netflix and Zynga have taken very different paths for their applications and services. Can we all learn from these industry leaders in the cloud computing space? Absolutely! But we need to consider which factors and aspects compare with our own needs. Getting to the answer is more complex than simply saying “Facebook went right, we should go right too!” It means we need to think more and understand our own needs.

And as if understanding your own workload is not complex enough, comparing workloads across companies is very challenging. There are so many variables to consider that the value may not be worth the effort. For most it will still be an apples to oranges comparison. The best advice is to understand the factors that go into your decision-making process and compare common attributes across workloads. That way, you can learn from others while making good decisions about understanding your own workload.

Leveraging the appropriate tools can also assist in the decision making process.

 

References:

TPC Benchmarks: Transaction Processing Performance Council (http://www.tpc.org/)

SYSmark/MobileMark/WebMark Benchmarks: BAPCO (http://www.bapco.com/)

Cloud Testing: SOASTA (http://www.soasta.com/)

Load Testing: HP LoadRunner (http://www8.hp.com/us/en/software/software-product.html?compURI=tcm:245-935779)