Simple and Standard…or Custom and Complex

Over the years, IT has had the ability to customize the heck out of applications. The industry itself enabled this addiction to feature creep: vendors asked what new button, bell and whistle customers wanted, then delivered what they could. Customization became a hallmark of IT's effort to please the customer and meet ever-changing requirements.

Custom configurations let users do more and increase the value of the application or service. But as the number of customizations grows, so does the complexity. Eventually, that very flexibility starts to work against the value of the customizations themselves.

There is another nasty side effect of customization: it creates a form of lock-in. The further a solution is customized, the more unique it becomes and the harder it is to move to an alternative. The customizations create a solution so unique that alternatives struggle to compete against it, unless they offer the exact same features, functionality…and customization options.

In the end, significant customization is only possible with applications that are run internally. When moving to a shared or cloud environment, the level of potential customization drops precipitously. For many, this presents a significant hurdle to adopting a new solution like cloud (i.e., SaaS).

The question really comes down to: What is the true value of the customizations? Are they providing more value than they cost? And is this really what customers want? Here at HP Discover in Barcelona, this very issue became a hot-button topic of discussion. Ultimately, the outcome was: Customers want simple and standard over custom and complex. There is a difference between want, need and should.

Bottom Line: In IT, we’ve had the opportunity to customize the heck out of applications. Why? Because we could and truly believed it was valuable. That may have been the case in the past, but today, it is about business value. And there are larger considerations (like alternatives, agility and choice) that play a more significant role in our decisions.

HP Discover Barcelona: What to Watch For

Today kicks off HP's Discover conference in Barcelona, Spain, with a bevy of information on tap. Looking over the event guide, it is clear that HP is targeting the Enterprise customer with an emphasis on Cloud Computing, Data (including Big Data) and Converged Infrastructure. HP's definition of 'converged infrastructure' does include a broad set of their core infrastructure components.

With an emphasis on cloud and data, HP is targeting the future direction of technology, not just traditional IT. HP is a large company, and it can take a bit of work to evolve its thinking from traditional IT to transformational IT. It is good to see the changes.

Of note is the expansion of data beyond just Big Data. For many, the focus remains on Big Data. Yet, for many enterprises, data extends well beyond it. Look for more on both the breadth and depth of HP's data story beyond the existing NASCAR example. In addition, there are sessions that provide a deep dive specifically for HAVEn partners. It is good to see HP recognize the importance of their partner program.

Core areas of both printing and mobility are making an appearance here at Discover. However, their presence pales in comparison with the big three.

So, what to look for… With cloud and data, the keys for HP will rest with how well they enable adoption. How easy do they make it for customers to adopt new technologies? Adoption is key to success. With converged infrastructure, has the story of integration moved beyond a reference architecture and single-SKU approach? Look for more details on how far HP has come in developing their portfolio, along with execution of the integration between the different solutions. That integration and execution are key.

Time to Get on the Colocation Train Before It Is Too Late

The data center industry is heading toward an inflection point with significant impact on enterprises. It seems many aren't looking far enough ahead, but the timeline appears to be 12-18 months, which is not that far out! At its core, this is a classic supply-chain problem of supply, demand and timelines.

A CHANGE IN THE WINDS

First, let's start with a bit of background… The advent of Cloud Computing and newer technologies is driving an increase in the number of enterprises looking to 'get out of the data center business.' I, along with others, have presented many times about the 'Death of the Data Center.' The data center, which used to serve as a strategic weapon in an enterprise IT org's arsenal, is still very much critical, but it is fundamentally becoming a commodity. That's not to say that overall data center services are becoming a commodity, but the facility is. Other factors, such as geographic footprint, network and ecosystem, are becoming the real differentiators. And enterprises 'in the know' realize they can't compete at the same level as today's commercial data center facility providers.

THE TWO FLAVORS OF COLOCATION

Commercial data center providers offer two basic models of data center services: wholesale and retail. Digital Realty and DuPont Fabros are examples of major wholesale data center providers, while Equinix, Switch, IO, Savvis and QTS are examples of major retail colocation providers. It should be noted that some providers offer both wholesale and retail options. While there is a huge difference between wholesale and retail colocation space, I will leave the details on why an enterprise might consider one over the other for another post.

DATA CENTER SUPPLY, DEMAND AND TIMELINES

The problem is the same for both types of data center space: there is a bit of surplus today, but there won't be enough capacity in the near term. Data center providers are adding capacity around the globe, but they're caught in a conundrum of how much capacity to build. It typically takes anywhere from 2 to 4 years to build a new data center and bring it online. And the demand isn't there yet to support significant growth.

But if you read the tea leaves, the demand is getting ready to pop. Many folks are only now starting to consider their options with cloud and other services. So, why are data center providers not building data centers now in preparation for the pop? There are two reasons. On the supply side, it costs a significant amount of capital to build a data center today, and an idle data center burns significant operational expense too. On the demand side, enterprises are just starting to evaluate colocation options. Evaluating is different from being ready to commit to spending on colocation services.

Complicating matters further, even for the most aggressive enterprises, the preparation can take months and the migration can be years in the making. Moving a data center is not a trivial exercise and is often peppered with significant risk. There are applications, legacy requirements, 3rd-party providers, connections, depreciation schedules, architectures, and organization, process and governance changes to consider…just to name a few. In addition to the technical challenges, organizations and applications are typically not geared up to handle multi-day outages and moves of this nature. Ponder this: When was the last time your IT team moved a critical business application from one location to another? What about multiple applications? The reality is: it just doesn't happen often…if at all.

But just because it's hard does not mean it should not be done. In this case, it needs to be done. At this point, every organization on the planet should have a plan for colocation and/or cloud. Of course there are exceptions and corner cases, but today they are few and shrinking.

COMPLIANCE AND REGULATORY CONCERNS

Those with compliance and regulatory requirements are moving too…and not just non-production or Disaster Recovery systems. Financial Services organizations are already moving their core banking systems into colocation, while Healthcare organizations are moving their Electronic Health Record (EHR) and Electronic Medical Record (EMR) systems into colocation…and in some cases, the cloud. This is in addition to any core legacy and greenfield applications. Compliance and regulatory requirements are an additional component to consider, not a reason to stop moving.

TIME CHANGES DATA CENTER THINKING

Just five years ago, a discussion of moving to colocation or cloud would have been far more challenging. Today, we are starting to see this migration happen, but only among a very small number of IT organizations around the globe. We need to significantly increase the number of folks planning and migrating.

DATA CENTER ELASTICITY

On the downside, even if an enterprise started building its data center strategy and roadmap today, it is unclear whether adequate capacity will exist to supply the demand once it is ready to move. Now, that's not to say the sky is falling. But it does suggest that enterprises (en masse) need to get on the ball and start planning for the death of the data center (their own). At a minimum, it would provide data center providers with greater visibility into the impending demand and timeline. In the best scenario, it creates a healthy supply/demand ecosystem without a rubber-band effect where supply and demand fluctuate back and forth toward equilibrium.

BUILDING A ROADMAP

The process starts with a vision and understanding of what is truly strategic. Recall that vitally important and strategic can be two different things. Power is vitally important to data centers, but data center providers are not building power plants next to each one.

The next step is building a roadmap that supports the vision. The roadmap includes more than just technological advancements; the biggest initial hurdles will come in the form of organization and process. In addition, a strong visionary and leader will provide the right combination of skills to lead the effort and ask the right questions to achieve success.

Part of the roadmap will inevitably include an evaluation of colocation providers. Before you start down this path, it is important to understand the differences between wholesale and retail colocation providers, what they offer and what your responsibilities are. That last item is often lost in the evaluation process.

Truly understand what your requirements are. Space, power and bandwidth are just scratching the surface. Take a holistic view of your environment and portfolio, and understand what will change (and how) when moving to colocation. This is as much about a clear snapshot of your current situation as it is about where you're headed over time.

TIME TO GET MOVING

Moving into colocation is a great first step for many enterprises. It gets them 'out of the data center business' while keeping their existing portfolio intact. Colocation also provides a great way to advance the maturity of an organization (and portfolio) toward cloud.

The evaluation process for colocation services is much different today than it was just five years ago. Today, some of the key differentiators are geographic coverage, network and ecosystem. But a stern warning: the criteria for each enterprise will be unique. What applies to one does not necessarily apply to the next. It's important to clearly understand this and how each provider matches against the requirements.

The process takes time and effort. For this and a number of other reasons, it may take months to years even for the most aggressive movers. As such, it is best to get started sooner rather than later, before the train leaves the station.

Further Reading:

Applying Cloud Computing in the Enterprise

Cloud Application Matrix

A Workload is Not a Workload, is Not a Workload

What is Your Cloud Exit Strategy?

Cloud computing is still one of the hottest subjects in IT and business today. As the cloud market matures, interest in leveraging cloud only grows. And the sources of demand are not limited to IT: business units and non-IT users are quickly discovering they can engage cloud-based services on their own.

While there is quite a bit of interest in discussing how best to apply and leverage cloud services, there has been little conversation about a Cloud Exit Strategy. Yes, thinking about the divorce even before the marriage. Many have enough on their plate thinking about how to best leverage cloud, but how one exits a cloud is equally important.

There are a number of factors to consider with regard to a cloud exit strategy. In addition to the terms of the service provider agreement, data integrity, size of data and alternative providers are just a few of the considerations.

CONTRACT TERMS

Cloud Service Providers (CSPs) may or may not include terms in their contracts outlining what happens to data upon contract termination. Where terms do exist, the details vary widely from provider to provider. Here is how two different providers addressed the issue.

Provider 1:

(iii) we will provide you with the same post-termination data retrieval assistance that we generally make available to all customers.

Provider 2:

12.1 You will not have access to your data stored on the Services during a suspension or following termination.

12.2 You have the option to create a snapshot or backup of your Cloud Servers or Databases, respectively, however, it is your responsibility to initiate the snapshot or backup and test your backup to determine the quality and success of your backups. You will be charged for your use of backup services as listed in your Order.

The concern is that not all providers include terms in their default contracts. And even with terms like those outlined above, many of the important specifics are left open to question. In what format is the data provided? If we are talking about database files, are they provided as a CSV flat file, an Excel spreadsheet or a database? Before you sign a contract, it is important to work out the details of what works best when it is time to cancel. If you don't, the downside is that the provider might tack on professional services fees to 'help' export your data into a useful format. And that assumes they are willing to do it at all, since you have effectively cancelled the relationship and left them with little to no incentive to help.

DATA INTEGRITY

Determining the format of the data for export is a good first step. But what happens with relational data? Simply dumping the atomic-level data into a CSV file technically fulfills the terms of the contract, but leaves the data unusable. Without understanding the relationships between data elements, the data goes from readily usable to practically unusable. This is where the nature of the data needs to drive how it is exported and the format in which it is exported. That, in turn, must drive the export process that is outlined in the terms of the contract.
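To make this concrete, here is a minimal sketch of what 'export the relationships, not just the rows' might look like. It is purely illustrative and assumes the data lands in a local SQLite copy; a real export would target whatever engine the provider actually uses.

```python
import csv
import sqlite3

# Illustrative sketch: export each table to CSV *and* capture its
# foreign-key relationships, so the relational structure survives the
# move. Assumes a local SQLite copy of the exported data.

conn = sqlite3.connect("exported.db")
cur = conn.cursor()

tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

for table in tables:
    # Dump the rows with a header so column meaning isn't lost.
    rows = cur.execute(f"SELECT * FROM {table}")
    headers = [col[0] for col in rows.description]
    with open(f"{table}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(rows)

    # Capture foreign-key metadata alongside the data; this is exactly
    # what gets lost in a flat CSV dump.
    with open(f"{table}.relationships.txt", "w") as f:
        for fk in cur.execute(f"PRAGMA foreign_key_list({table})"):
            # fk columns: (id, seq, referenced_table, from_col, to_col, ...)
            f.write(f"{table}.{fk[3]} -> {fk[2]}.{fk[4]}\n")

conn.close()
```

The point is not the specific tooling; it is that the contract terms should require relationship metadata (or a native database format) in the export, not just raw rows.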

SIZE OF DATA

Moving small amounts of data is a relatively trivial thing to do today. The amount of preparation, time and effort to do so is relatively small. In addition, the cost of high-bandwidth network connections has dropped considerably in recent years. At this point, it is common for even the average home to have a high-speed Internet connection.

But what if you need to move large quantities of data? Even with cheaper bandwidth and larger connections, it may still be cheaper and faster to ship physical storage devices to the CSP for initial upload. The CSP often has a service to allow shipping storage devices for importing large data loads. What they often do not have is a clear process to export large quantities of data via storage devices. And again, you have terminated the agreement and they are less inclined to work with you and your data.
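A back-of-the-envelope calculation shows why. The numbers below are my own illustrative assumptions, not provider figures, but they make the scale problem obvious:

```python
# Rough estimate: how long would 50 TB take over the wire?
data_tb = 50
link_gbps = 1.0      # assume a dedicated 1 Gbps link
utilization = 0.5    # assume ~50% effective throughput

bits = data_tb * 1e12 * 8
seconds = bits / (link_gbps * 1e9 * utilization)
print(f"{seconds / 86400:.1f} days")  # ~9.3 days of continuous transfer

# At this scale, shipping physical storage devices is often faster and
# cheaper -- which is why the *export* path matters as much as the import.
```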

It is important to think ahead about the volume and type of data being considered for the CSP and how best to move it around.

ALTERNATIVE PROVIDERS

Even if all of the details about exporting services and data have been worked out, where does one go to form a new relationship? Even with a maturing cloud marketplace, not all cloud providers offer the same services or ecosystems. Before pulling the plug on one provider, consider the process to move to an alternative provider and what they offer based on the application’s requirements.

Leveraging the evaluation criteria and selection process used to arrive at your first choice can provide a guide for considering alternatives. But keep in mind that in a market as volatile and ever-changing as cloud computing, the providers, services and ecosystems are constantly in flux. The list of providers used last month might not apply this month.

WHERE TO GET STARTED

Contract negotiations on their own are a fine art. If the terms are left solely to an attorney to negotiate, the outcome might not be what was expected. One needs to appreciate that an attorney works with what they know; they cannot be expected to know everything about cloud services or the nuances mentioned above. This is a good opportunity to partner with your counsel (either internal or external). They can guide you as much as you guide them to a successful outcome.

In parallel with engaging legal counsel, map out the workflow of your application, processes, connections and data elements to provide a clearer picture of the service. Consider what you need going in and coming back out. Imagine the entire lifecycle of your relationship with the CSP from start to finish, and use that as your model for the next steps. Then take a look at the prospective CSP and their service agreement. What terms already exist regarding export and contract termination? How well does the CSP map against your requirements for both application and lifecycle? This is where the negotiations and adjustments come into play. And this is all before the contract has been signed.
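One lightweight way to start that mapping is a simple checklist. The sketch below is purely illustrative (the items and names are my own, mirroring the considerations discussed above), not a formal framework:

```python
# Illustrative exit-strategy checklist, grouped by the areas covered above.
exit_strategy_checklist = {
    "contract_terms": [
        "Does the agreement define post-termination data retrieval?",
        "Is the export format specified (and usable)?",
        "Are professional services fees for export capped or excluded?",
    ],
    "data": [
        "Are relationships and metadata preserved in the export?",
        "How large is the dataset, and is physical shipment supported?",
    ],
    "alternatives": [
        "Which providers could host this workload today?",
        "What would migration to each alternative require?",
    ],
}

for area, questions in exit_strategy_checklist.items():
    print(area.upper())
    for q in questions:
        print(f"  [ ] {q}")
```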

IN SUMMARY

Hopefully you can see how thinking ahead and doing some planning will save quite a bit of time and headache on the back end. As with most vendor agreements, think through the entire process before engaging. What are the most common scenarios, and does your exit strategy address them? If not, consider what changes are needed so the exit strategy best matches the requirements.

The IT Role in Value Creation is Not a Technology

For CIOs and the IT organizations they lead, what is their role in value creation? Can IT create value or are they simply an enabler to value creation? And can the implementation of technology really create value? Those seem to be hot topics of contention today.

First, let’s take a look at what value is and where it comes from. There are two types of value sources in a business organization: Those that contribute to lowering ‘bottom line’ expenses and those that contribute to ‘top line’ revenue growth. Some of the most valuable contributions to a company come in the form of top line revenue growth.

So, what is IT's contribution to value and where does it come from? In past years, IT was perceived as a cost center. In essence, IT was an expense that pulled from the bottom line. IT was seen as simply a support organization that users engaged when technology was needed or broken. IT, in turn, looked for ways to lower the hurdles and use technology more efficiently. In this paradigm, much of the 'value' IT provided came from cost efficiencies that contributed to bottom-line operational savings. The paradigm casts IT as a service 'delivery' organization for the company and business units. Many IT organizations still work within this paradigm today.

It is time to change the paradigm. IT needs to think of itself as a business organization that drives value rather than simply a delivery or technology organization. And transformational CIOs are doing just that. There are many who question IT's ability to contribute to top-line value, and based on the traditional paradigm, that skepticism is well founded. In the new paradigm, however, IT can provide top-line value creation through new revenue streams. Examples might include online portals or ecommerce activities. IT essentially creates revenue streams not previously possible.

But not every CIO or IT organization is ready for this level of transformation. Even so, at a minimum, the transformational IT organization should provide value enablement. In other words, enabling others within the company to contribute top-line revenue growth rather than directly driving it. In either case, IT plays a central role in driving the conversation and the opportunity for value creation.

Of late, cloud computing has been suggested as an opportunity for value creation. But technology does not create value, at least not by itself. Technology is, however, an enabler of value creation (top line and/or bottom line). And cloud is one of the most significant opportunities to leverage for value creation today. It provides two significant opportunities for IT organizations: 1) it provides the ability to maximize efficient use of traditional infrastructure and capital resources, and 2) it enables IT organizations to change their paradigm from focusing on infrastructure to focusing on value creation.

Earlier this year, I wrote a piece titled “Transforming IT Requires a Three-Legged Race.” The premise of the piece talked about the need for IT transformation and how there are three components that need to change: 1) the CIO, 2) the IT organization and 3) the business’ perception/ expectations of IT. In order for IT to create value (value creation or value enablement), these three components will need to be considered. And the CIO should lead the charge to do so.

So, going back to the original question: What is IT’s role in value creation? The bottom line is that IT’s role in creating value is significant. Whether IT creates value or enables value, the opportunity is there waiting. Changing the paradigm is not trivial, but needs to happen across the industry, not just with a few leading CIOs. The question is what will you contribute to the evolution?

LifeSize Tech Day 2013

Video conferencing trumps audio conferencing! Why, you ask? More than 80% of communication is non-verbal. So, why don't more people use video conferencing over audio? There are a number of reasons…read on.

HISTORY

While some may feel video conferencing is passé, I attended LifeSize's TechDay in Austin, TX and now have a different perspective. Founded in 2006 and later acquired by Logitech, LifeSize is a producer of video conferencing equipment and services. Historically, video conferencing has been relegated to two extremes: 1) personal 1:1 communications and 2) fixed and proprietary meeting room systems. And until recently, the only option was the fixed and proprietary meeting room system. Today, 70% of all video conference calls are point-to-point (1:1 or room-to-room). The great thing about personal systems (e.g., Skype, Google Hangouts or FaceTime) is the ability to use them across multiple devices in just about any location. While some provide group video conferencing, they are often not as high quality as fixed systems with high-end cameras and high-speed data connections.

INCREASING PRODUCTIVITY

As people look for ways to increase productivity, an increase in video conferencing could provide a useful tool. Picking up on the non-verbal communication helps drive clarity and highlight nuances not otherwise visible with audio conferencing. Plus, we know that team interaction provides a greater opportunity for collaboration and team building. Video conferencing, while not exactly the same as being in the same room as other people, is coming very close. Even mobile solutions are providing an interesting spin on the ability to video conference from just about anywhere. By bridging the gap between the fixed systems and the personal systems, users can start up a video conference as easily as they would with a phone call.

SPECTRUM OF SOLUTIONS

Video conferencing sits within a spectrum of communication solutions and is itself a $3B market. The different solutions within the spectrum are:

- Audio Conferencing: Commonly used for group meetings, but lacks the video interaction. Audio is easy to access and only requires a telephone to use. All of the backend infrastructure is hosted.

- Web Conferencing: Offers the ability to share screens and present documents in a one-to-many fashion. Some audio collaboration may exist, but video and bi-directional sharing are limited.

- Video Conferencing: Provides the ability to interact with both audio and video, allowing attendees to engage with each other visually. Video conferencing itself spans a wide range of needs, from 1:1 personal video conferencing to the high-quality video required when connecting meeting rooms together.

- Telepresence: Similar to video conferencing, telepresence provides a very high-quality way for multiple rooms to participate in meetings. Telepresence carries a hefty price tag and is best geared for connecting entire rooms of people together.

LIFESIZE PORTFOLIO

The LifeSize product portfolio covers a wide space, from the smaller Passport Series that supports a single high-definition (HD) display to the flagship Icon Series that supports dual HD displays along with a myriad of other features. LifeSize offers a video softphone solution as well. While many of the solutions require on-premises infrastructure to support video calls, LifeSize is starting to offer a Hosted Infrastructure option.

Many of the existing solutions on the market today may use standards to communicate between end points, but they don’t integrate well with competing solutions. That becomes evident if you want to start a video conference session between companies that may have standardized on different solutions. LifeSize has taken a different path by leveraging standards to provide interoperability with other competing solutions.

Two factors govern the success of any given solution:

1) Interoperability: How well does the solution interact with other devices, solutions and products? Is it standards-based, and how accessible is it to use?

2) Critical Mass: Unlike the fixed systems of years past, newer systems need a critical mass of users to function well. Think Metcalfe's Law here: the utility of a network increases with the square of the number of nodes within it. The more users on the system, the more valuable it becomes, as the quick sketch below illustrates.
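For a feel of Metcalfe's Law in action, this small, illustrative calculation counts the possible 1:1 connections as users are added:

```python
def pairwise_connections(n: int) -> int:
    """Number of distinct 1:1 links among n participants: n(n-1)/2."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(f"{users:>5} users -> {pairwise_connections(users):>7} possible connections")

# 10 users -> 45; 100 users -> 4,950; 1000 users -> 499,500.
# A 10x jump in users yields a ~100x jump in connections: roughly n^2.
```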

WEBRTC

An alternative and simple option would be to launch a video conferencing session in a browser. Google and others are working on that via the WebRTC movement. Today, the browser of choice for WebRTC is Google Chrome, but hopefully support will expand to include other browsers like Internet Explorer, Firefox and Safari. Will WebRTC replace video conferencing? Probably not, as it is not able to "ring" someone.

HOSTED SOLUTIONS

It was a bit disappointing that LifeSize's efforts are not centered on their hosted offering. At least not yet. We know the market is moving away from on-premises equipment, and my point of view is that LifeSize should move full steam in that direction too.

Another opportunity might be for service providers to host the solution for small and medium business (SMB) clients. It could provide an interesting market to help augment LifeSize's existing hosted offering. However, at this time, LifeSize explicitly forbids multi-tenant use of their solution.

IN SUMMARY

While video conferencing may have been around for some time, I believe we are just starting to see its mass adoption. We are in the relatively early stages as behaviors change to accept starting a video call just as one would an audio call. The adoption of personal solutions will help change this behavior and, in turn, help open up video conferencing more broadly in the workplace.

Today, LifeSize offers a great portfolio of solutions with good quality at an interesting price point. As their hosted solution develops further, it will be interesting to watch LifeSize's adoption in the marketplace.

The Plumbing of Cloud Computing

Over the past several years, the conversation about cloud computing inevitably comes back to the technology and connections between systems. It includes the systems, storage, network and interconnection components that make up the cloud environment. In essence, the ‘plumbing’ of cloud computing.

If we apply the analogy of plumbers and water systems to cloud computing, the picture becomes a bit clearer. The reservoirs, pipes and water systems that store, carry and deliver water to homes and businesses are analogous to the data centers, systems, storage and networking solutions.

The water itself, with its quality, temperature, mineral content and such, is analogous to the applications and services that leverage cloud computing as their delivery mechanism.

Do users care about the pipes that carry the water? No. They care about the quality and attributes of the water.

The users who benefit from the applications and services delivered via cloud computing care little about this plumbing. Why? Because they're far removed from the underlying solutions and the individual nuances between them.

There are those who believe users should understand more about the underlying technology. That's like saying a consumer of water should understand the differences between a 45-degree bend, a nipple, a pressure regulator and the rest. The consumer doesn't want or need to know the differences. There are specialists who understand what the consumer wants and know how to deliver it. They don't burden the consumer with having to understand the back end.

Cloud computing has its own concept of service providers, too. The water systems that our homes and businesses plug into are service providers, as are those that deliver bottled water. Turn on the service when we need it, turn it off when we don't. Vary the volume, number of instances and locations to best meet the consumer's demand. But that doesn't mean the consumer has to understand how the water got from the ground (in the case of spring water) into the bottle sitting in their home or business.

The bottom line is that we need to separate the different roles and focus on the end product. In the case of water, it's the quality and attributes of the water. In the case of cloud computing, it's the applications and services that the consumer accesses. Leave the plumbing details behind the scenes to the experts in their field and don't confuse the roles.

Thanks to Tom Lounibos (@lounibos) and Jake Kaldenbaugh (@jakewk) for the inspiration for this post.

Applying Cloud Computing in the Enterprise

There has been a bit of confusion around the applicability of cloud computing in the enterprise space. Recently, the question has come up as to where, when and how cloud applies to enterprises and the challenges that enterprises face when considering it. Now, that's a big ball of yarn to address even before you get to the secondary complexities.

Ben Kepes wrote a good article in Forbes responding to comments made by an SVP at HP portraying Amazon Web Services (AWS) as a 'legacy cloud,' and addressing the reality of the situation. Does it really address the enterprise ball of yarn? My point of view: If AWS is a legacy cloud, traditional IT infrastructure must be downright Jurassic. Neither statement is true. Nor does it directly address the reality of the challenges that exist.

In response, Jeff Sussna wrote a good counter-missive suggesting that Netflix is more than just an edge case. Jeff goes on to suggest that current enterprise legacy applications are far from static and that IT orgs would prefer not to perform a 'forklift' upgrade of their legacy apps into the cloud. I couldn't agree more…but the devil is in the details as to why.

There are several factors to consider:

  1. Differences in workloads: I wrote a missive 18 months ago about the differences in workloads (A Workload is Not a Workload, is Not a Workload). It’s important to characterize what you have (legacy and otherwise). No two will be the same.
  2. Application of Best Practices: There is a common misconception that how one company leverages cloud will apply directly to others. The thinking being: if Netflix has success, so will I. I call this the 'lemming approach'. It may have worked for IT in the past, but it will not serve us well moving forward. First, one has to go back and understand point #1 and, more importantly, the reasons the solution was chosen. Which leads to point #3.
  3. Business Drivers: What factors apply when considering different cloud solutions? Aside from the technical merits, there are business factors to consider too. Not everything is about technology. Is there a regulatory or compliance requirement? How would one solution support my business drivers better than the next? While those are just examples, the business drivers are unique to each company.

And when you're ready to move into the cloud, especially with a legacy app, a forklift upgrade is probably not at the top of the list for a number of reasons. Risk, cost and effort are just three of the top ones…but there are many more to consider. What about all of the 3rd-party partner connections? What about the interconnections between apps? How will processes and data governance change? As you can see, there are many factors to consider before taking that first step.

For many, the mere thought of moving a legacy app and its tentacles into the cloud can bring shudders. That doesn't mean it shouldn't be considered. But it does mean it needs greater care and consideration than a greenfield application.

In the end, does this mean that enterprises can't learn from what companies like Netflix, Zynga, Dropbox and others have done in the cloud? Of course not. It just means their approaches should not be taken as cookie-cutter templates; adapt them as appropriate. Use the aspects that are relevant for your situation and leave the rest behind. One size of cloud does not fit all. This is especially true for legacy applications.

If this sounds downright hard and potentially not worth the trouble, then the point has been lost. The move needs consideration, planning and quite a bit of preparation. Best to get started down the path now.

The Importance of rIrTrPrM

Many in the technology world focus on the technology itself without significant consideration of the data or, more importantly, the information. When you dig a bit deeper, the real reason we exist, the business reason, is the data. Just presenting data, however, is not enough. When building applications to present data, we need to consider how to best present information. And with information, there is a core principle to follow. I call it 'rIrTrPrM'. rIrTrPrM is an acronym of sorts:

rIrTrPrM

rI = right Information: Ensuring that the right information is presented. Extraneous or wrong information creates a convoluted picture. And it’s important to consider the information to present, not just data or data elements.

rT = right Time: Presenting the information at the right time or point when the user or consumer is looking for it.

rP = right Person: Matching the correct information to the right person looking for it. This is more a matching of interests than a security paradigm.

rM = right Medium: With several ways to present information, deliver it in the right channel or medium. Is paper the right medium, or a mobile device, or a web application?

From a marketing perspective, getting the right information to the right person at the right time has been a basic principle. However, with the advent of newer technology methods and a change in the behaviors of how people consume information, the medium is a new component to consider.

When building applications, one must consider the user of the application and the information they will consume. In doing so, consider using the rIrTrPrM principle.
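As a purely illustrative sketch (the type and field names are my own, not a formal specification), the principle might be modeled in an application like this:

```python
from dataclasses import dataclass

# Hypothetical model of an information delivery, one field per rIrTrPrM
# dimension. Illustrative only.
@dataclass
class Delivery:
    information: str  # rI: the *information* (not raw data) to present
    time: str         # rT: when the consumer actually needs it
    person: str       # rP: who it is matched to (interest, not security)
    medium: str       # rM: the channel -- paper, mobile, web, etc.

alert = Delivery(
    information="Flight 507 delayed 45 minutes",
    time="2 hours before original departure",
    person="Ticketed passenger",
    medium="Mobile push notification",
)
print(alert)
```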

HP Aims for the Stars with CloudSystem and Moonshot

Over the past few months, I’ve had a chance to spend time with the HP product teams. In doing so, it’s really opened my eyes to a new HP with a number of solid offerings. Two solutions (CloudSystem and Moonshot) really caught my attention.

HP CloudSystem

HP's CloudSystem Matrix provides a management solution for internal and external resources across multiple cloud providers. The heart of the CloudSystem platform is its extensible architecture, which provides the glue that many enterprises look to leverage for bridging the gap between internal and external resources. On the surface, HP CloudSystem looks pretty compelling for enterprises considering the move to cloud (internal, external, public or private). For those thinking that CloudSystem only works with OpenStack solutions, think again. CloudSystem's architecture is designed to work across both OpenStack and non-OpenStack infrastructures.

However, the one question I do have is why CloudSystem doesn’t get the airplay it should. While it may not be the right solution for everyone, it should be in the mix when considering the move to cloud-based solutions (public or private).

HP Moonshot

Probably one of the most interesting solutions recently announced is HP's Moonshot. On the surface, it may appear to be a replacement for traditional blades or general-purpose servers. Not true. The real opportunity comes from its ability to tune infrastructure for a specific IT workload.

Traditionally, IT workloads are mixed. Within an enterprise's data center runs a variety of applications with mixed requirements. In sum, a mixed workload looks like a melting pot of technology. One application may be chatty, while another is processor-intensive and yet another is disk-intensive. The downside to the mixed workload is the inability to tune the infrastructure (and platforms) to run the workload most efficiently.

All Workloads Are Not Created Equal

As the world increasingly embraces cloud computing and a services-based approach, we are starting to see workloads coalesce into groupings. Instead of running a variety of workloads on general-purpose servers, we group applications together with service providers. For example, one service provider might offer a Microsoft Exchange email solution. Their entire workload is Exchange, and they're able to tune their infrastructure to support Exchange most efficiently. This also leads to a level of specialization not possible in the typical enterprise.

That’s where Moonshot comes in. Moonshot provides a platform that is highly scalable and tunable for specific workloads. Don’t think of Moonshot as a high-performance general-purpose server. That’s like taking an Indy car and trying to haul the kids to soccer practice. You can do it, but would you? Moonshot was purpose-built and not geared for the typical enterprise data center or workload. The sweet spot for Moonshot is in the Service Provider market where you typically find both the scale and focus on specific workloads. HP also considered common challenges Service Providers would face with Moonshot at scale. As an example, management software offers the ability to update CPUs and instances in bulk.

Two downsides to Moonshot are side effects of its change in architecture. One is bandwidth: Moonshot is very power-efficient, but requires quite a bit of bandwidth. The other challenge is traditional software licensing. This problem is not new and seems to rear its ugly head with each wave of innovation; we saw it with both virtualization and cloud. Potential users of Moonshot need to consider how to best address these issues. Plus, industry-standard software licensing will need to evolve to support newer infrastructure methods. HP (along with users) needs to lobby software providers to evolve their practices.

OpenStack at the Core

HP is one of the core OpenStack open-source contributors. OpenStack, while a very powerful solution, is a hard sell for the enterprise market, and that will only get harder over time. Service Providers, on the other hand, present a unique match for the challenges and opportunities that OpenStack presents. HP is leveraging OpenStack as part of the Moonshot offering, and pairing the two is a match made in heaven. The combination, when leveraged by Service Providers, provides a strong platform to support their offerings compared with alternatives.

When considering the combination of CloudSystem along with Moonshot and OpenStack, HP has raised the stakes for what a single provider can deliver. The solutions provide a bridge from current traditional environments to Service Provider solutions.

I am pleased to see a traditional hardware/software provider acknowledging how the technology industry is evolving and providing solutions that span the varied requirements. I, for one, will be interested to see how successful HP is in continuing down this path through the evolution.