Containers in the Enterprise

Containers are all the rage right now, but are they ready for enterprise consumption? It depends on whom you ask, but here’s my take. Enterprises should absolutely be considering container architectures as part of their strategy…but there are some considerations before heading down the path.

Container conferences

Talking with attendees at Docker’s DockerCon conference and Red Hat’s Summit this week, you hear from a number of proponents and live enterprise users. For those not familiar with containers, the fundamental concept is a fully encapsulated environment that supports application services. Containers should not be confused with virtualization. Nor should they be confused with microservices, which can leverage containers but do not require them.

A quick rundown

Here are some quick points:

  • Ecosystem: I’ve written before about the importance of a new technology’s ecosystem. In the case of containers, the ecosystem is rich and building quickly.
  • Architecture: Containers allow applications to be broken apart into smaller components. Each of the components can then spin up/down and scale as needed. Of course, automation and orchestration come into play.
  • Automation/Orchestration: Unlike typical enterprise applications that are installed once and run 24×7, the best container architectures spin up/down and scale as needed. Realistically, the only way to do this efficiently is with automation and orchestration (see the sketch after this list).
  • Security: There is quite a bit of concern about container security. With potentially thousands or tens of thousands of containers running, a compromise could have significant consequences. If containers are architected to be ephemeral, though, the risk footprint shrinks dramatically.
  • DevOps: Container-based architectures can run without a DevOps approach, but with limited success. DevOps brings a different methodology that works hand-in-hand with containers.
  • Management: There are concerns that the short lifespan of a container creates challenges for audit trails. With traditional audit approaches, this would be true; newer methods provide real-time audit capability.
  • Stability: The $64k question: Are containers stable enough for enterprise use? Absolutely! The reality is that legacy applications would not move directly to containers. Only applications that are significantly modified or rewritten would leverage containers. New applications can leverage containers without increasing risk.
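
To make the automation and ephemerality points concrete, here is a minimal sketch using the Docker SDK for Python (pip install docker). The image name and container count are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of the ephemeral, orchestrated pattern described above.
import docker

client = docker.from_env()

def scale_service(image: str, count: int) -> list:
    """Spin up `count` short-lived containers for a burst of work."""
    return [client.containers.run(image, detach=True) for _ in range(count)]

def teardown(containers: list) -> None:
    """Tear containers down as soon as the work is done; nothing lingers."""
    for c in containers:
        c.stop()
        c.remove()

# Scale up for a burst, then scale back to zero.
workers = scale_service("nginx:alpine", 3)
teardown(workers)
```

In practice, an orchestrator such as Kubernetes or Docker Swarm would make these scaling decisions automatically based on load rather than by hand.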

Cloud-First, Container-First

Companies are looking to move faster and faster. In order to do so, problems need to be broken down into smaller components. As those smaller components become microservices (vs. large monolithic applications), containers start to make sense.

Containers represent an elegant way to leverage smaller building blocks. Some have equated containers to the Lego blocks of enterprise application architecture. The days of large, monolithic enterprise applications are past. Today’s applications may be complex in sum, but they are a composition of much smaller building blocks. These smaller blocks provide the agility and speed that enterprises are clamoring for today.

Containers are more than Technology

Containers alone are not enough for success; they represent only the technology building blocks. Culture and process are needed to support the change in technology. DevOps provides the lubricant that integrates the three components.

Changing the perspective

As with other newer technologies coming along, other aspects of the IT organization must change too. Whether you are a CIO, IT leader, developer or part of an operations team, the very fundamentals of how we function must change in order to truly embrace and adopt these newer methodologies.

Containers are ready for the enterprise…if the other aspects are considered as well.

The enterprise view of cloud, specifically Private Cloud, is confusing

Enterprise organizations are actively looking for ways to leverage cloud computing. Cloud presents the single-largest opportunity for CIOs and the organizations they lead. The move to cloud is often part of a larger strategy for the CIO moving to a consumption-first paradigm. As the CIO charts a path to cloud along the cloud spectrum, Private Cloud provides a significant opportunity.

Adoption of private cloud infrastructure is anemic at best. Looking deeper into the problem, the reason becomes painfully clear. The marketplace is heavily fractured and quite confusing even to the sophisticated enterprise buyer. After reading this post, one could question the feasibility of private cloud. The purpose of this post is not to present a case to avoid private cloud, but rather expose the challenges to adoption to help build awareness towards solving the issues.

Problem statement

Most enterprises have a varied strategy for cloud adoption. Generally, there are two categories of applications and services:

  1. Existing enterprise applications: These may include legacy and custom applications. The vast majority were never designed for virtualization, let alone cloud. Even if there is an interest in moving to cloud, the cost and risk of moving (read: rewriting) these applications are extreme.
  2. Greenfield development: New applications or those modified to support cloud-based architectures. Within the enterprise, greenfield development represents a small percentage compared with existing applications. On the other hand, web-scale and startup organizations are able to leverage almost 100% greenfield development.


Private Cloud Market Mismatch

The disconnect is that most cloud solutions in the market today suit greenfield development, but not existing enterprise applications. Ironically, most of the marketing buzz today is geared toward solutions that serve greenfield development, leaving existing enterprise applications in the dust.

Driving focus to private cloud

The average enterprise organization is faced with a cloud conundrum. Cloud, theoretically, is a major opportunity for enterprise applications. Yet private cloud solutions are a mismatched potpourri of offerings, which makes them difficult to compare. In addition, private cloud may take different forms.


Private Cloud Models

Keep in mind that within the overall cloud spectrum, this is only private cloud. At the edges of private cloud, colocation and public cloud present a whole new set of criteria to consider.

Within the private cloud models, it would be easy if the only criteria were compute, storage and network requirements. The reality is that a myriad of other factors are the true differentiators.

The hypervisor and OpenStack phenomenon

The de facto hypervisor in enterprises today is VMware, yet not every provider supports VMware. Private cloud providers may support VMware along with other hypervisors such as Hyper-V, KVM and Xen. Yes, it is possible to move enterprise workloads from one hypervisor to another. That is not the problem. The problem is the amount of work required to address the intricacies of the existing environment. Unwinding the ball of yarn is not a trivial task and presents yet another hurdle. On the flip side, there are advantages to leveraging other hypervisors plus OpenStack.

Looking beyond the surface of selection criteria

There are about a dozen different criteria that often show up when evaluating providers. Of those, hypervisor, architecture, location, ecosystem and pricing models are just some of the top-line criteria.

In order to truly evaluate providers, one must delve further into the details to understand the nuances of each component. It is those details that can make the difference between success and failure. And each nuance is unique to the specific provider. As someone recently stated, “Each provider is like a snowflake.” No two are alike.

The large company problem

Compounding the problem is a wide field of providers trying to capture a slice of the overall pie. Even large, incumbent companies are failing miserably to deliver private cloud solutions. There are a number of reasons companies are failing.

Time to go!

With all of these reasons, one may choose to hold off considering private cloud solutions. That would be a mistake. Sure, there are a number of challenges to adopting private cloud solutions today. Yes, the marketplace is highly fractured and confusing. However, with work comes reward.

The more enterprise applications and services move to private cloud solutions, the more opportunities open for the CIO. The move to private cloud does not circumvent alternatives from public cloud and SaaS-based solutions. It does, however, help provide greater agility and focus for the IT organization compared to traditional infrastructure solutions.

Originally published @ Gigaom Research 2/16/2015

Time’s up! Changing core IT principles

There is a theme gaining ground within IT organizations, and a number of examples support it. This very theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:

Just because you (IT) can do something does not mean you should.

Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. Well, times have changed and so must the principles that govern the IT organization.

Apply it to the customization of applications and you get this:

Just because IT can customize applications to the nth degree does not mean they necessarily should.

A great example of this is the configuration and customization of applications. Just because IT could customize the heck out of an application, should they have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. The reality is that it comes at a cost, sometimes a very significant and real one.

Making it real

Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but rather because they could. Fast-forward a year or two. Now it is time to upgrade XYZ. The costs are significantly higher due to the customizations done. It requires more planning, more testing, more work all around. Were those costs justified by the benefit of the customizations? Typically not.

Now it is time to evaluate alternatives for XYZ. ACME builds a requirements document based on XYZ (including the myriad of customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting alternatives.

These examples are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson here is not that customizations should be avoided. The lesson is to limit customizations to those that are necessary and provide significant value.

And the lesson goes beyond configuration to understanding IT’s true value based on what it should and should not do.

Leveraging alternative approaches

Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?

Every good strategy starts with a firm understanding of the ecosystem of the business: how the company operates and its interactions. A good target, where many are finding success, sits furthest from the company’s core operations and is therefore the hardest to justify in true business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.

Preparing for the future today

Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers from a mobile platform in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many core systems and infrastructure are nowhere near ready to take on the changing requirements.

Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Is a DevOps methodology engaged? What about container-based architectures?

These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.

Originally posted @ Gigaom Research 1/26/2015

The Shark of HP Converged Systems

The story of Converged Infrastructure (CI) continues to gain steam within the Information Technology (IT) industry…and for good reason. Converged solutions present a relatively easy way to manage complex infrastructure solutions. While some providers focus on CI as an opportunity to bundle solutions into a single SKU, companies such as Nutanix and HP have produced solutions for a couple of years now that go much further with true integration.

As enterprise IT customers shift their focus away from infrastructure and toward platforms, applications and data, expect the CI space to heat up. Part of this shift includes platforms geared toward specific applications. This is especially true for those operating applications at scale.

Last week, HP announced their ‘shark’ approach: hardware solutions geared toward specific applications. One of the first targets is the SAP HANA application, using HP Converged System 500 as part of a co-innovation project between HP & SAP. It is interesting to see HP partner with SAP on HANA given so much emphasis on data analytics today. In addition, specialized solutions are becoming increasingly important in this space.

Enterprise IT organizations need the ability to start small and grow accordingly. Even service providers may consider a start-small and grow approach. Michael Krigsman (@mkrigsman) recently wrote a post outlining how IT projects are getting smaller and still looking for relief. HP expressed their intent to provide scalable solutions that start small and include forthcoming ‘Project Kraken’ solutions later this year. Only time will tell how seamless this transition becomes.


HP Aims for the Stars with CloudSystem and Moonshot

Over the past few months, I’ve had a chance to spend time with the HP product teams. Doing so really opened my eyes to a new HP with a number of solid offerings. Two solutions (CloudSystem and Moonshot) really caught my attention.

HP CloudSystem

HP’s CloudSystem Matrix is a management solution that spans internal and external resources across multiple cloud providers. The heart of the CloudSystem platform is its extensible architecture, which provides the glue that many enterprises look for to bridge the gap between internal and external resources. On the surface, HP CloudSystem looks pretty compelling for enterprises considering the move to cloud (internal, external, public or private). For those thinking that CloudSystem only works with OpenStack solutions, think again. CloudSystem’s architecture is designed to work across both OpenStack and non-OpenStack infrastructures.

However, the one question I do have is why CloudSystem doesn’t get the airplay it should. While it may not be the right solution for everyone, it should be in the mix when considering the move to cloud-based solutions (public or private).

HP Moonshot

Probably one of the most interesting solutions recently announced is HP’s Moonshot. On the surface, it may appear to be a replacement for traditional blades or general-purpose servers. Not true. The real opportunity comes from its ability to tune infrastructure for a specific IT workload.

Traditionally, IT workloads are mixed. Within an enterprise’s data center runs a variety of applications with mixed requirements. In sum, a mixed workload looks like a melting pot of technology. One application may be chatty, while another is processor intensive and yet another is disk intensive. The downside to the mixed workload is the inability to tune the infrastructure (and platforms) to run the workload most efficiently.

All Workloads Are Not Created Equally

As the world increasingly embraces cloud computing and a services-based approach, we are starting to see workloads coalesce into groupings. Instead of running a variety of workloads on general-purpose servers, we group applications together with service providers. For example, one service provider might offer a Microsoft Exchange email solution. Their entire workload is Exchange, and they’re able to tune their infrastructure to support Exchange most efficiently. This also leads to a level of specialization not possible in the typical enterprise.

That’s where Moonshot comes in. Moonshot provides a platform that is highly scalable and tunable for specific workloads. Don’t think of Moonshot as a high-performance general-purpose server. That’s like taking an Indy car and trying to haul the kids to soccer practice. You can do it, but would you? Moonshot was purpose-built and not geared for the typical enterprise data center or workload. The sweet spot for Moonshot is in the Service Provider market where you typically find both the scale and focus on specific workloads. HP also considered common challenges Service Providers would face with Moonshot at scale. As an example, management software offers the ability to update CPUs and instances in bulk.

Two downsides to Moonshot are side effects of the change in architecture. One is bandwidth: Moonshot is very power efficient, but requires quite a bit of bandwidth. The other challenge is traditional software licensing. This problem is not new and seems to rear its ugly head with each wave of innovation; we saw it with both virtualization and cloud. Potential users of Moonshot need to consider how to best address these issues, and industry-standard software licensing will need to evolve to support newer infrastructure methods. HP, along with its users, needs to lobby software providers to evolve their practices.

OpenStack at the Core

HP is one of the core OpenStack open-source contributors. OpenStack, while very powerful, is a hard sell for the enterprise market, and it will only get harder over time. Service Providers, on the other hand, present a unique match for the challenges and opportunities that OpenStack presents. HP is leveraging OpenStack as part of the Moonshot offering, and pairing the two is a match made in heaven. The combination, when leveraged by Service Providers, provides a stronger platform to support their offerings than the alternatives.

When considering the combination of CloudSystem along with Moonshot and OpenStack, HP has raised the stakes for what a single provider can deliver. The solutions provide a bridge from current traditional environments to Service Provider solutions.

I am pleased to see a traditional hardware/software provider acknowledging how the technology industry is evolving and providing solutions that span the varied requirements. I, for one, will be interested to see how successful HP is in continuing along this path.

HP Converged Cloud Tech Day

Last week, I attended HP’s Converged Cloud Tech Day in Puerto Rico. Colleagues attended from North, Latin and South America. The purpose of the event was to 1) take a deep dive into HP’s cloud offerings and 2) visit HP’s Aguadilla location, which houses manufacturing and an HP Labs presence. What makes the story interesting is that HP is a hardware manufacturer, a software provider and a provider of cloud services. Overall, I was very impressed by what HP is doing…but read on for the reasons why…and the surprises.

HP Puerto Rico

HP, like many other technology companies, has a significant presence in Puerto Rico. Martin Castillo, HP’s Caribbean Region Country Manager, provided an overview for the group that left many in awe. HP exports a whopping $11.5b from Puerto Rico, roughly 10% of HP’s global revenue. In the Caribbean, HP holds more than 70% of the server market. Surprisingly, much of the influence to use HP cloud services in Puerto Rico comes from APAC and EMEA, not North America. To that end, 90% of HP’s Caribbean customers are already starting the first stage of moving to private clouds. Like others, HP is seeing customers move from traditional data centers to private clouds, then managed clouds, then public clouds.

Moving to the Cloud

Not surprisingly, HP is going through a transition, presenting the company from a solutions perspective rather than a product perspective. Shane Pearson, HP’s VP of Portfolio & Product Management, explained that “At the end of the day, it’s all about applications and workloads. Everyone sees the importance of cloud, but everyone is trying to figure out how to leverage it.” By 2015, the projected markets are: Traditional $1.4b, Private Cloud $47b, Managed Cloud $55b and Public Cloud $30b, for a cloud total of $132b. In addition, HP confirmed the Hybrid Cloud approach as the approach of choice.

While customers are still focused on cost savings as the primary motivation to move to cloud, the tide is shifting to business process improvement. Put another way, cloud is allowing users to do things they could not do before. I was pleased to hear HP offer that it’s hard to take advantage of cloud if you don’t leverage automation. Automation and Orchestration are essential to cloud deployments.

HP CloudSystem Matrix

HP’s Nigel Cook was up next to talk about HP’s CloudSystem Matrix. Essentially, HP is (and has been) providing cloud services across the gamut of potential needs. Internally, HP is using OpenStack as the foundation for its cloud service offering, but CloudSystem Matrix provides a cohesive solution to manage across both internal and external cloud services. To the earlier point about automation, HP is focusing on automation and self-service as part of its cloud offering. Having a solution that helps customers manage the complexity that Hybrid Clouds present could prove interesting. Admittedly, I have not kicked the tires of CloudSystem Matrix yet, but on the surface, it is very impressive.

Reference Architecture

During the visit to Aguadilla, we joined a Halo session with HP’s Christian Verstraete to discuss architecture. Christian and team have built an impressive cloud functional reference architecture. As impressive as it is, one challenge is how to best leverage such a comprehensive model for the everyday IT organization. It’s quite a bit to bite off. Very large enterprises can consume the level of detail contained within the model. Others will need a way to consume it in chunks. Christian goes into much greater depth in a series of blog entries on HP’s Cloud Source Blog.

HP Labs: Data Center in a Box

One treat on the trip was the visit to HP Labs. If you ever get the opportunity to visit HP Labs, it’s well worth the time to see what innovative solutions the folks are cooking up. HP demonstrated the results from their Thermal Zone Mapping (TZM) tool (US Patent 8,249,841) along with CFD modeling tools and monitoring to determine details around airflow and cooling efficiency. While I’ve seen many different modeling tools, HP’s TZM was pretty impressive.

In addition to the TZM, HP shared a new prototype that I called Data Center in a Box. The solution is an encapsulated rack system that supports 1-8 fully enclosed racks. The only requirements are power and chilled water. The PUE numbers were impressive, but didn’t take every metric into account (i.e., the cost of chilled water). Regardless, I thought the solution was pretty interesting. The HP folks kept mentioning that they planned to target the solution to Small-Medium Business (SMB) clients. While that may have been interesting to the SMB market a few years ago, today the SMB market is moving more to services (i.e., cloud services). That doesn’t mean the solution is DOA. I do think it could be marketed as a modular approach to data center build-outs that provides a smaller increment than container solutions. Today, the solution is still just a prototype and not commercially available. It will be interesting to see where HP ultimately takes this.
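
For context, PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. The quick sketch below uses invented numbers to show why leaving a factor like chilled water out of the calculation flatters the ratio.

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal).
# All wattages here are made-up illustrations, not HP's figures.
it_load_kw = 100.0        # power drawn by the IT equipment itself
overhead_kw = 15.0        # fans, power distribution losses, etc.
chilled_water_kw = 20.0   # energy embedded in externally supplied chilled water

pue_reported = (it_load_kw + overhead_kw) / it_load_kw
pue_full = (it_load_kw + overhead_kw + chilled_water_kw) / it_load_kw

print(f"PUE excluding chilled water: {pue_reported:.2f}")  # 1.15
print(f"PUE including chilled water: {pue_full:.2f}")      # 1.35
```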

In Summary

I was quite impressed by HP’s perspective on how customers can…and should leverage cloud. I felt they have a healthy perspective on the market, customer engagement and opportunity. However, I was left with one question: Why are HP’s cloud solutions not more visible? Arguably, I am smack in the middle of the ‘cloud stream’ of information. Sure, I am aware that HP has a cloud offering. However, when folks talk about different cloud solutions, HP is noticeably absent. From what I learned last week, this needs to change.

HP’s CloudSystem Matrix is definitely worth a look regardless of the state of your cloud strategy. And for data center providers and service providers, keep an eye out for their Data Center in a Box…or whatever they ultimately call it.

How to Leverage the Cloud for Disasters like Hurricane Sandy

Between natural disasters like Hurricanes Sandy and Irene or man-made disasters like the recent data center outages, disasters happen. The question isn’t whether they will happen. The question is: What can be done to avoid the next one? Cloud computing provides a significant advantage to avoid disaster. However, simply leveraging cloud-based services is not enough. First, a tiered approach in leveraging cloud-based services is needed. Second, a new architectural paradigm is needed. Third, organizations need to consider the holistic range of issues they will contend with.

Technology Clouds Help Natural Clouds

If used correctly, cloud computing can significantly limit or completely avoid outages. Cloud offers a physical abstraction layer and allows applications to be located outside of disaster zones where services, staff and recovery efforts do not conflict.

  1. Leverage commercial data centers and Infrastructure as a Service (IaaS). Commercial data centers are designed to be more robust and resilient. Prior to a disaster, IaaS provides the ability to move applications to alternative facilities out of harm’s way.
  2. Leverage core application and platform services. This may come in the form of PaaS or SaaS. These service providers often architect solutions that are able to withstand single data center outages. That is not true in every case, but by leveraging this in addition to other changes, the risks are mitigated.

In all cases, it is important to ‘trust but verify’ when evaluating providers. Neither tier provides a silver bullet. The key is: Take a multi-faceted approach that architects services with the assumption for failure.

Changes in Application Resiliency

Historically, application resiliency relied heavily on redundant infrastructure. Judging from the responses to Amazon’s recent outages, users still make this assumption. The paradigm needs to change: applications need to take more responsibility for resiliency. By doing so, applications ensure service availability in times of infrastructure failure.
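
As a minimal sketch of what “applications taking responsibility” can look like, the Python below retries across redundant endpoints with exponential backoff. The URLs, timeout and retry counts are illustrative assumptions; production systems would typically reach for a hardened retry/circuit-breaker library.

```python
# A sketch of application-level resiliency: assume the infrastructure
# can fail, and fail over across regions instead of waiting on it.
import time
import urllib.request

ENDPOINTS = [
    "https://us-east.example.com/api/health",  # hypothetical region A
    "https://us-west.example.com/api/health",  # hypothetical region B
]

def fetch_with_failover(urls, retries=3, backoff=1.0):
    """Try each endpoint in turn, backing off exponentially between passes."""
    for attempt in range(retries):
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except OSError:
                continue  # endpoint (or its region) is down; try the next one
        time.sleep(backoff * (2 ** attempt))  # back off before the next pass
    raise RuntimeError("all endpoints unavailable")

# data = fetch_with_failover(ENDPOINTS)
```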

In a recent blog post, I discussed the relationship cloud computing provides to greenfield and legacy applications. Legacy applications present a challenge to move into cloud-based services. They can (and eventually should) be moved into cloud. However, it will require a bit of work to take advantage of what cloud offers.

Greenfield applications, on the other hand, present a unique opportunity to take full advantage of cloud-based services…if used correctly. With Hurricane Sandy, we saw greenfield applications still using the old paradigm of relying heavily on redundant infrastructure, and the consequence was significant application outages due to infrastructure failures. Conversely, greenfield applications that follow the new paradigm (e.g., Netflix) experienced no downtime due to Sandy. Netflix not only avoided disaster, but saw a 20% increase in streaming viewers.

Moving Beyond Technology

Leveraging cloud-based services requires more than a technology change. Organizational impact, process changes and governance are just a few of the things to consider. Organizations need to consider the changes to access, skill sets and roles. Is staff in other regions able to assist if local staff is impacted by the disaster? Fundamental processes, from change management to application design, will change too. At what point are services preemptively moved to avoid disaster? Lastly, how do governance models change if the core players are unreachable due to the disaster? Without considering these changes, the risks increase significantly.

Start Here

So, where do you get started? First, determine where you are today. All good maps start with a “You Are Here” label. Consider how to best leverage cloud services and build a plan. Take into account your disaster recovery and business continuity planning. Then put the plan in motion. Test your disaster scenarios to improve your ability to withstand outages. Hopefully, by the time the next disaster hits (and it will), you will be in a better place to weather the storm.

How Important are Ecosystems? Ecosystems are Everything

The IT industry is in a state of significant flux. Paradigms are changing and so are the underlying technologies. Along with these changes comes a change in the way we think about solutions. Over time, IT organizations have amassed a phenomenal number of solutions, vendors, complex configurations and experience. That ever-expanding model is starting to show cracks; trying to sustain it is just not possible…nor should it be. It is time for a change. Consolidation, integration, efficiency and value creation are the current focal points. Those shifts significantly change how we function as IT organizations and providers.

Changes in Buying Habits

In order to truly understand the value of an ecosystem, one first needs to understand the change in buying habits. IT organizations are making a significant shift from buying point solutions to buying ecosystems. In some ways, this is nothing new. IT organizations have bought into the solutions from major providers for decades. The change is in the composition of the ecosystem. Instead of buying into an ecosystem from a single provider, buyers are looking for comprehensive ecosystems that span multiple providers. This lowers the risk for the buyer and creates a broader offering while providing an integrated solution.

Creating the Cloud Supply Chain

Cloud computing is a great use case for the importance of building a supply chain within the ecosystem. Think about it. The applications, services and solutions that an IT organization provides to users are not single-purpose, non-integrated solutions. At least they shouldn’t be. Good applications and services are integrated with other offerings. When buyers choose a component, that component needs to connect to another component. In addition, alternatives are needed, as one solution does not fit all. In many ways, this is no different from a traditional manufacturing supply chain. The change is to apply those fundamentals to the cloud ecosystem.


In concert with the supply chain, each component needs solid integration with the next. Today, many point solutions require the buyer to figure out how to integrate solutions. This often becomes a barrier to adoption and introduces risk into the process. One could go crazy coming up with the permutations of different solutions that connect. However, if each solution considered the top 3-4 commonly connected components, the integration requirements would become more manageable. And integration would be left to the folks who understand the solutions best…the providers.

Cloud Verticals

As cloud-based ecosystems start to mature, the natural progression is to develop cloud verticals: essentially, ecosystems with components for a specific vertical or industry. In the healthcare vertical, an ecosystem might include a choice of EHR solutions, billing systems, claims systems and a patient portal. For SMB or mid-tier businesses, it might be an accounting system, email, file storage and a website. Remember that the ecosystem is not just a brokerage selling the solutions as a package. It is a comprehensive solution that is already integrated.

Bottom Line: Buyers are moving to buying ecosystems, especially with cloud services. The value of your solution comes from the value of your ecosystem.

Cloud Application Matrix

Several years in, there is still quite a bit of confusion around the value of cloud computing. What is it? How can I use it? What value will it provide? There are several perspectives on how to approach cloud computing value; interestingly, that very question elicits several possible responses. This missive specifically targets how applications map against a cloud value matrix. From the application perspective, scale, along with an application’s history, governs the direction of value.

Scale (y-axis)

As scale increases, so does the potential value from cloud computing. That is not to say that traditional methods are not valuable. It has more to do with the direction and velocity of an application’s scale. Greenfield applications provide a different perspective from legacy applications. Rewriting legacy applications simply to use cloud brings questionable value; there may be extenuating circumstances to consider, but those are not common.

Legacy vs. Greenfield (x-axis)

The x-axis represents the spectrum of applications from legacy to greenfield. Greenfield applications may include either brand new applications or rewritten legacy applications. Core, off-the-shelf applications may fall into either category. The current state of cloud marketplace maturity suggests that any new or greenfield application should consider cloud computing. That includes both PaaS and SaaS approaches.


The first step is to map the portfolio of applications against the grid. Each application type and scale is represented in relation to the others. This is a good exercise to 1) identify the complete portfolio of applications, 2) understand the current state and lifecycle and 3) develop a roadmap for application lifecycles. The roadmap can then become the playbook to support a cloud strategy.
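
As a rough sketch of that first mapping step, the Python below places a hypothetical portfolio onto the grid. The applications, scores and thresholds are invented for illustration.

```python
# Map each application to a quadrant by scale (y-axis) and how
# legacy-vs-greenfield it is (x-axis). All values are illustrative.
portfolio = [
    # (name, scale 0-10, greenfield 0-10: 0 = legacy, 10 = greenfield)
    ("ERP core",        3, 1),
    ("Customer portal", 8, 9),
    ("Reporting tool",  2, 6),
]

def quadrant(scale, greenfield, threshold=5):
    vertical = "upper" if scale >= threshold else "lower"
    horizontal = "right" if greenfield >= threshold else "left"
    return f"{vertical}-{horizontal}"

for name, scale, greenfield in portfolio:
    print(f"{name}: {quadrant(scale, greenfield)} quadrant")
```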

Upper-Right Quadrant

The value cloud computing brings increases as application requirements move toward the upper-right quadrant. In most cases, applications will move horizontally to the right rather than vertically upward. The clear exception is web-scale applications. Most of those start in the lower-right quadrant and move vertically upward.


The matrix is intended to be a general guideline that characterizes most, but not all, applications and situations. For example, legacy applications may be encapsulated to support cloud-based services as an alternative to rewriting.

Dreamforce 2012 Trip Report


Last week’s Salesforce Dreamforce event had to be the largest conference I have seen at San Francisco’s Moscone Center. It covered Moscone North, South and West, plus several hotels. And if that was not enough, Howard Street was turned into a lawn area complete with a concert stage, outdoor lounge area and exhibits. Dreamforce presented a great opportunity to learn more about the Salesforce community…and revealed a number of missed opportunities.

Walking the expo floor, one thing becomes clear very quickly: Salesforce is the largest exhibitor. Taking up 25-30% of the expo floor, the Salesforce area maintained focal points around sales, marketing and service. Surrounding the Salesforce area were partners in their ecosystem, some based on the Salesforce platform and others with their own platforms. There were solutions for all types of needs. Unfortunately, the different subject matter was intertwined throughout the floor (Sales next to Service next to Marketing). Salesforce is a broad platform, and if you were interested in a specific aspect of Salesforce-based solutions, it was hard to find the related offerings. Interestingly, consulting firms held some of the largest booths outside of Salesforce.

Moscone West held the Developer Zone, with less structured community areas where folks with similar interests could gather. Presentations took place in the Developer Zone non-stop. In addition to the Unconference area, there was plenty of space to gather around tables complete with power and Wi-Fi.

The 750+ sessions provided a wide range of presentations, from how-tos to case studies. In addition, there was a good mix of detailed to high-level sessions depending on your particular interest level.


Dreamforce is a good example of the maturity of Salesforce’s ecosystem, though the prominence of consulting firms provides a bit of contrast to that statement. Just walking around the expo floor, one could get the impression that there is a solution to every problem imaginable. Not true; several of the basics are still woefully absent. Many of the solutions are excellent point solutions that address specific pain points.

Unfortunately, there are two aspects missing: Integration and Accessibility. Earlier this year, I wrote about the importance of onramps. At the expo, I randomly sampled several folks walking the show floor to get their thoughts. The theme was consistent: great solutions, but each person was looking for an integrated solution, and it was not clear how to get from their current state to a future state that leverages the innovative solutions. The prominence of consulting firms could serve as both a solution and further validation. Consulting firms provide a good short-term answer to the integration and onramp problem. However, both issues need to be baked into the ecosystem’s solutions to sustain the ecosystem long-term.


Are conferences like Salesforce’s Dreamforce valuable to attend? In a nutshell…yes! If you knew very little about Salesforce before last week, Dreamforce presented a great opportunity to get an overview of the opportunities, dig further into specific details and network with peers. If you were already an established customer, there is plenty of innovation still coming from the ecosystem.