Business · Cloud

Riverbed extends into the cloud


One of the most critical but often overlooked components of a system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were colocated in the same data center, or spread across multiple data centers connected by dedicated network links that the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems that the enterprise does not control. This lack of control provides a new challenge to network management.

In addition to applications, the data moves too. As applications and data move beyond the bounds of the enterprise data center, so does the need to address increasingly dispersed network performance requirements. The question is: How do you still address network performance management when you no longer control the underlying systems and network infrastructure components?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day’s Cloud Field Day 3, I had the chance to meet up with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that would sit at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to how they address the network performance management problem in the cloud. Riverbed gives the enterprise the option to run their software as either a ‘sidecar’ to the application or as part of the cloud-based container.

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets along with cloud. The first is to look at extending the existing data center so that the cloud is viewed as simply another data center. The second approach is to change the perspective where the constraints are reduced to the application…or better yet service level. The latter is a construct that is typical in cloud-native applications.

Today, Riverbed has taken the former approach. They view the cloud as another data center in your network. To this point, Riverbed’s SteelFusion product works as if the cloud is another data center in the network. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises are taking a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements. This shows up in how enterprises are embracing a multi-cloud strategy. Unfortunately, consolidation of cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address the varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their Steelhead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed is already supporting multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure. On AWS, SteelConnect Manager can run within an AWS VPC.

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer shifts to the developer, and possibly the business user in some cases. These new personas are looking for quick access to resources and tools in an easy-to-consume way, very similar to how existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience is really what matters, not network performance. In Riverbed parlance, they call this End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider what serves as their North Star.

As enterprises embrace cloud-based architectures more fully, so will the need grow for Riverbed to evolve the model that drives their product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise

CIO · Cloud

Three key changes to look for in 2018


2017 has officially come to a close and 2018 has already started with a bang. As I look forward to what 2018 brings, the list is incredibly long and detailed. The genres of topics are equally long and cover people, process, technology, culture, business, social, economic and geopolitical boundaries…just to name a few.

Here are three highlights on my otherwise lengthy list…

EVOLVING THE CIO

I often state that after spending almost three decades in IT, now is the best time to work in technology. That statement is still true today.

One cannot start a conversation about technology without first considering the importance of the technology leader and the role of the Chief Information Officer (CIO). The CIO, as the most senior person leading the IT organization, takes on a critical role for any enterprise. That was true in the past, and will be increasingly so moving forward.

In my post ‘The difference between the Traditional CIO and the Transformational CIO’, I outline many of the differences in the ever-evolving role of the CIO. Those traits will continue to evolve as the individual, organization, leadership and overall industry change to embrace a new way to leverage technology. Understanding the psyche of the CIO is something one simply cannot do without experiencing the role firsthand. Yet, understanding how this role is evolving is exactly what will help differentiate companies in 2018 and beyond.

In 2018, we start to see the ‘Transformational’ CIO emerge in greater numbers. Not only does the CIO see the need for change; so does the executive leadership team of the enterprise. The CIO becomes less of a technology leader and more of a business leader with responsibility for technology. As I have stated in the past, this is very different from the ‘CEO of Technology’ concept that others have bandied about. In addition, there is a sense of urgency for the change as the business climate becomes increasingly competitive from new entrants and vectors. Culture and geopolitical changes will also impact the changing role of the CIO and of technology.

TECHNOLOGY HITS ITS STRIDE

In a similar vein to the CIO, technology finds its stride in 2018. Recent years have seen a lot of experimentation in the hopes of leverage and success. This ‘shotgun’ approach has been very risky…and costly for enterprises. That is not to say that experimentation is a bad thing. However, the role of technology in mainstream business evolves in 2018 as enterprises face the reality that they must embrace change, with technology as part of that evolution.

Executives will look for ways to, mindfully, leverage technology to create business advantage and differentiation. Instead of sitting at the extremes of either diving haphazardly into technology or analysis paralysis, enterprises will strike a balance and embrace technology in a thoughtful but time-sensitive way. The concept of ‘tech for tech’s sake’ becomes a past memory, like the dial-up modem.

One hopeful wish is that boards will stop the practice of dictating technology decisions as they have in the past with mandating their organization use cloud. That is not to say cloud is bad, but rather to suggest that a more meaningful business discussion take place that may leverage cloud as one of many tools in an otherwise broadening arsenal.

CLOUD COMES OF AGE IN ALL FORMS

Speaking of cloud, a wholesale shift takes place in 2018 where we pass the inflection point in our thinking about cloud. For the enterprise, public cloud has already reached a maturity point with all three major public cloud providers offering solid solutions for any given enterprise.

Beyond public cloud, the concept of private cloud moves from theory to reality as solutions mature and the kinks are worked out. Historically, private cloud was messy and challenging for even the most sophisticated enterprise to adopt. The theory of private cloud is incredibly alluring, and it has now reached a point where it can become a reality for the average enterprise. Cloud computing, in its different forms, has finally come of age.

 

In summary, 2017 has taught us many tough lessons to leverage in 2018. Based on the initial read as 2017 came to a close, 2018 looks to be another incredible year for all of us! Let us take a moment to be grateful for what we have and respect those around us. The future is bright and we have much to be thankful for.

Happy New Year!

Cloud

Four expectations for AWS re:Invent


This week brings Amazon Web Services’ (AWS) annual re:Invent conference where thousands will descend upon Las Vegas to learn about cloud and the latest in AWS innovations. Having attended the conference for several years now, there are a number of trends that are common at an AWS event. One of those is the sheer number of products that AWS announces. Aside from that, there are a number of specific things I am looking for at this week’s re:Invent conference.

ENTERPRISE ENGAGEMENT

AWS has done a stellar job of attracting the startup and web-scale markets to their platform. The enterprise market, however, has proven to be an elusive customer except for a (relatively) few case examples. This week, I am looking to see how things have changed for enterprise adoption of AWS. Has AWS found the secret sauce to engage the enterprise in earnest?

PORTFOLIO MANAGEMENT

Several years back, AWS made a big point of not being one of “those” companies with a very large portfolio of products and services. Yet, several years later, AWS has indeed become a behemoth with a portfolio of products and services a mile long. This is a great thing for customers, but can have a few downsides too. Customers, especially enterprise customers, tend to make decisions that last longer than those of startup & web-scale customers. Therefore, service deprecation is a real concern with companies that a) do not have a major enterprise focus and b) have a very large portfolio. Unfortunately, this is where AWS is today. Yet, to date, AWS has not done much in the way of portfolio pruning.

HYBRID CLOUD SUPPORT

For the enterprise, hybrid is their reality. In the past, AWS has taken the position that hybrid is simply a way to onboard customers into AWS Public Cloud. Hybrid, a combination of on-premises and cloud-based resources, can indeed be a means to onboard customers into public cloud. The question is: How is AWS evolving their thinking on hybrid cloud? In addition, how has their thinking evolved to encompass hybrid cloud from the perspective of the enterprise?

DEMOCRATIZATION OF AI & ML

Several of AWS’ competitors have done a great job of democratizing artificial intelligence (AI) and machine learning (ML) tools as a means to make them more approachable. AWS was one of the first out of the gate with a strong showing of AI & ML tools a few years back. The question is: How have they evolved in the past year to make the tools more approachable for the common developer?

BONUS ROUND

As a bonus, it would be interesting if AWS announced the location of their 2nd headquarters. Will they announce it at re:Invent versus a financial analyst call? We shall see.

In summary, AWS never fails to put on a great conference with a good showing. This year should not disappoint.

Business · Cloud · Data

Microsoft empowers the developer at Connect


This week at Microsoft Connect in New York City, Microsoft announced a number of products geared toward bringing intelligence and the computing edge closer together. The tools continue Microsoft’s support of a varied and growing ecosystem of evolving solutions. At the same time, Microsoft demonstrated their insatiable drive to woo the developer with a number of tools geared toward modern development and advanced technology.

EMBRACING THE ECOSYSTEM DIVERSITY

Microsoft has tried hard in the past several years to shed their Microsoft-centric persona of a .NET and Windows world. Similar to their very vocal support for inclusion and diversity in culture, Microsoft brings that same perspective to the tools, solutions and ecosystems they support. The reality is that the world is diverse, and it is this very diversity that makes us stronger. Technology is no different.

At the Connect conference, similar to their recent Build & Ignite conferences, .NET almost became a footnote as much of the discussion was around other tools and frameworks. In many ways, PHP, Java, Node and Python appeared to get mentioned more than .NET. Does this mean that .NET is being deprecated in favor of newer solutions? No. But it does show that Microsoft is moving beyond just words in their drive toward inclusivity.

EXPANDING THE DEVELOPER TOOLS

At Connect, Microsoft announced a number of tools aimed squarely at supporting the modern developer. This is not the developer of years past. Today’s developer works in a variety of tools, with different methods and potentially in separate locations. Yet, they need the ability to collaborate in a meaningful way. Enter Visual Studio Live Share. What makes VS Live Share interesting is how it supports collaboration between developers in a more seamless way, without the cumbersome screen sharing approach previously used. The level of sophistication that VS Live Share brings is impressive in that it allows each developer to walk through code in their own way while they debug and collaborate. While VS Live Share is only in preview, other recently-announced tools are already seeing significant adoption, with downloads ranging into the millions in a short period of time.

In the same vein of collaboration and integration, DevOps is of keen interest to most enterprise IT shops. Microsoft showed how Visual Studio Team Services embraces DevOps in a holistic way. While the demonstration was impressive, the question of scalability often comes into the picture for large, integrated teams. It was mentioned that VS Team Services is currently used by the Microsoft Windows development team and their whopping 25,000 developers.

Add to scale the ability to build ‘safe code’ pipelines with automation that creates triggers to evaluate code in-process and one can quickly see how Microsoft is taking the modern, sophisticated development process to heart.

POWERING DATA AND AI IN THE CLOUD

In addition to developer tools, time was spent talking about Azure, data and Databricks. I had the chance to sit down with Databricks CEO Ali Ghodsi to talk about how Azure Databricks is bringing myriad data sources together for the enterprise. The combination of Databricks on Azure provides the scale and ecosystem that highlights the power of Databricks to integrate the varied data sources that every enterprise is trying to tap into.

MIND THE DEVELOPER GAP

Developing applications that leverage analytics and AI is incredibly important, but not a trivial task. It often requires a combination of skills and experience to fully appreciate the value that comes from AI. Unfortunately, developers often do not have the data science skills nor business context needed in today’s world. I spoke with Microsoft’s Corey Sanders after his keynote about how Microsoft is bridging the gap for the developer. Both Sanders & Ghodsi agree that the gap is an issue. However, through the use of increasingly sophisticated tools such as Databricks and Visual Studio, Sanders & Ghodsi believe Microsoft is making a serious attempt at bridging this gap.

It is clear that Microsoft is getting back to its roots and considering the importance of the developer in an enterprise’s digital transformation journey. While there are still many gaps to fill, it is interesting to see how Microsoft is approaching the evolving landscape and complexity that is the enterprise reality.

Business · Cloud · Data

Salesforce bridges the customer engagement gap for growth at Dreamforce

Last week was Salesforce’s Dreamforce conference in San Francisco, with a whopping 170,000+ attendees. So, what were the key takeaways?

Today, many enterprises either are Salesforce customers or follow the space closely, as it pertains to a key element for executive teams today: customer engagement. One of the top issues that executive teams and boards of directors face is how to create a deeper relationship with customers. Salesforce sits at this nexus. Here are the top takeaways from the conference:

UPSIDES:

  1. Partnership with Google: Salesforce announced their partnership with Google. While much of the discussion was about integration with Google Cloud and G Suite, there are benefits that both companies (and customers) could gain from the relationship. The data that Google maintains on user behavior and ad-related impact could prove useful to Salesforce customers. Salesforce, in turn, could provide integration and insights to Google AdWords. The potential from this symbiotic relationship could be significant.
  2. Democratizing Einstein & AI: Last year, Einstein provided an interesting opportunity for Salesforce and their customers. This year, Salesforce showed how providing customers with an easy way to leverage Einstein provides a powerhouse of potential to support customer engagement. Plus, proactively predicting outcomes provides insights not previously possible.
  3. myTrailhead: Personalization has long been a key success factor in engaging users. myTrailhead provides a level of personalization that allows users to work as they work best. Often, we require all users to work from a single console or interface; myTrailhead lets users customize their experience.

DOWNSIDES:

  1. Fewer Feature/ Function Announcements: There was quite a bit of discussion around the smaller number of feature/ function announcements made at Dreamforce, further suggesting that things may be slowing down for Salesforce in terms of innovation. It is hard to predict a trend from a single data point. However, there are several indicators that this may simply reflect a maturing of the innovation cycle.
  2. Expansion of Platform to Verticals: Salesforce supports a number of verticals with their solution. However, the depth to which they support the ecosystem around those verticals pales in comparison with newer startups focused on specific verticals in the CRM space.
  3. Lack of New Data Sources: Unlike its competition, Salesforce takes a partnership approach to data integration into the platform. That is, they rely on partners to bring data sources for customers to leverage. Examples are financial services, traffic, weather, and other common data elements.

REVENUE GUIDANCE

Another key question that came up was around Salesforce’s revenue guidance. Can they (essentially) double their revenue to match guidance? And if so, how? There are a number of factors that I believe will support this.

All in, Salesforce is faced with significant headwinds from both competition and adoption of innovation by enterprises. Bringing partnerships with Google and democratization of newer technologies will do well to carry them forward. There is still a significant amount of potential upside for Salesforce.

CIO · Cloud

The difference between Hybrid and Multi-Cloud for the Enterprise

Cloud computing still presents the single biggest opportunity for enterprise companies today. Even though cloud-based solutions have been around for more than 10 years now, the concepts related to cloud continue to confuse many.

Of late, it seems that Hybrid Cloud and Multi-Cloud are the latest concepts creating confusion. To make matters worse, a number of folks (inappropriately) use these terms interchangeably. The reality is that they are very different.

The best way to think about the differences between Hybrid Cloud and Multi-Cloud is in terms of orientation. One addresses a continuum of different services vertically while the other looks at the horizontal aspect of cloud. There are pros and cons to each and they are not interchangeable.

 

Multi-Cloud: The horizontal aspect of cloud

Multi-Cloud is essentially the use of multiple cloud services within a single delivery tier. A common example is the use of multiple Public Cloud providers. Enterprises typically use a multi-cloud approach for one of three reasons:

  • Leverage: Enterprise IT organizations are generally risk-averse. There are many reasons for this, to be discussed in a later post. Fear of taking risks tends to inform a number of decisions, including choice of cloud provider. One aspect is the fear of lock-in to a single provider. I addressed my perspective on lock-in here. By using a multi-cloud approach, an enterprise can hedge their risk across multiple providers. The downside is that this approach creates complexities with integration, organizational skills and data transit.
  • Best of Breed: The second reason enterprises typically use a multi-cloud strategy is due to best of breed solutions. Not all solutions in a single delivery tier offer the same services. An enterprise may choose to use one provider’s solution for a specific function and a second provider’s solution for a different function. This approach, while advantageous in some respects, does create complexity in a number of ways including integration, data transit, organizational skills and sprawl.
  • Evaluation: The third reason enterprises leverage a multi-cloud strategy is relatively temporary and exists for evaluation purposes. This third approach is actually a very common approach among enterprises today. Essentially, it provides a means to evaluate different cloud providers in a single delivery tier when they first start out. However, they eventually focus on a single provider and build expertise around that single provider’s solution.

In the end, I find that the reasons enterprises choose one of the three approaches above are often informed by their maturity and thinking around cloud in general. The question many ask is: Do the upsides of leverage or best of breed outweigh the downsides of complexity?

Hybrid Cloud: The vertical approach to cloud

Most, if not all, enterprises are using a form of hybrid cloud today. Hybrid cloud refers to the vertical use of cloud in multiple different delivery tiers. Most typically, enterprises are using a SaaS-based solution and Public Cloud today. Some may also use Private Cloud. Hybrid cloud does not require that a single application spans the different delivery tiers.

The CIO Perspective

The important takeaway is to understand how you leverage Multi-Cloud and/or Hybrid Cloud, rather than how to define the terms. Too often, we get hung up on defining terms more than understanding the benefits of leveraging the solution…or methodology. Even when discussing outcomes, we often still focus on technology.

These two approaches are not the same and come with their own set of pros and cons. The value from Multi-Cloud and Hybrid Cloud is that they both provide leverage for business transformation. The question is: How will you leverage them for business advantage?

CIO · Cloud · Data

Why are enterprises moving away from public cloud?


We often hear of enterprises that move applications from their corporate data center to public cloud, perhaps in the form of a lift and shift. But then something happens that causes the enterprise to move those applications back out of public cloud. This yo-yo effect and the related consequences create ongoing challenges that contribute to several of the items listed in Eight ways enterprises struggle with public cloud.

In order to better understand the problem, we need to work backwards to the root cause…and that often starts with the symptoms. For most, it starts with costs.

UNDERSTANDING THE ECONOMICS

The number one reason why enterprises pull workloads back out of cloud has to do with economics. For public cloud, it comes in the form of a monthly bill for public cloud services. In the post referenced above, I refer to a cost differential of 4x. That is to say that public cloud services cost 4x the corporate data center alternative for the same services. These calculations include fully-loaded total cost of ownership (TCO) numbers on both sides over a period of years to normalize capital costs.
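To make such a comparison apples-to-apples, the data center's capital expense has to be amortized over its useful life and combined with recurring operational costs before lining it up against a monthly cloud bill. A minimal sketch of that normalization, using entirely hypothetical dollar figures:

```python
# Normalize a data center's capital expense into a monthly TCO figure
# so it can be compared against a monthly public cloud bill.
# All dollar amounts below are hypothetical placeholders.

def monthly_tco(capex, opex_per_month, amortization_years):
    """Fully-loaded monthly cost: capex spread over its useful life
    plus recurring operational expense (power, cooling, staff, ...)."""
    months = amortization_years * 12
    return capex / months + opex_per_month

# Example: $360,000 of server/network/storage hardware amortized over
# 3 years, plus $15,000/month in power, cooling, space and administration.
dc_monthly = monthly_tco(capex=360_000, opex_per_month=15_000,
                         amortization_years=3)
print(f"Data center monthly TCO: ${dc_monthly:,.0f}")  # $25,000
```

The point of the exercise is that the cloud bill arrives pre-loaded with all of these costs, while the data center equivalent has to be assembled from budgets that are often invisible to IT.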

4x is a startling number and seems to fly in the face of a generally held belief that cloud computing is less expensive than the equivalent on-premises corporate data center. Does this mean that public cloud is not less expensive? Yes and no.

THE IMPACT OF LEGACY THINKING

In order to break down the 4x number, one has to understand that legacy thinking heavily influences it. While many view public cloud as less expensive, they often compare apples to oranges when comparing public cloud to corporate data centers. Many do not consider the fully-loaded corporate data center costs, which include server, network and storage…along with power, cooling, space, administrative overhead, management, real estate, etc. Unfortunately, many of these corporate data center costs are not exposed to the CIO and IT staff. For example, do you know how much power your data center consumes, or the cost of its real estate? Few IT folks do.

There are five components that influence legacy thinking:

  1. 24×7 Availability: Most corporate data centers and systems are built around 24×7 availability. There is a significant amount of data center architecture that goes into the data center facility and systems to support this expectation.
  2. Peak Utilization: Corporate data center systems are built for peak utilization whether they need that capacity regularly or not. The unused capacity sits idle until peak times arrive.
  3. Redundancy: Corporate infrastructure from the power subsystems to power supplies to the disk drives is designed for redundancy. There is redundancy within each level of data center systems. If there is a hardware failure, the application ideally will not know it.
  4. Automation & Orchestration: Corporate applications are not designed with automation & orchestration in mind. Applications are often installed on specific infrastructure and left to run.
  5. Application Intelligence: Applications assume that availability is left to other systems to manage. Infrastructure manages the redundancy and architecture design manages the scale.

Now take a corporate application built with this legacy thinking and move it directly into public cloud. It will need peak resources, in a redundant configuration, running 24×7. That is how such applications are designed; yet public cloud benefits from a very different model. Running an application in a redundant configuration at peak capacity 24×7 leads to an average of 4x the cost of the traditional data center.

This is the equivalent of renting a car every day for a full year whether you need it or not. In that model, the shared resource comes at a premium.
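The way those legacy assumptions compound into a multiplier can be illustrated with a toy model: the same workload provisioned at peak capacity, fully redundant, around the clock versus right-sized to its average utilization. All of the rates and counts below are hypothetical; only the compounding effect is the point.

```python
# Toy model of how legacy assumptions multiply a public cloud bill.
# Hypothetical rates; the point is the compounding, not the numbers.

hourly_rate = 1.00        # cost of one instance-hour (hypothetical)
peak_instances = 10       # capacity sized for peak load
avg_utilization = 0.50    # fraction of peak actually used on average
redundancy_factor = 2     # a full redundant copy of everything
hours_per_month = 730

# Legacy thinking: run peak capacity, redundantly, 24x7.
legacy_cost = (peak_instances * redundancy_factor
               * hours_per_month * hourly_rate)

# Cloud-native thinking: autoscale to average load and let the
# platform handle failure instead of a permanent redundant copy.
cloud_native_cost = (peak_instances * avg_utilization
                     * hours_per_month * hourly_rate)

print(f"Legacy-style bill: ${legacy_cost:,.0f}")
print(f"Right-sized bill:  ${cloud_native_cost:,.0f}")
print(f"Multiplier:        {legacy_cost / cloud_native_cost:.0f}x")
```

With these particular assumptions the multiplier works out to 4x; real applications will land higher or lower depending on how bursty their load is and how much redundancy they actually carry over.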

THE SOLUTION IS IN PLANNING

Is this the best way to leverage public cloud services? Knowing the details of what to expect leads one to a different approach. Can public cloud benefit corporate enterprise applications? Yes. Does it need planning and refactoring? Yes.

By refactoring applications to leverage the benefits of public cloud rather than assume legacy thinking, public cloud has the potential to be less expensive than traditional approaches. Obviously, each application will have different requirements and therefore different outcomes.

The point is to shed legacy thinking and understand where public cloud fits best. Public cloud is not the right solution for every workload. From those applications that will benefit from public cloud, understand what changes are needed before making the move.

OTHER REASONS

There are other reasons that enterprises exit public cloud services beyond just cost. Those may include:

  1. Scale: Either due to cost or significant scale, enterprises may find that they are able to support applications within their own infrastructure.
  2. Regulatory/ Compliance: Enterprises may use test data with applications but then move the application back to corporate data centers when shifting into production with regulated data. Or compliance requirements may force the need to have data resources local to maintain compliance. Sovereignty issues also drive decisions in this space.
  3. Latency: There are situations where public cloud may be great on paper, but in real-life latency presents a significant challenge. Remote and time-sensitive applications are good examples.
  4. Use-case: The last catch-all is where applications have specific use-cases where public cloud is great in theory, but not the best solution in practice. Remember that public cloud is a general-purpose infrastructure. As an example, there are application use-cases that need fine-tuning that public cloud is not able to support. Other use-cases may not support public cloud in production either.

The bottom line is to fully understand your requirements, think ahead and do your homework. Enterprises have successfully moved traditional corporate applications to public cloud…even those with significant regulatory & compliance requirements. The challenge is to shed legacy thinking and consider where and how best to leverage public cloud for each application.