
Riverbed extends into the cloud


One of the most critical, yet often overlooked, components of a system is the network. Enterprises continue to spend considerable money on network optimization as part of their core infrastructure. Traditionally, enterprises controlled much of the network between application components: the different tiers of an application were usually collocated in the same data center, or spread across multiple data centers linked by dedicated network connections the enterprise controlled.

The advent of cloud changed all of that. Now the different tiers of an application may be spread across different locations, running on systems the enterprise does not control. That lack of control presents a new challenge for network management.

Data moves along with the applications. As applications and data move beyond the bounds of the enterprise data center, so does the need to address increasingly dispersed network performance requirements. The question is: How do you manage network performance when you no longer control the underlying systems and network infrastructure?

Riverbed is no stranger to network performance management; their products are widely used across enterprises today. At Tech Field Day’s Cloud Field Day 3, I had the chance to meet with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that sat at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in the cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors, and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to the network performance management problem in the cloud: the enterprise can run Riverbed’s software either as a ‘sidecar’ to the application or as part of the cloud-based container.
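To make the ‘sidecar’ idea concrete, here is a minimal sketch of the general pattern, not Riverbed’s implementation: a small proxy deployed next to the application that forwards its traffic while recording timing data. The ports and addresses are hypothetical.

```python
# Sidecar-style TCP proxy sketch: forwards traffic to the application
# process while recording upstream connect latency. Illustrative only;
# ports and addresses are hypothetical, not any vendor's defaults.
import socket
import threading
import time

APP_HOST, APP_PORT = "127.0.0.1", 8080   # the application container (assumed)
LISTEN_PORT = 9090                        # the sidecar's public port (assumed)

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def handle(client):
    start = time.monotonic()
    upstream = socket.create_connection((APP_HOST, APP_PORT))
    print(f"upstream connect took {(time.monotonic() - start) * 1000:.1f} ms")
    # Shuttle bytes in both directions until either side closes.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", LISTEN_PORT))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The appeal of the pattern is that the measurement point travels with the application, wherever the cloud provider happens to place it.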

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on engaging a mixed environment of traditional data center assets and cloud. The first is to extend the existing data center so that the cloud is viewed as simply another data center. The second is to change perspective so that the constraints are reduced to the application…or, better yet, the service level. The latter construct is typical of cloud-native applications.

Today, Riverbed has taken the former approach: they view the cloud as another data center in your network. Riverbed’s SteelFusion product, for example, works as if the cloud were another data center in the network. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises take a very fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements, which shows up in how enterprises embrace a multi-cloud strategy. Unfortunately, consolidating cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their SteelHead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed already supports multiple cloud services: their SteelConnect Manager product connects to both Amazon Web Services (AWS) and Microsoft Azure, and on AWS it can be deployed within a Virtual Private Cloud (VPC).
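For readers who haven’t worked with VPCs, the sketch below shows what deploying into a VPC means in practice: carving out a private network in AWS that a virtual appliance can live in. It uses plain boto3; none of this is Riverbed’s API, and the region, CIDR blocks and names are made up for illustration.

```python
# Generic sketch: provisioning the AWS VPC a virtual network appliance
# would be deployed into. Plain boto3 only; the CIDR blocks, region,
# and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the private network the appliance will live in.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

ec2.create_tags(
    Resources=[vpc_id],
    Tags=[{"Key": "Name", "Value": "appliance-demo-vpc"}],  # hypothetical name
)

# Carve out a subnet where the appliance instance would be launched.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")
print("appliance subnet:", subnet["Subnet"]["SubnetId"])
```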

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer becomes the developer and, in some cases, the business user. These new personas want quick access to resources and tools in an easy-to-consume way, much as existing cloud resources are consumed. They are neither accustomed to working with infrastructure nor interested in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions that addressed cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience, not network performance, is what really matters. In Riverbed parlance, this is End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider their North Star.
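EUEM implies measuring what the user actually waits for rather than any single network hop. As a rough illustration of that distinction (not Riverbed’s tooling), the snippet below times a full HTTP request from the client’s side; the URL is a placeholder.

```python
# Minimal end-user-experience probe: time the request as the user sees
# it, rather than one network segment. The URL is a placeholder.
import time
import requests

URL = "https://example.com/"  # placeholder endpoint

start = time.monotonic()
response = requests.get(URL, timeout=10)
total_ms = (time.monotonic() - start) * 1000

# response.elapsed covers send -> response headers; total includes body.
print(f"status={response.status_code} "
      f"ttfb~={response.elapsed.total_seconds() * 1000:.0f} ms "
      f"total={total_ms:.0f} ms body={len(response.content)} bytes")
```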

As enterprises embrace cloud-based architectures more fully, Riverbed will need to evolve the model that drives their product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise


Droplet Computing makes a big splash at Cloud Field Day 3


Every so often a company catches your eye, not because of flashy marketing, but because it is solving a really interesting problem with a clever approach. That happened at Tech Field Day’s Cloud Field Day 3, where Droplet Computing officially came out of stealth and held the company’s public launch.

LIBERATION OF APPLICATIONS

Droplet Computing’s core value proposition is containerizing applications in an effort to modernize infrastructure. Essentially, Droplet Computing can take an application and create a container around it, establishing an abstraction layer between the application and the underlying operating system (OS). Droplet Computing notes that components of the original OS are needed inside the container to support the application, but not the full OS. Once the application is containerized, it can move to other platforms: a newer OS, a different platform, even a mobile device.

The underlying technology uses a combination of Wine and WebAssembly to containerize the application.
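Droplet Computing did not walk through its packaging internals, but the Wine half of that combination can be sketched generically: launching an unmodified Windows binary on a non-Windows host, with WINEPREFIX giving each application its own isolated ‘C: drive’. This is plain Wine usage, not Droplet’s product, and the paths are hypothetical.

```python
# Rough illustration of the Wine half of that idea: running an
# unmodified Windows executable on a non-Windows host. Plain Wine,
# not Droplet Computing's product; paths are hypothetical.
import subprocess

LEGACY_APP = "/opt/legacy/cnc_control.exe"  # hypothetical Windows binary

# WINEPREFIX isolates the app's "C: drive" into its own directory,
# giving a crude per-application sandbox.
result = subprocess.run(
    ["wine", LEGACY_APP],
    env={"WINEPREFIX": "/opt/legacy/wine-prefix", "PATH": "/usr/bin:/bin"},
    capture_output=True,
    text=True,
)
print(result.returncode, result.stderr[:200])
```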

Many applications still in use across the globe cannot be upgraded for a myriad of reasons. Unfortunately, this limits the operator’s ability to modernize the rest of the infrastructure while these older applications remain in use.

The solution is not limited to older applications: the same technology could give current applications mobility between different system types, OSes and the like. However, a number of competing products provide similar functions for current applications.

Several of the use cases the Droplet Computing team mentioned included custom applications tied to CNC machines, MRI devices and the like. Think of all the Windows XP-based applications and older custom machines still in use today.

REDUCING THE CYBERSECURITY FOOTPRINT

There is one clever side effect to encapsulating the application so that the underlying hardware and OS can be upgraded without touching the application: it reduces the cybersecurity footprint and risk for that system. Does it eliminate the risk completely? No. But leveraging modern hardware and a modern OS makes a real dent in the potential risk.

IN SUMMARY

Droplet Computing is not a silver bullet that will magically modernize your entire environment, but it does provide a level of abstraction for those older applications still in wide use, allowing enterprises to bring legacy applications forward through modernization.

At the same time, it addresses a goal every enterprise shares: reducing the cybersecurity footprint. In a world where the risks from cyber-attacks keep increasing, anything that shrinks that footprint is a welcome approach.

Droplet Computing’s product is currently in ‘pre-GA’ and slated to move to GA by mid-May.


Containers in the Enterprise

Containers are all the rage right now, but are they ready for enterprise consumption? It depends on whom you ask, but here’s my take: enterprises should absolutely consider container architectures as part of their strategy…but there are some considerations before heading down that path.

Container conferences

Talking with attendees at Docker’s DockerCon conference and Red Hat’s Summit this week, you hear a number of proponents and live enterprise users. For those not familiar with containers, the fundamental concept is a fully encapsulated environment that supports application services. Containers should not be confused with virtualization, nor with microservices, which can leverage containers but do not require them.

A quick rundown

Here are some quick points:

  • Ecosystem: I’ve written before about the importance of a new technology’s ecosystem. In the case of containers, the ecosystem is rich and building quickly.
  • Architecture: Containers allow applications to be broken into smaller components, each of which can spin up/down and scale as needed. Of course, automation and orchestration come into play.
  • Automation/Orchestration: Unlike typical enterprise applications that are installed once and run 24×7, the best container architectures spin up/down and scale as needed. Realistically, the only way to do this efficiently is with automation and orchestration (a minimal sketch follows this list).
  • Security: There is quite a bit of concern about container security. With potentially thousands or tens of thousands of containers running, a compromise could have significant consequences. If containers are architected to be ephemeral, though, the risk footprint shrinks dramatically.
  • DevOps: Container-based architectures can run without a DevOps approach, but with limited success. DevOps brings a methodology that works hand-in-hand with containers.
  • Management: There are concerns that the short lifespan of a container creates challenges for audit trails. With traditional audit approaches, this would be true; newer methods provide real-time audit capability.
  • Stability: The $64k question: Are containers stable enough for enterprise use? Absolutely! Realistically, legacy applications would not move directly to containers; only applications that are significantly modified or rewritten would leverage them, and new applications can do so without increasing risk.
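As promised in the automation/orchestration bullet above, here is a minimal sketch of the spin-up/spin-down lifecycle using Docker’s Python SDK (pip install docker). The image and the fixed scale factor are illustrative only; in a real deployment an orchestrator such as Kubernetes would make these decisions.

```python
# Sketch of the spin-up/spin-down lifecycle using Docker's Python SDK.
# The image and fixed count of three are illustrative assumptions.
import docker

client = docker.from_env()

# "Scale up": start three ephemeral copies of a stateless service.
workers = [
    client.containers.run("nginx:alpine", detach=True, auto_remove=True)
    for _ in range(3)
]
print("running:", [w.short_id for w in workers])

# "Scale down": stop them. auto_remove=True deletes each container on
# stop, which is what makes the ephemeral security argument work.
for w in workers:
    w.stop()
```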

Cloud-First, Container-First

Companies are looking to move faster and faster. To do so, the problem needs to be broken into smaller components. As those smaller components become microservices (vs. large monolithic applications), containers start to make sense.

Containers are an elegant way to leverage smaller building blocks; some have equated them to the Lego blocks of enterprise application architecture. The days of large, monolithic enterprise applications are past. Today’s applications may be complex in sum, but they are a culmination of much smaller building blocks, and those smaller blocks provide the nimbleness and speed enterprises are clamoring for today.

Containers are more than technology

Success requires more than the containers themselves. Containers are the technology building block; culture and process are needed to support the change in technology, and DevOps provides the fluid that lubricates the integration of the three.

Changing the perspective

As newer technologies arrive, other aspects of the IT organization must change too. Whether you are a CIO, IT leader, developer or member of an operations team, the very fundamentals of how we function must change in order to truly embrace and adopt these newer methodologies.

Containers are ready for the enterprise…if the other aspects are considered as well.