
Understanding the value of data integration

To understand the value of data integration, one has to first understand the changing data landscape. In the past few years, more data has been created than existed in all of history prior to that. In 2014, I penned a post asking 'Are enterprises prepared for the data tsunami?' When it comes to data, enterprises of all sizes and maturity levels face two core issues: 1) how to effectively manage the sheer volume of data in a meaningful way and 2) how to extract insights from that data. Unfortunately, the traditional ways of managing data start to break down in the face of these new challenges.

DIVERSE DATA SETS

In the above-mentioned post, I referenced an IDC report suggesting that by 2020 the total amount of data would reach 40,000 exabytes, or 40 trillion gigabytes. That is more than 5,200 gigabytes for every man, woman and child in 2020.

However, unlike data in the past, this new data will come from an increasingly varied list of sources. Some of it will be structured, some unstructured, and then there is the metadata derived from analyzing these varied data sets. All of it needs to be leveraged by the transformational enterprise.

In the past, one might have pooled all of this data into a classic data warehouse. Unfortunately, many of the new data types do not fit nicely into this approach. Then came the data lake as a way to simply pool all of this data. That approach has met with its own challenges, as many enterprises watch their data lakes turn into data swamps.

Even beyond data generated internally, enterprises are increasing their reliance on externally sourced data. Since this data is not created by the enterprise, there are limits on how it can be leveraged. In addition, bringing all of this data into the enterprise is neither simple nor, in many cases, feasible.

Beyond their sheer variety, these new data sets create 'data gravity' as they grow, forming a stronger and stronger bond between the data set and the applications that leverage it. The larger the data set, the greater its 'gravity' and the harder it becomes to move. All of this creates significant friction against any movement of data.

VALUE OF INTEGRATING DATA

The solution rests with data integration: essentially, leave data where it resides and use integration methods across the various data sets in order to create insights. There are two components to consider when integrating data.

There is a physical need for data integration and one that is more logical in nature. The physical component is how to physically connect the different data sources. This is easier said than done. It was already challenging when all of the data was managed within the enterprise; today, the data resides in the hands of many other players and platforms, which adds complexity to integration efforts. Modern data integration methods rely on Application Programming Interfaces (APIs) to create these integration points, and there are security ramifications to consider as well.
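
As a rough illustration of the physical side, consider the minimal sketch below. It assumes two hypothetical REST endpoints (an internal CRM and an external data vendor) and bearer-token authentication; none of the URLs, tokens or field names are real.

    import requests

    # Hypothetical endpoints standing in for an internal CRM and an external
    # data vendor; neither URL nor token is real.
    CRM_API = "https://crm.example.com/api/customers"
    VENDOR_API = "https://vendor.example.com/api/firmographics"

    def fetch_records(url, token, params=None):
        """Pull records from a REST API using a bearer token over HTTPS."""
        response = requests.get(
            url,
            headers={"Authorization": f"Bearer {token}"},
            params=params or {},
            timeout=30,
        )
        response.raise_for_status()  # surface authentication and availability issues early
        return response.json()

    # Leave each data set where it resides and integrate at query time
    # rather than copying everything into a central store.
    customers = fetch_records(CRM_API, token="CRM_TOKEN")
    firmographics = fetch_records(VENDOR_API, token="VENDOR_TOKEN")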

The logical integration of data often centers on the customer. One of the core objectives for enterprises today is customer engagement, and enterprises are finding ways to learn more about their customers in an effort to build a more holistic profile that ultimately leads to a stronger relationship. Not all of that data is sourced internally. This really is a case of 1+1=3, where even small insights can lead to a larger impact when combined.
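
To make the 1+1=3 point concrete, here is a minimal sketch assuming an internally sourced data set and an externally sourced one that share nothing but a customer identifier; the records and fields are purely illustrative.

    import pandas as pd

    # Purely illustrative records; in practice these would come from the
    # internal systems and external sources described above.
    internal = pd.DataFrame([
        {"customer_id": 101, "lifetime_value": 4200, "open_tickets": 3},
        {"customer_id": 102, "lifetime_value": 980, "open_tickets": 0},
    ])
    external = pd.DataFrame([
        {"customer_id": 101, "industry": "Retail", "employee_count": 5200},
        {"customer_id": 102, "industry": "Logistics", "employee_count": 310},
    ])

    # The logical integration: a shared customer key ties the sets together,
    # yielding a profile neither source could provide on its own.
    profile = internal.merge(external, on="customer_id", how="left")
    print(profile)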

THE INTERSECTION OF DATA INTEGRATION AND ADVANCED FUNCTIONS

Data integration is a deep and complicated subject that is evolving quickly. Newer advancements in the Artificial Intelligence (AI) space are leading enterprises to insights they had not even thought to look for. Imagine a situation where you thought you knew your customer, but the system surfaced aspects you had not considered. AI has the opportunity to significantly augment human capability, creating more accurate insights, faster.

Beyond AI, other newer functions such as Machine Learning (ML) and the Internet of Things (IoT) present new sources of data to further enhance insights. It should be noted that neither ML nor IoT can function in a meaningful way without leveraging data integration.

DATA INTEGRATION LEADS TO SPEED AND INSIGHTS…AND CHALLENGES

Enterprises that use AI and ML to augment their efforts find increased value from both the insights and the speed with which they can respond. In a world where speed and accuracy are becoming strong competitive differentiators, leveraging as much data as possible is key. To harness that sheer amount of data, enterprises must embrace data integration to remain competitive.

At the same time, enterprises face challenges from new regulations such as the General Data Protection Regulation (GDPR). GDPR has many facets and complexities that will only add to the complexity of data integration and management.

While enterprises may have used custom approaches to solve the data integration problem in the past, today's complexities demand a different approach. The combination of these challenges pushes enterprises toward advanced tools that assist in integrating data to gain greater insights.

 

This post sponsored by:


https://www.sap.com/intelligentdata


Riverbed extends into the cloud


One of the most critical, but often overlooked, components of a system is the network. Enterprises continue to spend considerable amounts of money on network optimization as part of their core infrastructure. Traditionally, enterprises have controlled much of the network between application components. Most of the time, the different tiers of an application were collocated in the same data center or spread across multiple data centers linked by dedicated network connections that the enterprise controlled.

The advent of cloud changed all of that. Now, different tiers of an application may be spread across different locations, running on systems that the enterprise does not control. This lack of control provides a new challenge to network management.

It is not just applications that are moving; the data moves with them. As applications and data move beyond the bounds of the enterprise data center, so does the need to address increasingly dispersed network performance requirements. The question is: how do you address network performance management when you no longer control the underlying systems and network infrastructure components?

Riverbed is no stranger to network performance management. Their products are widely used across enterprises today. At Tech Field Day's Cloud Field Day 3, I had the chance to meet with the Riverbed team to discuss how they are extending their technology to address the changing requirements that cloud brings.

EXTENDING NETWORK PERFORMANCE TO CLOUD

Traditionally, network performance management involved hardware appliances that sat at the edges of your applications or data centers. Unfortunately, in a cloud-based world, the enterprise has access to neither the cloud data center nor its network egress points.

Network optimization in cloud requires an entirely different approach. Add to this that application services are moving toward ephemeral behaviors and one can quickly see how this becomes a moving target.

Riverbed takes a somewhat traditional approach to addressing the network performance management problem in the cloud. Riverbed gives the enterprise the option to run its software either as a 'sidecar' to the application or as part of the cloud-based container itself.
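
For readers unfamiliar with the sidecar pattern, the sketch below expresses it as a Python dictionary that mirrors a Kubernetes pod manifest. This is a generic illustration of the pattern, not a Riverbed deployment artifact; the image names are placeholders.

    import json

    # Generic sidecar layout mirroring a Kubernetes pod manifest. The
    # "netperf-agent" image is a placeholder, not an actual product name.
    pod_spec = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "web-frontend"},
        "spec": {
            "containers": [
                {
                    # the application container itself
                    "name": "app",
                    "image": "example/web-frontend:1.4",
                    "ports": [{"containerPort": 8080}],
                },
                {
                    # the sidecar shares the pod's network namespace, so it can
                    # observe the app's traffic without modifying the app image
                    "name": "netperf-agent",
                    "image": "example/netperf-agent:latest",
                },
            ]
        },
    }

    print(json.dumps(pod_spec, indent=2))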

EXTENDING THE DATA CENTER OR EMBRACING CLOUD?

There are two schools of thought on how one engages a mixed environment of traditional data center assets and cloud. The first is to extend the existing data center so that the cloud is viewed as simply another data center. The second is to change the perspective so that the boundary shrinks to the application, or better yet, the service level. The latter construct is typical of cloud-native applications.

Today, Riverbed has taken the former approach, viewing the cloud as another data center in your network. Their SteelFusion product, for example, works as if the cloud were another data center in the network. Unfortunately, this only works when you have consolidated your cloud-based resources into specific locations.

Most enterprises take a fragmented approach to their use of cloud-based resources today. A given application may consume resources across multiple cloud providers and locations due to specific resource requirements, which shows up in how enterprises embrace a multi-cloud strategy. Unfortunately, consolidating cloud-based resources works against one of the core value propositions of cloud: the ability to leverage different cloud solutions, resources and tools.

UNDERSTANDING THE RIVERBED PORTFOLIO

During the session with the Riverbed team, it was challenging to understand how the different components of their portfolio work together to address varied enterprise requirements. The portfolio does contain extensions to existing products that start to bring cloud into the network fold. Riverbed also discussed their Steelhead SaaS product, but it was unclear how it fits into a cloud-native application model. On the upside, Riverbed already supports multiple cloud services by allowing their SteelConnect Manager product to connect to both Amazon Web Services (AWS) and Microsoft Azure; on AWS, SteelConnect Manager can run within an AWS VPC.

Understanding the changing enterprise requirements will become increasingly difficult as the persona of the Riverbed buyer changes. Historically, the Riverbed customer was a network administrator or infrastructure team member. As enterprises move to cloud, the buyer shifts to the developer and, in some cases, the business user. These new personas are looking for quick access to resources and tools in an easy-to-consume way, much like how existing cloud resources are consumed. They are not accustomed to working with infrastructure, nor do they have an interest in doing so.

PROVIDING CLARITY FOR THE CHANGING CLOUD CUSTOMER

Messaging and solutions geared to these new buyer personas need to be clear and concise. Unfortunately, the session with the Riverbed team was very much focused on their traditional customer: the network administrator. At times, they seemed somewhat confused by questions about cloud-native application architectures.

One positive indicator is that Riverbed acknowledged that the end-user experience is really what matters, not network performance. In Riverbed parlance, they call this End User Experience Management (EUEM). In a cloud-based world, this will guide the Riverbed team well as they consider what serves as their North Star.

As enterprises embrace cloud-based architectures more fully, Riverbed will need to evolve the model that drives its product portfolio, architecture and go-to-market strategy. Based on the current state, they have made some inroads, but have a long way to go.

Further Reading: The difference between hybrid and multi-cloud for the enterprise


Morpheus Data brings the glue to multi-cloud management


Enterprises across the globe are starting to leverage cloud-based resources in a multitude of ways. However, there is no one-size-fits-all approach to cloud that makes sense for the entire enterprise portfolio. This leads to a rise in multi-cloud deployments: any given enterprise will use a variety of different cloud-based services depending on the specific requirements of each workload. It is important to understand the difference between multi-cloud and hybrid cloud.

This cloud 'sprawl' creates an increasingly complicated management problem, as each cloud provider uses a different approach to managing its services. Layer in management processes, automation routines and management tools, and one can quickly understand the challenge. Add the fact that any given application may use a different combination of cloud services, and the problem gets exponentially more complicated with each workload.

MORPHEUS DATA PROVIDES THE GLUE

At Tech Field Day’s Cloud Field Day 3, I had the opportunity to meet with the team from Morpheus Data.

Morpheus Data addresses this complicated web of tools and services by providing an abstraction layer on top of them. More specifically, Morpheus Data creates an abstraction between provisioning and the underlying infrastructure. To date, they support 49 out-of-the-box service integrations covering a variety of cloud services, governance tools, management tools and infrastructure.
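
To illustrate what such an abstraction layer looks like in principle, here is a bare-bones sketch. This is not Morpheus Data's actual API; the provider classes, method names and parameters are hypothetical.

    from abc import ABC, abstractmethod

    # Hypothetical abstraction between a provisioning request and the clouds
    # underneath it; real code would call each provider's SDK.
    class CloudProvider(ABC):
        @abstractmethod
        def provision(self, name: str, cpu: int, memory_gb: int) -> str:
            """Create an instance and return its identifier."""

    class AwsProvider(CloudProvider):
        def provision(self, name, cpu, memory_gb):
            # real code would call the AWS SDK here
            return f"aws:{name}:{cpu}x{memory_gb}GB"

    class AzureProvider(CloudProvider):
        def provision(self, name, cpu, memory_gb):
            # real code would call the Azure SDK here
            return f"azure:{name}:{cpu}x{memory_gb}GB"

    def deploy(provider: CloudProvider, name: str) -> str:
        # callers express what they need; the provider hides how it is built
        return provider.provision(name, cpu=2, memory_gb=8)

    print(deploy(AwsProvider(), "reporting-api"))
    print(deploy(AzureProvider(), "reporting-api"))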

Providing governance and automation is key to any multi-cloud or hybrid-cloud deployment. Leveraging a solution like Morpheus Data will help streamline CloudOps & DevOps efforts through their integration processes.

One interesting aspect of Morpheus Data's solution is the ability to establish application templates that span a number of different tools, services and routines. The templates assist with deployment and can set time limits on specific services. This is especially handy for avoiding one form of sprawl known as service abandonment, where a service is left running and accruing cost even though it is no longer used.
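
As a rough sketch of how a template-level time limit can flag abandoned services, consider the following; the template fields and threshold are hypothetical, not Morpheus Data's actual schema.

    from datetime import datetime, timedelta, timezone

    # Hypothetical template: the deployment definition carries its own expiry,
    # so anything provisioned from it can be swept up automatically later.
    template = {
        "name": "load-test-stack",
        "services": ["web", "worker", "postgres"],
        "ttl_hours": 72,  # tear down three days after provisioning
    }

    def is_abandoned(provisioned_at: datetime, ttl_hours: int) -> bool:
        """True once a deployment has outlived the lifetime its template allows."""
        return datetime.now(timezone.utc) - provisioned_at > timedelta(hours=ttl_hours)

    provisioned = datetime(2018, 5, 1, tzinfo=timezone.utc)
    if is_abandoned(provisioned, template["ttl_hours"]):
        print(f"{template['name']} exceeded its time limit; schedule teardown to stop the spend")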

Much of Morpheus Data's effort is geared toward 'net-new' deployments to cloud. Legacy workloads will require reworking before they can fully take advantage of cloud-based resources. I wrote about the challenges of moving legacy workloads to public cloud in these posts:

LOOKING BEYOND THE TOOL

While Morpheus Data provides technology to address the systemic complexity of the tooling, it does not address the people component. To be fair, it is not clear that any tool can fix the people component. Specifically, in order to truly leverage good governance and automation routines, one needs to come to grips with the organizational and cultural changes required to support such approaches.

In order to address the people component, it is helpful to break down the personas. The key three are the developer, the infrastructure administrator and the executive. Each of these personas has different requirements and interests that will impact how services are selected and consumed.

IN SUMMARY

Morpheus Data is going after a space that is both huge and highly complicated. A big challenge for the team will be to focus on the most critical areas without trying to cover every tool, process and model. This is really a question of going broad or going deep; you can't do both.

In addition, it is clear that Morpheus Data has a good start, but it would benefit from bringing operational data and costs into the factors that drive decisions about which services to use. The team already includes some cost components, but they are not as dynamic as enterprises will need moving forward.

Overall, the Morpheus Data solution looks like a strong entry into the increasingly complicated multi-cloud space. Every enterprise will face some form of complexity in dealing with multi-cloud and hybrid cloud, and could therefore benefit from a solution that helps streamline the processes involved. It will be interesting to see how the company and its solution evolve over time to address this space.


Containers in the Enterprise

Containers are all the rage right now, but are they ready for enterprise consumption? It depends on whom you ask, but here’s my take. Enterprises should absolutely be considering container architectures as part of their strategy…but there are some considerations before heading down the path.

Container conferences

Talking with attendees at Docker's DockerCon conference and Red Hat's Summit this week, you hear from a number of proponents and live enterprise users. For those who are not familiar with containers, the fundamental concept is a fully encapsulated environment that supports application services. Containers should not be confused with virtualization, nor with microservices, which can leverage containers but do not require them.

A quick rundown

Here are some quick points:

  • Ecosystem: I’ve written before about the importance of a new technology’s ecosystem here. In the case of containers, the ecosystem is rich and building quickly.
  • Architecture: Containers allow applications to break apart into smaller components. Each of the components can then spin up/down and scale as needed. Of course, automation and orchestration come into play.
  • Automation/Orchestration: Unlike typical enterprise applications that are installed once and run 24×7, the best container architectures spin components up and down and scale them as needed. Realistically, the only way to do this efficiently is with automation and orchestration; a minimal sketch of this short-lived pattern follows the list.
  • Security: There is quite a bit of concern about container security. With potentially thousands or tens of thousands of containers running, a compromise could have significant consequences. If containers are architected to be ephemeral, however, the risk footprint shrinks dramatically.
  • DevOps: Container-based architectures can run without a DevOps approach, but with limited success. DevOps brings a different methodology that works hand-in-hand with containers.
  • Management: There are concerns that the short lifespan of a container creates challenges for audit trails. Using traditional audit approaches, this would be true. Newer methods provide real-time audit capability.
  • Stability: The $64k question: Are containers stable enough for enterprise use? Absolutely! The reality is that legacy applications would not move directly to containers; only applications that are significantly modified or rewritten would leverage them. New applications are able to leverage containers without increasing risk.
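
As a small illustration of the ephemeral pattern referenced above, the sketch below uses the Docker SDK for Python to run a throwaway container that is removed the moment it exits; the image and command are arbitrary placeholders.

    import docker  # pip install docker; assumes a local Docker daemon is available

    client = docker.from_env()

    # Run a short-lived, throwaway container: it starts, does its work and is
    # removed as soon as it exits, leaving nothing long-lived to patch or audit.
    output = client.containers.run(
        "alpine:3.18",
        ["sh", "-c", "echo batch processed at $(date -u)"],
        remove=True,  # delete the container once it finishes
    )
    print(output.decode().strip())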

Cloud-First, Container-First

Companies are looking to move faster and faster. In order to do so, problems need to be broken down into smaller components. As those smaller components become microservices (vs. large monolithic applications), containers start to make sense.

Containers represent an elegant way to leverage smaller building blocks. Some have equated containers to the Lego building blocks of enterprise application architecture. The days of large, monolithic enterprise applications are past. Today's applications may be complex in sum, but they are composed of much smaller building blocks. These smaller blocks provide the nimbleness and speed that enterprises are clamoring for today.

Containers are more than Technology

Containers alone are not enough; other components are needed for success. Containers represent the technology building blocks, while culture and process are needed to support the change in technology. DevOps provides the fluid that lubricates the integration of the three.

Changing the perspective

As with other newer technologies on the way, other aspects of the IT organization must change too. Whether you are a CIO, IT leader, developer or member of an operations team, the very fundamentals of how we function must change in order to truly embrace and adopt these newer methodologies.

Containers are ready for the enterprise…if the other aspects are considered as well.


HP Software takes on the Idea Economy

The Idea Economy is being touted pretty heavily here at HP Discover in CEO Meg Whitman's keynote. Paul Muller (@xthestreams), VP of Strategic Marketing at HP Software, took us on a journey through how HP Software is thinking about solving today's problems and preparing for the future state. Unlike many of the other presentations, the journey is just as important as the projects. It helps organizations, partners, customers and providers align their vision and understand how best to respond to the changing business climate.

The combination of non-digital natives looking at new technology one way while millennials approach it in a completely fresh way creates a bit of a challenge. Millennials often create and support disruption, quite a different approach from their non-digital-native counterparts. According to HP's Muller, a full "25% of organizations will fail to make it to the next stage through disruption." If you're an existing legacy enterprise, how do you embrace the idea economy while at the same time running existing systems? This is a serious, but very real, challenge for any established enterprise today.

Muller then turned the conversation to 'bi-modal IT' as a potential answer to the problem. Bi-modal IT is also discussed as 'hybrid IT' or two-speed IT, addressing the difference between running existing core systems and innovating with new products and services. In addition to the technology challenges, bi-modal IT creates a number of other challenges involving process and people. Side note: Look for an upcoming HP Discover Performance Weekly episode we just recorded on the subject of bi-modal IT with Paul Muller and Paul Chapman, CIO of HP Software. In the episode, we take a deeper dive from a number of perspectives.

HP Software looks at five areas that people need to focus on:

  1. Service Broker & Builder: Recognize that the problem is not a buy vs. build question any longer. Today, both are needed.
  2. Speed: The speed with which a company innovates by turning an idea into software is key. Most companies are just terrible at this process. DevOps plays a key role in improving the situation.
  3. BigData & Connected Intelligence: Understand the differences between what customers ask for vs. what they use. BigData can provide insights here.
  4. User Experience: What is the digital experience across platforms and functions?
  5. Security: Securing digital assets is key. 33% of successful break-ins have been related to a vulnerability that had been known for two years (Stuxnet).

HP leverages their Application Lifecycle Management process to address each of these five areas with data playing a fundamental role.

There was some discussion about the maturity cycle of companies regarding BigData. Trends show that companies start by experimenting in the cloud with data from outside the enterprise, using data that is neither sensitive nor regulated. When it's time to move into production, the function is brought back in-house. The next step in the maturity cycle is to move production BigData functions back out into the cloud. Very few are doing this today, but it is the current trend.

And finally, a core pain point that is still…still not managed well by companies: backup and disaster recovery. This is nothing new, but it is an area ripe for disruption.

Overall, it was refreshing to hear more about the thought leadership that goes into the HP Software machine rather than a rundown of products and services.