
Rubrik continues their quest to protect the enterprise


Data protection is all the rage right now. With data moving beyond the corporate data center to multiple locations, including cloud, the complexity has increased significantly. What is data protection? It generally covers a combination of backup, restore, disaster recovery (DR) and business continuity (BC). While none of this is new, most enterprises have struggled for decades to back up their systems in a way that ensures a) the data is protected, b) it can be restored if needed and c) it can be restored in a timely fashion. Put a different way: BC/DR is still one of the most poorly managed parts of an IT operation. Add cloud to the mix and one can see where the wheels start to fall off.

The irony is that, even after decades of practice, enterprises still struggle to balance the demands of DR/BC in a meaningful way. The reasons are too numerous to cover in a single post. This is an industry screaming for disruption. Enter Rubrik.

RUBRIK BRINGS A FRESH PERSPECTIVE TO AN OLD PROBLEM

A couple of weeks back, I caught up with the Rubrik team at Tech Field Day’s Cloud Field Day 3. Rubrik came into the market a few years back and has continued their drive to solve this old but growing problem.

Unlike traditional solutions, Rubrik takes a modern approach to their architecture. Everything Rubrik does is driven through an API. This API-centric architecture gives Rubrik modularity and flexibility in their approach. API-centric architectures are a must in a cloud-based world.
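
To make the API-centric point concrete, here is a minimal sketch of what driving a backup platform entirely through REST calls could look like. The base URL, endpoint path, payload fields and token handling are illustrative assumptions for a generic backup API, not Rubrik’s documented interface.

```python
import requests

BASE_URL = "https://backup.example.com/api/v1"           # hypothetical endpoint, not Rubrik's documented API
HEADERS = {"Authorization": "Bearer example-api-token"}  # assume simple token-based auth

def take_on_demand_snapshot(vm_id: str, sla_name: str) -> str:
    """Request an on-demand snapshot of a VM and return the async job id (illustrative only)."""
    resp = requests.post(
        f"{BASE_URL}/vms/{vm_id}/snapshot",
        json={"slaDomain": sla_name},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("jobId", "")

# The same call a UI button would make can be scripted, scheduled or chained into other tooling.
print("snapshot job submitted:", take_on_demand_snapshot("vm-1234", "Gold"))
```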

At Cloud Field Day, the Rubrik team walked through their new SaaS-based solution called Polaris. With enterprise data increasingly spread across multiple data centers and cloud providers, enterprises need a cohesive way to manage that data visually. Polaris is a SaaS-based solution that does just that. Polaris becomes the overarching management platform for taming the growing complexity.

COMPLEXITY DRIVES THE NEED FOR A NEW APPROACH

There are two dynamics driving these changes: 1) the explosion in data growth and 2) the need to effectively manage that data. As applications and their data move to a myriad of different solutions, so does the need to effectively manage the underlying data.

An increase in compliance and regulatory requirements is only adding further complexity to data management. As the complexity grows, so does the need for systemic automation. No longer are we able to simply throw more resources at the problem. It is time to turn the problem on its head and leverage new approaches.

DATA PROTECTION IS NOT IMPORTANT…UNTIL IT IS

During the discussion, Rubrik’s Chief Technologist Chris Wahl made a very key observation that everyone in IT painfully understands: Data protection is not important…until it is. To many enterprises, the concept of data protection is seen as an insurance policy that you hopefully will not need. However, in today’s world of increasingly regulated and highly complicated architectures with data spreading out at scale, the risks are simply too great to ignore.

While data protection may have been less important in the past, today it is critical.

GOING BEYOND SIMPLE BACKUP AND RECOVERY

If the Rubrik story stopped at backup and recovery, it would still be impressive. However, Rubrik is venturing into the complexity that comes with integrating into other systems and processes. One of the first areas is ServiceNow: Rubrik ingests CMDB data from ServiceNow into its system, providing a cohesive view of the underlying components that Rubrik has visibility into.
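
As a rough illustration of what CMDB ingestion can involve, the sketch below pulls configuration items from ServiceNow’s REST Table API and checks them against a protected-object inventory. The instance URL, credentials, field names and the matching logic are assumptions for illustration, not Rubrik’s actual integration.

```python
import requests

INSTANCE = "https://example.service-now.com"      # hypothetical ServiceNow instance
AUTH = ("integration_user", "example-password")   # assume basic auth for brevity

def fetch_cmdb_servers(limit=100):
    """Pull server configuration items from the ServiceNow Table API."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/cmdb_ci_server",
        params={"sysparm_limit": limit, "sysparm_fields": "name,ip_address,sys_id"},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Correlate CMDB records with objects the backup platform already protects (illustrative name matching).
protected = {"db01", "web02"}                     # stand-in for an inventory pulled from the backup system
for ci in fetch_cmdb_servers():
    status = "protected" if ci["name"] in protected else "unprotected"
    print(ci["name"], ci.get("ip_address", ""), status)
```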

Looking into the crystal ball, one can start to see that Rubrik fully understands that backup and recovery are just the start. The real opportunity comes from full integration into business processes. To get there, integrations like ServiceNow are needed. Expect to see more as Rubrik continues their quest to provide a solid foundation for the enterprise when it needs it most.


Eight ways enterprises struggle with public cloud


The move to public cloud is not new, yet many enterprises still struggle to leverage public cloud services successfully. Public cloud services have existed for more than a decade. So why do companies still struggle to leverage public cloud effectively…and successfully? And, more importantly, what can be done, if anything, to address those challenges?

There is plenty of evidence showing the value of public cloud and its allure for the average enterprise. Most CIOs and IT leaders understand that public cloud has potential. That is not the fundamental problem. The issue is how you get from here to there. Or, in IT parlance, how you migrate from current state to future state. For many CIOs, cloud plays a critical role in their digital transformation journey.

The steps a CIO must take are not as trivial as many make them out to be. The level of complexity and process is palpable and must be respected. Simply put, this is not a matter of mindset, but of reality. This is the very context missing from many conversations about how enterprises, and their CIOs, should leverage public cloud. Understanding and addressing the challenges provides a clearer path to success.

THE LIST OF CHALLENGES

Looking across a large cross-section of enterprises, several patterns start to appear. There are eight core reasons why enterprises struggle to successfully adopt and leverage public cloud.

  1. FUD: Fear, Uncertainty and Doubt still ranks high among the list of issues with public cloud…and cloud in general. For the enterprise, there is value, but also risk with public cloud. Industry-wide, there is plenty of noise and fluff that further confuses the issues and opportunities.
  2. % of Shovel Ready Apps: In the average enterprise, only 10-20% of an IT organization’s budget (and effort) is put toward new development. There are many reasons for this. However, it further limits the initial opportunity for public cloud experimentation.
  3. Cost: There is plenty of talk about how public cloud is less costly than traditional corporate data center infrastructure. However, the truth is that public cloud is 4x the cost of running the same application within the corporate data center. Yes, 4x…and that considers a fully-loaded corporate data center cost. Even so, the reasons in this list contribute to the 4x factor and therefore can be mitigated.
  4. Automation & Orchestration: Corporate enterprise applications were never designed to accommodate automation and orchestration. In many cases, the effort to change an application may range from requiring significant changes to a wholesale re-write of the application.
  5. Architectural Differences: In addition to a lack of automation & orchestration support, corporate enterprise applications are architected so that redundancy lies in the infrastructure tiers, not the application. The application assumes that the infrastructure is available 24×7 regardless of whether it is needed for 24 hours or 5 minutes. This model flies in the face of how public cloud works.
  6. Cultural Impact: Culturally, many corporate IT folks work under the assumption that the application (and the infrastructure it runs on) is just down the hall in the corporate data center. Infrastructure teams are accustomed to managing the corporate data center and the infrastructure that supports corporate enterprise applications. Moving to public cloud infrastructure requires changes in how the CIO leads and how IT teams operate.
  7. Competing Priorities: Even if there is good reason and ROI to move an application or service to public cloud, it still must run the gauntlet of competing priorities. Many times, those priorities are set by others outside of the CIO’s organization. Remember that there is only a finite amount of budget and resources to go around.
  8. Directives: Probably one of the scariest things I have heard is a board of directors dictating that a CIO must move to cloud. Think about this for a minute. You have an executive board dictating technology direction. Even if it is the right direction to take, it highlights other issues in the executive leadership ranks.

Overall, one can see how each of these eight items is intertwined with the others. Start to work on one issue and it may address another.

UNDERSTANDING THE RAMIFICATIONS

The bottom line is that, as CIO, even if I agree that public cloud provides significant value, there are many challenges that must be addressed. Aside from FUD and the few IT leaders that still think cloud is a fad that will pass, most CIOs I know support leveraging cloud. Again, that is not the issue. The issue is how to connect the dots to get from current state to future state.

However, not addressing the issues up front from a proactive perspective can lead to several outcomes. These outcomes are already visible in the industry today and further hinder enterprise public cloud adoption.

  1. Public Cloud Yo-Yo: Enterprises move an application to public cloud only to run into issues and then pull it back out to a corporate data center. Most often, this is due to the very issues outlined above.
  2. Public Cloud Stigma: Due to the yo-yo effect, it creates a chilling effect where corporate enterprise organizations slow or stop public cloud adoption. The reasons range from hesitation to flat out lack of understanding.

Neither of these issues is good for enterprise public cloud adoption. Regardless, the damage is done and, combined with the other issues, pushes public cloud adoption further down the priority list. Yet both are addressable with a bit of forethought and planning.

GETTING ENTERPRISES STARTED WITH PUBLIC CLOUD

One must understand that the devil is in the details here. While this short to-do list may seem straightforward, how each item is done and addressed is where the key lies.

  1. Experiment: Experiment, experiment, experiment. The corporate IT organization needs a culture of experimentation. Experiments are meant to fail…and to be learned from. Too many times, the expectation is that experiments will succeed, and when they don’t, the effort is abandoned.
  2. Understand: Take some time to fully understand public cloud and how it works. Bottom line: Public cloud does not work like corporate data center infrastructure. It is often best to try and forget what you know about your internal environment to avoid preconceived assumptions.
  3. Plan: Create a plan to experiment, test, observe, learn and feed that back into the process to improve. This statement goes beyond just technology. Consider the organizational, process and cultural impacts.

WRAPPING IT UP

There is a strong pull for CIOs to get out of the data center business and reduce their corporate data center footprint. Public cloud presents a significant opportunity for corporate enterprise organizations. But before jumping into the deep end, take some time to understand the issues and plan accordingly. The difference will impact the success of the organization, the speed of adoption and the opportunities for the larger business.

Further Reading…

The enterprise view of cloud, specifically public cloud, is confusing

The enterprise CIO is moving to a consumption-first paradigm

The three modes of enterprise cloud applications


The five most popular posts of 2016


While 2016 is quickly coming to a close, it offers plenty to reflect on. For the CIO, IT organizations and leaders who work with technology, 2016 offered a glimpse into the future and the cadence at which it is arriving. We learned how different industries, behaviors and technologies are impacting business decisions, societal norms and economic drivers.

Looking back on 2016, here is a list of the top-5 posts on AVOA.com.

#5: Understanding the five tiers of IoT core architecture

In this July post, I suggest an architecture to model IoT design and thinking.

#4: Changing the language of IT: 3 things that start with the CIO

This May post attracted a ton of attention from CIOs (and non-CIOs) as part of their transformation journey.

#3: IT transformation is difficult, if not impossible, without cloud

Another May post on the importance of the intersection between transformation and cloud.

#2: Microsoft Azure Stack fills a major gap for enterprise hybrid cloud

One of only two vendor-related posts in the top five, this one digs into the importance of Microsoft’s hybrid cloud play.

And the #1 post…

#1: Is HPE headed toward extinction?

This provocative post looks at business decisions by HPE and how they impact the enterprise buyer.

2017 is already shaping up nicely with plenty of change coming. And with that, I close out 2016 wishing you a very Happy New Year and an even better 2017!


Understanding the five tiers of IoT core architecture

Internet of Things (IoT) is all the rage today. Just tagging something as belonging to the IoT family brings quite a bit of attention. However, this tagging has also created quite a bit of noise in the industry for organizations trying to sort through how best to leverage IoT. Call it IoT marketing overload. Or IoT-washing.

That being said, just about every single industry can leverage IoT in a meaningful way today. But where does one begin? There are many ways to consider where to start your IoT journey. The first step is to understand the basic fundamentals of how IoT solutions are architected. The five tiers of IoT core architecture are: Applications, Analytics, Data, Gateway and Devices. Using this architecture, one can determine where any given IoT solution fits…and the adjacent components required to complete the solution.

THE FIVE TIERS OF IOT CORE ARCHITECTURE

  • DEVICE TIER

The device tier is the physical hardware that collects telemetry (data) about a given situation. Devices can range from small sensors to wearables to large machines. The data itself may be presented in many forms, from electrical signals to IP-based data.

The device may also display information (see Application tier).

  • GATEWAY TIER

The sheer number of devices and interconnection options creates a web of complexity in connecting the different devices and their data streams. The streams may come in forms as diverse as mechanical signals and IP-based data. On the surface, these streams are completely incompatible, yet correlating data requires a common denominator. Hence the need for a gateway to collect and homogenize the streams into manageable data.
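
A minimal sketch of what that homogenization step can look like, assuming two hypothetical inputs (a raw analog reading and a JSON message from an IP-connected device) being normalized into one common record shape:

```python
import json
import time

def normalize_analog(sensor_id, millivolts):
    """Convert a raw analog reading into the common record shape (the scaling factor is an assumption)."""
    return {"device": sensor_id, "metric": "temperature_c", "value": millivolts / 10.0, "ts": time.time()}

def normalize_json(payload):
    """Map an IP-connected device's JSON message onto the same common shape."""
    msg = json.loads(payload)
    return {"device": msg["id"], "metric": msg["metric"], "value": float(msg["value"]), "ts": msg.get("ts", time.time())}

# Two incompatible streams, one manageable output format handed to the data tier.
records = [
    normalize_analog("boiler-7", 215.0),
    normalize_json('{"id": "truck-42", "metric": "speed_kph", "value": 88, "ts": 1700000000}'),
]
for record in records:
    print(record)
```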

  • DATA TIER

The data tier is where data from gateways is collected and managed. Depending on the type of data, different structures may be called for. The management, hygiene and physical storage of data is a discipline unto itself, simply due to the four V’s of data (Volume, Variety, Velocity, Veracity).

  • ANALYTICS TIER

Simply managing the sheer amount of data coming from IoT devices creates a significant hurdle when converting data into information. Analytics are used to automate the process for two reasons: manageability and speed. Together, they surface insights from the varied and complex data coming from devices. As the number and types of devices grow and become more complex, so will the demand for analytics.
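
As a toy illustration of the kind of automated analysis this tier performs, the sketch below flags readings that deviate sharply from a rolling average; the window size and threshold are arbitrary assumptions.

```python
from collections import deque

def detect_anomalies(values, window=5, threshold=0.3):
    """Yield (index, value) for readings that deviate more than `threshold` (fractional) from the rolling mean."""
    recent = deque(maxlen=window)
    for i, value in enumerate(values):
        if len(recent) == window:
            mean = sum(recent) / window
            if mean and abs(value - mean) / mean > threshold:
                yield i, value
        recent.append(value)

readings = [21.0, 21.2, 20.9, 21.1, 21.0, 35.5, 21.2, 21.1]   # one obvious spike
print(list(detect_anomalies(readings)))                       # -> [(5, 35.5)]
```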

  • APPLICATION TIER

Applications may come in multiple forms. In many cases, the application is the user interface that takes information from the analytics tier and presents it to the user in a meaningful way. In other cases, the application may be an automation routine that interfaces with other applications as part of a larger function.

Interestingly, the application may reside on the device itself (e.g. a wearable).

[Figure: The five tiers of IoT core architecture]

Today, many IoT solutions cover one or more of the tiers outlined above. It is important to understand which tiers are covered by any given IoT solution.

CLOUD-BASED IOT SOLUTIONS

Several major cloud providers are developing IoT solutions that leverage their core cloud offerings. These solutions help shorten IoT development time by providing fundamental building blocks that cover many of the tiers outlined above. Most of them focus on the upper tiers to manage the data coming from devices. Three such platforms are Amazon AWS IoT, IBM Watson IoT and Microsoft Azure IoT Suite. Each emphasizes a different suite of ancillary solutions. All three allow a developer to shorten the development time for an IoT solution by eliminating the need to build all five tiers from scratch.
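
To give a feel for how much of the stack these platforms absorb, here is a minimal sketch that publishes a telemetry record to AWS IoT Core using boto3. It assumes AWS credentials, a region and an IoT policy allowing the publish are already configured, and the topic name is a placeholder.

```python
import json
import boto3

# Assumes AWS credentials/region are configured in the environment and that an IoT policy
# allows publishing to this topic; platform rules can then route the message onward to
# storage, analytics or alerting without any custom servers.
iot = boto3.client("iot-data")

record = {"device": "boiler-7", "metric": "temperature_c", "value": 21.5, "ts": 1700000000}

iot.publish(topic="plant/telemetry", qos=1, payload=json.dumps(record))
print("published telemetry to AWS IoT Core")
```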

THE SECURITY CONUNDRUM

One would be remiss to discuss IoT without mentioning security. Security of devices, data elements and data flows is an issue today that needs greater attention. Instead of being a one-off project or add-on solution, security needs to be part of the DNA infused in each tier of a given solution. Based on current solutions, there is a long way to go on this front.

That being said, IoT has a promising and significant future.


HPE clarifies their new role in the enterprise


Last week, Hewlett Packard Enterprise (HPE) held their annual US-based Discover conference in Las Vegas. HPE has seen quite a bit of change in the past year with the split of HP into HPE & HP Inc. They shut down their Helion Public Cloud offering and announced the divestiture of their Enterprise Services (ES) business to merge with CSC into a $26B business. With all of the changes and 10,000 people in attendance, HPE sought to clarify their strategy and position in the enterprise market.

WHAT IS IN AND WHAT IS OUT?

Many of the questions attendees were asking circled around the direction HPE is taking, considering all of the changes in the past year alone. Two of the core changes (shutting down Helion Public Cloud and splitting off the ES business) have raised many eyebrows, with some wondering if HPE might be cutting off their future potential.

While HPE telegraphs that their strategy is to support customers with their ‘digital transformation’ journey, the statement might be a bit overreaching. That is not to say that HPE is not capable of providing value to enterprises. It is to say that there are specific areas where they provide value, and a few significant gaps. We are talking about a traditional hardware-focused company shifting more and more toward software. Not a trivial task.

There are four pillars that support the core HPE offering for enterprises. Those include Infrastructure, Analytics, Cloud and Software.

INFRASTRUCTURE AT THE CORE

HPE’s strength continues to rest on their ability to innovate in the infrastructure space. I wrote about their Moonshot and CloudSystem offerings three years ago here. Last year, HPE introduced their Synergy technology that supports composability. Synergy, and the composable concept, is one of the best opportunities to address the evolving enterprise’s changing demands. I delve a bit deeper into the HPE composable opportunity here.

Yet, one thing is becoming painfully clear within the industry. The level of complexity for infrastructure is growing exponentially. For any provider to survive, there needs to be a demonstrable shift toward leveraging software that manages the increasingly complex infrastructure. HPE is heading in that direction with their OneView platform.

Not to be outdone in supporting the ever-changing software platform space, HPE also announced that servers will come ready to support Docker containers. This is another example of where HPE is trying to bridge the gap between traditional infrastructure and newer application architectures including cloud.

CLOUD GOES PRIVATE

Speaking of cloud, there is quite a bit of confusion about where cloud fits in the HPE portfolio of solutions. After a number of conversations with members of the HPE team, it is clear their solutions are focused on one aspect of cloud: Private Cloud. This makes sense considering HPE’s challenges in reaching escape velocity with their Helion Public Cloud offering and their core infrastructure background. Keep in mind that HPE’s private cloud solutions are heavily based on OpenStack. This will present a challenge for those considering a move from their legacy VMware footprint, but it does open the door to new application architectures that are specifically looking for an OpenStack-based Private Cloud. However, there is already competition in this space from companies like IBM (BlueBox) and Microsoft (AzureStack). And unlike HPE, both IBM & Microsoft have established Public Cloud offerings that complement their Private Cloud solutions (BlueBox & Azure respectively).

One aspect in many of the discussions was how HPE’s Technical Services (TS) are heavily involved in HPE Cloud deployments. At first, this may present a red flag for many enterprises concerned with the level of consulting services required to deploy a solution. However, when considering that the underpinnings are OpenStack-based, it makes more sense. OpenStack, unlike traditional commercial software offerings, still requires a significant amount of support to get it up and running. This could present a challenge to the broad appeal of HPE’s cloud solutions, except for those few that understand, and can justify, the value proposition.

It does seem that HPE’s cloud business is still in a state of flux and finding the best path to take. With the jettison of Helion Public Cloud and HPE’s support of composability, there is a great opportunity to appeal to the masses and leverage their partnership with Microsoft to support Azure & AzureStack on a Synergy composable stack. Yet the current emphasis still appears to be on OpenStack-based solutions. Note: HPE CloudSystem does support Synergy via the OneView APIs.

SOFTWARE

At the conference, HPE highlighted their security solutions with a few statistics. According to HPE, they “secure nine of the top 10 software companies, all 10 telcos and all major branches of the US Department of Defense (DoD).” While those are interesting statistics, one should delve a bit further to determine how extensively this applies.

Security sits alongside the software group’s Application Lifecycle Management (ALM), Operations and BigData software solutions. As time goes on, I would hope to see HPE mature the significance of their software business to meet the changing demands from enterprises.

THE GROWTH OF ANALYTICS

Increasingly, enterprise organizations are growing their dependence on data. A couple of years back, HP (prior to the HPE/HP Inc split) purchased Autonomy and Vertica. HPE continues to mature their combined Haven solution beyond BigData into the realm of Machine Learning. To that end, HPE is now offering Haven On-Demand (http://www.HavenOnDemand.com) for free. Interestingly, the solution leverages HPE’s partnership with Microsoft and runs on Microsoft’s Azure platform.

IN SUMMARY

HPE is bringing into focus those aspects they believe they can do well. The core business is still focused on infrastructure, but also supporting software (mostly for IT focused functions), cloud (OpenStack focused) and data analytics. After the dust settles on the splits and shifts, the largest opportunities for HPE appear to come from infrastructure (and related software), and data analytics. The other aspects of the business, while valuable, support a smaller pool of prospective customers.

Ultimately, time will tell how this strategy plays out. I still believe there is untapped potential in HPE’s Synergy composable platform that will appeal to the masses of enterprises, but it is often missed. Their data analytics strategy appears to be gaining steam and moving forward. These two offerings are significant, but only address specific aspects of an enterprise’s digital transformation.


Can cloud finally help enterprises with DR/BC?


Disaster Recovery (DR) and Business Continuity (BC) are collectively, in IT lingo, called DR/BC. Both terms go back to the earliest days of computing. Yet even though neither term is new, neither is done really well. DR/BC is one of the biggest risks for a company, and most do not realize it until failure strikes. So, what is DR/BC and what can be done to change this posture?

WHAT IS DR/BC?

First, let’s break it down. What are DR & BC? They are interrelated, but very different in nature. In essence, they cover a spectrum of solutions for when failure strikes. DR typically covers the period from failure to return to operations. BC covers the continuity of business operations during a failure. While that seems pretty straightforward, the complexity comes in when you consider dependencies and costs.

Dependencies cover a range of items, from physical circuits to inter-related applications and just about everything in between. One may ask: why not just create redundancy in all of the dependencies? That is easier said than done. IT is already a complex beast. Think of it like a ball of yarn. Now add a second ball of yarn that is identical in every way. The very introduction of a second ball means that the first ball is no longer unique. And the cost would be exorbitant. To many, DR/BC is seen as an insurance policy. What is the risk and likelihood of failure? How does this compare against the costs? Again, not a trivial matter; hence the failure of most to enact good DR/BC plans.

TYPES OF DR/BC

In the past, there was fundamentally only one way to provide DR/BC. Today, there are two fundamental methods to consider: application-based and infrastructure-based.

Infrastructure-based DR/BC is more common in enterprises, especially with legacy enterprise applications. This method is well understood and heavily leverages redundancy among hardware infrastructure components. For example, redundant storage arrays, clusters of compute infrastructure, redundant power supplies, etc. The application takes a stance where it assumes that the infrastructure resources are always there and available. There is often little to no intelligence within the application to protect against infrastructure failure.

Application-based DR/BC is less common in enterprise applications. It is, however, very common in cloud-native applications. Why? Cloud based infrastructure is often based on commodity hardware with little to no redundancy. Cloud native applications, unlike their legacy relatives, have the benefit of leveraging a totally new architecture from the ground up.

While infrastructure-based methods may be more common, application-based methods are more resilient. Why? Even with the most sophisticated Tier IV data center, brand-name systems, storage sub-systems and network topologies, the fact is that infrastructure still fails. Sure, it has gotten better over time. On the other hand, application-based methods do not assume that infrastructure is always there. As such, the applications themselves contain sophistication to ‘heal’ themselves without impacting the user experience. A whole data center (or a component within) can fail and not impact the user experience of the application.
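
A minimal sketch of what “the application does not assume the infrastructure is there” can mean in practice, assuming two hypothetical endpoints for the same service in different regions and a simple retry-then-failover policy:

```python
import time
import requests

# Hypothetical endpoints for the same service deployed in two regions.
ENDPOINTS = ["https://us-east.api.example.com", "https://eu-west.api.example.com"]

def resilient_get(path, retries_per_endpoint=2):
    """Try each region in turn instead of assuming any single one is always available."""
    last_error = None
    for endpoint in ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                resp = requests.get(f"{endpoint}{path}", timeout=2)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err
                time.sleep(0.5 * (attempt + 1))   # brief backoff before retrying or failing over
    raise RuntimeError(f"all regions unavailable: {last_error}")

# The user experience survives a regional failure because the failover logic lives in the application itself.
```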

Getting an application from one architecture to the other is not trivial. In addition, the underlying infrastructure must change with the re-architecture of the application. Running an application with self-healing properties on redundant infrastructure would be a waste in most cases. It would serve as redundancy of redundancy.

WHAT OPTIONS DOES CLOUD PROVIDE?

Cloud-based solutions provide a number of clever benefits for enterprises. First, there is flexibility in the underlying infrastructure options. Need redundant infrastructure one day and non-redundant the next? No problem. There are options for that. Trying to manage these shifts within the enterprise data center is both complicated and expensive. Not to mention it provides little direct business value to manage internally.

Cloud also provides a location for DR/BC that is physically separate from the corporate data center and reduces the overall cost. You only use it when you need it, which means you only pay for what you need, when you need it. Solutions from companies like CloudVelox help enterprises with this migration. (Full disclosure: I serve as an advisor to CloudVelox.)
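
As one concrete, hedged example of pay-as-you-go DR in the cloud, the sketch below copies an EBS snapshot to a second AWS region with boto3. The snapshot ID and regions are placeholders, and a real plan would also automate restore and regular testing.

```python
import boto3

SOURCE_REGION = "us-east-1"    # where production data lives (placeholder)
DR_REGION = "us-west-2"        # standby region used only during a disaster (placeholder)

# Cross-region snapshot copies are requested against the destination region.
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

response = ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Description="Nightly DR copy of the orders database volume",
)
print("DR copy started:", response["SnapshotId"])

# Nothing runs in the DR region day to day; compute is launched from these snapshots
# only when a failover is declared, which is where the cost savings come from.
```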

Regardless of whether you use a third-party solution or do it yourself, cloud provides a significant opportunity for enterprises to finally enact meaningful DR/BC plans. It is not a trivial matter to shift from existing infrastructure-based applications to an architecture that supports application-based intelligence. Cloud can still provide benefits to infrastructure-based applications without the complexity and expense of the alternatives.


Containers in the Enterprise

Containers are all the rage right now, but are they ready for enterprise consumption? It depends on whom you ask, but here’s my take. Enterprises should absolutely be considering container architectures as part of their strategy…but there are some considerations before heading down the path.

Container conferences

Talking with attendees at Docker’s DockerCon conference and Red Hat’s Summit this week, you hear a number of proponents and live enterprise users. For those not familiar with containers, the fundamental concept is a fully encapsulated environment that supports application services. Containers should not be confused with virtualization. Nor should they be confused with microservices, which can leverage containers but do not require them.
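
For readers who have not touched containers yet, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker daemon is running and simply starts, uses and discards an ephemeral container.

```python
import docker

# Assumes the Docker daemon is running locally and the Docker SDK for Python is installed.
client = docker.from_env()

# Run a throwaway container: the environment is fully encapsulated and disappears afterwards.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from an ephemeral container"],
    remove=True,      # delete the container as soon as the command exits
)
print(output.decode().strip())
```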

A quick rundown

Here are some quick points:

  • Ecosystem: I’ve written before about the importance of a new technology’s ecosystem here. In the case of containers, the ecosystem is rich and building quickly.
  • Architecture: Containers allow applications to break apart into smaller components. Each of the components can then spin up/down and scale as needed. Of course, automation and orchestration come into play.
  • Automation/ Orchestration: Unlike typical enterprise applications that are installed once and run 24×7, the best architectures for containers spin up/ down and scale as needed. Realistically, the only way to efficiently do this is with automation and orchestration.
  • Security: There is quite a bit of concern about container security. With potentially thousands or tens of thousands of containers running, a compromise might have significant consequences. If containers are architected to be ephemeral, the risk footprint shrinks exponentially.
  • DevOps: Container-based architectures can run without a DevOps approach, but only with limited success. DevOps brings a different methodology that works hand-in-hand with containers.
  • Management: There are concerns the short lifespan of a container creates challenges for audit trails. Using traditional audit approaches, this would be true. Using newer methods provides real-time audit capability.
  • Stability: The $64k question: Are containers stable enough for enterprise use? Absolutely! The reality is that legacy architecture applications would not move directly to containers. Only those applications that are significantly modified or re-written would leverage containers. New applications are able to leverage containers without increasing the risk.

Cloud-First, Container-First

Companies are looking to move faster and faster. In order to do so, the problem needs to be broken down into smaller components. As those smaller components become microservices (vs. large monolithic applications), containers start to make sense.

Containers represent an elegant way to leverage smaller building blocks. Some have equated containers to the Lego building blocks of the enterprise application architecture. The days of large, monolithic enterprise applications are past. Today’s applications may be complex in sum, but are a culmination of much smaller building blocks. These smaller blocks provide the nimble and fast speed that enterprises are clamoring for today.

Containers are more than Technology

Beyond the technology itself, other components are needed for success. Containers represent the technology building blocks. Culture and process are needed to support the change in technology. DevOps provides the fluid that lubricates the integration of the three components.

Changing the perspective

As with the newer technologies coming, other aspects of the IT organization must change too. Whether you are a CIO, IT leader, developer or operations team, the very fundamentals in which we function must change in order to truly embrace and adopt these newer methodologies.

Containers are ready for the enterprise…if the other aspects are considered as well.