CIO · Cloud

Three key changes to look for in 2018


2017 has officially come to a close and 2018 has already started with a bang. As I look forward to what 2018 brings, the list is incredibly long and detailed. The range of topics is equally broad, covering people, process, technology, culture, business, social, economic and geopolitical boundaries…just to name a few.

Here are three highlights on my otherwise lengthy list…

EVOLVING THE CIO

I often state that after spending almost three decades in IT, now is the best time to work in technology. That statement is still true today.

One cannot start a conversation about technology without first considering the importance of the technology leader and the role of the Chief Information Officer (CIO). The CIO, as the most senior person leading the IT organization, takes on a critical role for any enterprise. That was true in the past, and is increasingly so moving forward.

In my post ‘The difference between the Traditional CIO and the Transformational CIO’, I outline many of the differences in the ever-evolving role of the CIO. Those traits will continue to evolve as the individual, organization, leadership and overall industry change to embrace a new way to leverage technology. Understanding the psyche of the CIO is something one simply cannot do without experiencing the role firsthand. Yet, understanding how this role is evolving is exactly what will help differentiate companies in 2018 and beyond.

In 2018, we start to see the ‘Transformational’ CIO emerge in greater numbers. Not only does the CIO see the need for change; so does the executive leadership team of the enterprise. The CIO becomes less of a technology leader and more of a business leader who has responsibility for technology. As I have stated in the past, this is very different from the ‘CEO of Technology’ concept that others have bandied about. In addition, there is a sense of urgency for the change as the business climate becomes increasingly competitive from new entrants and vectors. Cultural and geopolitical changes will also impact the changing role of the CIO and that of technology.

TECHNOLOGY HITS ITS STRIDE

In a similar vein to the CIO, technology finds its stride in 2018. Recent years have seen a lot of experimentation in the hopes of leverage and success. This ‘shotgun’ approach has been risky…and costly for enterprises. That is not to say that experimentation is a bad thing. However, the role of technology in mainstream business evolves in 2018 as enterprises face the reality that they must embrace change, and technology as part of that evolution.

Executives will look for ways to mindfully leverage technology to create business advantage and differentiation. Instead of sitting at the extremes of either diving haphazardly into technology or analysis paralysis, enterprises will strike a balance and embrace technology in a thoughtful but time-sensitive way. The concept of ‘tech for tech’s sake’ becomes a past memory, like the dialup modem.

One hopeful wish is that boards will stop the practice of dictating technology decisions, as they have in the past by mandating that their organizations use cloud. That is not to say cloud is bad, but rather to suggest that a more meaningful business discussion take place, one that may leverage cloud as one of many tools in an otherwise broadening arsenal.

CLOUD COMES OF AGE IN ALL FORMS

Speaking of cloud, a wholesale shift takes place in 2018 where we pass the inflection point in our thinking about cloud. For the enterprise, public cloud has already reached a maturity point with all three major public cloud providers offering solid solutions for any given enterprise.

Beyond public cloud, the concept of private cloud moves from theory to reality as solutions mature and the kinks are worked out. Historically, private cloud was messy and challenging for even the most sophisticated enterprise to adopt. The theory of private cloud is incredibly alluring, and it has now reached a point where it can become a reality for the average enterprise. Cloud computing, in its different forms, has finally come of age.


In summary, 2017 has taught us many tough lessons to leverage in 2018. Based on the initial read as 2017 came to a close, 2018 looks to be another incredible year for all of us! Let us take a moment to be grateful for what we have and to respect those around us. The future is bright and we have much to be thankful for.

Happy New Year!

CIO · Cloud

3 ways enterprises can reduce their cybersecurity risk profile


If you are an executive (CIO, CISO, CEO) or a board member, cybersecurity is top of mind. One of the comments I hear most often is: “I don’t want our company (to be) on the front page of the Wall Street Journal.” Ostensibly, the comment is made in the context of a breach. Yet many gaps still exist between that aspiration and reality. Just saying the words is not enough.

The recent Equifax breach has prompted many conversations with enterprises and executive teams about shoring up their security posture. The sad reality is that cybersecurity spending often happens immediately after a breach. Why is that? Let us delve into several of the common reasons why, and what can be done.

ENTERPRISE SECURITY CHALLENGES

There are a number of reasons why enterprises are challenged with cybersecurity issues. Much of it stems from the perspective of what cybersecurity solutions provide. To many, the investment in cybersecurity teams and solutions is seen as an insurance policy. In order to better understand the complexities, let us dig into a few of the common issues.

Reactive versus Proactive

The first issue is how enterprises think about cybersecurity. Enterprises often want to be secure, but are unwilling or unable to provide the funding to match. That is, until a breach occurs. This has created a behavior within IT organizations where they leverage breaches to gain cybersecurity funding.

Funding for Cybersecurity Initiatives

Spending on cybersecurity is often seen in a similar vein to insurance, and comes back to risk mitigation. Many IT organizations are challenged to get adequate funding to appropriately protect the enterprise. It should be noted that no enterprise will ever be fully secure; attempting it creates a level of complexity and cost that would greatly impact the operations and bottom line of the enterprise. Therefore, a healthy balance is called for. Any initiative should follow a risk mitigation approach, but also consider the business impact.

Shifting to Cybersecurity as part of the DNA

Enterprises often think of cybersecurity as an afterthought to a project or core application. The problem with this approach is that, as an afterthought, the project or application is already well on its way to production. Any required changes will be ancillary and rarely as granular as they could have been. More mature organizations are shifting to cybersecurity as part of their core DNA. In this culture, cybersecurity becomes part of the conversation early and often…and at each stage of development. By making it part of the DNA, each participant in the process is encouraged to consider how to secure their part of the project.
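
To make ‘early and often’ concrete, here is a minimal sketch of a pre-merge security gate in a build pipeline. The `dependency-scanner` command, its flags and the severity model are hypothetical placeholders, not a specific product’s interface:

```python
# Hypothetical pre-merge security gate: fail the build early rather
# than bolting security on after the application reaches production.
import json
import subprocess
import sys

def scan_dependencies(manifest: str) -> list:
    """Illustrative stand-in for whatever scanner the pipeline uses.
    Assumes the (hypothetical) tool prints JSON findings to stdout."""
    result = subprocess.run(
        ["dependency-scanner", "--manifest", manifest, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def gate(manifest: str = "requirements.txt") -> int:
    findings = scan_dependencies(manifest)
    criticals = [f for f in findings if f.get("severity") == "critical"]
    for f in criticals:
        print(f"BLOCKING: {f.get('package')}: {f.get('summary')}")
    # A non-zero exit code fails the pipeline stage, stopping the merge.
    return 1 if criticals else 0

if __name__ == "__main__":
    sys.exit(gate())
```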

Cybersecurity Threats are getting more Sophisticated

The sophistication of cybersecurity threats is growing astronomically. No longer are the traditional tools adequate to protect the enterprise. Enterprises are fighting an adversary that is gaining ground exponentially faster than they are. In essence, no single enterprise is able to adequately protect itself, and each must rely on the expertise of others that specialize in this space.

Traditional thinking need not apply. The level of complexity and skill required is growing at a blistering clip. If your organization is not willing or able to put the resources behind staying current and actively engaged, trouble is not far away.

THREE WAYS TO REDUCE CYBERSECURITY RISK

While the risks are increasing, there are steps that every enterprise, large and small, can take to reduce its risk profile. Sadly, many of these are well known, yet not as well enacted. The first step is to change your paradigm regarding cybersecurity: get proactive, and do not assume you know everything.

Patch, Patch, Patch

Even though regular patching is a requirement for most applications and operating systems, enterprises are still challenged to keep up. There are often two reasons for this: 1) disruption to business operations and 2) the resources required to update the application or system. In both cases, the best advice is to get into a regular rhythm of patching systems. Making something routine builds muscle memory into the organization, which increases accuracy, lessens disruption and speeds up the effort.
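
As a minimal sketch of what that rhythm can look like, assuming a Debian-based fleet reachable over passwordless SSH (both assumptions for illustration):

```python
# A minimal patch-cycle sketch: run the same routine on a fixed
# schedule so the process builds organizational muscle memory.
import subprocess

HOSTS = ["app-01.example.com", "app-02.example.com"]  # illustrative names

def patch_host(host: str) -> bool:
    """Refresh package metadata and apply pending updates on one host."""
    cmd = ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"]
    return subprocess.run(cmd).returncode == 0

def patch_cycle() -> None:
    for host in HOSTS:
        ok = patch_host(host)
        print(f"{host}: {'patched' if ok else 'FAILED - investigate'}")

if __name__ == "__main__":
    # Run from cron in a predictable maintenance window, for example:
    #   0 2 * * SUN /usr/bin/python3 patch_cycle.py
    patch_cycle()
```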

Regular Validation from Outsiders

Over time, organizations get complacent with their operations, and cybersecurity is no different. A good way to avoid this is to bring in a trusted outside organization to spot-check and ‘tune up’ your cybersecurity efforts. They can more easily spot issues without being affected by your blind spots. Depending on your situation, you may choose to leverage a third party to provide cybersecurity services. However, each enterprise will need to evaluate its specific situation to determine the right approach.

Challenge Traditional Thinking

I still run into organizations that believe perimeter protections are the best defense, or that security audits conducted with some frequency are enough. Two words: game over. While both are still required, security threats today are constant and unrelenting. Constantly evolving approaches are required.

As we move to a more complicated mix of IT services (SaaS, Public Cloud, Private Cloud, On Premises, Edge Computing, Mobile, etc.), the level of complexity grows. Now layer in the fact that the data we view as gold is spread across those services. The complexity is growing, and traditional thinking will not protect the enterprise. Leveraging outsiders is one approach to infuse different methods to address this growing complexity.


Another option is to move to a cloud-based alternative. Most cloud-based alternatives have methods to update their systems and applications without disrupting operations. This does not absolve the enterprise of responsibility, but it does offer a way to leverage more specialized expertise.

The bottom line is that our world is getting more complex, and cybersecurity is just one aspect. The complexity and sophistication of cybersecurity attacks is only growing, making it ever more challenging for enterprises to keep up. Change is needed, the risks are increasing, and now is the time for action.

CIO · Cloud

The difference between Hybrid and Multi-Cloud for the Enterprise

Cloud computing still presents the single biggest opportunity for enterprise companies today. Even though cloud-based solutions have been around for more than 10 years now, the concepts related to cloud continue to confuse many.

Of late, it seems that Hybrid Cloud and Multi-Cloud are the latest concepts creating confusion. To make matters worse, a number of folks (inappropriately) use these terms interchangeably. The reality is that they are very different.

The best way to think about the differences between Hybrid Cloud and Multi-Cloud is in terms of orientation. One addresses a continuum of different services vertically while the other looks at the horizontal aspect of cloud. There are pros and cons to each and they are not interchangeable.


Multi-Cloud: The horizontal aspect of cloud

Multi-Cloud is essentially the use of multiple cloud services within a single delivery tier. A common example is the use of multiple Public Cloud providers. Enterprises typically use a multi-cloud approach for one of three reasons:

  • Leverage: Enterprise IT organizations are generally risk-averse. There are many reasons for this, to be discussed in a later post. Fear of taking risks tends to inform a number of decisions, including choice of cloud provider. One aspect is the fear of lock-in to a single provider. I addressed my perspective on lock-in here. By using a multi-cloud approach, an enterprise can hedge its risk across multiple providers (a sketch of the kind of abstraction this requires follows this list). The downside is that this approach creates complexity in integration, organizational skills and data transit.
  • Best of Breed: The second reason enterprises typically use a multi-cloud strategy is due to best of breed solutions. Not all solutions in a single delivery tier offer the same services. An enterprise may choose to use one provider’s solution for a specific function and a second provider’s solution for a different function. This approach, while advantageous in some respects, does create complexity in a number of ways including integration, data transit, organizational skills and sprawl.
  • Evaluation: The third reason enterprises leverage a multi-cloud strategy is temporary and exists for evaluation purposes. This is actually very common among enterprises today. It provides a means to evaluate different cloud providers within a single delivery tier when enterprises first start out. Eventually, however, they focus on a single provider and build expertise around that provider’s solution.
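
As referenced in the Leverage item above, here is a minimal sketch of the least-common-denominator abstraction that hedging across providers tends to require. The classes are illustrative in-memory stand-ins, not any provider’s SDK:

```python
# Sketch of the abstraction layer a multi-cloud approach tends to
# force: one narrow interface, one adapter per provider.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The only surface the application is allowed to depend on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in for a real provider adapter (object/blob storage)."""
    def __init__(self, name: str) -> None:
        self.name = name
        self._data = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

# The hedge: application code sees only ObjectStore, so providers can
# be swapped. The cost: provider-specific features above this interface
# are lost, which is the integration complexity noted above.
primary: ObjectStore = InMemoryStore("provider-a")
backup: ObjectStore = InMemoryStore("provider-b")
for store in (primary, backup):
    store.put("invoice/42", b"...")
```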

In the end, I find that the reason an enterprise chooses one of the three approaches above is often informed by its maturity and thinking around cloud in general. The question many ask is: do the upsides of leverage or best of breed outweigh the downsides of complexity?

Hybrid Cloud: The vertical approach to cloud

Most, if not all, enterprises are using a form of hybrid cloud today. Hybrid Cloud refers to the vertical use of cloud across multiple delivery tiers. Most typically, enterprises today use a SaaS-based solution along with Public Cloud. Some may also use Private Cloud. Hybrid cloud does not require that a single application span the different delivery tiers.

The CIO Perspective

The important takeaway is to understand how you leverage Multi-Cloud and/or Hybrid Cloud, and less about defining the terms. Too often, we get hung up on defining terms rather than understanding the benefits of leveraging the solution…or methodology. Even when discussing outcomes, we often still focus on technology.

These two approaches are not the same and come with their own set of pros and cons. The value from Multi-Cloud and Hybrid Cloud is that they both provide leverage for business transformation. The question is: How will you leverage them for business advantage?

Business · CIO · Cloud · Data

HPE clarifies their new role in the enterprise


Last week, Hewlett Packard Enterprise (HPE) held their annual US-based Discover conference in Las Vegas. HPE has seen quite a bit of change in the past year with the split of HP into HPE & HP Inc. They shut down their Helion Public Cloud offering and announced the divestiture of their Enterprise Services (ES) business to merge with CSC into a $26B business. With all of the changes and 10,000 people in attendance, HPE sought to clarify their strategy and position in the enterprise market.

WHAT IS IN AND WHAT IS OUT?

Many of the questions attendees asked circled around the direction HPE is taking given all of the changes in the past year alone. Two of the core changes (shutting down Helion Public Cloud and splitting off the ES business) have raised eyebrows and left some wondering whether HPE might be cutting off its future potential.

While HPE telegraphs that their strategy is to support customers on their ‘digital transformation’ journey, the statement might be a bit overreaching. That is not to say HPE is incapable of providing value to enterprises. It is to say that there are specific areas where they provide value, along with a few significant gaps. We are talking about a traditionally hardware-focused company shifting more and more toward software. Not a trivial task.

There are four pillars that support the core HPE offering for enterprises. Those include Infrastructure, Analytics, Cloud and Software.

INFRASTRUCTURE AT THE CORE

HPE’s strength continues to rest on their ability to innovate in the infrastructure space. I wrote about their Moonshot and CloudSystem offerings three years ago here. Last year, HPE introduced their Synergy technology that supports composability. Synergy, and the composable concept, is one of the best opportunities to address the evolving enterprise’s changing demands. I delve a bit deeper into the HPE composable opportunity here.

Yet, one thing is becoming painfully clear within the industry. The level of complexity for infrastructure is growing exponentially. For any provider to survive, there needs to be a demonstrable shift toward leveraging software that manages the increasingly complex infrastructure. HPE is heading in that direction with their OneView platform.

Not to be outdone in supporting the ever-changing software platform space, HPE also announced that their servers will come ready to support Docker containers. This is another example of how HPE is trying to bridge the gap between traditional infrastructure and newer application architectures, including cloud.

CLOUD GOES PRIVATE

Speaking of cloud, there is quite a bit of confusion about where cloud fits in the HPE portfolio of solutions. After a number of conversations with members of the HPE team, it is clear their solutions are focused on one aspect of cloud: private cloud. This makes sense considering HPE’s challenges reaching escape velocity with their Helion Public Cloud offering and their core infrastructure background. Keep in mind that HPE’s private cloud solutions are heavily based on OpenStack. This will present a challenge for those considering a move from their legacy VMware footprint, but it does open the door to new application architectures that are specifically looking for an OpenStack-based private cloud. However, there is already competition in this space from companies like IBM (Blue Box) and Microsoft (Azure Stack). And unlike HPE, both IBM and Microsoft have established public cloud offerings that complement their private cloud solutions (Blue Box and Azure, respectively).

One aspect in many of the discussions was how heavily HPE’s Technical Services (TS) are involved in HPE cloud deployments. At first, this may raise a red flag for enterprises concerned with the level of consulting services required to deploy a solution. However, considering that the underpinnings are OpenStack-based, it makes more sense. OpenStack, unlike traditional commercial software offerings, still requires a significant amount of support to get up and running. This could limit the broad appeal of HPE’s cloud solutions to those few that understand, and can justify, the value proposition.

It does seem that HPE’s cloud business is still in a state of flux, searching for the best path to take. With the jettison of Helion Public Cloud and HPE’s support of composability, there is a great opportunity to appeal to the masses and leverage their partnership with Microsoft to support Azure and Azure Stack on a Synergy composable stack. Yet the current emphasis still appears to be on OpenStack-based solutions. Note: HPE CloudSystem does support Synergy via the OneView APIs.

SOFTWARE

At the conference, HPE highlighted their security solutions with a few statistics. According to HPE, they “secure nine of the top 10 software companies, all 10 telcos and all major branches of the US Department of Defense (DoD).” While those are interesting statistics, one should delve a bit further to determine how extensively this applies.

Security sits alongside the software group’s Application Lifecycle Management (ALM), Operations and BigData software solutions. As time goes on, I would hope to see HPE grow the significance of their software business to meet the changing demands of enterprises.

THE GROWTH OF ANALYTICS

Increasingly, enterprise organizations are growing their dependence on data. A couple of years back, HP (prior to the HPE/HP Inc. split) purchased Autonomy and Vertica. HPE continues to mature their combined Haven solution beyond BigData into the realm of machine learning. To that end, HPE is now offering Haven On-Demand (http://www.HavenOnDemand.com) for free. Interestingly, the solution leverages HPE’s partnership with Microsoft and runs on Microsoft’s Azure platform.

IN SUMMARY

HPE is bringing into focus those aspects they believe they can do well. The core business is still focused on infrastructure, but also supporting software (mostly for IT focused functions), cloud (OpenStack focused) and data analytics. After the dust settles on the splits and shifts, the largest opportunities for HPE appear to come from infrastructure (and related software), and data analytics. The other aspects of the business, while valuable, support a smaller pool of prospective customers.

Ultimately, time will tell how this strategy plays out. I still believe there is untapped potential in HPE’s Synergy composable platform that will appeal to the masses of enterprises, but it is often missed. Their data analytics strategy appears to be gaining steam and moving forward. These two offerings are significant, but only address specific aspects of an enterprise’s digital transformation.

CIO · Cloud

The opportunities for enterprise and SMB cloud open up

Companies are getting out of the data center business in droves. While much of this demand is headed toward colocation facilities, there is a growing movement to cloud-based providers. A company’s move to cloud is a journey that spans a number of different modes over a period of time.

[Figure: The data center to cloud spectrum]

The workloads of a company do not reside in only one state at a time. They are often spread across most, if not all, of the states at the same time. However, workloads generally transit from left to right.

One of the bigger challenges in this journey has been a missing piece: namely, private cloud. Historically, the only way to build a private cloud was to do it yourself. That was easier said than done; the level of complexity compared with the value was simply not there. Fast forward a few years and there are now two core solutions on the market that fill this gap:

  • Blue Box Local (IBM), based on OpenStack
  • Microsoft Azure Stack, based on Azure

While there are some similarities at a contextual level, the underlying technology is quite different from a user perspective. Blue Box Local is based on OpenStack while Azure Stack is based on Azure. Even so, both solutions provide continuity between their private cloud versions and their respective larger public cloud offerings.

Now that demand has reached critical mass, the supply side (providers) is finally starting to mature, with solutions that solve core needs for most enterprise and small-medium business (SMB) customers. The reality is that most companies are already somewhere along this journey.

From the provider perspective, the lack of critical mass had served as an inhibitor to investment. Data center and cloud-based solutions are expensive to build. Without sufficient demand, providers might be left with assets sitting relatively idle. That is a costly venture for anyone considering a move into the cloud provider space.

Today, the economics are quite different. Not only are a large number of companies moving to colocation and cloud-based solutions, but they are looking for solutions that support varied and ever-changing use cases. As such, the solutions need to provide the support and flexibility to adapt to this changing climate.

With the recent announcement of Azure Stack’s Technical Preview, a very interesting opportunity has opened for MSPs looking to offer a cloud-based solution to customers. At this point, there are really only two companies that truly provide hybrid cloud solutions spanning both on premise and public cloud: Blue Box/IBM and Microsoft Azure.

THE MSP MARKET IS HUGE

When discussing cloud, much is said about the enterprise and startup/web-scale markets, with little mention of the small-medium business (SMB) players. Yet SMB players make up a significant portion of the market. Why the omission? The further down the scale, the harder it is to reach them. In most cases, SMB clients will leverage Managed Service Providers (MSPs) to deliver their IT services.

For MSPs, the offerings were equally challenging to leverage in a meaningful way. Those days are quickly changing.

FOUR CORE OFFERINGS WITH VARIATIONS

Today, there are four core ways that cloud is offered in the service provider market.

  • Bare Metal: In this case, the client is essentially leasing hardware that is managed by the service provider. It does allow the client to bring their own software and licenses, but it also adds a level of management requirement.
  • OpenStack: OpenStack provides a myriad of options using an open source software platform. The challenge is in the client’s ability to truly support OpenStack. Contrary to what some may think, open source does not equate to free. However, there are solutions (like Blue Box/IBM) that provide a commercial version for hybrid environments.
  • Azure: Azure comes in two flavors today that provide flexibility between requirements for on premise and public cloud. The former is served by Azure Stack while the latter is core Azure.
  • VMware: VMware provides the greatest continuity for existing enterprise environments. In this model, companies are able to move their existing VMs to a provider’s VMware-based platform. Many of the companies that provide solutions in this space come from the colocation world and include QTS and PhoenixNAP.

These four categories are a quick simplification of how the different solutions map out. There are many variations of each, which makes comparison within a single category, let alone across categories, difficult.

TWO DELIVERY MODELS

Until recently, MSPs looking to deliver cloud-based solutions were relegated to two options: 1) roll your own solution, or 2) try to leverage an existing hosting offering and layer your services on top. Neither was particularly appealing to MSPs. Today, two core delivery options are shaping up:

  • Single Tenant: In this model, an MSP stands up a cloud solution specifically for a given client. Each client has their own instance that the MSP manages. In many ways, this is really just a simple hosted model of cloud.
  • Multi Tenant: In this model, there is a single instance of a cloud solution that the MSP manages. However, it is shared across many clients.

There are challenges and opportunities to both approaches and they need to match the capabilities of both the MSP and their clients.
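
To make the contrast concrete, here is a minimal sketch of the two delivery models; the class and client names are illustrative, not any provider’s API:

```python
# Sketch contrasting the two MSP delivery models; names illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CloudInstance:
    """One deployed cloud environment operated by the MSP."""
    name: str
    tenants: List[str] = field(default_factory=list)

def single_tenant(clients: List[str]) -> List[CloudInstance]:
    # One dedicated instance per client: strong isolation, but N
    # clients means N environments to operate, patch and monitor.
    return [CloudInstance(f"cloud-{c}", [c]) for c in clients]

def multi_tenant(clients: List[str]) -> CloudInstance:
    # One shared instance for all clients: a single environment to
    # run, but isolation and noisy-neighbor concerns move into software.
    return CloudInstance("cloud-shared", list(clients))

print(single_tenant(["acme", "globex"]))
print(multi_tenant(["acme", "globex"]))
```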

As you start to map your cloud journey, the road and onramps are starting to shape up nicely. That is true for both MSPs and enterprise clients alike. There could not be a better time for enterprises to engage with cloud in a holistic way.

CIO · Cloud

Microsoft Azure Stack fills a major gap for enterprise hybrid cloud

Azure Stack is billed as a version of Azure that runs in your corporate data center. Originally announced on May 4, 2015, Microsoft Azure Stack presented a significant change to the enterprise cloud spectrum of options. Prior to Azure Stack, enterprises looking for a private cloud option were left to build their own. While possible, not a trivial feat for most enterprises.

Today, Microsoft announced the availability of Technical Preview 1 (TP1), the first in a series of planned technical previews leading up to Azure Stack’s general availability later in 2016.


[Figure: Azure map]

A PRIVATE CLOUD FOR ENTERPRISES

Azure Stack represents an on premise version of Microsoft’s Azure public cloud that runs in your corporate data center. If you are familiar with Azure, you are already familiar with Azure Stack.

Unlike many solutions that start small and scale up, Microsoft was challenged with the opposite problem: scaling Azure down. Azure Stack is essentially a scaled-down version of Azure, and the code between the two versions is remarkably similar. The further up the stack you go, the more similar the code base gets. For developers, this means a more consistent experience between Azure and Azure Stack.

Many enterprise customers are hesitant about, or incapable of, making the leap from workloads on premise to full-on public cloud. The reasons range from cultural resistance to regulatory considerations. Azure Stack provides a solution that fills the gap between full-on public cloud (Azure) and the prospect of creating a private cloud from scratch. Moreover, because of the consistent experience, customers are able to develop applications on Azure Stack and then move them fairly seamlessly to Azure. Many of the services are similar between the two solutions; however, there are some obvious differences inherent to public versus private cloud.

For now, Microsoft has drawn the line with Azure at the Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) cloud tiers. That said, Microsoft continues to grow its SaaS-based solutions such as Office 365, and has stated that it is in the process of moving Office 365 to Azure. It is anticipated that traditional enterprise core services such as Microsoft SQL Server, along with newer solutions like Internet of Things (IoT), will move to Azure in the form of deployable ‘Templates’.

It should not be minimized that moving existing enterprise applications off their legacy footings is not a trivial effort. This is true for applications moving to Azure Stack or Azure, as well as to cloud solutions such as Amazon AWS and Blue Box.


[Figure: Enterprise cloud map]

THE KEY TO AZURE STACK: A LOCAL TARGET

The beauty of a local version of public cloud solutions is in its ability to sidestep many of the challenges that public cloud presents. In the case of regulatory or data privacy issues, Azure Stack provides the ability to leverage the benefits of cloud, while adhering to local regulatory issues surrounding location of data.

In the simplest form, one could consider Azure Stack just another ‘Region’ in which to deploy applications. Microsoft’s management layer, Azure Resource Manager (ARM), is able to deploy directly to Azure Stack as another target Region, just as one would deploy to West US or East US. In the case of Azure Stack, the Region is Local. Customers do have the option to deploy internal (Local) Regions in a single zone or in separate zones.
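
As a rough, illustrative sketch of this idea (not documented TP1 behavior): the deployment call has the same shape whether it targets public Azure or a local Azure Stack; only the management endpoint and the target Region change. The endpoint URLs, API version and token handling below are placeholder assumptions:

```python
# Illustrative only: an ARM-style template deployment pointed at two
# different management endpoints. The URLs, api-version and token are
# placeholder assumptions, not documented Azure Stack TP1 values.
import requests

AZURE_ARM = "https://management.azure.com"         # public Azure
STACK_ARM = "https://management.azurestack.local"  # hypothetical local endpoint

def deploy(arm_endpoint: str, subscription: str, group: str,
           name: str, template: dict, token: str) -> int:
    """PUT a template deployment to whichever ARM endpoint is targeted."""
    url = (f"{arm_endpoint}/subscriptions/{subscription}"
           f"/resourcegroups/{group}/providers/Microsoft.Resources"
           f"/deployments/{name}")
    body = {"properties": {"template": template, "mode": "Incremental"}}
    resp = requests.put(
        url,
        json=body,
        params={"api-version": "2015-11-01"},  # placeholder version
        headers={"Authorization": f"Bearer {token}"},
    )
    return resp.status_code

# Same template either way; only the endpoint (and the Region value in
# the template parameters, e.g. 'West US' vs. 'Local') changes.
```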

DEVELOPING ON AZURE

One of the core benefits of Azure Stack is the ability to build applications for Azure Stack (or Azure) and deploy them to either solution. Microsoft Visual Studio already has the ability to update target locations in real time from Azure and Azure Stack. The core of an Azure deployment comes in the form of a Template, and there are already a number of Templates on GitHub for immediate download:

Quick Start ARM templates that deploy on Azure Stack: http://aka.ms/AzureStackGitHub

The Software Development Kit (SDK) for Azure Stack supports both PowerShell and Command Line Interface (CLI) just like Azure. In addition, deployment tools such as Chef and Puppet are supported via the ARM API to Azure Stack.

GETTING STARTED WITH AZURE STACK

While the download for Azure Stack TP1 will not be available until January 29th, there are a number of minimum requirements to get started. Keep in mind that this is the first Technical Preview of Azure Stack. As such, there is quite a bit of code to optimize for local use vs. the full Azure cloud. With Azure, the minimum configuration covered a full 20 racks! With Azure Stack, the minimum footprint has shrunk to a cluster of four systems with a maximum of 63 systems per cluster. Jeffrey Snover (Chief Architect, Microsoft Azure and Technical Fellow at Microsoft) outlined the minimum and recommended requirements in his blog post last month.

One may notice the Windows Server certification requirement. That is due to Azure Stack running on a base of Microsoft Windows Server. However, the Microsoft team believes that this will evolve over time. The memory requirements may also evolve. When running Azure Stack, the components take up approximately 24GB of RAM per system. While this may get optimized over time, additional components (such as clustering) may increase the memory consumption.

One may express concern at the very mention of a local cloud based on Windows Server, if only for the patching process. Azure Stack is built to evacuate workloads off resources prior to patching, but Microsoft is looking at a wholly different approach. Instead of applying traditional Windows Server patches, Microsoft is looking to completely redeploy a new copy of Windows Server underneath Azure Stack. It will be interesting to see how this plays out.

There are two ways to get started with Azure Stack:

  1. Do It Yourself: Leverage reference architectures from Dell, HP and others that list the parts needed to support Azure Stack.
  2. Integrated Systems: Purchase a fully assembled, standardized solution.


IN SUMMARY

Azure Stack presents a significant game changer for Microsoft and the enterprise cloud spectrum by filling a long-standing gap. There are a number of other benefits that Azure Stack brings to both enterprises and Managed Service Providers (MSPs). We will leave those for a later post.


UPDATE: The download for Azure Stack TP1 is live. You can get it here.

Cloud · Data · IoT

IBM and Weather Company deal is the tip of the iceberg for cloud, data and IoT

Technology, and how we consume it, is changing faster than we know. Need proof? Just look at last night’s announcement from IBM & Weather Company. It was just a short 4.5 months ago that I sat in the Amazon AWS re:Invent keynote on Nov 13, 2014, listening to Weather Company’s EVP, CTO & CIO Bryson Koehler discuss how his company was leveraging Amazon’s AWS to change the game. After the keynote, I had the opportunity to chat with Bryson a bit. It was clear at the time that while Amazon was a key enabler for Weather Company, they could only go so far.

The problem statement

Weather Company is a combination of organizations that brings together a phenomenal amount of data from a myriad of sources. Not all of the sources are sophisticated weather stations. Bryson mentioned that Weather Company is “using data to help consumers gain confidence.” Weather Company uses a number of platforms to produce weather results including Weather Channel, weather.com and Weather Underground. Weather Underground is their early testbed for new methods and tools.

Weather Company produces 15 billion forecasts every day. Those forecasts come from billions of sensors across the globe. Forecasts for 2.2 million locations are updated every four hours, with billions more updated every 15 minutes. The timeliness and accuracy of those forecasts is what ultimately builds consumer confidence.

Timing

The sheer number of devices makes Weather Company a perfect use case for the Internet of Things (IoT) powered by Cloud, Data and Analytics. Others may start to see parallels between what Weather Company is doing and their own industry. In today’s competitive market, the speed and accuracy of information is key.

IBM’s strategy has demonstrated leadership in the cloud and data/analytics space with their SoftLayer and Watson solutions. Add in the BlueMix platform and the connection between these solutions becomes clear. Moving to IoT was the next logical step in the strategy.

Ecosystem Play

The combination of SoftLayer, BlueMix and Watson…plus IoT was no accident. When considering the direction that companies are taking by moving up the stack to the data integration points, IoT is the next logical step. IoT presents the new driver that cloud and data/ analytics enable. BlueMix becomes the glue that ties it all together for developers.

The ecosystem play is key. Ecosystems are everything. Companies are no longer buying point solutions; they are buying into ecosystems that deliver direct business value. In the case of Weather Company, the combination of IBM’s ecosystem and portfolio provides key opportunities to produce a viable solution.

Next Steps…

That being said, the move by IBM & Weather Company should not be seen as a one-off. We should expect to see more enterprises make moves like this toward broader ecosystems like IBM’s.