Network

Upgrading to a mesh wifi network


After several folks asked about a recent tweet I posted about upgrading my home wifi network to a mesh network, I thought I would spend a few words to describe the before and after.

UNDERSTANDING THE BASELINE

Before discussing the details about the wifi networks and implementation, it is important to first set a baseline. As with most decisions, they are not made in a vacuum nor are they made independent of other variables and/or factors. In my case, one of the core factors to understand is that I live in the Apple ecosystem. If I lived in a Windows and/or Android ecosystem, the circumstances would not have been the same. And therefore, past decisions would likely have been different.

MOVING TO MULTIPLE WIFI ACCESS POINTS

Considering that I live in the Apple ecosystem, it made logical sense at the time to choose an Apple AirPort Extreme base station as my wifi access point. A single AirPort Extreme base station provided solid coverage to my entire home, and the management software included in the Mac OS made administration really simple.

At some point, however, the size of my home increased and so did the need for a broader wifi network. Apple’s AirPort Extreme base stations allow you to create an extended network across multiple access points. The fact that you could create a wireless bridge across two AirPort Extreme base stations was also handy for devices that didn’t have wireless capabilities. That’s all great. That is, until it isn’t.

As Apple stopped regularly updating their AirPort devices, the quality of service consistently degraded. First, access points lost connection to other access points. Then performance became an issue. Eventually, it got to the point where performance felt really laggy, especially if another device was streaming on the network. Keep in mind that my Internet connection is a 150Mbps broadband connection, which should provide plenty of bandwidth. Add to that the management required to keep things working, and one can see how frustrating it can get.

THE SHIFT TO MESH

For some time, I had been toying with the idea of replacing the three connected Apple AirPort Extreme base stations with a modern mesh network focused on performance while still keeping things simple. After a fair amount of research, it came down to two products: Eero and Google Wifi. Both had solid reports from users. In the end, I opted for the Google Wifi 3-node system over the Eero for one simple reason: cost. The three-node Eero system is significantly more expensive than the equivalent Google Wifi system, and the specs seemed pretty similar.

In my situation, the three wifi units are set up as follows:

  • Primary: connected directly to the cable modem. Its second port connects to a switch serving the devices that perform better over wired than wireless connections (smart TVs, DVR, Apple TV, DVD player, etc.).
  • Wireless mesh point.
  • Wired mesh point: the closest unit to my home office, with a wired connection back to the core switch to provide greater performance while still supporting the wifi mesh.

Installation and setup were very quick and easy.
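For anyone who wants to sanity-check a similar layout, below is a minimal sketch (not part of my actual setup) that pings each node from a wired machine to confirm reachability and compare latency between the wired and wireless backhaul points. The IP addresses are hypothetical placeholders, and it assumes Python 3 plus the system ping command (macOS/Linux flags shown).

```python
# Hypothetical sketch: check each mesh node from a wired machine.
# Substitute the addresses your router actually assigns.
import subprocess

NODES = {
    "primary (wired to cable modem)": "192.168.86.1",
    "wireless mesh point": "192.168.86.22",
    "wired mesh point (home office)": "192.168.86.23",
}

for name, ip in NODES.items():
    # 'ping -c 3' is the macOS/Linux form; Windows uses 'ping -n 3'.
    result = subprocess.run(["ping", "-c", "3", ip],
                            capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "unreachable"
    print(f"{name}: {status}")
    if result.stdout:
        # The final summary line includes min/avg/max latency, which makes
        # wired vs. wireless backhaul differences easy to spot.
        print("  " + result.stdout.strip().splitlines()[-1])
```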

EARLY REPORTS ARE IN

Granted, the devices have only been operational for about 24 hours. However, since installing them, I have seen a marked improvement in wifi stability and performance. I can also see how much bandwidth is being used by different devices and address issues as needed. Even my wife noted that the Google Wifi access points are less obtrusive than the Apple AirPort Extreme base stations, although only one of the three is visible; the other two sit behind things and out of view. Even when everyone is at home on their devices, I have not noticed a single blip in performance like in the past. In addition, moving around the house between access points seems quicker and more seamless. This is important for those (like us) who live in semi-rural areas where cell coverage is spotty and wifi calling is required.

Only time will tell, but so far I am very pleased with the change.

Business · Cloud · Data

Microsoft empowers the developer at Connect


This week at Microsoft Connect in New York City, Microsoft announced a number of products geared toward bringing intelligence and the computing edge closer together. The tools continue Microsoft’s support of a varied and growing ecosystem of evolving solutions. At the same time, Microsoft demonstrated their insatiable drive to woo the developer with a number of tools geared toward modern development and advanced technology.

EMBRACING THE ECOSYSTEM DIVERSITY

Microsoft has tried hard over the past several years to shed their persona of Microsoft-centricity rooted in a .NET and Windows world. Similar to their very vocal support for inclusion and diversity in culture, Microsoft brings that same perspective to the tools, solutions and ecosystems they support. The reality is that the world is diverse, and it is this very diversity that makes us stronger. Technology is no different.

At the Connect conference, similar to their recent Build & Ignite conferences, .NET almost became a footnote as much of the discussion was around other tools and frameworks. In many ways, PHP, Java, Node and Python appeared to get mentioned more than .NET. Does this mean that .NET is being deprecated in favor of newer solutions? No. But it does show that Microsoft is moving beyond just words in their drive toward inclusivity.

EXPANDING THE DEVELOPER TOOLS

At Connect, Microsoft announced a number of tools aimed squarely at supporting the modern developer. This is not the developer of years past. Today’s developer works in a variety of tools, with different methods and potentially in separate locations. Yet, they need the ability to collaborate in a meaningful way. Enter Visual Studio Live Share. What makes VS Live Share interesting is how it supports collaboration between developers in a more seamless way, without the cumbersome screen-sharing approach previously used. The level of sophistication that VS Live Share brings is impressive in that it allows each developer to walk through code in their own way while they debug and collaborate. While VS Live Share is only in preview, other recently announced tools are already seeing significant adoption, with downloads ranging into the millions in a short period of time.

In the same vein of collaboration and integration, DevOps is of keen interest to most enterprise IT shops. Microsoft showed how Visual Studio Team Services embraces DevOps in a holistic way. While the demonstration was impressive, the question of scalability often comes into the picture for large, integrated teams. It was mentioned that VS Team Services is currently used by the Microsoft Windows development team and their whopping 25,000 developers.

Add to that scale the ability to build ‘safe code’ pipelines, with automation that triggers in-process code evaluation, and one can quickly see how seriously Microsoft is taking the modern, sophisticated development process.
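To make that idea concrete, here is a minimal, generic sketch of such a gate, not Microsoft’s actual VSTS pipeline definition: run a static-analysis step and a test step, and block the pipeline unless both pass. The specific tools (flake8, pytest) are stand-ins; substitute whatever your team uses.

```python
# Generic 'safe code' gate sketch: run each check, collect results, and
# fail the pipeline if any check fails. Tool choices are placeholders.
import subprocess
import sys

CHECKS = [
    ("static analysis", ["flake8", "."]),
    ("unit tests", ["pytest", "-q"]),
]

failed = []
for name, command in CHECKS:
    print(f"--> running {name}: {' '.join(command)}")
    if subprocess.run(command).returncode != 0:
        failed.append(name)  # keep going so all feedback is collected

if failed:
    print("Gate failed ({}): blocking merge/deploy.".format(", ".join(failed)))
    sys.exit(1)

print("Gate passed: code is safe to promote to the next stage.")
```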

POWERING DATA AND AI IN THE CLOUD

In addition to developer tools, time was spent talking about Azure, data and Databricks. I had the chance to sit down with Databricks CEO Ali Ghodsi to talk about how Azure Databricks brings the myriad of data sources together for the enterprise. Running Databricks on Azure provides the scale and ecosystem needed to integrate the varied data sources that every enterprise is trying to tap into.

MIND THE DEVELOPER GAP

Developing applications that leverage analytics and AI is incredibly important, but not a trivial task. It often requires a combination of skills and experience to fully appreciate the value that comes from AI. Unfortunately, developers often have neither the data science skills nor the business context needed in today’s world. I spoke with Microsoft’s Corey Sanders after his keynote about how Microsoft is bridging the gap for the developer. Both Sanders and Ghodsi agree that the gap is an issue. However, through the use of increasingly sophisticated tools such as Databricks and Visual Studio, both believe Microsoft is making a serious attempt at bridging it.

It is clear that Microsoft is getting back to its roots and considering the importance of the developer in an enterprise’s digital transformation journey. While there are still many gaps to fill, it is interesting to see how Microsoft is approaching the evolving landscape and complexity that is the enterprise reality.

Business · Cloud

One theory on Amazon interest in a second headquarters

Amazon announced that they are in search of a location for their second headquarters. The new headquarters facility is expected to create 50,000 jobs and bidders are welcome to submit their proposals to woo the Amazon opportunity. While that, in itself, sounds great, there may be more in the works than just a new headquarters. Let me share my theory on what this may indicate.

THE LOCATION SHORTLIST

First, companies like Amazon do not go into major decisions like this without already having a pretty good idea of how it will end. There is just too much at stake; in this specific case, the physical location of the second headquarters. Prior to making the announcement, I suspect Amazon had already done its due diligence and has an internal shortlist of potential locations it would accept.

Amazon’s two core businesses, Amazon.com and Amazon Web Services (AWS), both rely heavily on technology. Therefore, a headquarters location must have a strong technology ecosystem that can support their separate growth trajectories.

While just about any major city in the US could support a new headquarters, tech-centric locations on the shortlist may include Silicon Valley, Las Vegas, Phoenix, Austin, Atlanta, New York or Boston. One outlier may be Washington DC/Virginia. Why? As Amazon continues their spectacular growth, innovation and acquisition of competitors, it will need stronger ties to government inner circles.

So, which location? My theory is that the process is more of a formality and the decision is between a couple of locations, coming down to local/state tax incentives. If true, the shortlist is a few locations shorter than the one outlined above.

IS A SPLIT ON THE HORIZON?

It is not common for companies to suggest a second ‘headquarters’ location. It does happen, but not often. There may be an undercurrent driving this move. Amazon has two core businesses: Amazon.com and AWS. Almost two years ago, Amazon announced that Andy Jassy would be promoted to CEO of AWS. This may be the first marker in a longer-term strategy for Amazon.

One challenge Amazon continues to face is the conflict between their core Amazon.com business and Amazon Web Services (AWS). Major customers of AWS continue to flee when Amazon.com moves into a competitive role; essentially, Amazon.com gains are negatively impacting AWS. Walmart is just one of the latest customers to do so. In the enterprise space, prospective customers have expressed concern that AWS (historically) is not Amazon’s core business; the distribution business is. Of course, in the past few years, AWS has grown significantly. However, it still presents a challenge. Splitting Amazon into two companies, with Andy Jassy taking on the new AWS entity, could be the solution.

SPLIT DECISIONS

But there is a potential problem with splitting AWS from Amazon. Operating as a combined company, Amazon is not required to disclose its significant AWS customers because they are not material to the revenue of its core business. However, if the two companies were to split, this disclosure could be required and would bring focus to who AWS’ material customers are…in a very public way.

Now, if none of AWS’ customers are material, or contribute a significant amount of revenue individually, this issue is not relevant. However, I suspect that Amazon.com is a major consumer of AWS’ services. And there may be a couple of other major customers.

If there are significant, material customers in the mix, it could present concerns among shareholders of AWS. Today, we don’t have clarity on this issue due to the economic halo effect of the core Amazon.com business. Splitting the companies brings this potential issue to light…and may be the reason Amazon has not split the two companies yet.

IMPACT TO SEATTLE ECOSYSTEM

The last driver may be the Seattle ecosystem itself. Seattle is a vibrant technology metropolis that supports several major technology companies like Microsoft and Amazon. In addition, major companies like Boeing and Costco have a significant footprint there too. Big companies bring great opportunities and economic growth to communities. However, they can have a downside too: cost-of-living increases, the risk of losing a company, and a limited pool of skilled people are all risks that offset the opportunities. One can look to the SF Bay Area/Silicon Valley to see how this is playing out, how competitive it is for talent and how hard it is to relocate someone to the Bay Area.

It is probable that, with Amazon’s success and growth trajectory, they may feel the Seattle ecosystem is starting to become limiting or incapable of handling the entirety of a company like Amazon, today and moving forward. If this were the case, I suspect the shortlist of potential suitors may not include Silicon Valley, New York or Boston.

MY TAKE

All that being said, my theory is that there is an impending split on the horizon for Amazon. The move of Jassy to CEO, AWS’ continued growth and secondary factors point to this as a possible outcome. That, coupled with AWS having proved it can stand on its own without the core Amazon.com business, further supports the perspective.

I look forward to hearing what you think. Share your thoughts in the comments below!

Cloud

Containers in the Enterprise

Containers are all the rage right now, but are they ready for enterprise consumption? It depends on whom you ask, but here’s my take. Enterprises should absolutely be considering container architectures as part of their strategy…but there are some considerations before heading down the path.

Container conferences

Talking with attendees at Docker’s DockerCon conference and Red Hat’s Summit this week, you hear a number of proponents and live enterprise users. For those not familiar with containers, the fundamental concept is a fully encapsulated environment that supports application services. Containers should not be confused with virtualization. Nor should they be confused with microservices, which can leverage containers but do not require them.

A quick rundown

Here are some quick points:

  • Ecosystem: I’ve written before about the importance of a new technology’s ecosystem here. In the case of containers, the ecosystem is rich and building quickly.
  • Architecture: Containers allow applications to be broken apart into smaller components. Each of the components can then spin up/ down and scale as needed. Of course, automation and orchestration come into play.
  • Automation/ Orchestration: Unlike typical enterprise applications that are installed once and run 24×7, the best container architectures spin up/ down and scale as needed. Realistically, the only way to do this efficiently is with automation and orchestration (a minimal sketch of this pattern follows this list).
  • Security: There is quite a bit of concern about container security. With potentially thousands or tens of thousands of containers running, a compromise might have significant consequences. If containers are architected to be ephemeral, the risk footprint shrinks exponentially.
  • DevOps: Container-based architectures can run without a DevOps approach, but only with limited success. DevOps brings a different methodology that works hand-in-hand with containers.
  • Management: There are concerns the short lifespan of a container creates challenges for audit trails. Using traditional audit approaches, this would be true. Using newer methods provides real-time audit capability.
  • Stability: The $64k question: Are containers stable enough for enterprise use? Absolutely! The reality is that legacy architecture applications would not move directly to containers. Only those applications that are significantly modified or re-written would leverage containers. New applications are able to leverage containers without increasing the risk.
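
To illustrate the spin up/ down pattern described in the Architecture and Automation/ Orchestration points above, here is a minimal sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon and a stock public image; a real deployment would hand this job to an orchestrator rather than a one-off script.

```python
# Minimal ephemeral-container sketch: start a container for one unit of
# work, wait for it to finish, capture its output, then remove it entirely.
import docker

client = docker.from_env()  # connects to the local Docker daemon

container = client.containers.run(
    "python:3.11-slim",
    command=["python", "-c", "print('hello from an ephemeral container')"],
    detach=True,
)

container.wait()                          # block until the work completes
print(container.logs().decode().strip())  # collect output before teardown
container.remove()                        # nothing long-lived is left behind
```

Because the container exists only for the duration of the work, the security and audit concerns noted above shrink to a much smaller window, and orchestrators apply this same pattern automatically and at scale.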

Cloud-First, Container-First

Companies are looking to move faster and faster. In order to do so, problems need to be broken down into smaller components. As those smaller components become microservices (vs. large monolithic applications), containers start to make sense.

Containers represent an elegant way to leverage smaller building blocks. Some have equated containers to the Lego building blocks of enterprise application architecture. The days of large, monolithic enterprise applications are past. Today’s applications may be complex in sum, but they are composed of much smaller building blocks. These smaller blocks provide the nimbleness and speed that enterprises are clamoring for today.

Containers are more than Technology

Containers alone are not enough; other components are needed for success. Containers represent the technology building blocks, while culture and process are needed to support the change in technology. DevOps provides the lubricant that integrates the three.

Changing the perspective

As with other new technologies on the horizon, other aspects of the IT organization must change too. Whether you are a CIO, IT leader, developer or part of an operations team, the very fundamentals of how we function must change in order to truly embrace and adopt these newer methodologies.

Containers are ready for the enterprise…if the other aspects are considered as well.

Cloud · Data · IoT

IBM and Weather Company deal is the tip of the iceberg for cloud, data and IoT

Technology and how we consume it is changing faster than we know it. Need proof? Just look at the announcement last night between IBM & Weather Company. It was just a short 4.5 months ago that I was sitting in the Amazon AWS re:Invent keynote on Nov 13, 2014 listening to Weather Company’s EVP, CTO & CIO Bryson Koehler discuss how his company was leveraging Amazon’s AWS to change the game. After the keynote, I had the opportunity to chat with Bryson a bit. It was clear at the time that while Amazon was a key enabler for Weather Company, they could only go so far.

The problem statement

Weather Company is a combination of organizations that brings together a phenomenal amount of data from a myriad of sources. Not all of the sources are sophisticated weather stations. Bryson mentioned that Weather Company is “using data to help consumers gain confidence.” Weather Company uses a number of platforms to produce weather results including Weather Channel, weather.com and Weather Underground. Weather Underground is their early testbed for new methods and tools.

Weather Company produces 15 billion forecasts every day. Those forecasts come from billions of sensors across the globe. The forecasts for 2.2 million locations are updated every four hours, with billions more updated every 15 minutes. The timeliness and accuracy of their forecasts is what ultimately builds consumer confidence.

Timing

The sheer number of devices makes Weather Company a perfect use-case for leveraging the Internet of Things (IoT) powered by Cloud, Data and Analytics. Others may start to see parallels between what Weather Company is doing and their own industry. In today’s competitive market, the speed and accuracy of information is key.

IBM’s strategy demonstrated leadership in the cloud and data/ analytics space with their SoftLayer and Watson solutions. Add in the BlueMix platform and one can see how the connection between these solutions becomes clear. Moving to IoT was the next logical step in the strategy.

Ecosystem Play

The combination of SoftLayer, BlueMix and Watson…plus IoT was no accident. When considering the direction that companies are taking by moving up the stack to the data integration points, IoT is the next logical step. IoT presents the new driver that cloud and data/ analytics enable. BlueMix becomes the glue that ties it all together for developers.

The ecosystem play is key. Ecosystems are everything. Companies are no longer buying point solutions; they are buying into ecosystems that deliver direct business value. In the case of Weather Company, the combination of IBM’s ecosystem and portfolio provides key opportunities to produce a viable solution.

Next Steps…

That being said, the move by IBM & Weather Company should not be seen as a one-off. We should expect to see more enterprises make moves like this toward broader ecosystems like IBM’s.

Business · CIO · Cloud · Data

How Important are Ecosystems? Ecosystems are Everything

The IT industry is in a state of significant flux. Paradigms are changing and so are the underlying technologies. Along with these changes comes a change in the way we think about solutions. Over time, IT organizations have amassed a phenomenal number of solutions, vendors, complex configurations and experience. That ever-expanding model is starting to show cracks, and trying to sustain it is just not possible…nor should it be. It is time for a change. Consolidation, integration, efficiency and value creation are the current focal points. Those focal points represent a significant shift in how we function as IT organizations and providers.

Changes in Buying Habits

In order to truly understand the value of an ecosystem, one first needs to understand the change in buying habits. IT organizations are making a significant shift from buying point solutions to buying ecosystems. In some ways, this is nothing new. IT organizations have bought into the solutions from major providers for decades. The change is in the composition of the ecosystem. Instead of buying into an ecosystem from a single provider, buyers are looking for comprehensive ecosystems that span multiple providers. This lowers the risk for the buyer and creates a broader offering while providing an integrated solution.

Creating the Cloud Supply Chain

Cloud Computing is a great use-case of the importance of building a supply chain within the ecosystem. Think about it. The applications, services and solutions that IT organizations provide to users are not single-purpose, non-integrated solutions. At least they shouldn’t be. Good applications and services are integrated with other offerings. When buyers choose a component, that component needs to connect to another component. In addition, alternatives are needed, as one solution does not fit all. In many ways, this is no different from a traditional manufacturing supply chain. The change is to apply those fundamentals to the cloud ecosystem.

Integration

In concert with the supply chain, each component needs solid integration with the next. Today, many point solutions require the buyer to figure out how to integrate them. This often becomes a barrier to adoption and introduces risk into the process. One could go crazy coming up with the permutations of different solutions that connect. However, if each solution considers the top 3-4 commonly connected components, the integration requirements become far more manageable. And they are left to the folks who understand the solutions best…the providers.
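
A quick back-of-the-envelope illustration of why this helps (the numbers are illustrative only): with N solutions, integrating every pair requires N(N-1)/2 connectors, while having each solution ship its top K integrations requires at most N×K.

```python
# Illustrative integration math: full mesh vs. 'top K partners' per solution.
n_solutions = 20   # hypothetical number of solutions in the ecosystem
k_top = 4          # each solution integrates with its top 4 partners

full_mesh = n_solutions * (n_solutions - 1) // 2  # every pair integrated: 190
top_k_bound = n_solutions * k_top                 # per-solution upper bound: 80

print(f"Full pairwise integrations: {full_mesh}")
print(f"Top-{k_top} per solution (upper bound): {top_k_bound}")
```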

Cloud Verticals

As cloud-based ecosystems start to mature, the natural progression is to develop cloud verticals: essentially, ecosystems with components built for a specific vertical or industry. In the healthcare vertical, an ecosystem might include a choice of EHR solutions, billing systems, claims systems and a patient portal. For SMB or mid-tier businesses, it might be an accounting system, email, file storage and a website. Remember that the ecosystem is not just a brokerage selling the solutions as a package. It is a comprehensive solution that is already integrated.

Bottom Line: Buyers are moving to buying ecosystems, especially with cloud services. The value of your solution comes from the value of your ecosystem.