Cisco AI Summit Expands and Elevates the Enterprise AI Conversation

Today, Cisco held its AI Summit in San Francisco. To say the summit was excellent would be an understatement! It featured a day-long series of interviews with an incredibly impressive lineup of who’s who in AI. From OpenAI’s Sam Altman to Intel’s Lip-Bu Tan to AWS’ Matt Garman, Cisco’s Chuck Robbins and Jeetu Patel did a masterful job of interviewing each leader and elevating the conversation in ways that will squarely resonate with enterprise leaders.

Hitting key messages from the start

Cisco CEO Chuck Robbins kicked off the summit, with most attendees viewing it virtually. Robbins’ key message on AI centered on trust, a theme that would return multiple times over the course of the day. Trust, with both customers and data, is also a topic I regularly hear come up among CIOs. As AI moves from efficiency to customer and employee engagement, trust has become a mainstream topic.

After a few opening words, Robbins passed the baton, and much of the interviewing, to President and CPO Jeetu Patel. In Patel’s opening remarks, he touched on three core challenges for AI:

  • Infrastructure constraints
  • Trust deficits
  • Data gaps

Patel’s message was clear and direct: we all know there are significant opportunities with AI, but there are also challenges we must overcome to achieve those outcomes, and overcoming them requires thinking differently about AI. This set the stage for the series of major AI players to come.

Key Takeaways

It’s hard to distill the sheer number of truth bombs dropped throughout the day. Each participant built on the last and took the conversation to places many may not have considered yet. AI is a vast space with plenty to cover. That said, a few key themes stood out.

Infrastructure: There seems to be an insatiable appetite for ever more computing resources to power AI. Even so, AWS’ Garman mentioned that they are still running NVIDIA A100-based systems that are five to six years old and handle many AI workloads just fine. At the same time, resource constraints are increasingly starting to show, from natural resources such as power and water to computing capacity and chip manufacturing. There doesn’t seem to be enough supply to meet the demand. Anecdotally, I’ve seen a gap between built capacity and what enterprises are currently consuming, and a recent study has started to show this widening gap. Are we building to actual projected demand, or for consumption demand that may never materialize? In the meantime, there is a lot of opportunity, whether talking about infrastructure for AI or AI for infrastructure.

Data: At the core of AI sits data. It was mentioned that all of the world’s data has been fully consumed by AI today and that models are moving on to synthetic data. As a former operator, I find this a fascinating statement, as there is still data that has largely been untapped: the data in files and spreadsheets buried on a server or laptop. The problem is that this data is also largely unknown, hard to reach, and hard to consume.

Considering Cisco’s core business of networking and security, there is no shortage of new data. Digging further into networking for AI and AI for networking shows the power that the combination of AI and networking can have. Given the sheer amount of data generated at the network and security layers, AI is only now starting to open new possibilities for insights.

Trust and Governance: Both Robbins and Patel mentioned the importance of trust for AI to succeed. Part of that trust is rooted in ensuring that the right governance models are used. Just as users must trust the data that AI presents, customers also want assurance that companies (and AI) are not misusing their data. That is a very simple way to look at trust; we need further conversation about what trust really means and how governance will play out in ensuring it. Trust may seem black and white on the surface, but there is a lot of grey that leads to it.

AI Tooling: We have long known that AI has the potential to greatly impact how companies develop code. Patel shared a metric that at Cisco “70% of all code written for AI functions is written by AI.” Of course, humans still need to validate the output. Regardless, that’s an impressive stat for Cisco and an interesting benchmark for enterprise organizations.

People: Last, but not least, the conversation turned to people. While the opportunity to use AI more broadly is there, enterprises are still challenged to bridge the gap. AI is a disruptive force for people in both good and bad ways. So many are focused on the narrative that ‘AI will replace people’. Yet the real opportunity, as discussed today, is how people work with AI and how AI helps people. AI is not a 1:1 replacement for people or for SaaS, but it can augment and accelerate how we leverage both.

In summary

One thing to consider is that this was a Cisco summit on AI. Take Cisco off the event name, and you would have thought it was a Wall Street Journal or CNBC event, considering the caliber of the discussions. The level of thought leadership on stage, from the participants as well as Cisco executives, was impressive and made you think. If you had not seen it before, it’s becoming increasingly clear that Cisco both understands AI and is operating at a level beyond its infrastructure roots. This should give rise to broader conversations between CIOs and Cisco beyond just infrastructure.

From an AI perspective, one of the clear takeaways from the summit is that we need to think differently when it comes to AI. We need to rethink how we leverage technology from the ground up.

