One tech giant after another has waded into the AI waters over the past two years, but until recently Cisco’s AI strategy was a bit of a mystery to some. After attending the Cisco Partner Summit at the end of 2024, reviewing the company’s fiscal Q1 2025 earnings and having multiple conversations with Cisco executives, I now see clearly how the company has taken its time to settle on a product and market strategy. That strategy aims to capitalize on Cisco’s strengths across networking, security, data management and even compute to give the company a prominent place in hyperscaler and enterprise AI infrastructure.
Mind you, it’s early days. In the past quarter, the company made $300 million from AI-specific products, and it says it’s on pace to hit $1 billion in AI revenue in fiscal 2025. That’s a solid start, though for a company that has topped $50 billion in annual revenue for years now, it also leaves a lot of room to grow. Still, Jeetu Patel, who was named the company’s chief product officer last summer, has articulated a strategy meant to give Cisco a “platform advantage,” in which the company’s different offerings play together cross-functionally for any customer, hyperscaler or enterprise, looking to get the most out of AI in the datacenter. This strikes me as Cisco leaning into what it does best: connected platforms.
Building An AI Strategy On Cisco’s Existing Strengths
The example of platform advantage that Patel cited during his Cisco Partner Summit keynote was his preference for Apple products: He’s carried an iPhone for years, which plays nicely with his MacBook and any other personal electronics like an iPad or Apple Watch. (I do the same thing with Samsung Galaxy products.) The point is that being thoroughly invested in one or two of those products makes it likelier that you’ll choose other products from the same maker.
Obviously, the math and the purchasing decisions are quite different in enterprise B2B technology, but Patel makes a strong point about how well Cisco is positioned for this approach with enterprise AI. Cisco’s enduring stature in networking seems impregnable; it’s been a major player in cybersecurity for decades and is showing renewed vigor in that area; its reach in data management (including observability and security) has only grown through its acquisitions in recent years — most notably the purchase of Splunk; and its ambitions in high-end datacenter compute are at least plausible. Importantly, many enterprises are as tied to Cisco products in the datacenter as Patel is to his Apple products or I am to my Samsung gear.
That said, I was a little surprised to hear Patel say during his keynote, “We are going to double down on the compute business” and “We are in the compute business unapologetically.” From a platform perspective, it absolutely makes sense to provide a full stack, both for the technical ease of interoperability and for the business advantages of vendor loyalty. Plus, Cisco has traditionally been good at full-stack experiences. But I have never before heard this kind of talk about compute from Cisco.
There are limits to where Cisco could or should try to go with compute. In particular, the margins on compute are a lot lower than in any of Cisco’s other businesses, so in my view the company wouldn’t prosper if it tried to compete at scale in datacenter compute with Dell, Lenovo and HPE. Instead, I think it will be a matter of picking the right niches in which to compete, and of adding compute alongside other products where it makes sense to round out the offering for a specific customer. The company’s blades and its “better together” philosophy work well for customers who value that approach, and Cisco needs to find more customers who do. Enterprise AI is complex, so this kind of focus is smart.
Enabling Large-Scale AI Workloads
With all of that as context, two big infrastructure products Cisco has announced make a lot of sense for the AI datacenter market. The Nexus 9000 switch comes from the deepest part of Cisco’s core competence; it’s a highly scalable and efficient 800-gigabit switch already being used by hyperscalers. And Cisco is using its Unified Computing System approach — which combines compute, networking and storage in a single system — to offer a complementary server with eight Nvidia GPUs for AI training. I consider UCS one of Cisco’s major “easy button” offerings, and putting these two new products together supports Patel’s contention that “We are now in the AI infrastructure business.”
The most lucrative area in AI over the past two years has been training, but Patel and Cisco are clearly also focused on the growth of AI inferencing and enterprise AI application deployment, which should start ramping up seriously in 2025, likely in the second half of the year. No question, it was great to learn from the recent earnings call that Cisco has notched at least one big design win for hyperscaler AI and sees continued momentum in back-end networking for hyperscaler LLM training clusters. But Patel notes that something like $200 billion has been spent industrywide on AI training so far to yield something like $5 billion to $10 billion in revenue, which suggests that a bigger payoff is still to come. As enterprises, which are seldom the first movers in new areas of technology, transition from experimentation to mass deployment of AI apps that genuinely move the needle, we can expect far more revenue to come from inference and enterprise AI in general. Cisco is positioning itself to capture a meaningful share of the IT spending that will drive that revenue.
In line with this, Cisco is embedding AI in all of its products. As Patel told the audience of Cisco partners during his keynote, “You should expect that every product that we build . . . has AI built into the fabric and the way we think about building the product. It’s not an afterthought.”
The other “easy button” from the Partner Summit is Cisco’s line of AI PODs for inferencing: plug-and-play infrastructure stacks, in this case leveraging Nvidia software, that are configured for specific industries and use cases. As with UCS, these products combine compute, networking and storage, and they add cloud management functions on top. They’re built to be scalable and very quick to spin up, which should increase their appeal for enterprises.
One other thing struck me when my business partner Daniel Newman and I interviewed Patel for a Six Five on the Road segment: He clearly understands the underlying opportunity in the vast, untapped bodies of enterprise data not yet being used to train AI models. It seems like forever that I’ve been reminding people that the large majority of enterprise data, maybe 80%, is not in the cloud and not available to LLMs trained on publicly available data. This is the data that smart early adopters among enterprises are using to tailor and fine-tune their in-house models. This is also the data that smart vendors from SAP to ServiceNow to AWS to Microsoft to IBM are helping their customers harness for AI. Cisco now seems intent on handling what Patel calls the “boring” functions, such as back-end networking and model security, that will enable many more enterprises to make the most of this data with AI.
Selling Picks And Shovels To AI Gold Miners
Given Cisco’s size and the complexity of its portfolio, there’s a lot more I could say about its efforts across AI training infrastructure, AI connectivity and AI inference. At some point I may also do a deeper dive on its Silicon One initiative, which helps Cisco incorporate its own silicon into its equipment. That is likely to grow more important over time in the AI sphere, because neither OEMs nor end customers are completely happy with a market in which just one or two semiconductor makers (read: Nvidia, with a side of AMD) dominate the scene.
The more important thing for now is to understand the canny way that Cisco is positioning itself in the AI market, especially for enterprise customers. Patel is fond of pointing out that the people who reliably got rich during the California Gold Rush were not the prospectors and miners, but the suppliers selling picks and shovels. We have already seen that dynamic play out with the hyperscalers, and I believe it will also hold true in the enterprise datacenter. The implication of the analogy is obvious: Whatever the hyperscalers or, increasingly, enterprises do or don’t achieve with generative AI, Cisco can capitalize on those customers’ need for fast, scalable, secure networking architecture that supports AI training, connectivity and inferencing. For a company of Cisco’s size, the challenge is always execution, but there’s no question in my mind that the opportunity is there.
While I’m still not sure how material Cisco’s efforts in compute will be to its AI strategy (I need to see some real-world results before I’m a believer), I have to give the company credit for a smart strategy that builds on its existing portfolio and relationships, and on clever moves from years past. In particular, I would call out the wisdom of its decision years ago to integrate Nvidia AI into Cisco’s endpoint collaboration equipment. (I have one of these devices sitting right beside me on my desk as I write this.) The quality, security and user experience that Cisco offers are solid, and Patel is right to say that he regards quality as “priority zero,” meaning he intends it to be such a given that people don’t even talk about it anymore.
Given its acquisition of Splunk, and given that data management is the top inhibitor to enterprise AI, I would like to see the company double down on a full-scale data management platform. By “full-scale,” I mean one that could compete with Cloudera, Databricks and Snowflake.
As an ex-product guy, I’m biased, but I do love Patel’s product philosophy. He wants to build great products that customers love, and he wants Cisco to focus on the “10x” market opportunities where it can run laps around the competition. Following the lead of his CEO, Chuck Robbins, he also knows that “Tempo matters.” In other words, now that Cisco has taken the time to get its AI product mix and market approach right, it intends to move fast. I think 2025 will provide Cisco with lots of opportunities to prove the wisdom of this approach.