We are LIVE! Talking insights into Intel’s 4th Gen Xeon family of processors with Lisa Spelman

By Patrick Moorhead - March 15, 2023

On this episode of the Moor Insights & Strategy Insider Podcast, host Patrick Moorhead is joined by Lisa Spelman, a Corporate Vice President in the Data Center and AI Group and general manager of the Xeon and Memory Group at Intel Corporation. Their conversation covers:

  • Exploring the real game-changing features revealed at the 4th gen Intel Xeon Scalable Processors Launch
  • How the new 4th gen processors can deliver breakthroughs in areas like elastic computing models, such as containerized microservices and the expanding use of AI
  • How Intel is enhancing data security, regulatory compliance, and data sovereignty
  • What customers can expect with the 4th Gen Xeon processors compared to previous generations
  • Strides made in terms of performance per watt efficiency and how customers will benefit from what Intel has done with built-in accelerators in the 4th Gen Xeon
  • How 4th Gen Xeon is going to help us get to a more software-defined future across industries ranging from telecommunications and retail to manufacturing and smart cities

This is a great conversation you won’t want to miss!

Be sure to subscribe to Moor Insights & Strategy, so you never miss an episode.

You can listen to the episode here:

TRANSCRIPT

Patrick Moorhead: I’ve been traveling a ton, but-

Lisa Spelman: Seems like you’re totally back in it.

Patrick Moorhead: I’m totally back in it, yeah. Okay, let’s do this. Five, four… Hi, this is Pat Moorhead with Moor Insights & Strategy. And we are here for another Moor Insights & Strategy Insider, where we interview the most influential executives from the most important technology companies on the planet. And today we have Lisa Spelman with Intel. Lisa, how are you?

Lisa Spelman: I’m great, Pat. It’s good to see you, and thank you for having me.

Patrick Moorhead: Absolutely. You’ve been on a lot of stages lately. In fact, even with your partner events, I’m pretty sure there’s going to be a Lisa sighting. So we’re just so happy to have you on to do the post-game report from the latest Xeon that came out. And I have to tell you, Lisa, and we talked about this in the green room, not that I base everything on social media interactions, but that is part of the market education that industry analysts have out there. We have reports, we have social media, I have Forbes articles, and I actually got more play on your announcement on social media than I got on what I consider the most exciting consumer announcements at CES. So that says something to me, something is happening there.

Lisa Spelman: Well, that’s cool. It’s good to hear. But I’m most relieved that you haven’t started to base your whole life on your social interactions and you’re still holding credit to the real ones. But I do think there’s a bit of pent-up demand for the information, for the product, and for all the news that got out there.

Patrick Moorhead: Yeah, absolutely. And by the way, some people do live under rocks and they may not have seen it. But I think maybe the best way to spend our time is we can talk about maybe some things that they didn’t see in the announcement. So the first thing is to put this on pause and then go watch Lisa on the announcement reel on YouTube.

But, no, seriously, what are some of the real game changers or even the hidden gems that maybe people didn’t see or didn’t know about in the announcement? You spent years working on this product and, fortunately or unfortunately, it gets condensed into a one-hour-long video.

Lisa Spelman: Yeah, it is. And we also have to make choices. And as you may have seen, we really wanted to highlight our customers and give them the opportunity to tell their story. Because everything that we did, all of the thought process that went into building the 4th Gen Xeon was around customer requirements, and how to-

Patrick Moorhead: Yes.

Lisa Spelman: … address those super-fast-growing workloads and those largest workloads, which is where we hear customers are running into bottlenecks. And I have a couple examples of what you were calling the hidden gems, but you’ve heard us talk a lot about artificial intelligence, and who doesn’t, right? It’s the fastest growing workload. It is pervading every single industry. It’s everywhere. And we talked about how we’ve integrated AMX to accelerate all sorts of artificial intelligence algorithms and models. We have all the frameworks updated, and you’re getting these huge performance gains. But, again, going back to those hidden gems, what we didn’t talk about was the next generation of advancements we’ve made in AVX-512.

So that has been in our hardware since 2017, and at first it was really focused on high performance computing, but it does require software changes. And as we know, software moves very fast and very slow depending on how you’re looking at it, and getting an entire ecosystem up to date with all of the hardware capabilities can take time. So we work to continuously improve not only the software but also the hardware underneath it, so that you continue to see this kind of step function improvement.
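
For readers who want to see what that enablement gap looks like in practice, here is a minimal sketch, assuming a Linux host, of how software discovers these instruction set extensions before dispatching to an accelerated code path. The flag names are the ones recent Linux kernels expose in /proc/cpuinfo.

    # Minimal sketch (assumes a Linux host): read the CPU feature flags the
    # kernel exposes, then gate accelerated code paths on what is present.
    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("AVX-512 foundation:", "avx512f" in flags)  # in Xeon since 2017
    print("AVX-512 VNNI:", "avx512_vnni" in flags)    # inference extension
    print("AMX tiles:", "amx_tile" in flags)          # new in 4th Gen Xeon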

And so with AVX-512, we’ve not only seen high performance computing use cases take off over the years, but we saw some really great performance benefits in this generation across a much wider variety of use cases. So we’ve got AI being impacted, databases, and even now our 5G support. And I actually have a couple of those 5G hidden gems as well.

Patrick Moorhead: No, I love that. And by the way, I’m super glad you pointed that out, because I didn’t know that either. You had so many different accelerators. I mean, AVX-512 was in there, but part of the beauty of AVX-512 is that it’s programmable and you can make it do different things. Now, it is a really large investment from a die-area perspective that you’re putting in there, and I’m glad you’re getting more out of it from more use cases.

So, gosh, I think I’ve watched every Xeon launch for the last 15 years probably, and what has changed and what is new. You had talked about certain use cases, elastic computing models, for instance, not just in the big cloud, but even on-prem today as a service. But you also put some effort into microservices with containers, which are really the future of the transport for applications. And you also talked a little bit about AI. Can you talk about how you’re tackling those?

Lisa Spelman: Yeah, it’s a good one, because we talk a lot about the workloads, whether that’s 5G, whether it’s artificial intelligence, high performance computing, things like that. But this move towards a microservices foundation is a pretty big focus for us because of that cloud consumption relevance. And we see so many of our cloud service providers focusing in this area in the way that they offer services, in the way that they provide infrastructure, in the way that they may provide their own SaaS or PaaS services themselves. So it has become a more important area for us to invest and to really put in some of that acceleration.

We didn’t talk too much about our Data Streaming Accelerator, but this is an example of a technology that we built in that boosts your I/O speed, and it can do that boosting up to 60%, while also driving down latency. So that’s a really impressive double whammy to get when your speed goes up, your latency’s going down, and you start to see that show up in the performance of those microservices. So that is the type of work and type of example that our fellows and our senior fellows, in their engagements with our customers on the technical side, have those deep, deep conversations about. Literally, what is happening with every single one and zero in your infrastructure? And how can we think through optimizing that?

We talked a lot about QAT as it relates to encryption and as it relates to improving security and network traffic, but you also see it in cloud workloads delivering a 2.3x speedup in database backup time, for SQL Server as an example. And the ability to more efficiently chomp through your backups and get back to performant application service is really important for a lot of standard enterprise applications.

In-memory databases were another focus for acceleration, increasing performance for RocksDB, as an example, by 3x. But almost as important, and for some customers more important, was a 2.2x higher performance per watt for that RocksDB example using IAA. And so when you think of driving the performance up and improving the performance-per-watt experience for our customers, it makes the idea of upgrading infrastructure go from interesting to necessary, because of just the amount of data coming in and the need to scale to support it.

Patrick Moorhead: Lisa, how, maybe real is the wrong word, but it does take a process where you have to get the source code to take advantage of the acceleration, then you have to get your customers to take advantage of that certain piece of software that has that acceleration. Where are we on the map of, we’re 70% there, 80% there? Across like 12 accelerators, I know this is a beefy question, but it’s a question that I get which is, “Hey, how real is this? How quickly can I take advantage of the advantages here?”

Lisa Spelman: It depends. So when I look at a brand new accelerator versus one that we’ve been developing on for a while, so again, going back to that AVX-512-

Patrick Moorhead: Yes.

Lisa Spelman: … when we started that we were trying to get 10% of applications that would benefit from it enabled and ready to go. And the good news was the performance benefit was so strong that the ecosystem started to work with us. So it’s on us to prove it-

Patrick Moorhead: Yes.

Lisa Spelman: … and then make it easier for our customers or partners or software vendors to pull through. And so it varies. For accelerators like DSA or IAA that I was talking about, we start with very specific use cases and try to do the work to really prove that out in that use case, so that customers can understand it and they can start using it. And then we extend, so kind of the land and expand.

But if you look at something like those microservices improvements we were talking about, when you look at some of those, we had some examples around hotel reservation systems and social networks and getting 60, 80, 90% performance gains, gen on gen, and those are just available. That’s not requiring a bunch of software work, that’s just out there. You can prove it with industry standard benchmarks like DeathStarBench, which tries to be a little bit more comprehensive and look at a more holistic system performance. So that’s an example of ones that are ready to go, and the performance is there. Other ones can require a bit more ecosystem readiness.

AMX, our AI accelerator, is one that I’m really excited about because of that multi-year journey we’ve already put in to establishing-

Patrick Moorhead: Yes.

Lisa Spelman: … that Xeon foundation for inference and the capability with training. So that’s where, if AMX was our first investment in AI acceleration, we wouldn’t be anywhere near where we are today, but because it’s generations’ worth, we actually have a ton of the frameworks done. We have over 400 models ready to go. I mean, it’s usable, it’s delivering out-of-box performance. And we’ve gone from our first investments, which required ninjas, to just standard performance available to the user.
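
To make that out-of-box point concrete, here is a minimal sketch from a framework user’s perspective, using PyTorch on CPU; the toy model is hypothetical, not something from the launch. Running inference under bfloat16 autocast is all the code it takes for PyTorch’s oneDNN backend to route the matrix math to AMX on a 4th Gen Xeon.

    import torch
    import torch.nn as nn

    # Hypothetical toy model; any nn.Module works the same way.
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
    model.eval()
    x = torch.randn(32, 1024)

    # bfloat16 autocast on CPU: on a 4th Gen Xeon, oneDNN can dispatch these
    # matmuls to AMX tile instructions with no further code changes.
    with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = model(x)

    print(out.dtype, out.shape)  # torch.bfloat16 torch.Size([32, 10])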

Patrick Moorhead: No, that’s great. One area that, and again, there’s only so much you can hit in a one-hour keynote, I didn’t hear a lot about security and zero trust, but I know that Intel is putting a tremendous amount of investment there, and I see in particular your cloud partners setting up, “Hey, this is a zero trust version of this.” What did you bring that was new with security, with regulatory compliance, with data sovereignty, above the cryptography acceleration that you talked about?

Lisa Spelman: We talked about confidential computing as a service that some of those cloud service providers are starting to bring into the environment and really building out that market. We didn’t go into as much detail on the foundation underneath, but security is a multi-year journey as well. And I think where we’ve settled with our strategy, we’ve been able to find a lot of alignment with our customers and the ecosystem around that foundational baseline of hardware investment in order to provide the base layer. And then giving room in the stack for additional services and capabilities to be built on top, to deliver essentially that full stack solution for customers.

And so we have a multi-year journey around that focus of protecting data at rest, in use, and in transit. And we’ve had this foundation of SGX, or the Software Guard Extensions-

Patrick Moorhead: Correct.

Lisa Spelman: … and it’s our most trusted security feature. It is very, very focused and has a lot of users out in the environment, but it also does not deliver that full VM isolation. And so in this generation we brought our first offering of the Trust Domain Extensions, or TDX.

Patrick Moorhead: Yes.

Lisa Spelman: And this is new. And we have four cloud partners that are starting out and including it as part of their confidential computing offerings: Microsoft Azure, Alibaba Cloud, IBM Cloud, and Google Cloud. And with TDX, you get a confidential VM: your guest OS and your application are isolated from the cloud host, from the hypervisor, and from other VMs on the platform. And that can be really important for customers that are renting out individual VMs and are managing their workload in a very dynamic type of environment.

From our perspective, we think that this layering of capabilities is what customers require. I think of it like a menu to some extent, where you can build up your layers of security based on your view of that application’s criticality, and really the data’s criticality underneath it. So customers are getting more and more choice about how deeply they want to protect each piece of their data.
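
As a small illustration of that isolation from the guest’s side (our sketch, not something covered at launch): recent Linux guest kernels surface a tdx_guest CPU flag inside a TDX trust domain, so a workload can verify it is actually running under TDX isolation before handling sensitive data.

    # Minimal sketch (assumes a recent Linux guest kernel): check whether
    # this VM is running as a TDX trust domain via the tdx_guest CPU flag.
    def in_tdx_guest() -> bool:
        try:
            with open("/proc/cpuinfo") as f:
                return any(
                    "tdx_guest" in line.split(":", 1)[1].split()
                    for line in f
                    if line.startswith("flags")
                )
        except OSError:
            return False

    print("TDX confidential VM:", in_tdx_guest())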

Patrick Moorhead: Gosh, that’s great, Lisa. And as an analyst firm, we cover the services that the hyperscalers bring out, and this is not an easy thing to do. And while there are industry standard organizations out there that say, “Hey, this is something that’s compliant,” there still are a lot of different versions out there. So in the end, different cloud providers will have different versions, and that’s not necessarily a bad thing. It gives them the ability to differentiate. And I like it.

We started the conversation talking about one of the key acceleration points for 4th Gen Xeon Scalable, and that was AI. You had AI acceleration, I think, for the first time in the previous version, third generation. Is this year’s just bigger, better, faster, all of the above? What have you brought to the table here?

Lisa Spelman: A lot. So our first generation-

Patrick Moorhead: And you did talk AMX, so I’m paying attention.

Lisa Spelman: Yeah, I know. Our first generation actually started coming out in 2017, and it’s kind of a fun backstory. Because you said you’ve been to every kind of data-center-focused launch, or paid attention, for 15 years. I said this is my eighth Xeon launch. But it’s fun to look back at what at the time was a very stressful decision, honestly. So myself and some of our lead technologists and planning folks, back as we were getting products ready, planning them for that 2017 and beyond, had a pretty big choice to make about whether we were going to invest in hardware-based AI acceleration on Xeon. And at the time the industry was starting to wake up to the potential-

Patrick Moorhead: All about the GPU or something, right?

Lisa Spelman: Yeah, no, the potential for AI was growing, and the GPU was set as the standard. So you really have this question, Pat, around whether you are chasing fool’s gold-

Patrick Moorhead: Yes. Yes.

Lisa Spelman: … by trying to offer it on a CPU, which at the time, I have to say, was viewed a little bit skeptically, is this really where you think the market will or could go? And you’re making big decisions, because, and you know this as well as anyone, you’re adding die size, you’re taking up space that could be used for something else-

Patrick Moorhead: Yes.

Lisa Spelman: … and you’re going to pay for it whether or not your customers use it.

Patrick Moorhead: That’s right.

Lisa Spelman: So we made that choice and integrated it. The first one was VNNI, that first true hardware-based extension for artificial intelligence acceleration. And that is one that was really focused on inference acceleration, but it did require that ninja-type work. We were not standard in those key frameworks. You use TensorFlow, TensorFlow was built to recognize a GPU. It didn’t consider a CPU. And that’s been this journey that we’ve gone on. And I’m really proud of what we have accomplished when I look at the fact and the reality that the Xeon CPU is the foundation for the industry’s inference. That doesn’t mean there isn’t some inference done on other pieces of silicon, it absolutely happens.

Patrick Moorhead: And, Lisa, I’ll even go one step further. Even when GPUs were going to consume the Earth on AI, which for a lot of areas they have, and you have your set of GPUs and accelerators still, at that point, and this is just an undisputed fact, more inference was done on CPUs than on GPUs.

Lisa Spelman: Yep. It’s a more affordable way to do it, and it’s built more into the workflow of inference versus that need to go offload. So it was a kind of walk down memory lane of that big decision, and once we did it, it became a bit of a standard for Xeon. And that’s really gratifying, to have seen the software ecosystem come along with us to the point that AMX is understood by default in the main branch of the most important and most utilized frameworks.

I know that sounds maybe like an obvious statement, but that represents so much work to have multiple options for customers. So of course they can still work on GPU-based training-

Patrick Moorhead: Sure.

Lisa Spelman: … but now they have another option, especially for models that are a little bit smaller. And if you think about it, ChatGPT as the kind of buzzy biggest model… It’s big, right?

Patrick Moorhead: It’s big.

Lisa Spelman: It’s big. That might be an example where that specialized training is absolutely worthwhile, but the vast, vast majority of models are not at that level, like that size and sophistication.

Patrick Moorhead: So just to clarify, you’re saying people will train on the new fourth gen leveraging AMX.

Lisa Spelman: Yeah, I mean-

Patrick Moorhead: Because with third gen it was really an inference story, where you want the lowest latency on a certain model size, but for fourth gen, training, too.

Lisa Spelman: Yeah. I would say over the-

Patrick Moorhead: Cool.

Lisa Spelman: … generations, we’ve made some mega improvements in the hardware and software for inference, and we’ve made some good but more minor improvements in training. Whereas AMX brings a lot of training capability to the table on CPU. It starts to approach that performance level of a modern GPU. And when I say that, I don’t mean the latest generation, I don’t mean the top-of-the-line-

Patrick Moorhead: Yes.

Lisa Spelman: … offerings, but I mean ones from the last couple of years. And it does that at an incredible performance-per-dollar, per-watt value. So you have the option to utilize your existing infrastructure. I don’t see customers going out and taking large models and things they’ve already got built for GPUs and-

Patrick Moorhead: Of course.

Lisa Spelman: … switching them over. But I do see new greenfield AI opportunities having the chance to be not just inferred on Xeon, but also actually trained on Xeon as well.

Patrick Moorhead: And I think we have seen over time, and I guess I’m going on year 32 in the industry here, that there were elements that were completely disaggregated, and then they were pulled into the processor itself, like a floating point unit. So it’s natural to think that it would be the same thing with AI, especially with the most popular type of workloads, where you can lay down a specific block that just does AI. So it makes perfect sense to me, and it’s really aligned with the historical sucking sound of things getting pulled in.

No, but it just makes sense. I mean, with Moore’s Law and the ability to pull more into the same die area, it just makes sense. To me it adds an element of order. So thank you for doing this. And I don’t feel like you’re going to have to convince as many people as you had to three years ago that this capability is there. And I think that knowing that you’re not just going to stop here, that you’re going to keep moving forward, and that you also have your own GPU line, you have your own, I’ll just call it fixed-function AI line, to be able to do the biggest models if you want. I think that’s important, too, even though it wasn’t really discussed at your event.

Lisa Spelman: And Pat, just building on what you said, you called it the giant sucking sound of the CPU, but you have your own experience in this space. CPUs are often described and considered as general purpose and that continues to be true. Xeon CPU can do anything, but AI is becoming general purpose. It is not-

Patrick Moorhead: That’s right.

Lisa Spelman: … this unique thing that is held for just a very few small users in the world. I mean it’s every industry, it’s every workload, it’s being built into workloads. So I think that’s kind of that tipping point that we’ve reached in this space, so [inaudible 00:25:07].

Patrick Moorhead: Well, and it takes a full almost a decade to get-

Lisa Spelman: It does.

Patrick Moorhead: … everybody to do a giant reset. And between third gen and fourth gen, we’re homing in on that. So I love AI. I like that that was where you came in on the stage at the event, and it’s good to see there’s different ways that people can accomplish it.

One big element, we talked a little bit about buzzwords, but sustainability is talked about a lot. I mean, whether you hate it or you love it, it’s here. There are some employees who want to work for a company that is behind sustainability. There are boards of directors who are being held to account on certain scores related to… So what I always tell people is, “Hey, even if you can’t stand it, it’s here and it needs to be addressed.”

And one thing I think we can all agree on is doing more with less. We can all get around that. What types of gains did you make on 4th Gen Xeon Scalable related to, let’s say, performance per watt? I would have to think that the use of accelerators, which is always the best performance per watt, comes into play as well.

Lisa Spelman: You said sustainability is also pervasive. I am heading to Europe here soon and I noticed that my flight has a carbon emissions score to it. So it’s everywhere and there’s a lot of consumer as well as business education around it. So those who ignore it and don’t pay attention do so very much at their own risk, and we recognize that. And there’s the element of course of the desire to do better and be better. And that has driven a lot of Intel’s investments in our manufacturing. All of our efforts to get to that net zero manufacturing, all the work we’ve done, not just on reducing carbon emissions but improving our wastewater management and the work that we’ve done on conflict-free minerals.

I mean I genuinely believe Intel stands out above and beyond others in the industry for our leadership in this space, and now is the opportunity to take that into the products. And not just because it does matter for the Earth, for citizens of the world, but because it is truly a business requirement.

Patrick Moorhead: Yes.

Lisa Spelman: And we talked absolutely about what you’re saying, where the accelerators offer this step function of improvement. Because you are taking generational gains, and if you’re just relying on IPC gains in each generation, say you get 20 or 30% or something like that, it’s really hard to get a 2x, 3x, 10x on IPC alone. So acceleration allows you to do that, and then you’re spreading that new performance level over that same power envelope, which is great.
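
The arithmetic behind that point is worth a quick sketch, using assumed round numbers rather than Intel figures: even a healthy 25% IPC gain per generation takes nearly five generations to compound up to the 2.9x average gain she cites next.

    import math

    # Assumed round numbers, not Intel figures: 25% IPC gain per generation
    # versus a 2.9x average perf/watt jump from built-in acceleration.
    ipc_gain_per_gen = 1.25
    acceleration_jump = 2.9

    gens_needed = math.log(acceleration_jump) / math.log(ipc_gain_per_gen)
    print(f"Generations of IPC gains alone to match: {gens_needed:.1f}")  # ~4.8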

So we bucketed some of just the most popular workloads, and we said that on average, using both acceleration and just the general performance gains, you improve your performance per watt by 2.9x, and that would’ve been very hard to do without that built-in acceleration. But one of the things we didn’t talk about at launch, and again you should see me cry over the cutting room floor as we work on these things, is something that I consider one of my babies. And you know me, I don’t really invent stuff, but I work a lot with the teams and the super smart people that can.

And we were having these great discussions following up from some of our customer reviews about where we were at with the product, and we came up with this idea of an optimized power mode. So in prior generations, for some of our biggest customers, we have offered this: we’ll write technical papers and go in and help them take down the idle power or active idle power for their solution based on their environment. And we said, “Oh, here’s all the BIOS knobs, here’s the tweaks and tunes you need to do,” to help them manage it. So they have been getting better than off-the-shelf spec sheet performance per watt for generations.

But we had this thought that we are actually at a place of automation and capability where we can offer this much more broadly. So now we’ve come up with this optimized power mode, the ability to do some more automatic tuning of BIOS and settings, and customers can make that choice. And instead of having to do 20 different things, it’s just an on/off. And so for certain workloads you see that you can reduce the power consumption by up to 70 watts per-

Patrick Moorhead: I saw that.

Lisa Spelman: … processor, and then you get a very minimal reduction in performance. So say you get a 5% reduction in performance, but you bring down your two-socket system by 140 watts, that can be really outstanding, especially for a workload that maybe isn’t in super high performance demand at all times.
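
Those numbers pencil out to a meaningful efficiency gain. As a back-of-envelope sketch, assuming a 350 W TDP per socket (our assumption; top-bin 4th Gen Xeon SKUs are in that range), a 5% performance haircut against a 140 W two-socket saving nets out to roughly 19% better performance per watt at the CPU level.

    # Back-of-envelope math on optimized power mode (TDP figure is assumed).
    baseline_power = 2 * 350          # watts: two sockets at an assumed 350 W TDP
    baseline_perf = 1.0               # normalized

    opm_power = baseline_power - 140  # "70 watts per processor" savings
    opm_perf = baseline_perf * 0.95   # "say you get a 5% reduction"

    gain = (opm_perf / opm_power) / (baseline_perf / baseline_power)
    print(f"perf/watt change: {gain:.2f}x")  # ~1.19x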

Patrick Moorhead: Yeah, I’m really interested in getting underneath that a little bit. It’s something I wasn’t expecting, which means that I liked it, and it’s super provocative. Because, I mean, you wonder how many different ways there are to optimize power, and yet again you found a way to make that happen. I’m really interested to see how customers are going to use this as well.

One other thing I noticed that I didn’t necessarily expect is that I saw different derivatives of 4th Gen Xeon Scalable for vertical areas or specific workloads. We saw one for telecommunications, we saw one for HPC, and you’re also doing some tuning and optimizing for verticals like retail and manufacturing. Can you talk a little bit about this? Again, everything can’t get the play, but it’s funny, in the end everybody buys vertical. Even though the industry might like to talk about, “Hey, this horizontal play,” you’re making optimizations for not just specific workloads but specific industries as well.

Lisa Spelman: Oh, I mean, you’re right. It’s so easy for me to say things like, “AI is everywhere and it’s in every industry,” but online retail really mostly cares about a recommendation engine to make sure that they’re feeding you the next thing. So everyone cares about it in so much as it delivers a result for their business.

And we’ve just taken our generations of knowledge about our customers. So at Intel, obviously, we call on and work so closely together with our major OEMs and our major cloud service providers and major ISPs and ISVs and we take all their feedback. But we also have this awesome end customer sales force that’s out talking with the high frequency-

Patrick Moorhead: Yes.

Lisa Spelman: … traders and the healthcare and manufacturing and across all these industries, and providing us that visibility into who’s actually using what. And it gives us this ability to tweak and tune SKUs. And we’ll do it sometimes even to help an industry match a licensing model of a-

Patrick Moorhead: Of course.

Lisa Spelman: … very important software provider in that industry.

Patrick Moorhead: Yes. Who might that be? Okay, we won’t mention that.

Lisa Spelman: There’s some options out there. But that’s how you deliver value that goes so far beyond just a benchmark or a core count comparison. And I know those are out there and they exist and it’s fine, but that’s just not the same focus as being able to point to the return on investment and the time to value that you can deliver with and without Xeon.

And the network SKUs have been a really cool journey as well. And this is one where I don’t consider myself a network expert. I consider myself a big fan, but I work really closely with a bunch of the network experts. And I think partially built off of the learning of that AI journey I was telling you about, it built my confidence to make decisions on betting on network acceleration and starting to include that. I mean I covered the network in my talk at launch, but we didn’t even touch on vRAN Boost.

Patrick Moorhead: Yes.

Lisa Spelman: And when you look at how hard we work to help set the standard for 5G, and then building vRAN acceleration into the SoC, into the processor, on top of our generations of work in the Data Plane Development Kit, or DPDK, that software layer, you start to get super impressive performance gains. And you start to see the ways in which AI and machine learning actually fold into network operations. So it really does come together. And like we mentioned about verticals, it is how people buy. They don’t need a generalized story. They want to hear the proof point that speaks to their challenge.

Patrick Moorhead: That’s right. And, sure, they want a horizontal price, but vertical delivery.

Lisa Spelman: Yeah, that’s well said. I like it.

Patrick Moorhead: And by the way, the play that Intel has been running for a decade inside of carriers, I mean, that is proof positive that this is working. I remember a day there were five architectures inside of a carrier, and now there’s two and then there’s acceleration. But Intel has done very well in the digital transformation of carriers moving them from this monolithic architecture into this as a service, with, my gosh, containers and VMs and ways to deliver services. But I think we’re going to just see more of this, this specialization. I really do.

Now, out of the other side of my mouth, I’m going to say, “Hey, the industry needs this horizontal volume play to keep costs down and get things moving.” But even the way that packaging is done now, it’s a lot less expensive to put some of the special sauce on different parts. And then when you add things like AVX-512 that are programmable, that you could do different things with, it’s really up to the developers to figure that out. So exciting stuff here.

Lisa Spelman: It is. And I mean, I agree, it is this push/pull of that specialization versus general purpose and the high volume economics delivering across each workload. I’ll tell you, I get more ideas for acceleration that could be brought onto Xeon and into the Xeon chip than I know we have space for or can get value for. And I have sharpened the pencil on what the criteria are to make that jump, because you can’t do it speculatively all over the place or you end up not servicing the right workloads.

And I think some of the area where we’ve improved in this is our ability to test some of this out in software first, and in proof-

Patrick Moorhead: Yes.

Lisa Spelman: … of concepts, so that you build confidence and understand the market size that you’re going to address. And then I think, even within that whole environment of continued acceleration and volume economics, yes, this move towards disaggregated options and opportunities and systems architectures will continue, and that will continue to evolve.

I think it kind of goes back to where I started when I was talking about AVX-512. This industry moves so fast and yet also quite slow sometimes. So it’s like we got to stay on the cutting edge, but then make sure that when we’re on that edge we are getting enough broad deployment to make the value really show up.

Patrick Moorhead: The first time you briefed me on the potential of the disaggregated and more modular architecture, I got super excited. And then I’ve thought about it for a long time and I recognize that there still also has to be a spreadsheet that says, “This makes sense.” Even though it is less expensive than a monolithic design with some custom accelerators, it’s still a very expensive thing to do and can’t be done without a tremendous amount of analysis.

And I think what you have done overall at the company with the 3D packaging, Foveros, the way that you integrated different elements, I do think is big. And now with UCIe… Finally, we have a standard, right?

Lisa Spelman: Yeah.

Patrick Moorhead: That almost everybody is participating in. That’s super exciting to me. If nothing else, it also could potentially enable a huge boom in companies who might be able to do some of these specialty types of accelerators.

So I’m super excited about it. And, Lisa, I don’t think I’m being dramatic, even though I do love drama. I mean, this was one of the most important product launches in the company’s history, certainly in the last 10 years. I mean, if I look at the 30-plus years I’ve been tracking, hey, I was an Intel customer once. I was an Intel competitor. I’ve been in and around Intel for over 30 years. But this was a big one.

And I’m super excited to see what your developers do with this, your customers do with this. The ISVs and how they rally around to do this. Because quite frankly, Intel is still the lion’s share of what we see in the data center and on the edge right now. So I’m excited and I just want to thank you for coming on.

Lisa Spelman: Well, thank you for having me. I mean, you’re right, it was so great to have Pat there, part of the celebration and all of that. It was so great to have our customers there, and to be this early in the ramp and already have our customers bringing their customers to the table, sharing the results that they’re seeing in their early deployments and early testing.

So I’m with you. I’m excited for everything that’s going to be built on top of this foundation. And from here we’re just going to continue to accelerate, no pun intended, our beat rate of innovation and delivery for our customers. So I’m looking forward to it. It’s an exciting time.

Patrick Moorhead: Yeah, thanks again. We would love to have you on, or somebody in your group, to talk about how it’s going in, I don’t know, six months to a year.

Lisa Spelman: Yeah.

Patrick Moorhead: Would love to do that.

Lisa Spelman: Yep, we’ll do it.

Patrick Moorhead: This is Pat Moorhead with Moor Insights & Strategy, with Lisa Spelman from Intel. I want to thank you for tuning in, and if you like what you heard, hit that subscribe button. If you’d like to give me feedback, you know where to find me on social media. I spend way too much time on there. My New Year’s resolution was to spend less time on there, maybe more time reading. But I can’t help myself, I love the interaction out there. But thanks again. Thanks for tuning in. And Lisa, thank you for coming on the show.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and the understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high tech companies (NCR, AT&T, Compaq (now HP), and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.