MI&S Weekly Analyst Insights — Week Ending January 24, 2025
A wrap-up of what our team published during the last week.


Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

I’m not surprised that 2025 has started right where 2024 left off, with AI dominating conversations across tech. The Trump administration has only added to this with the announcement of the $500 billion Project Stargate for AI, which our analysts Paul Smith-Goodson, Matt Kimball, and Will Townsend evaluate from different angles in this week’s updates. Look for more about Project Stargate from us in the days to come.

Cisco CEO Chuck Robbins at the opening session of the Cisco AI Summit (Photo by Will Townsend)

Last week, Will also published his thoughts on the Cisco AI Summit, which he called “the best AI event I have attended to date.” (I’m not surprised; earlier this month I wrote about how thoughtful Cisco’s approach to enterprise AI has been.) Meanwhile, Anshel Sag wrote about NVIDIA’s latest graphics card for gaming PCs, the AI features of which make it, he says, “without a doubt the fastest graphics card in the world.” It’s a good reminder that even though datacenter GPUs are now the biggest financial engine for NVIDIA, the company has never abandoned its roots — nor lost its dominance — in PC graphics.

The team published a lot of research last week and did some travel as well. While I was in Davos, Anshel attended Samsung Galaxy Unpacked in San Jose and the MIT Reality Hack in Boston. This week, Robert is in Las Vegas for Acumatica Summit and then NYC for Microsoft’s AI Tour. February, March and April are already shaping up to be busy travel months for the MI&S team. Look for our thoughts on these events in upcoming installments of the MI&S Weekly Analyst Insights.

Hope you have a great week,

Patrick Moorhead

———

Our MI&S team published 25 deliverables:

This past week, MI&S analysts have been quoted in top-tier publications such as Network World, Yahoo Finance, and Venture Beat with our thoughts on Databricks, Intel, Samsung, and Starlink. Robert was a guest on the WBSRocks Analysts Gone Wild Podcast to discuss enterprise software.

MI&S Quick Insights

When I was spending time with a client last week, it really stood out to me how much clearer AI offerings have become. Amid the rampant pace of AI development over the past couple of years, the fundamentals of product management and product marketing have sometimes been deprioritized; plenty of product messaging and feature descriptions have been released while still pitched at a notoriously high level. But that is starting to change. I take this as a reminder that early adopters are far more tolerant of a product’s rough edges than the general market. It’s a positive sign that we are now seeing more maturity and wider adoption of enterprise AI software products. I believe it’s also a sign that things will slow down somewhat as pilots and prototypes move toward production.

I have been spending more time with NVIDIA agentic blueprints, and I can say that the detail and effort taken to document how to get the blueprints up and running is pretty impressive. This also stands out because these agents can be deployed on-premises or in a cloud. By contrast, most agentic efforts so far have been limited to a specific cloud or a SaaS platform. I am hoping that this level of deployment choice is a sign of things to come, rather than an exception.

2025 has started off with a lot of AI investing. Whether it’s ServiceNow’s acquisition of Cuein, new VC rounds at AI startups, or even the massive commitment from the Trump administration for U.S. AI data centers, it seems like investors are finally getting off the sidelines. That’s a good thing in general, but it will signal a shift in priorities for existing product teams. If you are a developer, I would expect more focus on ease of use, consumability, and samples versus net-new innovations and APIs.

Lastly, while this news is a bit old, I did want to address the hoopla that was made when Satya Nadella of Microsoft made his comments about the future of SaaS. It seems that in the tech world there is nothing we like more than declaring things dead. However . . . that’s not what he said. What will really happen is that the business-logic and user-experience layers of SaaS will be massively changed by agents. But the overall value proposition of SaaS platforms will likely remain intact.


In the opening days of his new term, President Trump announced a massive $500 billion AI project called Stargate. Supercomputers and AI datacenters will be built by key partners that include OpenAI, Oracle, SoftBank, and MGX. The project will use technology created by Arm, Microsoft, NVIDIA, Oracle, and OpenAI. SoftBank is the lead financial manager, OpenAI will handle model development and training, and Oracle will manage the data aspect. The project’s objective is to maintain U.S. leadership in AI and to create advanced AI in the form of artificial general intelligence. AGI will be able to perform a wide array of tasks with human-like intelligence, and potentially revolutionize fields like material science, medicine, and environmental science.

With an initial investment of $100 billion, construction has already begun in Abilene, Texas, where Microsoft is building an AI supercomputer. The investment is slated to ramp up to $500 billion by 2029. The plan is to establish 10 datacenters of 500,000 square feet each, with intentions to expand to another 10 across the U.S. once further site evaluations are complete. This project should create significant economic and security benefits for the U.S., especially because it emphasizes national and military security, aiming to enhance capabilities in data analysis, surveillance, and cybersecurity to safeguard against strategic threats.

To truly evaluate the project, we need more details. It appears that Microsoft initially planned to build a supercomputer exclusively for OpenAI with a $100 billion price tag, and that plan morphed into the much larger $500 billion national plan complete with supercomputer and multiple datacenters. We will need more information to fully understand the plans, how they evolved, and how they are being implemented.

Brightcove is smartly leveraging Amazon Q to address the complex technical queries from its global client base. (If you need a refresher on Amazon Q, take a look at this writeup we did last year.) By using Amazon Q, Brightcove aims to empower its support team to reduce research time and significantly improve the customer service experience, particularly for intricate issues like video embedding.

While acknowledging the potential of generative AI, Brightcove wisely emphasizes a cautious approach with rigorous testing and expert scrutiny to ensure accuracy and build trust. This focus is increasingly important in today’s AI landscape, where many companies get caught up chasing cost cuts and automation. Brightcove seems to recognize that the true potential of generative AI often lies in augmenting human capabilities and fostering deeper customer understanding.

Equipping its support team with Amazon Q will likely speed up response times and enable more effective problem-solving and stronger customer relationships. This is a strong way to leverage AI for a competitive edge. Brightcove’s strategy highlights how accuracy, trust, scalability, and human-centered implementation can be key to maximizing the benefits of this technology to improve CX.

Adding on to my colleague’s contributions about Project Stargate, here are a few of my thoughts:

  • It didn’t take long after this historic announcement for questions to arise. If Stargate is going to be an AI venture, what exactly is the product or service being offered? Is it an AI cloud? Purely an R&D platform? Ten separate 500,000-square-foot datacenters racked with AI-specific infrastructure sound an awful lot like a cloud to me, but maybe that’s not what this is.
  • Is this more about creating 100,000 high-paying jobs, with the understanding that the market will find uses for these datacenters in short order? While Oracle’s Larry Ellison spoke at a high level about being able to create cancer therapies and vaccines, I’m a little confused about how this plays out specifically.
  • While the investment of $500 billion over five years is incredible, what is the projected time until the first customer, partner, user, or consumer is actually using Stargate? It seems likely that the ROI on this investment will be a bit further out.
  • While we know that Oracle, OpenAI, and SoftBank are working in partnership with NVIDIA and Microsoft, what does this environment actually look like?
  • What does Stargate mean for the cloud market? Anything? Is Stargate reserved only for the largest of large use cases that would typically require an on-premises cluster? Or is the net being spread wider to enable a real incubation across the spectrum — from the largest-of-large to the smallest-of-small companies?

I am a big fan of the government recognizing the need for the United States to stay far ahead of its competitors in any area of technology, and certainly AI is hugely important. As a country, we are already investing far more than the rest of the world combined. However, as a person who is inherently skeptical of anything the U.S. government does in the longer term, I would like to understand better the how, why, what, when, and where of Project Stargate.

When HPE launched silicon root-of-trust back in 2017, it was a game-changer in the server market. By examining and responding to the millions of lines of code that execute before a server even boots an operating system, the company delivered what was then the most secure server in the industry. And by integrating this capability with its Integrated Lights-Out (iLO) management, HPE created servers that could not only detect malware at the lowest levels, but also take corrective actions to mitigate the impact.

Since 2017, the threat landscape has evolved considerably. Quantum lockers and AI-driven malware kits create a new set of challenges that require a new way of securing platforms. Here’s the question: is the infrastructure supporting our most critical workloads evolving to meet these challenges? We’ve seen the silicon vendors respond. Now as server platforms prepare to refresh with the newest CPUs, I’ll be curious to see whether the baseboard management controllers and hardware-based security mechanisms deliver the required protection.

Customer data platforms (CDPs) are like superheroes for enterprises looking to delight their customers. They collect data from various sources and organize it in a centralized location, ensuring that everyone has access to the same insights about each customer. This collaborative approach provides better teamwork and deepens customer understanding. Armed with this comprehensive data, marketing teams can deliver personalized campaigns that make customers feel valued and encourage repeat business. CDPs serve as treasure troves of customer data, simplifying data sharing and utilization for teams.

But CDPs aren’t without their challenges. They can be tricky to scale up, especially for big companies with lots of data. And if the data isn’t accurate, it can mess up everything. Many CDPs don’t have the best analytics tools, which can make it hard to figure out what’s working and what’s not. And integrating CDPs with old systems can be a real pain.

Despite these challenges, the CDP market is growing fast. It’s expected to reach $72 billion by 2033, which is a huge increase from $7.82 billion in 2024. This growth is happening because businesses want to do a better job of engaging with their customers and making them feel valued. They also want to be able to use data to make smart marketing decisions. And they’re starting to use AI, automation, and machine learning to make CDPs even better.

If you’re a business evaluating CDPs, start by defining your needs and how you’ll use the platform. Ensure the CDP can handle your data, integrate with your systems, and provide the analytics needed for smart decisions. CDPs are promising solutions for business growth and success. Look for my upcoming Forbes article on the state of enterprise data, which will highlight CDPs.

Ericsson recently integrated large language models into its NetCloud management platform. Thanks to this, AI agents can process network data and technical documentation to generate configuration recommendations. What is unique is that the system performs this functionality without exposing sensitive information to third parties, and thus — by design — provides a higher degree of security and control for datacenter deployments. It is also worth noting that the architecture is agentic in nature and employs multiple agents to solve complex tasks, including troubleshooting connectivity issues, automating infrastructure provisioning, and translating business intent and requirements into network policies. Such tasks have required manual intervention in the past, and if Ericsson can successfully execute in this area, it could lead to incremental enterprise networking revenue opportunities for Ericsson’s customers.

It looks like manufacturers are getting serious about AI in 2025. They’re increasing their AI budgets to become more efficient and competitive. The good news here is that they’re mainly focused on using AI to help their employees, not replace them, aligning with the principles of Industry 5.0. To leverage AI, I suggest manufacturers modernize their ERP systems, improve their data management strategies, and upgrade management processes.

Moving your business to cloud-based ERP systems is key to taking advantage of AI. By modernizing, manufacturers can optimize their investments and reap the benefits of the new technology.

Last week I published a case study that highlights this ERP–AI connection and the importance of modernization for making it work: “Hearst Corporation Modernizes Oracle ERP with Strong Change Management and Data Management Practices.” As enterprises adopt AI-driven solutions, it’s crucial to balance the technology advancements with addressing the human and organizational aspects of transformation. Two key pillars — change management and data management — are essential for achieving actionable outcomes. Change management focuses on organizational and human factors, while data management ensures data completeness and quality, enabling accurate and timely insights. Without both, enterprises may struggle to modernize, integrate workflows, or make informed decisions.

For this case study, I had the chance to sit down with David Hovstadius, senior vice president of finance operations at Hearst Corporation, who emphasized the importance of these principles during Hearst’s transition to Oracle Cloud ERP some years ago — which continues to pay dividends as the company embraces generative AI today. By prioritizing change management and data management, the company laid a foundation that not only facilitated its ERP implementation, but also enabled continuous technological and process improvements as AI technologies emerged. For more details, check out the article linked above.

The new Samsung Galaxy S25 smartphone launch happened last week, and it demonstrated how Google, Samsung, and Qualcomm are working in lockstep not only in mobile but also in XR with Project Moohan. Witnessing the interplay of Gemini with the depth of the Moohan experience clearly demonstrates how the three companies are working together to deliver the best AI experience in mobile. For more context on this partnership, see my coverage of last month’s launch of the Android XR spatial OS.

NXP has announced the EdgeLock A30 secure authenticator. It’s a standalone chip compatible with many MCUs and MPUs, including NXP’s MCX and i.MX products. Its minuscule size (“smaller than a grain of rice”) and standard I2C interface make it easy to fit into small devices, and NXP’s comprehensive EdgeLock 2GO certificate services ease the commissioning process. Developers need integrated solutions that conform with new and upcoming security and privacy regulations — and customer concerns. For example, the EU’s Batteries Regulation (2023/1542) requires using a Digital Product Passport by 2027, including supply chain provenance, and the EdgeLock A30 is the basis for a scalable solution. The chip has a RISC-V processor and 16 kB of NVM for credential storage, is Common Criteria EAL6+ certified, and is available now.

Last July, I reported that IBM acquired two Software AG properties – StreamSets and webMethods. Software AG’s streamlining continues with the sale of Alfabet and a management buyout of Cumulocity, the company’s IoT division. Cumulocity, founded in 2012, is once again independent after eight years under Software AG. Founder and CEO Bernd Gross told WirtschaftsWoche, “We are moving towards independence as a scale-up,” and “The big IoT boom is still to come.” I’m expecting strategic changes that better align the company with physical AI trends, making it more of a solution enabler than a solution provider.

Verizon’s new AI strategy leans on its strengths in 5G with mobile edge compute (MEC) and fiber. This creates an opportunity for businesses and even cloud providers to move their AI applications as close to the edge as possible using available compute for inference and low-latency applications. I like to see Verizon leaning in this direction because the company has struggled to differentiate its offerings from those of AT&T and T-Mobile.

Cybersecurity researchers at Sophos have discovered that threat actors have exploited Microsoft Teams to spread malicious links and files, potentially leading to ransomware infections. These attackers use AI for social engineering, making the attacks harder to detect. Microsoft has acknowledged the issue and is working on a solution. While these findings highlight specific threats to Teams, they serve as a broader warning about the increasing risk of similar attacks across all collaboration platforms. The problem is likely not isolated to Microsoft and emphasizes the need for heightened vigilance and robust security measures across the board.

My review of the RTX 5090 graphics card found that NVIDIA continues to innovate in AI. While the 5090 is a very large and power-hungry card compared to the 4090, its performance is also considerably higher in 4K with DLSS 4 and 4x frame generation turned on. I also found that the AMP (AI management processor) is a RISC-V core, which is programmable and shows the allure of RISC-V for such applications.

For the first time, all the subsystems necessary to implement universal and fault-tolerant quantum computation have been combined in a photonic architecture. Xanadu has created a photonic quantum computer named Aurora that is a scale model for universal, fault-tolerant quantum computing. Aurora incorporates 35 photonic chips, 84 squeezers, and 36 photon-number-resolving detectors.

The system achieves 12 physical qubit modes per clock cycle, which means it can handle 12 qubits for each processing step, and it has synthesized a cluster state with 86.4 billion modes — reflecting the vast number of different ways 12 qubits can interact with each other. For error correction, it uses a distance-2 repetition code with real-time decoding. Aurora’s architecture is divided into three stages:

  1. Preparing photons to create quantum states
  2. Adjusting the quantum states and entangling the qubits
  3. Performing the computations on the QPU

The Aurora operates at room temperature and uses fiber-optic networking, which facilitates scalable quantum computing. Xanadu’s design is focused on fault tolerance and scalability. Compared to other photonic quantum computing efforts from makers such as PsiQuantum and Photonics, Aurora stands out with its comprehensive system design, error correction, and scalability. Shared challenges among photonic platforms remain optical loss and high qubit error rates.

Last week the Trump administration instructed the Department of Homeland Security to disband all advisory committees within the agency, including the Cyber Safety Review Board. The CSRB was created under the Biden administration in 2022 and, interestingly, played a role in investigating China-sponsored cyberattacks against U.S. telecom providers. The clean sweep of all advisory committees may simply be a resetting of the guard and a change in policy direction, but it will be interesting to see whether it impacts cyber defense negatively in the short or long term.

Something subtle that I think has been mostly overlooked: Qualcomm’s Snapdragon 8 Elite for Samsung comes with more than just a frequency bump; it also includes customizations in the Qualcomm DSP for some of the new Samsung imaging features, plus an integrated display controller on the SoC for lower power consumption. This is something probably only Samsung could achieve, but it still clearly grows out of Qualcomm’s understanding that Samsung needs something different and custom.

Technology continues to transform sports in more and more ways. One example is the TGL indoor golf league, a tech-enhanced golf league cofounded by Tiger Woods, Rory McIlroy, and Mike McCarley in partnership with the PGA Tour. Recently launched after a year-long delay due to storm damage at its SoFi Center facility in Palm Beach Gardens, Florida, TGL combines virtual and traditional golf. Matches feature six teams of four players competing in a mix of simulator-based and on-course play, including a morphing 3,800-square-foot green. The league’s unique format includes nine-hole team matches, head-to-head play, and overtime closest-to-the-pin contests, with scoring determining playoff seeding. Matches will air live on ESPN and ESPN+.

This is a pretty interesting way to showcase golf with advanced simulators, mechanically altered greens, and innovative visuals. I believe that TGL does a good job of bringing technology together while creating a unique spectator and player experience.

Meanwhile, other sports continue to try out new tech, such as soccer using semi-automated offside technology (SAOT) to make video assistant referee (VAR) reviews of offside calls clearer and faster. But fans aren’t always on board with these changes; VAR in particular has created significant concerns among fans of the Premier League and other top leagues about transparency and how the technology affects the flow and fairness of the game. As I’ve said before, it will always be important for sports to integrate new tech while keeping important traditions alive.

5G mobile and fixed wireless access could play a pivotal role within Project Stargate, the ambitious AI effort announced in the early days of the new Trump administration. As covered elsewhere in this update, the initiative aims to invest $500 billion in infrastructure to build out AI datacenters in the United States. As gen AI becomes more hybrid from the cloud to network edges, mobility could become instrumental in the processing of smaller language models hosted in smaller edge data nodes. 5G has been searching for its killer application beyond fixed wireless access consumer services, and given its advantages in low latency, fast throughput, and massive device support, it may have found one in the rollout of AI.

Podcasts Published

The Enterprise Applications Podcast (Melody Brue, Robert Kramer)

DataCenter Podcast (Will Townsend, Paul Smith-Goodson, Matt Kimball)

Don’t miss future MI&S Podcast episodes! Subscribe to our YouTube Channel here.

Citations

Databricks / Funding and Partners / Patrick Moorhead / Opentools 
Databricks Scores Massive $15.25B Financing to Elevate AI Innovations

Intel / Ways to improve in 2025 / Patrick Moorhead / Network World
What Intel needs to do to get its mojo back

Intel / New CEO / Anshel Sag / Yahoo Finance
Intel races to find its next CEO, but insiders say no clear frontrunners yet

Samsung / Android XR / Anshel Sag / Venture Beat
Samsung teases Android XR devices coming later this year

Starlink / Growth under Trump Administration / Patrick Moorhead / Issues & Insights
Elon Musk’s Starlink Likely To Boom Under Trump Administration

Cohesity / Veritas Acquisition / Robert Kramer / Security Buzz 
Cohesity Acquires Veritas to Become World’s Largest Data Protection Provider

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Cisco AI Summit, January 15, Palo Alto (Will Townsend)
  • World Economic Forum, January 20-24, Davos, Switzerland (Patrick Moorhead) 
  • Samsung Galaxy Unpacked, January 22, San Jose (Anshel Sag) 
  • MIT Reality Hack, Boston, January 24-17 (Anshel Sag) 
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • Microsoft AI Tour, January 30, New York City (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Oracle NetSuite SuiteConnect, February 6, New York City (Robert Kramer)
  • Cisco Live EMEA, February 10-13, Amsterdam (Will Townsend)
  • SAP Analyst Innovation Council, February 11-12, New York City (Robert Kramer)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • Arm Analyst Summit, February 18-21, San Francisco (Matt Kimball)
  • Microsoft Threat Intel Summit, February 25, Redmond (Will Townsend)
  • Siemens Datacenter Analyst Summit, February 25-27, Zug, Switzerland (Matt Kimball)
  • EdgeAI Austin, February 25-27, Austin (Bill Curtis is a speaker)
  • Mobile World Congress, March 2-7, Barcelona (Will Townsend)
  • Susecon, March 10-14, Orlando (Matt Kimball)
  • Fastly Accelerate, March 12, Los Angeles (Will Townsend)
  • Synopsys Panel Moderation, March 15, San Jose (Matt Kimball)
  • Adobe Summit, March 18-20, Las Vegas (Melody Brue)
  • Extreme Networks Connect, May 19-22, Paris (Will Townsend)
  • Zendesk Analyst Day, March 25, Las Vegas (Melody Brue)
  • Oracle Database Summit, March 25, Mountain View (Matt Kimball)
  • IBM Infrastructure Analyst Summit, March 25, NYC (Matt Kimball, Melody Brue)
  • Microsoft FabCon March 31–April 2, Las Vegas (Robert Kramer)
  • Canva Create & Analyst Day, April 8-10, Los Angeles (Melody Brue)
  • Infor Analyst Innovation Summit, April 8-9, NYC (Robert Kramer) 
  • NTT Upgrade, April 9-10, San Francisco (Will Townsend)
  • Google Next, April 9-11, Las Vegas (Robert Kramer)
  • Appian World, April 27-30, Denver (Robert Kramer)
  • RSA Conference, April 28-May 1, Las Vegas (Will Townsend)
  • Nutanix.NEXT May 6-9, Washington DC (Matt Kimball)
  • Informatica World, May 13-15, Las Vegas (Robert Kramer)
  • Fastly Accelerate, May 14, Los Angeles (Will Townsend)
  • Dell Tech World, May 19-22, Las Vegas (Matt Kimball)
  • Zscaler Zenith Live, June 2-5, Las Vegas (Will Townsend)
  • Snowflake, June 2-5, San Francisco (Robert Kramer)
  • Cisco Live US, June 8-12, San Diego (Will Townsend)
  • HPE Discover, June 23-26, Las Vegas (Will Townsend)
  • Techritory, October 22-23, Riga (Will Townsend)

Subscribe

Want to talk to the team? Get in touch here!

MI&S Weekly Analyst Insights — Week Ending January 17, 2025
A wrap-up of what our team published during the last week.


Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

This week I’m in Davos, Switzerland, at the World Economic Forum, meeting with business leaders from around the globe. Moor Insights & Strategy is also co-sponsoring a special session on “Protecting Press Freedom and Democracy,” moderated by Axios Media. While we at MI&S are not journalists, we rely on the technology press for good information, and in turn are often quoted in press outlets as we contribute our own viewpoints to the public discourse on events unfolding in the tech world.


A free press — definitely including social media — is crucial for a thriving tech ecosystem because it ensures that individuals, businesses, and policymakers have access to the information they need to make informed decisions in an increasingly complex technological landscape. We’re proud to support this event as an expression of our deep-rooted commitment to maintaining the freedom of information flow in the tech sector and beyond.

If you or your company executives will be in Davos and you’d like to connect there, please reach out — we’d love to hear from you. 

Hope you have a great week,

Patrick Moorhead

———

Our MI&S team published 16 deliverables:

This past week, MI&S analysts have been quoted in the press about Biden’s AI restrictions, Google drones, international malware security issues, intelligent content management, and as usual, AI. Our insights were included in Fierce Network, Yahoo Finance, Ciso2Ciso, The Deccan Herald, and The Straits Times.

MI&S Quick Insights

I think everyone realizes the impacts AI is having on a wide range of business activities. So it should not be a surprise to anyone that The World Economic Forum’s 2025 Future of Jobs Report projects that almost 90% of companies expect that AI will redefine company operations by 2030.

AI is reshaping workplace dynamics. It is expected to create a net increase of 2 million jobs, resulting in 11 million new jobs while displacing 9 million. It is not surprising that the titles with the greatest job growth will be data specialists and AI/ML technologists.

What about people being let go from companies because they don’t have the necessary AI skills? It’s not as bad as expected: 75% of companies plan to upskill current employees for AI collaboration, which shows a focus on adapting to AI advancement rather than replacing staff. At the same time, 70% of companies plan to hire people who already have AI expertise, so most companies will do a bit of both. Along with those statistics, 50% of businesses say they will reorganize around AI opportunities, and 40% expect workforce reductions as AI automates more of their work.

Just reading the news daily will tell you how quickly AI is being adopted. It’s an instance of the old “early bird gets the worm” saying: many companies believe that those who integrate AI fast and first will have a competitive advantage over those that don’t.

The message is clear: AI is coming, and it’s coming fast. It is expected to cause the largest workplace shift in decades. Management should establish AI priorities and get ready to implement them as necessary. Go AI, and go fast.


Salesforce CEO Marc Benioff recently announced a pause in software engineer hiring for the company, suggesting that AI could automate a growing portion of development tasks. This move, likely motivated by potential cost savings and reported productivity gains via Salesforce’s Agentforce AI tool, raises questions about the evolving role of tech jobs and how companies might manage an AI-augmented workforce.

While some praise Salesforce’s innovative approach, many remain cautious about AI’s ability to completely replace human engineers soon. This decision also highlights a key challenge: Could existing departments like IT or HR oversee this new workforce, or will companies create new roles specifically to manage this digital labor?

Salesforce’s strategy serves as an interesting example of how AI might reshape business operations. It remains to be seen whether other companies will adopt similar strategies and how these trends could impact the tech job market over time. This potential shift in the tech landscape underscores the growing potential of AI to reshape industries and redefine workforce needs, with Salesforce highlighting a key strategy for companies promoting AI adoption: demonstrating its ROI through internal cost savings.

What to make of the outgoing administration’s restrictions on AI chips and models? There are so many different angles to consider. However, the sharing of AI model weights and export controls on semiconductors are the two biggies. While the U.S. government has billed this as diffusing AI innovation, it is at the same time restricting innovation of a couple of players that are on the leading edge of AI development.

Does this stifle AI innovation in the United States? I don’t believe so. Perhaps it reshapes some of our collaborative efforts on a global basis, but the semiconductor, hardware, and software ecosystems are going to continue to accelerate at seemingly exponential rates. I just don’t see that slowing down.

Here’s an interesting take: The primary target of these restrictions — China — has been leveraging open-weight models and is using these to try and gain a global footprint. Models like Alibaba’s Qwen have been showing good performance relative to what we have here in the U.S. — especially in multilingual support. And Qwen has found traction in many countries outside of the U.S. and western Europe. Just as Huawei pivoted after its U.S. blacklisting and gained such a strong global footprint in telecom, Alibaba and others can (with Huawei) deliver their own AI factories.

One of the questions we have to ask is whether we are truly protecting the U.S. and its allies with these protectionist measures. Or are we accelerating investments from adversary governments into AI that perhaps pay out in the longer term? It’s a tough question to answer.

The big news of the week was Lenovo’s announcement that it will acquire high-end storage provider Infinidat. While Lenovo has long been strong in the low end of the storage market, it has struggled to find a foothold in the high end of the market. Adding Infinidat to the portfolio solves for this challenge. However, it will take a bit of rationalization across product, marketing, and sales to find success and compete in the enterprise.

I believe that one of the most important assets to consider in this acquisition is the people who have developed, marketed, and sold Infinidat’s solutions to date. The high-end segment they have sold into — and that Lenovo desires to capture — works differently from the volume/transactional markets (commercial enterprise, SMB) where Lenovo has made its mark in storage. I think those same developers and go-to-market professionals will be essential for the success of this business combination. For more details, check out my full analysis of this deal on Forbes.

Active Directory (AD) is a core piece of enterprise IT, as it handles authentication and access to many important IT assets such as apps, databases, and security systems. Unfortunately, its importance also makes it a prime target for cyberattacks. That makes AD recovery after an attack a high priority, but that’s been a function in need of more innovation. “Recovering Active Directory is foundational to maintaining continuous business after a cyberattack, yet traditional methods are too complex and prone to error,” said Pranay Ahlawat, Commvault’s chief technology and AI officer.

To address this issue, Commvault has recently introduced Cloud Backup & Recovery for Active Directory Enterprise Edition, which aims to make AD forest recovery much simpler and more automated. Read more about this in my latest Forbes article.

Microsoft has introduced Microsoft 365 Copilot Chat, a new AI service for businesses that blends free chat features with consumption-based access to AI agents. This offering leverages AI technology to help users with tasks like document analysis and process automation. Costs vary depending on the complexity of the task, with simple web searches being free and more complex actions involving company data costing more. This flexible approach allows organizations to dip their toes into AI without a hefty upfront investment, scaling their usage as needed.

Hewlett Packard Enterprise Aruba Networking recently announced a portfolio of products tailored to brick-and-mortar retailers. The company’s retail portfolio includes a cellular bridge, a smaller-form-factor switch, and wireless access points that can support more sensors and devices — ultimately providing broader coverage. HPE is also partnering with retail device leaders including Zebra Technologies to ensure an ecosystem approach to its solution delivery. There is a tremendous opportunity in this market to delight customers with automated shelf replenishment and online-like experiences as well as to improve operational efficiency tied to better logistics and reduced shrinkage. From my perspective, the company’s retail portfolio and its AI-infused HPE Aruba Networking Central management console are well positioned to deliver value to retailers and customers alike.

SAP and IBM are continuing their 50-year relationship with a partnership to support the shift of SAP S/4HANA from on-premises to the cloud. This offering looks to facilitate the migration of SAP S/4HANA workloads from on-premises IBM Power Systems. For context, SAP has 10,000-plus customers running SAP on IBM Power servers. The collaboration of SAP and IBM focuses on helping organizations modernize their ERP environments and support AI-powered business processes. The RISE with SAP program provides a structured approach to cloud migration, offering outcome-driven services and platforms to assist organizations in reimagining their operating models.

The longstanding familiarity between SAP and IBM makes the shift less daunting, though adoption will depend on factors such as a given customer’s current SAP setup, budget, and readiness for cloud migration. I’ve talked a lot about modernization and the importance of change and data management, which will be key areas to address during these transitions. Transitioning systems isn’t easy, and any change can add complexity. Still, modernizing is crucial for businesses using ERP systems to stay competitive. This is a good opportunity for companies to make the most of their IBM Power server investments and use this collaboration to bring their ERP systems up to date.

IBM Consulting has announced plans to acquire Applications Software Technology LLC. AST brings expertise in Oracle Cloud applications, specifically with public-sector organizations in government and education and companies in manufacturing, energy, and CPG. AST specializes in implementations of Oracle ERP, HCM, Configure, Price, Quote (CPQ), Oracle Cloud Infrastructure (OCI), JD Edwards ERP, and NetSuite. This move fits with IBM’s strategy and builds on its recent acquisition of Accelalpha, which offers Oracle Cloud consulting services. My thought is that this year is the perfect time for ERP modernizations, especially with the AI craze. In that context, IBM Consulting has set itself up to help businesses transform and succeed.

Epicor Prism is bringing AI agents to the supply chain, making it easier for users to gain relevant insights. Integrated with Epicor Kinetic ERP, Prism uses AI agents to handle tasks such as data analysis, demand prediction, scheduling, inventory optimization, and updates. This should allow supply chain teams to save time and cut down on routine manual tasks so they can spend more time on strategic work. This is part of Epicor’s push to modernize its ERP systems in 2025 and could make life easier for businesses using Epicor in manufacturing, distribution, and retail. Definitely something to keep an eye on.

The Nintendo Switch 2 is precisely the device that I expected Nintendo would launch. It’s a combination of generations-old hardware with significantly improved user experience and UI. I think Nintendo understands clearly that it needs to hit the right balance between a certain price point and a certain game experience, which is what the Switch is all about. I think the people who expected the new model to be like a PC gaming handheld are living in an alternate reality. The handheld gaming market will always have the Switch at the entry level, while PC handhelds are distinctly premium products.

AT&T announced a fiber and wireless guarantee that compensates customers for downtime. I’m watching this strategy from my edge/IoT point of view because of its potential applicability in industrial IoT. Specifically, the business case for private 5G adoption rests on delivering reliable, predictable, scalable connectivity in enterprise and industrial settings. However, unlicensed spectrum alternatives (Wi-Fi et al.) are “good enough” for many use cases — at substantially lower costs. 5G’s advantages must deliver quantifiable ROI to justify the higher cost, and service-level guarantees help make the case for buying more 9s of guaranteed reliability.

The recent Sonos fiasco teaches a valuable lesson about what can go wrong with long-term support for complicated mashups of device firmware, cloud services, and phone apps. In this case, the company released a major app rewrite last May, resulting in usability issues and a cascade of serious bugs. Sonos could not simply revert to the old apps because upgrades to firmware and cloud services broke backward app compatibility. Among other consequences, this fiasco led to the departure of the company’s CEO.

Here’s my take from an edge/IoT perspective: Software-defined products, including vehicles (SDVs), create technical debt that extends throughout the product’s lifetime. Regression tests aren’t sufficient to catch real-world bugs and usability problems. (Last year’s CrowdStrike outage is another example of a catastrophic testing failure.) The lessons are simple:

  1. Don’t bet the farm on internal tests. Experiential tests on deployed products with real users must be part of the plan.
  2. Avoid forklift updates. If unavoidable, budget for significant testing, roll the update out slowly, and have a rollback strategy ready to go.


In another firmware-related incident affecting a software-defined product, Tesla is recalling more than 239,000 vehicles for a condition in which a short circuit on a computer circuit board causes problems, including loss of the rearview camera image. The fix is a software update that alters the power-up sequence to avoid a potential reverse-voltage situation that causes the short. (Transistors hate reverse voltage.) This is an excellent example of how SDVs can simplify maintenance, because the fix is an OTA update that is transparent to the customer.

After 18 months of preparation, the FCC announced the launch of the U.S. Cyber Trust Mark label for IoT consumer devices. The voluntary security and privacy testing program requires eligible products to pass compliance testing by accredited (FCC-recognized) labs. “Voluntary” is the operative word here. Consumers will only look for the mark if it becomes widely used on mainstream products. That might happen, but I’m not holding my breath.

AT&T’s new service guarantee will fundamentally change how carriers operate over time as consumers start to expect actual service-level agreements with their carriers — and compensation when things go wrong. I expect that Verizon and T-Mobile will follow suit if AT&T’s move successfully retains customers, or takes customers away from competitors.

Samsung is teasing its next-generation smartphone — the Galaxy S25 line — this week. It will be really interesting to see how Samsung’s new flagship devices perform in the latest benchmarks against the iPhone as well as other Android phones with Snapdragon 8 Elite processors. I am excited to see what new AI features Samsung introduces to differentiate itself from the other Android OEMs and even Apple.

Microsoft has introduced a new consumption-based pricing model for its 365 Copilot Chat alongside its existing subscription-based option. This model allows organizations to experiment with and scale AI usage according to their needs and budget. The consumption-based pricing facilitates controlled experimentation and proof-of-concept projects. However, potential inconsistencies in functionality and updates across the two models in this tiered system could create user experience disparities.

This disparity may be a strategic move by Microsoft to incentivize customers to upgrade. Still, the consumption model’s flexibility could also attract customers who desire the full feature set of Copilot along with consumption-based pricing. The flexible pricing strategy could potentially drive wider AI adoption, but ensuring a consistent and valuable user experience across both models will be crucial for Microsoft. The offering addresses many of the barriers to AI in the enterprise, including cost and adoption, and it promotes better security for companies by discouraging BYOAI.

Google has announced changes to its Workspace offerings, integrating AI capabilities into its Business and Enterprise plans without requiring additional paid add-ons. Effective last week, this update includes AI assistance in various Workspace applications such as Gmail, Docs, Sheets, and Meet. The new features encompass Gemini Advanced for complex tasks and NotebookLM Plus for research assistance. By incorporating these AI tools directly into existing plans, Google appears to be lowering barriers to entry for businesses interested in AI. This approach, similar to Microsoft’s recent consumption-based model, could facilitate wider AI adoption and allow Google to demonstrate the value of its AI to customers. The strategy may encourage users to engage more readily with AI features within familiar applications, potentially leading to increased productivity and improved work quality. Google states that it has implemented security measures and compliance certifications for these AI features, addressing potential concerns about data protection and information access control.

CES 2025 has been over for a week now, and it’s quite clear looking back that lots of PC OEMs refreshed their lineups to take advantage of the latest chips from AMD, Intel, and NVIDIA, almost all of which focus on AI performance and experiences. It remains unclear whether AI applications will actually take hold this year, but it’s quite clear that they did not in 2024.

Miami University and Cleveland Clinic have created a partnership that will strengthen Ohio’s efforts to become a leader in quantum computing. The partnership will create Ohio’s first college quantum computing degree program. The collaboration will integrate Miami University with the Cleveland Clinic’s on-site IBM Quantum System One, the first quantum computer fully dedicated to healthcare. (Readers with long memories may recall the Forbes article I wrote a couple of years ago about the debut of that computer.)

Miami University will develop bachelor’s, master’s, and doctoral programs in quantum computing. Cleveland Clinic, in turn, will offer internships and research opportunities for Miami students. Aligning a quantum curriculum with actual healthcare applications will open a pipeline that will probably boost Ohio’s economy.

On a personal note, I’m especially glad to see Miami University move into quantum because my three grown daughters all graduated from that institution.

Last week, Microsoft launched its Quantum Ready program to alert business leaders that quantum computing has made significant progress over the past few years — and that they should get ready to take advantage of that progress. In 2024, several significant quantum breakthroughs and important pieces of research moved the technology forward. The field has gone from theoretical mathematical concepts to an emerging technology on the cusp of making major breakthroughs in multiple modalities. These modalities include superconducting, trapped ions, neutral atoms, photonics, and topological quantum computing.

Other factors have also helped improve quantum. One important advance is that quantum processors have improved significantly over the past five years. Current quantum computers have higher-quality qubits, allowing computations that weren’t possible five years ago. Microsoft’s initiative urges business leaders to get ready to harness the transformative potential of quantum computing coupled with AI.

Microsoft’s commitment extends to the global stage. This year, it is partnering with the United Nations, the American Physical Society, and others to celebrate the 2025 International Year of Quantum Science and Technology. This initiative commemorates a century of quantum innovation while fostering awareness of how quantum applications will revolutionize industries. By leading these efforts, Microsoft aims to empower organizations and communities worldwide to embrace the quantum future effectively.

Cisco recently announced its AI Defense platform, which is slated to be generally available in March. One of the challenges associated with securing algorithmic models is that they are not deterministic and can be easily compromised. As modern AI workloads move from the cloud to network edges, attack surfaces will be greatly expanded, making safety and security more difficult. Time will tell if Cisco’s approach is effective, but I believe that AI Defense has the potential to address AI security at scale with automated validation techniques that can dynamically adjust guardrails to an ever-changing threat landscape.

Nokia is making progress towards its goal of becoming an enterprise network services provider. The company has had challenges broadening its reach beyond the cellular market, but its innovation in delivering autonomous networks has great promise. Last week I published my insights on this topic in a Moor Insights & Strategy research paper.

Research Papers Published

Citations

Biden’s AI Restrictions / Matt Kimball / Fierce Network
Here’s why Biden’s new AI restrictions could backfire

Box AI / Melody Brue / Box Investor Relations (picked up in multiple outlets)
Box Delivers Intelligent Content Management to the Enterprise with New Enterprise Advanced Plan

Google / Drones / Anshel Sag / Yahoo Finance
Google’s next big bet: Taking drone deliveries mainstream

PlugX / Security / Will Townsend / Ciso2Ciso
International effort erases PlugX malware from thousands of Windows computers

Rang Intelligent / AI / Matt Kimball / Deccan Herald
ByteDance’s AI makes tech tycoon Zhou one of Asia’s richest women 

Rang Intelligent / AI / Matt Kimball / The Straits Times
ByteDance’s AI push makes Chinese tycoon one of Asia’s richest women


TV APPEARANCES
AI restrictions, Google UK antitrust investigation / Patrick Moorhead / Yahoo Finance
US restrictions on AI chips are a ‘step in the wrong direction’
Watch the Yahoo Finance clip on X

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Cisco AI Summit, January 15, Palo Alto (Will Townsend)
  • World Economic Forum, January 20-24, Davos, Switzerland (Patrick Moorhead) 
  • Samsung Galaxy Unpacked, January 22, San Jose (Anshel Sag) 
  • MIT Reality Hack, Boston, January 24-17 (Anshel Sag) 
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • Microsoft AI Tour, January 30, New York City (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Oracle NetSuite SuiteConnect, February 6, New York City (Robert Kramer)
  • Cisco Live EMEA, February 10-13, Amsterdam (Will Townsend)
  • SAP Analyst Innovation Council, February 11-12, New York City (Robert Kramer)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • Arm Analyst Summit, February 18-21, San Francisco (Matt Kimball)
  • Microsoft Threat Intel Summit, February 25, Redmond (Will Townsend)
  • Siemens Datacenter Analyst Summit, February 25-27, Zug, Switzerland (Matt Kimball)
  • EdgeAI Austin, February 25-27, Austin (Bill Curtis is a speaker)
  • Mobile World Congress, March 2-7, Barcelona (Will Townsend)
  • Susecon, March 10-14, Orlando (Matt Kimball)
  • Fastly Accelerate, March 12, Los Angeles (Will Townsend)
  • Synopsys Panel Moderation, March 15, San Jose (Matt Kimball)
  • Adobe Summit, March 18-20, Las Vegas (Melody Brue)
  • Extreme Networks Connect, May 19-22, Paris (Will Townsend)
  • Zendesk Analyst Day, March 25, Las Vegas (Melody Brue)
  • Oracle Database Summit, March 25, Mountain View (Matt Kimball)
  • IBM Infrastructure Analyst Summit, March 25, NYC (Matt Kimball, Melody Brue)
  • Microsoft FabCon, March 31-April 2, Las Vegas (Robert Kramer)
  • Canva Create & Analyst Day, April 8-10, Los Angeles (Melody Brue)
  • NTT Upgrade, April 9-10, San Francisco (Will Townsend)
  • Google Next, April 9-11, Las Vegas (Robert Kramer)
  • Appian World, April 27-30, Denver (Robert Kramer)
  • RSA Conference, April 28-May 1, Las Vegas (Will Townsend)
  • Nutanix.NEXT May 6-9, Washington DC (Matt Kimball)
  • Informatica World, May 13-15, Las Vegas (Robert Kramer)
  • Dell Tech World, May 19-22, Las Vegas (Matt Kimball)
  • Zscaler Zenith Live, June 2-5, Las Vegas (Will Townsend)
  • Snowflake, June 2-5, San Francisco (Robert Kramer)
  • Cisco Live US, June 8-12, San Diego (Will Townsend)
  • HPE Discover, June 23-26, Las Vegas (Will Townsend)
  • Techritory, October 22-23, Riga (Will Townsend)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending January 17, 2025 appeared first on Moor Insights & Strategy.

RESEARCH PAPER: AI in the Modern Enterprise https://moorinsightsstrategy.com/research-papers/research-paper-ai-in-the-modern-enterprise/ Fri, 17 Jan 2025 17:38:10 +0000 https://moorinsightsstrategy.com/?post_type=research_papers&p=45134 This report explores enterprise IT organizations’ challenges & how hybrid cloud environments with modern AI-ready infrastructure are a solution.

The post RESEARCH PAPER: AI in the Modern Enterprise appeared first on Moor Insights & Strategy.

We’re in perhaps the most dynamic era of enterprise IT. Modernization initiatives have been rescoped and accelerated to support generative AI projects, which have captured the attention of every executive with good reason. Gen AI promises to transform businesses in ways we haven’t witnessed.

However, as the need to accelerate and alter modernization efforts to support this new wave increases, IT budgets are only rising incrementally at best. Sustainability is another variable in the equation. While AI initiatives require more compute, storage, and other resources, CIOs are tasked with lowering power footprints to drive sustainability goals.

How can enterprise IT organizations simultaneously achieve modernization, AI, and sustainability goals, which seem to directly contradict one another? Moor Insights & Strategy (MI&S) sees the solution as rooted in infrastructure.

In some cases, the building blocks for the AI-driven workloads running the modern business are outdated operating stacks on aging hardware, with processors unable to deliver the required performance, agility, security, and targeted acceleration. This is a recipe for failure.

This research brief will explore enterprise IT organizations’ technical and operational challenges and how technology vendors are responding with hybrid cloud environments powered by modern AI-ready infrastructure. Further, it will evaluate how Nutanix, Dell, and Intel have partnered to deliver the Dell XC Plus running the Nutanix Cloud Platform (NCP) and GPT-in-a-Box powered by AI-accelerated Intel Xeon CPUs.

Click the logo below to download the report:

AI in the Modern Enterprise

 

Table of Contents

  • Summary
  • Modernization and AI — Complementary Yet Competing
  • Can IT Modernization and AI Operationalization Occur Simultaneously?
  • Where Do We Get Enough Power?
  • The Optimal AI Foundation Begins with the Cloud
  • Nutanix Cloud Platform — Simplicity Through Abstraction
  • Dell XC Plus — Performance and Security
  • Intel Xeon — Modernization Starts in Silicon
  • Managing the Modernization-Plus-AI Journey
  • Call to Action

Companies Cited:

  • Nutanix
  • Dell
  • Intel

The post RESEARCH PAPER: AI in the Modern Enterprise appeared first on Moor Insights & Strategy.

Datacenter Podcast: Episode 35 – Talking Extreme Networks, OpenAI, Oracle, Microsoft, IonQ, Dell https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-35-talking-extreme-networks-openai-oracle-microsoft-ionq-dell/ Tue, 14 Jan 2025 16:18:14 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=45061 On episode 35 of the Datacenter Podcast, Moor Insights & Strategy co-hosts Matt, Will, and Paul talk Extreme Networks, OpenAI, Oracle, & more

The post Datacenter Podcast: Episode 35 – Talking Extreme Networks, OpenAI, Oracle, Microsoft, IonQ, Dell appeared first on Moor Insights & Strategy.

On this week’s edition of MI&S Datacenter Podcast, Moor Insights & Strategy co-hosts Matt, Will, and Paul analyze the week’s top datacenter and datacenter edge news. They talk Extreme Networks, OpenAI, Oracle, and more!

Watch the video here:

Listen to the audio here:

3:38 Can Extreme Networks Vie for Share in 2025?
12:51 Do We Really Know How To Do It?
19:21 Oracle Exadata X11M – The Real Data Platform
28:51 Microsoft Betting Big on AI Data Centers in 2025
36:50 Entangled Ambitions
42:23 Dell Embraces OCP
50:16 Getting To Know The Team

Can Extreme Networks Vie for Share in 2025?
https://www.extremenetworks.com/resources/blogs/introducing-extreme-platform-one

Do We Really Know How To Do It?
https://blog.samaltman.com/reflections

Oracle Exadata X11M – The Real Data Platform
https://www.oracle.com/news/announcement/oracle-introduces-exadata-x11m-platform-2025-01-07/

Microsoft Betting Big on AI Data Centers in 2025
https://blogs.microsoft.com/on-the-issues/2025/01/03/the-golden-opportunity-for-american-ai/

Entangled Ambitions
https://investors.ionq.com/news/news-details/2025/IonQ-Completes-Acquisition-of-Qubitekk-Solidifying-Leadership-in-Quantum-Networking/default.aspx

Dell Embraces OCP
https://moorinsightsstrategy.com/research-papers/evaluation-of-open-compute-modular-hardware-specification/

Disclaimer: This show is for information and entertainment purposes only. While we may discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 35 – Talking Extreme Networks, OpenAI, Oracle, Microsoft, IonQ, Dell appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending January 10, 2025 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-10-2025/ Mon, 13 Jan 2025 22:05:05 +0000 https://moorinsightsstrategy.com/?p=44920 MI&S Weekly Analyst Insights — Week Ending January 10, 2025. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending January 10, 2025 appeared first on Moor Insights & Strategy.

MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

It’s no surprise that my colleagues and I spent much of last week focused on CES. In particular, Anshel Sag—who’s a heck of a device reviewer, besides being a savvy industry analyst—will be publishing a number of pieces this week covering the big PC OEMs, chip makers, and players in the XR industry. Many of my own thoughts from CES made it into Friday’s installment of The Six Five Podcast.

Cisco Desk Pro in Mel Brue office

The Cisco Desk Pro (left) is a slick — albeit somewhat pricey — tool for getting more out of your video meetings. Photo: Melody Brue

Plenty of the announcements at CES are about eye-popping (or wannabe eye-popping) consumer devices, but Melody Brue’s review of the Cisco Desk Pro last week is a good reminder of the difference that high-quality enterprise tech can make for individual productivity. This reality is only going to be reinforced by the increasing adoption of AI agents in 2025 to augment the work of corporate employees, from the shop floor to the C-suite.

If you have a piece of new technology that’s changing the way you or your team work in 2025, I’d love to hear about it. What’s your favorite new gadget that’s moving the needle?

This week, Will is at the Cisco AI Summit in Palo Alto and Mel is attending Zoom’s virtual Work Transformation Summit. The rest of us are busy writing, researching, and advising clients. If there is anything we can help you with to start your year off strong, please reach out.

Let’s do this, 2025!

Patrick Moorhead

———

Our MI&S team published 15 deliverables:

This past week, MI&S analysts have been quoted in multiple syndicated top-tier international publications including CIO, Computerworld, Fierce Electronics, Fierce Networks, InfoWorld, MIT Technology Review, TechTarget, Wired, and others. The media wanted our thoughts on AWS, CES, Dell, HPE, IBM, Intel, Nvidia, Oracle, WordPress, and, of course, AI and some 2025 predictions.

MI&S Quick Insights

I was quite intrigued by the agentic blueprints that NVIDIA announced last week at CES. But it was not necessarily the use cases—which were pretty commonplace—that were the real story. Much more compelling was the vision of what agentic development could be. The first thing that stuck out was that these are partner-driven solutions. This is in contrast to what we have seen so far, which have been siloed and internally developed agentic solutions. With those, you basically have to use a homogeneous stack of technology to realize the value. And, yes, there is still a need to use the NVIDIA AI Enterprise platform to deploy the blueprints that were announced, but the simple thought of co-development is good to see—and hasn’t been highlighted enough.

Second, I am very interested in how NVIDIA is thinking about agentic AI in the physical world. That is what I consider a second leap from what we are seeing so far. Today’s agents are very much bound to a cloud or a platform. The first leap I am hoping to see is a leap to the on-premise compute world. This could mean collaboration between the AIs on devices such as an AI PC or an iPhone and the cloud (edge AI, so to speak). The second leap is the same idea but to physical devices and robotics. Again, it’s refreshing to see NVIDIA paint a picture of the agentic world that is so visionary.

Over the past month I have been researching AI development platforms, and I have some research coming out on that very soon. But the deeper I have gotten into the topic, the more I’ve realized that each platform regards different user roles with different priorities. It is almost as if each vendor started development from a completely different place, yet they all ended up close enough to each other that we now have a new category of solution. This is a very good thing early in a product lifecycle. By having a broad base of solutions to choose from, the market will have a better opportunity to judge what ends up being the best use of the technologies.

To that point, I want to mention that the newly released Azure AI Foundry from Microsoft is, to my mind, the first of these platforms to really take on the IT management aspects of the problem set. And while it may not have all of the very coolest developer features we see from the competition, it does highlight an under-represented set of requirements that will be essential for enterprise deployment and success.

Last week a member of the media reached out to me to discuss a topic that got me thinking—how value is being redefined in the age of generative AI. Here’s an example: for as long as we can all remember, having lots of data was critical to drive decision making. And if you had good and exclusive data, that was highly valuable. But now that AI can assemble and infer data so quickly and so well, has the value moved away from mere data possession towards reasoning and prediction? Or will people now go to greater lengths to hoard the best data? I am leaning towards reasoning winning the day, but I do think it’s a great topic for reflection. (I’ll let you know when the article comes out.) At the very least, I expect that we will see a more distinct break between data and reasoning in the business world—like we already do between training and inference within AI.

Sam Altman, CEO of OpenAI, made a very interesting post on his personal blog about when he believes OpenAI could achieve artificial general intelligence (AGI), an advanced level of AI that can perform at human levels. It was only two years ago that OpenAI made the historic launch of ChatGPT. Just two months after its release, ChatGPT had 100 million active users, heralding AI’s potential as one of the most powerful technologies ever created. Beyond that, the launch transformed OpenAI from a small research lab into a major AI player. Today, an advanced version of the GPT platform handles more than one billion queries daily.

In the blog post, Altman shared personal anecdotes, including his unexpected firing from the company and the governance issues that followed, offering lessons learned in leadership and company management. The rapid growth of OpenAI required him to build the company culture and infrastructure almost from scratch. That led to both successes and setbacks. Altman admitted to his own failures in governance, particularly around his firing. In another lesson learned, he emphasized the importance of having diverse and experienced board members.

Now, here’s the most interesting part of his post. Altman predicts that AGI will be achieved in 2025 in the form of AI agents that will impact workforce productivity. Altman said, “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Moving beyond AGI, Altman talks about focusing on superintelligence, envisioning a future where AI could dramatically enhance human capabilities and societal prosperity.

From my perspective, the trajectory towards more incremental and advanced AI capabilities looks doable. However, AGI needs a level of human-like reasoning and adaptability that can be applied over a wide range of tasks. That is a very complex goal. We are not there yet for the strict definition of AGI, but AI agents in the workforce with a limited form of AGI might be doable this year.

I believe that superintelligence is not possible at this stage or at any time within several decades, if ever. While AI has made tremendous advancements over the past decade, superintelligence involves numerous unknowns, including abstract reasoning, creativity, conscious thoughts, and problem-solving at the highest level. None of those is on the horizon yet—let alone all of them.

Oracle has launched the Exadata X11M data management platform, with a focus on driving extreme performance across three key workloads—AI (vector search), online transaction processing (OLTP), and analytics. Exadata is a combination of tuned hardware and software that enables organizations to accelerate performance of these key workloads while enabling greater levels of consolidation in the datacenter.

I like what Oracle is doing. For decades, Oracle Database has been the data management platform of the enterprise (97% of Fortune 500 companies run Oracle). It only makes sense that the company would take its IP and better enable core workloads that power the enterprise. The numbers are quite compelling across the board: Oracle claims vector search performance increases of up to 55% on storage servers and 45% on compute servers, along with 25% faster OLTP and analytics performance relative to the X10M platform. And this performance is delivered on-prem, in any major cloud, or in hybrid environments.

It can be very difficult for legacy infrastructure companies to pivot and maintain relevance as the market shifts around them. Oracle is unique in how it has smartly pivoted and taken full advantage of its footprint in enterprise data.
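
For readers less familiar with the newest of those three workloads, here is a minimal, purely illustrative Python sketch of what a vector search computes: documents and a query are embedded as vectors, scored by cosine similarity, and the closest matches are returned. This is a conceptual sketch only, not Oracle’s implementation; the corpus size, dimensions, and names are invented for illustration, while Exadata runs this kind of search inside the database across its storage and compute servers.

    import numpy as np

    def top_k_cosine(query, corpus, k=5):
        # Normalize so that a dot product equals cosine similarity.
        q = query / np.linalg.norm(query)
        c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
        scores = c @ q                   # one similarity score per stored vector
        return np.argsort(-scores)[:k]   # indices of the k closest matches

    # Invented example data: 10,000 documents embedded as 768-dimensional vectors.
    rng = np.random.default_rng(0)
    corpus_embeddings = rng.normal(size=(10_000, 768))
    query_embedding = rng.normal(size=768)
    print(top_k_cosine(query_embedding, corpus_embeddings, k=3))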

The data protection software market grew in 2024, driven in no small part by advanced cyberthreats and stricter regulations. Unsurprisingly, AI tools became much more important for automating governance, ensuring compliance, and detecting threats. Vendors such as Cohesity, Commvault, Rubrik, and Veeam Software improved their market presence through acquisitions, partnerships, going public, and adding new features to their platforms. Observability tools also progressed, integrating system monitoring with data protection for proactive solutions. Read more in my latest Forbes article about what I see ahead for data protection in 2025.

Zoho Analytics has grown into a full-fledged, AI-driven business intelligence platform. Its September 2024 release included more than 100 updates, with a big emphasis on expanding access to data analysis across different job functions. Considering its advancements in AI and machine learning, Zoho Analytics now competes with established BI solutions, enabling a broad range of users in different industries to make more informed decisions. Check out the recent MI&S Research Brief about Zoho Analytics from Melody Brue and me for more.

Extreme Networks launched its Extreme Platform One in early December. Platform One is positioned to allow IT professionals to manage and secure networks faster and more efficiently. The company claims that the offering has been developed based on customer feedback and aims to unify connectivity experiences with a single composable workspace and high degrees of AI-powered automation, and to deliver a simplified licensing structure. I believe that when Platform One becomes available in the second half of 2025, it will allow Extreme to compete with the likes of Cisco, HPE, and others more effectively given its historic focus on providing commodity connectivity infrastructure.

At the National Retail Federation’s show, SAP rolled out some new features for the retail industry. These include the SAP S/4HANA Cloud Public Edition, designed for retail, fashion, and related businesses, as well as an AI-powered shopping assistant. The company also shared plans for a loyalty-management solution for retailers and consumer goods companies, which is set to launch in late 2025. The updates are geared toward helping retailers work more efficiently and better connect with their customers.

In 2025, AI agents are expected to change the game in retail by enabling personalized customer experiences, flexible shopping options, and sustainability initiatives. At NRF, Microsoft highlighted tools such as Copilot and Dynamics 365 ERP agents that can handle routine tasks, improve operations, and make real-time decisions. This gives employees more time to focus on what matters most while improving efficiency, reducing costs, and helping build relationships with customers and suppliers.

Key statistics from Adobe’s 2024 Holiday Shopping Report reveal significant trends in online retail during the holiday season. Online retail spending reached a record $241 billion, representing an 8.4% increase compared to 2023. Additionally, spending on buy now, pay later (BNPL) options grew substantially, exceeding $18 billion and peaking at $993 million on Cyber Monday, which set a new single-day record. Mobile revenue accounted for 53.2% of online shopping, totaling $128 billion. This shift towards mobile spending and the increasing popularity of BNPL options highlight changing consumer preferences in digital payments and financing.

Indeed has released its 2025 U.S. Jobs & Hiring Trends Report, which includes interesting data points and trends for the workplace and workforce in the new year. Two significant trends stood out to me as poised to reshape the workforce in 2025, presenting challenges and opportunities for businesses and workers alike.

1. Demographic shifts and labor shortages: The U.S. is experiencing a decline in its prime working-age population, a trend with profound implications for labor supply. This demographic shift suggests that future workforce growth could hinge on immigration, potentially leading to persistent labor shortages across various sectors. Companies may need to reevaluate their talent acquisition strategies to focus on upskilling existing employees, embrace remote work to access wider talent pools, and implement aggressive retention initiatives.

2. The dual nature of AI: Artificial intelligence is rapidly transforming the workplace, potentially automating existing jobs while creating new roles. While estimates suggest that AI could automate millions of jobs, it’s also projected to generate millions of new positions requiring a blend of technical expertise and uniquely human skills such as empathy, creativity, and critical thinking. This duality underscores the growing importance of adaptability and continuous learning for workers at all levels.

The convergence of these trends presents a complex landscape. A shrinking workforce may accelerate AI adoption to address labor shortages, potentially increasing productivity but also raising concerns about job displacement. To thrive in this evolving environment, businesses and individuals must proactively adapt, embrace learning, and cultivate a workforce equipped for the demands of the future.

After last year’s CES, I predicted that Matter would reach its tipping point in 2025, becoming the preferred connectivity standard for new smart home product designs. I’m doubling down on that prediction this year because Matter ecosystems (platforms) are maturing, and consumer adoption is finally taking off.

1. Matter ecosystems: Certifying products and developing product-specific apps is becoming much easier.

  • Easy product certification — “Works with” compatibility programs from Apple, Google, and Samsung agreed to accept Matter interoperability testing. Apple is already accepting these lab results, and Google and Samsung plan to do the same later this year. What about Amazon? Stay tuned. My take: This announcement validates Matter’s “universal interoperability” brand promise and encourages more device makers to get on board. Meanwhile, the economics are compelling—one interoperability test replaces three or four.
  • Easy app development — As promised earlier this year, Google is opening up Google Home as a developer platform. The company just announced a new set of Matter Home APIs for Android developers, with iOS support coming in a few months. These APIs link partner apps with Google Home hubs to control devices and automation experiences. The apps connect directly to the Google Home runtime package on local, on-premises hubs. The runtime controls Matter devices without a round-trip to the cloud, reducing latency while improving reliability and privacy. Google Home’s installed base is over 40 million hubs, including Nest, Chromecast, Google TVs, and some LG TVs. My take: This is a big deal. Today, Matter standardizes connectivity, but CE manufacturers often require product-specific features at the ecosystem level. Creating new ecosystems is complicated and costly, and consumers don’t want a separate ecosystem for each product, so Google is on the right architectural path here. Google Home hubs connect local devices without round trips to the cloud, and APIs let partners extend the ecosystem with product-specific features and experiences. Other ecosystems (Apple, Amazon, and Samsung) already have comparable APIs, and could add on-premises control logic to their hubs. I hope ecosystem companies consider standardizing APIs or at least using similar design patterns.

2. Matter products: The tech news outlets will review all the new Matter products that debuted at CES, but here are my short takes on a few that caught my eye.

  • Resideo (Honeywell Home) announced the Honeywell Home X2S Matter-enabled smart thermostat ($79.99 MSRP). My take: The low price point proves that adding Matter is cost-effective.
  • GE unveiled two new wall-mounted Matter-based “Cync” dimmer switches with several interesting innovations, including single-device three-way circuits ($44.99 and $25.99). My take: It’s great to see major consumer brands support Matter with innovative, mainstream, reasonably priced products.
  • LG’s over-the-range microwave oven with a 27-inch touchscreen and full Matter support created considerable interest at CES. It’s a smart TV, Matter hub, Thread border router, and home control panel. Oh yeah, it also microwaves food and has three cameras to show it cooking. This product is part of an industry trend to use touchscreens as the UI for appliances, from light switches to washing machines. My take: Some analysts dismiss this trend as silly, but we should take it seriously. LCD panels are inexpensive peripherals for smart appliances, so the question isn’t whether to use them, but how to use them. For instance, I see a rough road ahead for CE companies that envision these screens as advertising billboards or sales tools.
  • Aqara is going all-in with dozens of Matter products and variants. Examples include control panel hubs, dial-based touchscreen controllers, touchscreen switches, light switches, dimmer switches, presence sensors, climate sensors, a doorbell camera, and a Matter hub. My take: Aqara is beating established brand names to the punch with a broad Matter product portfolio.
  • Locks — Several companies introduced innovative Matter-enabled smart locks. Schlage’s first Matter product is the Sense Pro Smart Deadbolt. It uses UWB for hands-free unlocking. ULTRALOQ’s Bolt Fingerprint and Bolt Mission locks have Matter support, and the latter has UWB spatial awareness. My take: I’m pleased to see house locks get some of the great features we’ve had in car locks for years. I’m also bullish on UWB.

3. Industrial automation: There were hundreds of industrial announcements at CES. Here are three examples.

  • NVIDIA — For IIoT and edge tech, the quote of the week was from Jensen Huang: “The ChatGPT moment for general robotics is right around the corner.” He defined three kinds of robots that require no special accommodations to put them into service—agentic (because they’re information workers), self-driving vehicles (because roads are already in place), and humanoid (because they fit directly into our world). I think enterprise and industrial operations technology is a fourth AI embodiment. Industrial IoT systems are increasingly autonomous and adaptive but lack the uniform connectivity and interoperability needed to act “robotic.” This is the new definition of industrial IoT—enabling robotic physical infrastructure.
  • NXP — The company has agreed to acquire TTTech Auto in an all-cash transaction valued at $625 million. NXP plans to integrate TTTech’s MotionWise software into its CoreRide platform, accelerating the shift from hardware-based designs to software-defined vehicles (SDVs). TTTech stands for time-triggered technology, a set of techniques for synchronizing and scheduling events across distributed systems. My take: This savvy acquisition ensures the CoreRide platform can use standard networks for tightly timed, safety-related distributed automotive applications. CoreRide and TTTech technologies could also apply to manufacturing and other industrial applications, but NXP hasn’t confirmed that.
  • Ceva-MediaTek collaboration — Imagine wearing a VR headset, turning your head, and having the audio space remain fixed relative to the 3-D video space. Ceva’s RealSpace immersive spatial audio integrates this and other advanced audio processing techniques into MediaTek’s Dimensity 9400 mobile chipset. My take: Locking the audio space to the virtual visual world is very cool, and not just for gaming. For instance, spatial audio adds realism to industrial digital twins.

IonQ recently completed its acquisition of Qubitekk, a quantum networking firm. The acquisition provides IonQ with advanced networking technology and a large number of new patents, bringing IonQ’s portfolio to over 600 patents. It also acquired an important networking asset in Qubitekk’s EPB Quantum Network, the first commercial quantum network in the U.S. That will enhance IonQ’s quantum networking capabilities and remote ion-ion entanglement. The integration of Qubitekk’s technology will likely provide IonQ with faster quantum network deployment, which will enhance its secure communications and distributed computing capabilities. This should push IonQ into a leadership position in quantum networking, which could result in new partnerships and/or contracts. Combining IonQ’s quantum expertise with Qubitekk’s networking experience could result in significant advancements in security and computational power. This acquisition is strategic for IonQ, given its dependence on networking for how it plans future scaling of qubits.

Recent negative news related to the Palo Alto Networks Expedition firewall migration tool may be overblown. The tool was offered as a free utility to migrate configurations from third-party firewalls to Palo Alto Networks’ next-generation firewall platform, but it was never intended for production deployments. There is no evidence of active exploitation, and although the company retired the tool last year, it has issued patches and provided production migration tools to its customers.

In 2025, sports tech is sure to keep evolving as part of the ongoing transformation of how fans experience games and connect with their teams. For example, the platform Cosm and Meta’s Xtadium app are bringing sports into virtual reality. Meanwhile, streaming services—as we saw in 2024 with Peacock during the Summer Olympics, Netflix with boxing and the NFL, and AWS with the NFL—are expected to expand with AI features that include personalized highlights and real-time stats. This tech is also branching into other areas of entertainment and music, with AI shaping everything from songwriting to virtual concerts and even influencing events like the Grammys. We can expect platforms like TikTok and YouTube to continue blending sports, music, and entertainment, giving creators and fans new ways to connect and engage.

Last week, Dell announced substantial upgrades to its AI PC portfolio, highlighting enhancements in performance and sustainability. This initiative reflects the increasing importance of sustainability in business, a trend expected to continue influencing industry strategies through 2025. Dell’s approach includes implementing circular design principles, such as modular components and greater use of recycled materials, to extend product lifecycles and minimize e-waste. The company’s initiatives to improve energy efficiency, battery life, and repairability likewise underscore its commitment to addressing environmental concerns while catering to the performance demands of the AI PC market.

AT&T recently announced a customer guarantee for consumers and small businesses that use its wireless and fiber networks. Any customer who experiences a fiber outage of 20 minutes or more or a wireless outage of 60 minutes or more will receive compensation in the form of a billing credit. Additionally, the company is setting a goal for its customers to reach a call-center technical expert within five minutes or receive a callback at a chosen time, as well as a commitment to send a field technician the same day or next day for unresolved issues. AT&T Guarantee is a significant move for the operator, given that it’s the first of its kind for consumers, and I expect many of AT&T’s competitors will respond with similar commitments.

Research Papers Published

Citations

AI / Matt Kimball / AI Business
AI’s New Wave: Great Spaceships, Bumpy Runways

AI in 2025 / Anshel Sag / MIT Technology Review
The Download: our 10 Breakthrough Technologies for 2025

AWS / Graviton / Patrick Moorhead / Medium
AWS Graviton Adoption on the Rise: Half of All Instances Use Custom Silicon

Data Platforms & AI / Jason Andersen / Fierce Networks
Move over, data platforms – this is the dawning of the ‘Age of Intelligence’

Dell / AI PC / Patrick Moorhead / PRNewswire (picked up in several publications)
Dell Technologies Leads AI PC Movement with New, Redesigned PC Portfolio

Dell / AI PC / Patrick Moorhead / Investing.com
Dell unveils new AI-enhanced PC lineup for professionals

Dell / AI PC / Patrick Moorhead / IT Brief
Dell unveils streamlined AI PC portfolio with focus on productivity

C Code / Jason Andersen / InfoWorld
Researchers build a bridge from C to Rust and memory safety

CES 2025 / Anshel Sag / Wired
AI Hardware Is in Its 'Put Up or Shut Up' Era

HPE / HPE acquisition of Juniper Networks / Will Townsend / SDX Central
Can HPE integrate Juniper opportunities?

IBM / RISE with SAP on IBM Power Virtual Server / Robert Kramer / CIO
IBM offers SAP-on-Power users a new way into the cloud

Intel / Company timeline, market issues, resolutions / Patrick Moorhead / Tech Target
Intel’s rise and fall: A timeline of what went wrong

Intel / 2025 Plans & Goals / Patrick Moorhead / Fierce Electronics
Intel takes deep breath, faces new year in upbeat showing at CES

NVIDIA / GenAI / Patrick Moorhead
Nvidia’s new model aims to move GenAI to physical world

Oracle / Data / Matt Kimball / InfoWorld
Oracle offers price-performance boost with Exadata X11M update

Oracle / Data / Matt Kimball / Oracle Blogs
Global Industry Analyst Perspectives on Oracle Exadata X11M

WordPress / Ongoing legal battle / Melody Brue / Computerworld
Matt Mullenweg: WordPress developer hours cutback may or may not slow innovation – Computerworld

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Cisco AI Summit, January 15, Palo Alto (Will Townsend)
  • World Economic Forum, January 20-24, Davos, Switzerland (Patrick Moorhead)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • Microsoft AI Tour, January 30, New York City (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Cisco Live EMEA, February 10-13, Amsterdam (Will Townsend)
  • SAP Analyst Innovation Council, February 11-12, New York City (Robert Kramer)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • Arm Analyst Summit, February 18-21, San Francisco (Matt Kimball)
  • Microsoft Threat Intel Summit, February 25, Redmond (Will Townsend)
  • Siemens Datacenter Analyst Summit, February 25-27, Zug, Switzerland (Matt Kimball)
  • Mobile World Congress, March 2-7, Barcelona (Will Townsend)
  • Adobe Summit, March 18-20, Las Vegas (Melody Brue)
  • Extreme Networks Connect, May 19-22, Paris (Will Townsend)
  • Zendesk Analyst Day, March 25, Las Vegas (Melody Brue)
  • Oracle Database Summit, March 25, Mountain View (Matt Kimball)
  • IBM event, March 25, NYC (Matt Kimball)
  • Canva Create & Analyst Day, April 8-10, Los Angeles (Melody Brue)
  • NTT Upgrade, April 9-10, San Francisco (Will Townsend)
  • RSA Conference, April 28-May 1, Las Vegas (Will Townsend)
  • Nutanix.NEXT May 6-9, Washington DC (Matt Kimball)
  • Dell Tech World, May 19-22, Las Vegas (Matt Kimball)
  • Zscaler Zenith Live, June 2-5, Las Vegas (Will Townsend)
  • Cisco Live US, June 8-12, San Diego (Will Townsend)
  • HPE Discover, June 23-26, Las Vegas (Will Townsend)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending January 10, 2025 appeared first on Moor Insights & Strategy.

RESEARCH PAPER: Sustainable Performance in the Datacenter https://moorinsightsstrategy.com/research-papers/research-paper-sustainable-performance-in-the-datacenter/ Wed, 08 Jan 2025 13:00:46 +0000 https://moorinsightsstrategy.com/?post_type=research_papers&p=44874 This report explores datacenter power challenges and how Solidigm’s new D5-P5336 SSD helps solve for both performance and power consumption.

The post RESEARCH PAPER: Sustainable Performance in the Datacenter appeared first on Moor Insights & Strategy.

There is tension between a business’s need to maximize the value of AI across the organization and its need to drive down its energy consumption. On one side is the need for performance—powerful (and power-hungry) GPUs attached to highly performant storage. On the other side is a power budget that is expensive—in terms of carbon and datacenter footprint. Both sides contribute to significant financial costs.

While many datacenter professionals look to GPUs and CPUs as the key contributors, they often overlook the role of storage in this power consumption equation. This Moor Insights & Strategy (MI&S) pulse brief will explore this power challenge and how Solidigm’s new D5-P5336 SSD with a capacity of 122.88 TB helps datacenter operators solve for both performance and power consumption.

Click the logo below to download the research paper and read more.

Sustainable Performance in the Datacenter

 

Table of Contents

  • Summary
  • Sizing the Sustainability Challenge
  • The Storage Performance Tax
  • Solidigm Drives Sustainable Performance and Capacity
  • Call to Action

Companies Cited:

  • Solidigm
  • International Energy Agency
  • Dell Technologies
  • HPE
  • Lenovo

The post RESEARCH PAPER: Sustainable Performance in the Datacenter appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending January 3, 2025 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-3-2025/ Mon, 06 Jan 2025 18:00:42 +0000 https://moorinsightsstrategy.com/?p=44775 MI&S Weekly Analyst Insights — Week Ending January 3, 2025. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending January 3, 2025 appeared first on Moor Insights & Strategy.

MI&S Logo_color

Happy New Year from Moor Insights & Strategy!

(Photo by Jireh Foo on Unsplash)

Welcome to the annual tech trends edition of our Analyst Insights newsletter. 2024 was a year of rapid advances and unexpected developments across the technology landscape, and 2025 promises to bring even more surprises. As we embark on a new year, we’ve gathered insights from all of our MI&S analysts across their specialty areas to provide you with an overview of the key trends that shaped the past year and what our experts anticipate for the year ahead.

It was a pleasure working with you to navigate the transformative trends of 2024, and we look forward to providing valuable insights into the forces shaping the technology landscape in 2025 and beyond.

As always, if there is anything you would like to discuss as you plan for the year ahead, please reach out. Many of us will be in Las Vegas for CES next week—we’d love to connect with you there!

Patrick Moorhead

———

Our MI&S team published 22 deliverables:

Since our last newsletter, MI&S analysts have been quoted in top-tier international publications including OpenTools and Yahoo Tech. Reporters wanted our thoughts on AWS, Google Pixel 9, Nvidia, and smartwatch and wearable trends in 2025.

MI&S Quick Insights

My biggest surprise of 2024 was learning how developers have embraced AI assistance. Developers are a smart and often skeptical group of people. But time and again, I heard stories about devs paying out-of-pocket for assistant technologies to speed up their work. I expected more cynicism about AI’s ability to help with coding — which tells me that the technology must be pretty good.

I have two predictions for 2025:

  • Agentic development will continue to be big in the first half of the year, especially since we now have some highly viable agentic development platforms including Bedrock (AWS), AI Foundry (Azure), and Agentspace (Google). I also expect to see non-cloud competitors to these platforms this year. (Red Hat, can you hear me??)
  • AI governance and controls will be a massive challenge. We are already seeing technologists grapple with the implications of AI usage and apps. But once line-of-business professionals get comfortable with pervasive AI use, we will see IT and legal departments flex their muscles in a meaningful way.

I believe that in 2025, 5G will become an accelerator for AI and gain more prominence as a key component for enabling AI. While many vendors talk about edge AI and running models on devices, the reality is that many models simply cannot run on the device and that hybrid AI will remain in the future for a long time. The only way for hybrid AI to work effectively is with an always-on connection; this is really easy for smartphones but more challenging for PCs, and we might actually see 5G PCs grow as a result of that this year. Additionally, XR is an excellent interface for AI, and—conversely—AI is an accelerant for XR capabilities and growth. I believe we will see the new Android XR spatial OS as a proof point for that interconnection in both MR and AR products and solutions.

AI is a snowball that gathers larger amounts of material and grows bigger and more capable every day. In fact, it is accelerating in functionality and scope. Every morning when I open my inbox, it is filled with more new information about larger models, new features, larger funding, new funding, new startups, better reasoning, and so on.

I was curious about how much information is distributed about AI on a daily basis. I thought Google’s Gemini search might give me a general idea, but after spending a few paragraphs explaining why it couldn’t offer a hard number of publications about AI, Gemini said, “However, I can offer some informed speculation: Considering the immense volume of information Google indexes, the widespread interest in AI, and the constant stream of new content, it’s safe to say the number of publications is extremely large. We’re likely talking about millions, perhaps even tens of millions, of pages.”

So without using any specific numbers, here’s how I see AI’s growth. Most everyone has seen videos of the earth in comparison to the size of other objects in the universe. It starts like this: A basketball-sized Earth is initially shown next to a stadium-sized Sun. Then, the giant star Betelgeuse appears on screen, dwarfing the sun and the earth. Betelgeuse is as big as a city block. But Really Big is yet to come. When the massive star called VY Canis Majoris appears, Betelgeuse shrinks in comparison. What was once a giant star is now an insignificant sandbox compared to an entire beach. Finally, a supermassive black hole covers the screen. Relative to it, the Earth and Sun are nearly invisible specks.

Today, AI is like Earth in the video, but it will grow to the size of the bigger stellar objects over time. At least that’s how I envision its long term growth—AI of today is a speck compared to what it will likely become in 25 or 50 or 100 years. Let’s hope humanity has the wisdom and ability to use it wisely.

In 2024, despite the rise of AI, customer service saw a surprising trend: a renewed emphasis on human interaction. 77% of customers said they preferred an immediate connection with a person, and 81% would rather wait for a live agent than interact with a bot. While businesses strategically blend AI with human agents to enhance efficiency, customers overwhelmingly prefer connecting with real people for a more nuanced and practical experience. In 2025, AI-powered voice data analysis will become crucial, enabling hyper-personalized experiences by detecting emotions and predicting needs in real time. While omnichannel remains necessary, companies must prioritize voice interactions and leverage AI to extract valuable insights from this channel.

Meanwhile, CRM trends in 2024 revealed a shift towards user-friendly, self-service solutions, empowering businesses of all sizes. This trend will continue into 2025, with “CRM à la carte” and low-code/no-code platforms allowing for easier customization and simplified data entry. To combat data silos, companies are increasingly unifying teams under a single CRM system, streamlining communication, reducing errors, and enhancing data-driven decision-making.

2024 saw a dynamic in the compute silicon space that somewhat parallels the storage market: a bifurcation of silicon along AI versus non-AI lines. Bespoke silicon for bespoke workloads and functions has existed since semiconductors have been in existence. However, AI is different. The needs of AI have led to a renewed focus on semiconductors and startups such as Cerebras, Tenstorrent, Untether AI, and so many others. Further, the market has accelerated growth in the custom silicon space as companies like Broadcom and Marvell have benefited greatly from the needs of hyperscalers, which have very specific computational and power requirements around training, inferencing, moving, and securing data. So while NVIDIA has commanded the AI silicon market overall, it has been somewhat surprising to see the amount of VC funding that has gone into the silicon startup space.

I believe that 2025 will see this trend continue. AI inference will take center stage alongside AI training, with increasing focus on the many startups serving this market. Additionally, smaller functions along the AI data journey that currently add significant latency will spawn a new wave of silicon innovation to drive better performance and security. As in recent years, I expect to see significant VC funding going to seed startups that help in the collection, storage, preparation, and movement of data in the AI pipeline.

As in the silicon market, 2024 saw somewhat of a bifurcation of the storage market as high-performance storage vendors such as VAST Data, Weka, and DDN pivoted to address the AI data pipeline and data management. While storage is a critical element of the AI equation, gathering, cataloguing, and readying enterprise data is where the complexity of AI becomes real for business and IT leaders alike as projects move from conceptual to operational. The early-mover status achieved by VAST and other high-performance computing storage players is logical, as these companies have been focused on the more advanced functions of storage systems for the sake of accelerating workload performance. This is why we saw VAST’s valuation skyrocket through 2024, at the same time that the profiles of Weka and DDN rose considerably.

I believe 2025 will see the storage market shift in both technology and messaging as these upstarts continue to increase awareness, share, and valuation. We have already seen NetApp begin its evolution, and both Dell and HPE have quietly made moves that better position their respective portfolios. While AI-washing in terms of messaging is no surprise (because every vendor tries to exploit market trends), the investment in technology being made by these companies is the real tell; it signals that they see AI as fundamentally shifting enterprise IT organizations, in terms of both operations and technology consumption.

One partial outlier in this equation is Pure Storage. While the company continues to broaden its support for enterprise AI through its portfolio, it has not lost sight of the enterprise storage needs that exist outside of this one significant workload. However, the company seems to be taking a more measured approach in terms of allowing the market to come to it and meeting customers where they are. I believe it is this approach that has led the company to regularly recognize roughly 10% year-over-year growth in its quarterly financial reports.

The enterprise application market experienced significant growth from 2023 to 2024, with the market size increasing from $335 billion to as much as $363 billion, depending on the source. This represents an approximate growth rate of 8.4% year over year. However, despite this growth, customer dissatisfaction with enterprise software vendors rose in 2024. This dissatisfaction primarily arose from perceived unfair pricing strategies and a lack of clear value delivered by vendors. This indicates a market shift in which customers are demanding greater transparency and a better ROI from their software purchases.

In 2025, I expect deeper integration of AI within application ecosystems. At the same time, customer companies will prioritize trust and demonstrable value, seeking clearer ROI and more flexible pricing models from vendors. Unlike the 2024 emphasis on feature expansion, in 2025 we will see more focus on efficiency, interoperability, and user-centric design. This shift reflects a maturing market where vendors must adapt to needs of discerning customers that have multiple buying personas and significant budget constraints.

ERP systems got a shake-up in 2024 with AI and vendor modernization efforts. But what really stood out was the shift in mindset—enterprises realized adopting ERP isn’t just about new tech. It’s also about getting their data organized and making sure their teams are ready for change. Functionality matters more than features. If anything, 2024 made it clear that ERPs aren’t just about keeping the lights on—they’re a critical tool for businesses to grow, adapt, and stay competitive.

The payoff for those who got it right was obvious. Modern ERPs centralize data across departments, automate routine tasks, and deliver sharper insights with improved analytics. Cloud technology—pushed hard by vendors—has made these systems more flexible, mobile, and user-friendly, while also being easier for vendors to support. Even so, with an estimated two-thirds of enterprises still using on-premises setups, there’s an emphasis on moving to hybrid or fully cloud-based systems.

Looking ahead, I see data strategies and ERP systems shaping up to be even more important in 2025. Cloud adoption will keep growing because it’s flexible, cost-effective, and lets enterprises keep their locations connected while enabling their people to work from anywhere. Managing data will still be a big deal, with businesses focusing on keeping the data clean, secure, and well-governed, with better tools for protecting sensitive info.

In many industries, supply chain management will also stay front and center. With IoT increasing in functionality and especially getting better at providing data, ERPs will get better at real-time tracking and analytics, making it easier to handle inventory, logistics, and demand planning. (See Bill Curtis’s “IoT and Edge” entry in this newsletter for more on how this part of the data landscape is changing.) I expect pricing to move toward consumption-based models (versus user-based) to make it easier to bring more employees onto each system.

We’ll also continue to see ERP systems designed specifically for different industries. But here’s the thing: all this technology only works if businesses manage change well. Teams need support to adapt to new workflows, or it won’t stick. And finally, sustainability will be a bigger part of the picture, with ERPs helping businesses track environmental goals and ethical sourcing.

In 2024, operational data was the unsung hero of digital transformation. While LLMs, generative AI, and agentic AI captured headlines, operational technologies (OT) quietly emerged as critical enablers of enterprise digital transformation. Enterprises with significant physical assets discovered that fusing OT and IT data into a company-wide, multimodal, real-time data estate transforms AI-enhanced ERP, SCM, and BI applications from reactive to proactive. (Robert Kramer’s entry on ERP and SCM elsewhere in this newsletter gives more perspective on trends affecting those software vendors and their customers.) This profound change upgrades decision-making, enhances process efficiencies, and provides a holistic context for advanced industrial automation. The rapidly growing ROI for OT-IT integration projects creates insatiable demands for OT data.

However, despite compelling integration business cases, most OT data remains inaccessible due to the complexity, cost, and security risks of connecting OT systems with mainstream enterprise applications. This is the OT-IT gap—the chasm between the uniform, managed world of IT and the heterogeneous, chaotic world of industrial IoT (IIoT).

Motivated by AI-driven demand for operations data, enterprise software suppliers that are scrambling to find more efficient ways to bridge the OT-IT gap are adopting a straightforward “data first” approach. Instead of trying to manage devices from end to end, just grab the data. Replace complicated, costly, hard-coded, application-specific device-to-cloud connectivity and device management solutions with simple cloud interfaces for data, events, and status. This approach provides immediate access to IIoT machine data and enables OT software to evolve independently from cloud-native IT systems. Multimodal AI applications can use many types of unstructured IIoT data as is, further reducing device-side software complexity.
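To make this "data first" pattern concrete, here is a minimal sketch of what a device-side publisher might look like. It is illustrative only: the ingestion URL, token, and payload fields are hypothetical placeholders rather than any vendor's actual API.

```python
import json
import time
import urllib.request

# Hypothetical cloud ingestion endpoint and credential -- placeholders only.
INGEST_URL = "https://example-cloud.invalid/iiot/ingest"
DEVICE_TOKEN = "REPLACE_ME"

def publish_reading(sensor_id: str, value: float, status: str = "ok") -> int:
    """Send one telemetry sample as JSON over HTTPS; returns the HTTP status code."""
    payload = {
        "sensor_id": sensor_id,
        "value": value,
        "status": status,
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {DEVICE_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

if __name__ == "__main__":
    print(publish_reading("press-07-vibration", 0.42))
```

The point of the sketch is how little device-side logic is needed once the goal is reduced to "just grab the data": no device-management agent, no application-specific protocol, just data, events, and status pushed over a simple interface.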

Recent announcements from AWS, Google, Honeywell, Microsoft, Qualcomm, and other major cloud frameworks and ERP suppliers confirm this trend. The goal is clear: feed the rapidly growing market for AI-enhanced business transformation with massive amounts of OT data via standard protocols and simple APIs. In other words, simplify getting OT data from IIoT devices.

For 2025, I’m watching three enterprise edge trends and one consumer trend.

  1. CSP and ERP frameworks simplify and accelerate OT data collection, processing, and correlation for AI-powered enterprise applications. AI is now IIoT’s “killer app.”
  2. IIoT devices transition from customized, end-to-end mashups to scalable platforms supporting multiple enterprise frameworks via simple interfaces.
  3. Middleware companies fill the gaps, providing industry-specific connectivity, data, edge analytics, and device management services.
  4. For “smart home” consumer applications, 2025 is the year Matter reaches its tipping point, with significant design wins and increased adoption. Other vertical industries are carefully watching Matter’s standardization efforts, learning from its successes—and mistakes.

In 2024, platforms such as Zoom, Microsoft Teams, and Webex evolved into essential all-in-one communication and collaboration business tools, integrating features including whiteboarding, collaborative documents, and project management functions. This trend towards unified business platforms will accelerate in 2025, combining previously separate tools. Expect deeper integrations, such as what we’ve seen this year with Adobe Express within Box and the ability to create Jira tickets in the Grammarly extension.

Inclusivity will also be a significant focus in 2025, with accessibility features such as real-time translation and closed captioning becoming standard. Companies such as Ava are leading the way with tools specifically designed for individuals who are deaf or hard of hearing, while companies such as Google continue to prioritize accessibility.

As hybrid work persists and collaboration becomes more complex, security concerns remain paramount. Organizations will demand collaboration tools with robust security features, including end-to-end encryption and compliance with evolving data protection regulations.

With Qualcomm’s entry into the PC market as a chipset vendor, we’ve seen new levels of competition in the space—something I don’t think we’ve seen in probably the last 25 years. While the introduction of Copilot+ PCs with Qualcomm’s Oryon-based Snapdragon X Elite processors wasn’t necessarily the smoothest (plenty of Arm app compatibility issues needed to be worked out), it did present an alternative offering that pushed the incumbents to accelerate their roadmaps and improve their execution; as a result, the PC market is now far more competitive and faster paced. I expect this trend to accelerate in 2025 as PC OEMs continue to negotiate with chip vendors for better products and pricing, which I believe will ultimately benefit the consumer and accelerate the uptake of the AI PC.

Over the past five years, quantum computing has made significant progress—with 2024 as a big contributor to that progress. IonQ has become a public company. IBM has created a roadmap with corresponding technologies to push superconducting qubits past the 1,000 mark. Atom Computing has firmed up its neutral atom technology and has begun pushing aside barriers with its own 1,000-qubit machine. Quantinuum’s H-2 quantum processor has an unbelievably high quantum volume. Microsoft and Quantinuum are both advancing topological computing. Finally, the ecosystem has made several breakthroughs in quantum error correction. In fact, Google’s latest Willow chip actually reduces the error rate as more qubits are added.

In 2025, the trajectory of quantum computing will continue to be shaped by technological breakthroughs, increased investment, and the integration of quantum into broader technological ecosystems. We’ll also see IonQ begin to network its quantum processors together for increased power and scaling. IBM will continue to move forward with advancements in post-quantum cryptography. (More on that in this article.)

I also expect to see some early movement in using quantum for financial applications, such as applying QAOA (the Quantum Approximate Optimization Algorithm) for portfolio optimization and possibly some real-time analysis. JPMorgan Chase has a large portfolio of financial operations where quantum computing could replace parts of classical systems.
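For readers who want a feel for what QAOA would actually be optimizing in a portfolio context, here is a small, purely classical sketch of the underlying objective. The asset data, risk-aversion factor, and penalty weight are invented for illustration; a real QAOA run would search this same objective with a parameterized quantum circuit rather than by brute force.

```python
from itertools import product
import numpy as np

# Toy inputs (illustrative only): expected returns and covariance for 4 assets.
mu = np.array([0.08, 0.12, 0.10, 0.07])    # expected returns
sigma = np.diag([0.02, 0.05, 0.03, 0.01])  # simplified (diagonal) covariance
q = 0.5                                    # risk-aversion factor
budget = 2                                 # number of assets to hold
penalty = 1.0                              # weight on the budget constraint

def objective(x: np.ndarray) -> float:
    """QUBO-style objective: reward return, penalize risk and budget violations."""
    return float(mu @ x - q * x @ sigma @ x - penalty * (x.sum() - budget) ** 2)

# Brute-force over all 2^4 portfolios -- exactly the search space QAOA explores.
best = max((np.array(bits) for bits in product([0, 1], repeat=len(mu))),
           key=objective)
print("best selection:", best, "objective:", round(objective(best), 4))
```

At four assets this is trivial classically; the argument for QAOA is that the same binary-selection problem grows exponentially with portfolio size, which is where a quantum heuristic could eventually help.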

Meanwhile, PsiQuantum and Photonic are well on their way to creating photonic quantum computers. We will also see the beginning of real supercomputers that integrate AI, quantum computing, and HPC.

Overall, I expect 2025 to be a year of early proofs of concept.

The CrowdStrike global IT outage in 2024 was a seminal moment not only for the cybersecurity industry but also for developer operations in general. The million-dollar question—more likely a billion-dollar question, given the collateral damage—is what could have prevented such a devastating occurrence. Modern continuous integration and continuous delivery/deployment pipelines coupled with test environments are designed to provide a failsafe mechanism that catches bad code and allows rollback before catastrophe strikes. Integrations will continue among software platforms to provide the highest levels of endpoint security. I believe the CrowdStrike incident will serve as a learning experience for other IT solution providers.

Cybersecurity in 2025 will be defined by its ability to adapt to a rapidly evolving threat landscape. Identity access solutions will embrace zero trust architectures, automation, and seamless integration. Endpoint security will rely on AI-powered analytics and lightweight architectures. Cisco’s ongoing momentum with its security cloud, plus recent innovations from Microsoft, Okta, Palo Alto Networks, and others, demonstrate an industry move toward unified, scalable, and AI-enhanced platforms. Organizations must stay ahead of bad actors by investing in modern cybersecurity infrastructure, employing a culture of security awareness, and adopting an integrated approach to cyber defense that facilitates improved security outcomes.

2024 saw both progress and contradictions in tech sustainability. While green datacenters and energy-efficient AI emerged, the industry’s footprint remained significant. “Greenhushing” highlighted the need for transparency as companies became more cautious about publicizing their environmental efforts. In 2025, sustainability will shift from an optional good deed to a core business imperative driven less by a sense of virtue and much more by the energy demands of advanced technologies, regulatory pressures, and investor scrutiny. Companies must integrate sustainability into all operations, as it will become a key differentiator, separating leaders from laggards.

Citations

Arm / PC / Anshel Sag / PCWorld
Why 2025 will be the year Arm dominates PCs

AWS / CPUs / Patrick Moorhead / Network World
Graviton progress: 50% of new AWS instances run on Amazon custom silicon

AWS / Layoffs / Patrick Moorhead / Opentools
AWS Reshuffles with Layoffs in Tech Sales Division Amid Reorganization

Dell / AI / Patrick Moorhead / Yahoo Finance
Dell embodied 2 of the corporate world’s biggest themes in 2024: AI and RTO. It’s paying off.

NVIDIA / AI Chips / Matt Kimball / Singularity Hub
Here’s How Nvidia’s Vice-Like Grip on AI Chips Could Slip

AWS / Cut Back on ZT Systems Spending / Patrick Moorhead / OpenTools
AWS Trims Spending on ZT Systems Amid In-House Hardware Boom

Google / Pixel 9 / Anshel Sag / Yahoo Tech
2025 could be very different for Google and Samsung — here’s why

NVIDIA / EU clears acquisition of Run:ai / OpenTools
Nvidia Gets EU Thumbs Up for Run:ai Acquisition!

Smartwatches / Anshel Sag / Yahoo Tech
Here’s everything we expect and want from wearables and smartwatches in 2025

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • CES, January 7-10, Las Vegas (Patrick Moorhead, Anshel Sag, Will Townsend) 
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • SAP Analyst Council, February, New York City (Robert Kramer)
  • Adobe Summit, March 18-20, Las Vegas (Melody Brue)
  • Zendesk Analyst Day, March 25, Las Vegas (Melody Brue)
  • IBM event, March 25, NYC (Matt Kimball)
  • Canva Create & Analyst Day, April 8-10, Los Angeles (Melody Brue)
  • Nutanix .NEXT May 6-9, Washington DC (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending January 3, 2025 appeared first on Moor Insights & Strategy.

]]>
Platform Security Startup Axiado Secures Series C Funding https://moorinsightsstrategy.com/platform-security-startup-axiado-secures-series-c-funding/ Sat, 21 Dec 2024 21:26:15 +0000 https://moorinsightsstrategy.com/?p=44950 Axiado's TCU SoC incorporates multiple security measures at the hardware level for hyperscaler server environments. New funding should help the company expand its reach.

The post Platform Security Startup Axiado Secures Series C Funding appeared first on Moor Insights & Strategy.

]]>
The Axiado TCU SoC incorporates multiple security measures at the hardware level for hyperscaler server environments. (Image: Axiado)

Axiado, a startup in the cybersecurity space, has announced that it has closed its Series C funding round to enable the company to expand strategic partnerships and scale its operations. According to Axiado, the funding, led by Maverick Silicon, signals a recognition of the market need for stronger resilience across the platforms that power AI—particularly in the datacenter. I want to put this announcement in context by digging deeper into platform security and what has made Axiado an interesting play for investors.

Platform Security Is The First Line Of Defense

By now, cybersecurity is an obvious and critical priority for every IT organization. As a general rule, complexity scales with size—meaning the bigger the datacenter, the more challenging it is to secure. Many people still think of security through the lens of firewalls and the tools that protect against intrusion into an operating environment—the walls put up to keep bad actors out. However, many overlook the platform-level protections that ensure infrastructure is secured from the moment the proverbial power button is pushed.

Platform security delivers this protection through a combination of trusted platform modules, root of trust, baseboard management controllers and other tools that ensure environments are booting and operating in a secure environment. This is important because some of the most malicious tools can burrow themselves down below the operating environment and slowly siphon data over the course of months before being detected. Known as rootkit attacks, these threats are extremely difficult to detect and counteract.

Standard servers populating the enterprise datacenter from the likes of Dell, HPE and Lenovo have platform security tools (both silicon and software) built onto the motherboard; these are managed through their respective consoles (e.g., OpenManage from Dell).

However, when looking at the hyperscale market, the game changes a little bit. In this setting, different servers from a variety of vendors are deployed across hundreds of datacenters. Further, those servers tend to be from lesser-known vendors. To address this reality, the Open Compute Project released a standard for developing a secure control module so that platform security functionality can be offloaded to a dedicated card. This means that a datacenter can deploy different servers from various vendors and still have a single platform security tool—with no vendor-specific security chips or other tools that must be configured and managed separately.

This is where Axiado comes into play.

Axiado And The TCU

Axiado addresses these challenges with its trusted control/compute unit (TCU). This card takes all of the disparate platform security functions and pieces of silicon that may reside on a motherboard and puts them on a dedicated system-on-a-chip. It utilizes AI to further scan for threats and abnormal behavior across the system.

As the diagram at the head of this article shows, the Secure Vault is where platform integrity begins. It is in this vault that validated and signed firmware, immutable code and other security components come together with RoT and TPM to ensure a secure and pristine boot environment. The Secure Vault is also designed to provide critical capabilities for re-establishing a known and trusted state when it’s necessary to recover after being compromised.
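As a rough illustration of the boot-integrity idea described above, the sketch below mimics how a root of trust can extend a measurement for each boot stage and compare the result against an expected "golden" value. It is a simplified analogy of TPM-style PCR extension, not a description of Axiado's implementation.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Extend the running measurement with the hash of the next boot component."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure_chain(stages: list[bytes]) -> bytes:
    """Measure each boot stage in order, starting from a fixed reset state."""
    measurement = b"\x00" * 32
    for stage in stages:
        measurement = extend(measurement, stage)
    return measurement

# "Golden" chain captured at a known-good build vs. what the platform actually booted.
golden = measure_chain([b"immutable-rom", b"signed-firmware-v1.2", b"bootloader", b"os-kernel"])
actual = measure_chain([b"immutable-rom", b"tampered-firmware", b"bootloader", b"os-kernel"])

print("trusted boot:", actual == golden)  # False -- a single modified stage breaks the chain
```

The design point this illustrates is that any change anywhere in the chain, including a rootkit inserted below the operating environment, produces a different final measurement, which is what allows the platform to detect the compromise and recover to a known state.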

While the Secure Vault aims to enable a trusted environment, Secure AI is an inference processor that is tasked with detecting and blocking attacks by looking for suspicious behavior, such as the very low-level rootkit attacks previously mentioned. The combination of these capabilities delivers a system security posture that is both broad and deep. And it does this in a way that is simple for datacenter operators because it works across server platforms, from ODMs to OEMs.

In addition to these security capabilities, Axiado has now embedded its Dynamic Thermal Management technology into the TCU. As AI and other compute-intensive workloads populate the hyperscale datacenter more and more, power and cooling have become significant challenges. DTM looks at application and system performance in real time and adjusts cooling requirements. This could mean substantial cost savings if it shaves off even a few percentage points of power use across a hyperscale datacenter.
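To put rough numbers on that claim, here is a back-of-envelope calculation. The fleet size, average server draw, electricity price, and savings percentage are all assumptions chosen only to show the shape of the math, not figures from Axiado.

```python
servers = 100_000            # assumed fleet size
avg_kw_per_server = 0.5      # assumed average draw, including cooling overhead
hours_per_year = 24 * 365
price_per_kwh = 0.08         # assumed $/kWh
savings_fraction = 0.03      # "a few percentage points" of power

annual_kwh = servers * avg_kw_per_server * hours_per_year
saved_dollars = annual_kwh * savings_fraction * price_per_kwh
print(f"~${saved_dollars:,.0f} saved per year")  # roughly $1.05M under these assumptions
```

Even under these conservative assumptions the savings land in the seven figures per year, and the number scales directly with fleet size and power price.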

Who Are Axiado’s Customers?

The typical commercial enterprise will not likely deploy Axiado TCUs anytime soon. That customer segment tends to acquire servers from OEMs such as Dell, HPE and Lenovo. As touched on earlier, those servers are equipped with vendor-specific silicon and tools for secure boot and management.

The ideal customer for Axiado is a hyperscaler that’s operating datacenters deploying heterogeneous servers across a global environment. These environments scale in the hundreds of thousands of servers, powered by a variety of CPUs and GPUs. In fact, the more diverse the environment, the stronger the value prop of Axiado becomes as the potential TCO savings grow more significant.

Who Are Axiado’s Competitors?

This is a tricky question. One could rightly argue that hyperscalers already have tools that achieve what Axiado does through its TCU. It’s up for debate which is more comprehensive, but security rooted in hardware is a critical building block for any hyperscaler that wants to stay in business. However, Axiado does quite well in integrating all these security elements into a single SoC. This integration brings performance, power and cost benefits that make it an interesting play.

Are other companies delivering hardware solutions that compete with Axiado? Certainly. The security market continues to grow at a torrid pace, and new entrants jump in seemingly daily. However, Axiado’s focus on solving these challenges around security, resilience and power at scale makes it unique. Maybe that’s why this funding round was oversubscribed.

The Path Forward For Axiado

Axiado is in an interesting place. It has created a solution sorely needed in the hyperscale market. However, hyperscale is a tough market to sell into. It’s very engineering-driven and cost-sensitive. I suspect that Axiado will look to use some of this $60 million to ramp up its sales and marketing organization to bring what is a very compelling solution to the hyperscale market in a full-throated way.

What Axiado brings to bear with its TCU is complementary to the major chip companies such as AMD, Nvidia and Intel. Even though there is some overlap in terms of capabilities, there seems to be a lot of potential for strategic partnerships. Either way, the company’s technology is certain to find a home in the cloud.

The post Platform Security Startup Axiado Secures Series C Funding appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending December 13, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-december-13-2024/ Mon, 16 Dec 2024 22:42:24 +0000 https://moorinsightsstrategy.com/?p=44544 MI&S Weekly Analyst Insights — Week Ending December 13, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending December 13, 2024 appeared first on Moor Insights & Strategy.

]]>

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.


Last week Time Magazine published its big “CEO of the Year” feature about Lisa Su of AMD. I was quoted in the article, which I recommend to you as a good refresher about the state of play in the chip market today—and how far AMD has come in the decade since Su became CEO. To me it’s one more reminder of the accelerated pace of change in the tech industry, with generative AI only adding more fuel to the fire. For more on my own views of Su and AMD, take a look at my deep dive on Forbes from last month.

 

Moor Insights & Strategy principal analyst Anshel Sag was one of just two industry analysts invited to preview Google’s new Android XR spatial OS, which could help unify the XR industry.

This week I also want to shine a spotlight on the analysis of the brand-new Android XR spatial computing OS written by our own Anshel Sag. Anshel’s been covering the world of spatial computing and XR for more than a decade, and for my money he’s as good as any analyst in the world in this area. Google must think so, too, because he was one of only two analysts invited to preview Android XR. His analysis was published on Forbes last Thursday, a few minutes after Google officially announced the OS.

If you have a CEO we should be talking with or a big new launch you think we ought to know about, please don’t hesitate to let us know.

The holidays are coming soon, but we’re still not quite ready for a break from conference season. Jason, Mel, Robert, and I will attend the Salesforce Agentforce 2.0 virtual event this week. We’re all wrapping up research and advisory sessions and will enjoy a quick respite with our families before CES in January. 

Last week, I was in Northern California for the Lattice Developer Conference and then Marvell’s Industry Analyst Day, where Matt joined me. Robert and Jason were in Boston with IBM, and Anshel attended T-Mobile’s Analyst Summit. Will was in New York at HP’s Security Analyst Summit. Mel, Jason, and Robert tuned into the ServiceNow Global Industry Analyst Digital Summit.

Read more about significant tech trends and events in this week’s Analyst Insights. 

Have a great week,

Patrick Moorhead

———

Our MI&S team published 20 deliverables:

Over the past week, MI&S analysts have been quoted in multiple syndicated top-tier international publications, including Time, PC World, Tom’s Hardware, Benzinga, and UC Today. The media wanted our takes on AMD, Arm, AWS, and Intel. Mel made an appearance on RingCentral’s State of AI in Business Communications webinar, and her UC musings were listed in UC Today’s Top 10 Predictions for 2025.

MI&S Quick Insights

Google has just announced Agentspace, which is a no-code-ish environment geared towards personal work productivity. Many of Google’s competitors are also playing in this market, so it’s no surprise Google is getting into it. Each of the major cloud vendors has both an AI development platform (in Google’s case Vertex AI) and associated tools for different personas. Google’s foray into power users is quite interesting in that it leverages both Vertex AI and the viral NotebookLM project. That said, it’s different from many of the other agentic approaches out there. In some ways it may be ahead of its time. Stay tuned for more on this topic soon.

Last week I got to spend some time with Matt Gierhart, who leads the custom app dev practice at IBM Consulting. While we spoke a bit about tools and AI assistants we both like, we then shifted focus to IBM Garage, which is a collaboration space for delivering projects with IBM clients. What stood out most was how IBM can quickly present multiple scenarios for a customer decision-making process—“Why should we do one feature versus another?,” for example. Developing these scenarios often takes time and data, but using gen AI is a way to accelerate the gathering and preparation process.

Finally for now, I think we have hit a point in the maturity of AI platforms where we can start to define and compare them in a meaningful way. What’s interesting is how much the diversity of tools has driven both the awareness of the need for a platform as well as the functionality. In 2025, I predict that competition at the AI platform level will overtake the LLM wars. More to come in this area, particularly looking at Amazon Bedrock, Microsoft Azure AI Foundry, and Google Vertex AI.

Synopsys has become the first silicon company to introduce IP for UALink, the new scale-up specification that can connect up to 1,024 accelerators in support of LLM training, HPC, and other workloads. This is a significant announcement as it gives considerable weight to the recent launch of the consortium’s version 1.0 spec, which we covered in detail last month.

I would expect that this could mean that UALink-ready solutions might hit the market by mid-2026. In this connection, I will be tracking companies like AMD, Intel, Arm, and Astera Labs—along with any developments in NVIDIA’s NVLink connectivity spec.

Is storage cool again? Along with the rush of AI adoption comes an extreme focus on data. And of course, data management is highly dependent on storage. Because of this, the market has seen the arrival of a number of storage companies that index heavily on data management. And we also see a lot of traditional storage players evolving their products and messaging to orient around data management and data protection.

I’ve been in several engagements in this second half of 2024 to discuss and advise storage vendors on everything from product strategy to positioning and messaging. I say this to highlight how much companies are gearing up for the data wars of 2025 and beyond.

Each engagement ends with a similar set of takeaways: remove complexity, drive toward an autonomous state, ensure scale, and consider (and speak to) the full range of enterprise requirements—not just AI. Some companies do this better than others, and we see the results as they continue to grab market share.

Congratulations to Cohesity and Veritas Technologies on the successful completion of their business combination. It represents a significant shift in the data security and management industry—something I covered in detail in my research paper on the combination a couple of months ago.

Cohesity’s president and CEO, Sanjay Poonen, notes, “This deal combines Cohesity’s speed and innovation with Veritas’ global presence and installed base.” The combined entity will serve over 12,000 customers, including 85 of the Fortune 100, with projected revenues of around $2 billion for the 2025 fiscal year. You can read more in the announcement about the deal’s completion.

Adobe’s 2025 Creative Trends Forecast predicts four major design trends for the upcoming year. “Fantastic Frontiers” emphasizes surreal and imaginative visuals influenced by AI and gaming. “Levity and Laughter” underscores the growing importance of humor in engaging audiences. “Time Warp” blends futuristic and historical elements to create a nostalgic yet modern aesthetic. And “Immersive Appeal” focuses on multisensory experiences that combat screen fatigue and prioritize deeper brand engagement.

These trends reflect consumer desires for both escapism and authentic connection. The predictions are informed by a notable rise in experiential spending, illustrating how these trends resonate with the longing for adventure and genuine experiences. Adobe’s data insights provide a solid foundation for these predictions, and I look forward to seeing how creative trends play out in the new year.

The introduction of CameoX, a new onboarding policy by the fan-connection app Cameo, aims to make it easier for content creators including YouTubers to join the platform. By simplifying the enrollment process to a basic form and identity verification, Cameo hopes to attract a broader range of talent, potentially offering an alternative revenue stream for creators who may not be able to sustain themselves solely on platforms like YouTube, Instagram, or Twitch. However, it remains to be seen whether this will be enough to lure YouTubers away from that platform’s established audiences and revenue streams. The success of CameoX will depend on its ability to provide significant financial incentives and unique engagement opportunities that differentiate it from other popular creator platforms. With over 31,000 new creators joining through CameoX and contributing to millions in earnings, the platform is taking a promising step. Still, its long-term impact on attracting and retaining top talent is yet to be seen.

At Microsoft Ignite 2024, significant updates to Microsoft Fabric stood out for me. These updates improve data management for faster AI development, but that’s just the tip of the iceberg. During Ignite, I had the chance to sit down with Arun Ulag, corporate vice president for Microsoft Azure Data, to discuss how tools such as OneLake and Fabric Databases unify workflows, simplify data access, and support AI solutions. Read more of my analysis on Microsoft Fabric in my latest Forbes article.

HPE’s recent Q4 earnings rang in a record for top-line quarterly revenue, and the company’s total revenue, earnings per share, and free cash flow for the fiscal year all came in above guidance. Networking remains an area with opportunities for improvement, but I expect that the completion of the Juniper Networks acquisition—expected in early 2025—will bring material synergies thanks to a fortified engineering effort and combined IP portfolio. If the company can crack the code on delivering more sustainable AI infrastructure across the board, especially by leveraging its applied research with Hewlett Packard Labs and Juniper’s Beyond Labs, it could provide significant tailwinds for future top-line revenue and margin improvement.

I attended IBM’s Strategic Analyst Forum in Boston last week. One of the highlights was how IBM is partnering with competitors such as Oracle, SAP, Microsoft, AWS, and others to help customers achieve IT transformation success. I was particularly impressed with IBM’s Garage methodology, an approach to digital transformation designed to develop solutions that address real business needs. It stresses building systems step by step. By working closely with IBM and its partners, clients can create solutions that effectively address challenges and are ready for real-world implementation. One area I felt could benefit from more focus is effective change management—a topic you can read more about in an article I wrote earlier this year.

One of the most significant 2025 IoT tech trends is the maturation of embedded application development. Traditional embedded developers constructed custom platform software stacks, including board support, OS, I/O, network, security, connectivity, and device management. However, the days of building complete software stacks for IoT devices are ending as silicon suppliers and independent software vendors offer complete platforms that support application development right out of the box. Developers can now begin writing application code immediately after unboxing the development kit. And building those applications using standards-based open-source components further reduces undifferentiated overhead. My advice: “If it’s not differentiated, don’t build it—buy it.”

I see more evidence every week that this “platform-based IoT” trend is accelerating. Here are three examples. First, Synaptics posted a demo of a contextual AI voice assistant application operating entirely on-device with no cloud dependencies. The demo uses an off-the-shelf Synaptics Astra Machina SL1680 development board. Developers access all the software required for this demo via GitHub, including the Yocto Linux OS and all necessary support software. According to the company, a developer can get the demo up and running “in a day.” This is a great example of a silicon supplier providing a complete software stack that lets developers focus on applications immediately and avoid writing undifferentiated code.

Second, Nordic Semiconductor just launched an impressive prototyping platform with a catchy name—the “Thingy:91 X.” This board features the nRF9151 system-in-package cellular module (LTE-M, NB-IoT, DECT NR+, GNSS positioning) with an Arm Cortex M33 system processor. The nRF7002 companion chip adds SSID-based Wi-Fi location tracking. Expansion options from Qwiic, STEMMA QT, and Grove plug right in. The battery-powered board comes bundled with preloaded SIM cards from Onomondo and Wireless Logic, so it’ll connect to nRF Cloud right out of the box. Nordic supports developers with a comprehensive SDK and courses from its Nordic Developer Academy. Getting started with IoT cellular development is a rough road, but this “thingy” promises to make it smoother.

Finally, Matthias Bösl, head of hardware engineering at Tado (a home energy management company), posted this insightful comment: “In the past, a solid 30 percent of our development team was occupied with connectivity and the platform alone. The open source concept of Matter and Thread plus standardization ensures that we can concentrate better on things that offer our customers real added value.” Off-the-shelf IoT platforms also reduce the technical debt associated with long-term support, so the total cost savings are probably much higher in the long run.

For the past several years, the MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework has provided an analysis of cybersecurity threat actor tactics, techniques, and procedures. In the process, it has measured how well endpoint security solutions detect and prevent cyber threats. This year’s Round 6 focuses on ransomware emulation and MacOS infiltration by a North Korean threat actor profile, including adversary behaviors and defensive capabilities.

Palo Alto Networks performed exceptionally well compared to other participants this year. The company’s Cortex XDR set a record as the first participant to achieve 100% detection with technique-level detail and no configuration changes for a second year in a row. Additionally, it prevented eight out of 10 attack steps while maintaining zero false positives, a newly introduced metric. Those are impressive results, and they reinforce my positive impressions of the company’s cybersecurity platform strategy, execution, and Unit 42 Threat Research Center capabilities after my meetings with the executive leadership team in November.

As I think back about the best uses of technology in sports this year, my mind keeps returning to Intel’s involvement in this summer’s Olympic Games. Intel’s AI technologies played a significant role at the 2024 Paris Olympics, enhancing the event in multiple ways. The Athlete365 platform employed AI to provide real-time, multilingual support for athletes, facilitating communication and knowledge sharing. AI-powered systems also automatically generated personalized highlights for fans, creating a more engaging viewing experience. Furthermore, Intel’s technology enabled the creation of 3-D videos and AR clips, offering interactive media experiences. The company’s processors facilitated 8K live streaming with low latency, ensuring high-quality broadcasting. These innovations showcase the growing potential of AI not only to enhance sporting events but also to transform various aspects of daily life by optimizing workflows, accelerating innovation, and improving performance across diverse fields.

I appreciate Intel’s transparency in showcasing the technology behind these advancements. As Robert Kramer and I often discuss on the Game Time Tech podcast, understanding how technology shapes significant events like the Olympics is essential. With these sports technology partnerships, it’s refreshing to see the story behind the tech portrayed in an accessible and relatable way. This level of transparency also helps educate the public about the increasing impact of AI on our lives.

What are the top sports technology trends to look for in 2025? Throughout this year on our Game Time Tech podcast, Melody Brue and I have explored how the sports industry is adopting new technologies to improve athlete performance, reduce injuries, engage fans, and modernize team management. Looking ahead to 2025, some key developments include AI coaching apps and computer vision for real-time movement analysis and injury prevention, wearable devices that monitor performance data, and AR-enhanced broadcasts that provide real-time stats and multiple viewing angles. Other advancements include improvements to the use of video assistant referees (VARs) for fairer in-game decisions, AI systems such as the NFL’s Digital Athlete for injury prevention, data-driven tools for scouting talent, and VR simulations that offer realistic training environments. In 2025, Mel and I will continue discussing how these technologies are shaping both professional and amateur sports. Read more about 2025’s top trends, and be sure to check out our latest GTT podcasts.

AMD has shared updates on its progress toward achieving its 30×25 energy-efficiency goal for AI and HPC processors by 2025. The company reports significant advancements in chip architecture, such as 3.5D CoWoS packaging and high-bandwidth memory, bringing it close to reaching its target. The report emphasizes the critical role of software optimizations, particularly through AMD’s ROCm open software stack, in enhancing both performance and energy efficiency.

AMD has adopted a comprehensive energy-efficiency approach that optimizes both hardware and software to advance AI development. Key hardware innovations improve performance and facilitate the use of larger AI models. On the software side, the ROCm open software stack continually optimizes performance and energy efficiency by supporting lower-precision math formats, leading to substantial performance gains. This combined approach results in higher performance, greater accessibility to AI, more efficient training and inference, and a reduced environmental impact. AMD says it is confident that it will surpass its ambitious 30×25 energy efficiency goal—and that it is actively seeking additional improvements at the system level.

Huawei used the Ultra Broadband Forum held in Istanbul this fall to announce its autonomous mobile network platform. The embattled infrastructure provider is positioning it as a level-four offering, analogous to the stage of autonomous driving in which a vehicle can navigate without the intervention of a human driver. It is another example of the company making bold claims with little substantiation. To no one’s surprise, AI factors heavily into Huawei’s claims of a latency-aware topology, but removing human operators from the loop is not a realistic scenario for any mobile network operator deployment.

Research Papers Published

Research Notes Published

Citations

AMD / Lisa Su CEO Of The Year / Patrick Moorhead / Time
https://time.com/7200909/ceo-of-the-year-2024-lisa-su/ 

AMD / Lisa Su CEO Of The Year / Patrick Moorhead / Taiwan News
Time names Taiwanese American Lisa Su CEO of the Year

Arm / PCs / Anshel Sag / PC World
2025 will be the year Arm dominates PCs

Axiado / Series C Funding / Patrick Moorhead / Digital Infra Network
Axiado raises $60 mn to boost AI platform security and energy efficiency

Intel / 18A / Patrick Moorhead / Tom’s Hardware
Gelsinger fires back at recent stories about 18A’s poor yields

Intel / 18A / Patrick Moorhead / WCCFTECH
“Ousted” Intel CEO Steps In To Defend The Firm’s 18A Process, Says Yield Rate % Isn’t The Right Metric To Measure Semiconductor Progress

Marvell / HBM & XPUs / Patrick Moorhead / ABC27
Marvell Announces Breakthrough Custom HBM Compute Architecture to Optimize Cloud AI Accelerators

Marvell / HBM & XPUs / Patrick Moorhead / Hosting Journalist
Marvell Unveils Custom HBM Architecture to Enhance Cloud AI Accelerators

Marvell / HBM & XPUs / Patrick Moorhead / Investing.com
Marvell unveils custom HBM compute architecture

NVIDIA / China Antitrust Investigation / Patrick Moorhead / Benzinga
Chinese Antitrust Investigation Into Nvidia ‘All Speculative’: Tech Expert

UC Space Updates (Microsoft, Avaya, Zoom, Wildix) / Melody Brue / UC Today
The Latest News on Microsoft Ignite, Avaya’s Layoffs, Zoom and Wildix CEO on RTO

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Salesforce Agentforce 2.0, December 17 (Jason Andersen, Melody Brue, Robert Kramer, Patrick Moorhead)
  • CES, January 7-10, Las Vegas (Patrick Moorhead, Anshel Sag, Will Townsend) 
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • SAP Analyst Council, February, New York City (Robert Kramer)
  • Zendesk Analyst Day, March 25, Las Vegas (Melody Brue)
  • IBM event, March 25, NYC (Matt Kimball)
  • Canva Create & Analyst Day, April 8-10, Los Angeles (Melody Brue)
  • Nutanix .NEXT May 6-9, Washington DC (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending December 13, 2024 appeared first on Moor Insights & Strategy.

]]>
Datacenter Podcast: Episode 34 – Talking HPE, Google, AWS, VMware, Microsoft, Storage https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-34-talking-hpe-google-aws-vmware-microsoft-storage/ Mon, 16 Dec 2024 15:00:13 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=45065 On episode 34 of the MI&S Datacenter Podcast, hosts Matt, Will, and Paul talk HPE, Google, AWS, VMware, and more!

The post Datacenter Podcast: Episode 34 – Talking HPE, Google, AWS, VMware, Microsoft, Storage appeared first on Moor Insights & Strategy.

]]>
On this week’s episode of the MI&S Datacenter Podcast, hosts Matt, Will, and Paul analyze the week’s top datacenter and datacenter edge news. This week they are talking HPE, Google, AWS, VMware, and more!

Watch the video here:

Listen to the audio here:

4:15 HPE Q4FY24 Earnings
10:49 Willow: A Window On The Multiverse?
18:05 Cloud – The New Silicon Giants
26:19 Observability As A VMware Ripcord?
31:58 It Sees What I See
35:42 Storage Is Cool Again
40:21 Getting To Know Us – Ghost of Christmas Past

HPE Q4FY24 Earnings
https://x.com/WillTownTech/status/1864787679906291962

Willow: A Window On The Multiverse?
https://blog.google/technology/research/google-willow-quantum-chip/

Cloud – The New Silicon Giants
https://moorinsightsstrategy.com/research-notes/some-thoughts-on-aws-reinvent-aws-silicon-and-q-developer/

Observability As A VMware Ripcord?
https://www.forbes.com/sites/moorinsights/2024/12/05/the-power-of-deep-observability-in-facilitating-vmware-migrations/

It Sees What I See
https://www.microsoft.com/en-us/microsoft-copilot/blog/2024/12/05/copilot-vision-now-in-preview-a-new-way-to-browse/

Storage Is Cool Again
https://moorinsightsstrategy.com/research-notes/modernizing-your-datacenter-start-with-storage/

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show. The contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 34 – Talking HPE, Google, AWS, VMware, Microsoft, Storage appeared first on Moor Insights & Strategy.

]]>
RESEARCH PAPER: An Evaluation of the Open Compute Modular Hardware Specification https://moorinsightsstrategy.com/research-papers/evaluation-of-open-compute-modular-hardware-specification/ Wed, 11 Dec 2024 15:18:17 +0000 https://moorinsightsstrategy.com/?post_type=research_papers&p=44452 This report explores the history of OCP, how the DC-MHS delivers real value to enterprise IT organizations, and Dell's MHS portfolio.

The post RESEARCH PAPER: An Evaluation of the Open Compute Modular Hardware Specification appeared first on Moor Insights & Strategy.

]]>
The Open Compute Project’s Modular Hardware System (OCP-MHS or simply MHS) sub-project under the Server Project Group has one mission: interoperability among key elements of datacenter, edge, and enterprise infrastructure. It achieves this by creating common standards for the physical, signaling, and protocol interfaces of server components, making it easier for datacenter architects to build and integrate datacenter infrastructure.

Of the three MHS projects, the datacenter MHS (DC-MHS) is particularly interesting because it significantly impacts major server vendors servicing both the hyperscale and enterprise server market segments. This project focuses on delivering a modular server hardware specification that enables hardware vendors to more easily and quickly source and manufacture the server infrastructure that powers the datacenter.

The progression of DC-MHS is noteworthy, given the accelerated pace of semiconductor and hardware innovation in response to the AI explosion. This Moor Insights & Strategy (MI&S) research brief will explore three key areas:

  1. The history of OCP and how the DC-MHS might be the realization of a vision laid out more than 10 years ago
  2. How the DC-MHS delivers real value to enterprise IT organizations by embracing a cloud operating model at a time when access to the latest hardware technology is critical to keep pace in the marketplace
  3. Dell’s and Intel’s roles in the DC-MHS and Dell’s recent release of DC-MHS-compliant servers

Click the logo below to download the research paper to read more.

 


Table of Contents

  • Summary
  • OCP—A Short History of Innovation
  • OCP—Like Lego Bricks for Servers
  • How MHS Benefits the Market
  • Exploring Dell’s MHS Portfolio
  • MHS Implications for the Market
  • Bringing Value to Both CSPs and Enterprise IT

Companies Cited:

  • Dell Technologies
  • Intel

The post RESEARCH PAPER: An Evaluation of the Open Compute Modular Hardware Specification appeared first on Moor Insights & Strategy.

]]>
RESEARCH NOTE: Some Thoughts On AWS re:Invent — AWS Silicon and Q Developer https://moorinsightsstrategy.com/research-notes/some-thoughts-on-aws-reinvent-aws-silicon-and-q-developer/ Tue, 10 Dec 2024 21:20:27 +0000 https://moorinsightsstrategy.com/?post_type=research_notes&p=44447 Amazon Web Services held its annual re:Invent customer event last week in Las Vegas. With over 200 analysts in attendance, the event focused on precisely what one would expect: AI, and how the largest cloud service provider on the planet is building infrastructure, models, and tools to enable AI in the enterprise. While my Moor […]

The post RESEARCH NOTE: Some Thoughts On AWS re:Invent — AWS Silicon and Q Developer appeared first on Moor Insights & Strategy.

]]>
The Trainium2 chip from AWS

Amazon Web Services held its annual re:Invent customer event last week in Las Vegas. With over 200 analysts in attendance, the event focused on precisely what one would expect: AI, and how the largest cloud service provider on the planet is building infrastructure, models, and tools to enable AI in the enterprise.

While my Moor Insights & Strategy colleagues Robert Kramer and Jason Andersen have their own thoughts to share about data and AI tools (for example here), this research note will explore a few areas that I found interesting, especially regarding AWS chips and the Q Developer tool.

The AWS Silicon Evolution

AWS designs and builds a lot of its own silicon. Its journey began with the Nitro System, which handles networking, security, and a bit of virtualization offload within an AWS-specific virtualization framework. Effectively, Nitro offloads a lot of the low-level work that connects and secures AWS servers.

From there, the company moved into the CPU space with Graviton in 2018. Since its announcement, this chip has matured to its fourth generation and now supports about half the workloads running in AWS.

AWS announced Inferentia and Trainium in 2019 and 2020, respectively. The functionality of each AI accelerator is easy to deduce from its name. While both pieces of silicon have been available for some time now, we haven’t heard as much about them—especially in comparison to the higher-profile Graviton. Despite not being as well-known as Graviton, Inferentia and Trainium have delivered tangible value since their respective launches. The first generation of Inferentia focused on deep learning inference, boasting 2.3x higher throughput and 70% lower cost per inference compared to the other inference-optimized instances on EC2 at the time.

Inferentia2 focused on generative AI (Inf2 instances in EC2) with a finer focus on distributed inference. Architectural changes to silicon, combined with features such as sharding (splitting models and distributing the work), allowed the deployment of large models across multiple accelerators. As expected, performance numbers were markedly higher—including 4x the throughput and up to 10x lower latency relative to Inferentia1.

Trainium2, from the Chip to the Cluster

Based on what we’ve seen through the Graviton and Inferentia evolutions, the bar has been raised for Trainium2, which AWS just released into general availability. The initial results look promising.

As it has done with other silicon, the Annapurna Labs team at AWS has delivered considerable gains in Trainium2. While architectural details are scant (which is normal for how AWS talks about its silicon), we do know that the chip is designed for big, cutting-edge generative AI models—both training and inference.

Further, AWS claims a price-performance advantage for Trainium2 instances (Trn2) of 30% to 40% over the GPU-based EC2 P5e and P5en instances (powered by NVIDIA H200s). It is worth noting that AWS also announced new P6 instances based on NVIDIA’s hot new Blackwell GPU. A point of clarification is worth a mention here. Unlike Blackwell, Trainium2 is not a GPU. It is a chip designed for training and inference only. It is important to note this because such chips, though narrow in functionality, can deliver significant power savings relative to GPUs.

The Trainium2 accelerator delivers 20.8 petaflops of compute. For enterprise customers looking to train and deploy a large language model with billions of parameters, these Trn2 instances are ideal, according to AWS. (A Trn2 instance bundles 16 Trainium2 chips with 1.5TB of high-bandwidth memory, 192 vCPUs, and 2TB RAM.)

Going up the performance ladder, AWS also announced Trainium2 UltraServers—effectively 64 Trainium2 chips across four instances to deliver up to 83.2 petaflops of FP8 precision compute. These chips, along with 6TB of HBM and 185 TBps of memory bandwidth, position the UltraServers to support larger foundational models.
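One hedged consistency check on those figures: if the 20.8-petaflops number is read as applying to a full 16-chip Trn2 instance (my interpretation, not an AWS statement), the UltraServer math lines up.

```python
petaflops_per_trn2_instance = 20.8    # FP8, assumed to cover a full 16-chip Trn2 instance
instances_per_ultraserver = 64 // 16  # 64 Trainium2 chips across four instances
print(petaflops_per_trn2_instance * instances_per_ultraserver)  # 83.2, matching the UltraServer figure
```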

To connect these chips, AWS developed NeuronLink—a high-speed, low-latency chip-to-chip interconnect. For a parallel to this back-end network, think of NVIDIA’s NVLink. Interestingly, AWS is part of the UALink Consortium, so I’m curious as to whether NeuronLink is tracking to the yet-to-be-finalized UALink 1.0 specification.

Finally, AWS is partnering with Anthropic to build an UltraCluster named Project Rainier, which will scale to hundreds of thousands of chips to train Anthropic’s current generation of models.

What does all of this mean? Is AWS suddenly taking on NVIDIA (and other GPU players) directly? Is this some big move where AWS will push—or even nudge—its customers toward Trn2 instances instead of P5/P6 instances? I don’t think so. I believe AWS is following the Graviton playbook, which is simple: put out great silicon that can deliver value and let customers choose what works best for them. For many, having the choice will mean they continue to consume NVIDIA because they have built their entire stacks around not just Hopper or Blackwell chips, but also NVIDIA software. For some, using Trn2 instances along with Neuron (the AWS SDK for AI) will be the optimal choice. Either way, the customer benefits.

Over time, I believe we will see Trainium’s adoption trend align with that of Graviton. Yes, more and more customers will select this accelerator as the foundation of their generative AI projects. But many others will choose Blackwell and the NVIDIA chips that follow. As this market continues to grow at a torrid pace, everybody wins.

It’s worth mentioning that AWS also announced Trainium3, which will be available in 2025. As one would expect, this chip will yet again be a significant leap forward in terms of performance and power efficiency. The message being sent is quite simple—AWS is going to deliver value on the enterprise AI journey, and the company is taking a long-term approach to driving that value.

Q Developer and Modernization

One of the other areas that I found very interesting was the use of AI agents for modernizing IT environments. Q is an AWS generative AI assistant that one could consider similar to the more well-known Microsoft Copilot. Naturally, Q Developer is a tool for creating and managing Q assistants.

While Q Developer is interesting for several reasons, digital transformation is the area of assistance I found most compelling. At re:Invent, AWS rolled out Q Developer to modernize three environments: Microsoft .NET, mainframe applications, and VMware environments. In particular, the VMware transformation is of great interest to me as there has been so much noise about VMware in the market since its acquisition by Broadcom. With Q Developer, AWS has built agents to migrate VMware virtual machines to EC2 instances, removing dependencies. The process starts by collecting (on-premises) server and network data and dropping it into Q Developer. Q Developer then outputs suggestions for migration waves that an IT staff can accept or modify as necessary. This is followed by Q Developer building out and continuously testing an AWS network. And then the customer selects the waves to start migrating.

While Q Developer is not going to be perfect or remove 100% of the work involved in decoupling from VMware, it will help organizations get through some of the most complex elements of a migration. That can save an enterprise IT organization months of time and significant dollars, and it is what makes Q Developer so provocative and disruptive. I could easily see the good folks at Azure and Google Cloud looking to build similar agents for the same purpose.

Q Developer for .NET and mainframe modernization is also highly interesting, albeit far less provocative. Of the two, I think the mainframe effort is the more compelling, as it deconstructs monolithic mainframe applications and refactors them into Java. I like what AWS has done with this and can see the value—especially given the COBOL skills gap that exists in many organizations, and, more importantly, the skills gap in understanding and running mainframe and cloud environments in parallel.

With all this said, don’t expect Q Developer to spell the end of the mainframe. Mainframes are still employed for a reason—70 years after the first commercial mainframes, and 60 years after IBM’s System/360 revolutionized the computing market. It is not because they are hard to migrate away from. The reason is tied to security and performance, especially in transaction processing at the largest scales. However, Q Developer for mainframe modernization is pretty cool and can certainly help, especially as a code assistant for a workforce that is trying to understand and maintain COBOL code written decades ago.

Final Impressions

I’ve been attending tech conferences for a long time. My first was Macworld in 1994. Since then, I’ve attended every major conference at least a few times. AWS re:Invent 2024 was by far the largest and busiest conference I’ve attended.

While there was so much news to absorb, I found these announcements around Trainium and Q Developer to be the most interesting. Again, my Moor Insights & Strategy colleagues Robert Kramer and Jason Andersen, along with our founder and CEO Patrick Moorhead, will have their own perspectives. Be sure to check them out.

The post RESEARCH NOTE: Some Thoughts On AWS re:Invent — AWS Silicon and Q Developer appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending December 6, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-december-6-2024/ Tue, 10 Dec 2024 01:30:21 +0000 https://moorinsightsstrategy.com/?p=44280 MI&S Weekly Analyst Insights — Week Ending December 6, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending December 6, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.


The time between Thanksgiving and Christmas is hectic but rewarding for me, with lots of events and client meetings between the rounds of holiday cheer with the family. Last week, I was fortunate to be with a few members of my team in Las Vegas for AWS re:Invent. Matt, Jason, Robert, and I were all there covering our various specialty subjects—and figuring out how it all fits together for a company with the reach of AWS. It’s always great to see the analysts in action, and Jason recorded his first Six Five video. You can check that out, along with all the other re:Invent Six Five coverage, here.

 

Andy Jassy at AWS
Andy Jassy presents at re:Invent. (Photo: Patrick Moorhead)

While we were in Vegas, Will was in Dallas for AT&T’s Analyst & Investor Day, and Anshel traveled to a couple of NDA meetings with clients that will feed some of his future articles. This week, I’m in Northern California for the Lattice Developer Conference and then Marvell’s Industry Analyst Day, where Matt will join me. Robert and Jason will be in Boston with IBM, and Anshel will be attending T-Mobile’s Analyst Summit. Mel, Jason, and Robert will all be tuning into the ServiceNow Global Industry Analyst Digital Summit.

It’s a lot of travel, but also a lot of learning and many chances to connect with and advise our clients. We wouldn’t have it any other way. We’ll be starting 2025 with a bang, too, as many of us attend CES. If you are going to be there and we don’t already have something scheduled, please reach out and let’s set something up.

Have a great week,

Patrick Moorhead

———

Our MI&S team published 20 deliverables:

Over the last two weeks, MI&S analysts have been quoted in multiple syndicated top-tier international publications such as Barron’s, ComputerWorld, Fierce Network, Investor’s Business Daily, The Register, and VentureBeat. The media wanted MI&S’s take on Avaya, AWS, Intel, and more. Pat made several network television appearances, including one on CNBC to discuss Intel’s CEO departure.

MI&S Quick Insights

Last week was all about the Amazon Web Services re:Invent conference, for which 60,000 people descended on Las Vegas. It was quite an event, and AWS came out swinging with announcements up and down its stack. Here were the biggest news items on the developer side:

My personal favorite announcement of re:Invent was the new transformation capabilities in Q Developer. For context, Chapter 1 (out of 4) in my career was in systems integration and consulting; the big takeaway from that chapter was that migrations are hard—and maybe I should consider moving on to Chapter 2. One key lesson learned was that anything that can alleviate the pain of migrations and upgrades will make enterprises more secure and more efficient, and give them more time and money to innovate. AWS’s solid use case shows the enterprise potential of agentic apps and gen AI. Check out the piece that I published in Forbes about AWS agents.

For a Six Five on the Road videocast, I got to interview Sherry Marcus, Ph.D. about gen AI and AWS Bedrock. I have met Sherry before and was so happy that this time we got it on video. Bedrock has significantly expanded its capabilities in the newest release to support agentic applications. Bedrock is looking a lot like a great enterprise solution, but it is not alone, given the release of Microsoft’s AI Foundry at the Ignite conference a couple weeks ago. What we are starting to see is the formation of a new type of middleware category that I am calling agentic development frameworks. This is the type of technology that will get us past productivity agents operating within application platforms (which still do have their place) and get us into integrated high-scale agentic solutions. Want to know more? Check out the Forbes piece I wrote about these agentic frameworks.

While AWS generated a lot of focus, re:Invent is also something of an ecosystem show for the cloud. So, partners and partner announcements were everywhere as well. I got to speak to product leaders from IBM, which made multiple announcements of their own products running upon and in some cases integrating with AWS cloud. It’s an interesting combination, as IBM is a major champion for hybrid cloud, and its tools could help create a bridge for management, governance, and observability anywhere that applications are deployed.

There was also a lot of talk about AWS SageMaker’s new unified toolset. At first glance, it may seem like it’s simply a means to unify the data-scientist experience, but after doing some digging I was able to find out that what was announced this week was only step one. I think we will be seeing a great deal of work integrating SageMaker with Amazon Bedrock in 2025.

Amazon’s newly announced Amazon Nova is a line of three new foundation models: Nova Pro, Nova Lite, and Nova Micro. These are frontier-class models that can handle difficult language tasks, as shown by benchmarks such as MMLU and VATEX. Their key advantages are speed, support for agentic workflows, and the ability to be customized. With these models, Amazon has focused on price-performance, which could disrupt the AI market by setting a new standard and democratizing advanced AI capabilities through its smaller models.

Similar to Dell’s AI Factory, Microsoft announced Azure AI Foundry to facilitate AI implementation. The Foundry includes an SDK that integrates Azure AI capabilities with GitHub and Visual Studio, and its objective is to simplify AI development through a single unified platform. Azure AI Foundry also includes tools to measure AI’s effectiveness and monitor ROI, with the goal of gauging AI’s impact and ensuring it stays focused on the proper business objectives.

Verint has introduced a new automated Scoring Bot tool to give organizations a more accurate and efficient method to assess the quality of their customer and employee experiences. This bot likely employs advanced analytics and AI technologies to analyze data from various interactions, such as customer feedback, service calls, and employee engagement metrics. By automating the scoring process, the bot can reduce manual effort and increase the speed at which insights are generated. This should enable businesses to make more informed decisions and swiftly implement improvements.

Integrating customer and employee experience scoring into a single tool reflects the growing recognition of the interdependence between the two. Engaged and satisfied employees are often more likely to deliver better customer service, enhancing overall customer satisfaction.

I want to talk about a key announcement out of re:Invent—Q Developer for modernization. This agent does exactly what one would expect: it modernizes IT environments. It focuses on three of them—.NET, mainframe, and VMware. While all are compelling, the VMware agent is incredibly disruptive, and maybe a little provocative.

Here’s the gist: through Q Developer, organizations can take their on-prem VMware environments and workloads and migrate them to cloud-native environments—on AWS, of course—all with a “few clicks.” This includes everything from re-mapping networks to workload migration.

Do I think this tool is going to be as simple as entering a few bits of data and clicking the mouse? No. But if this gets an organization, say, 75% of the way to successful migration, we are talking many months of saved time. And many, many dollars.

I’m curious to see how this will take off with customers; I can’t wait to see those first success stories and understand what was really involved in getting across the finish line.

One of the stars of AWS re:Invent was silicon—specifically Trainium2. This AI training silicon, delivering 20.8 petaflops of peak performance per 16-chip Trn2 instance, is the company’s answer for meeting the needs of cost-effective, performant instances. In fact, the company claims a 30% to 40% price-performance advantage over its own GPU-based training instances running NVIDIA’s H200. Trainium2, now generally available in the Trn2 instance, is also built into AWS UltraServers (four Trn2 instances, 64 chips) and UltraClusters (hundreds of thousands of chips).
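As a rough illustration of what that price-performance claim could mean for a single training job, here is a short sketch; only the 30% to 40% figure comes from AWS, while the hourly rate and job length are placeholders.

```python
# Illustration only: the hourly rate and job length are placeholders.
baseline_rate_per_hour = 100.0     # hypothetical GPU-instance rate
training_hours = 500               # hypothetical job length
baseline_cost = baseline_rate_per_hour * training_hours

# A 30-40% price-performance edge is read here as 30-40% more work per
# dollar, so the same job costs roughly baseline / 1.3 to baseline / 1.4.
for advantage in (0.30, 0.40):
    trn2_cost = baseline_cost / (1 + advantage)
    print(f"{int(advantage * 100)}% advantage: ~${trn2_cost:,.0f} "
          f"vs ${baseline_cost:,.0f} on the baseline")
```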

So what is AWS doing? Is it taking on NVIDIA and other GPU players with Trainium (and Inferentia)? People at the company will tell you no. They will tell you that they are simply providing customers with choice, just as they have with Graviton. I understand this position, and the logic behind it. Further, I have no doubt the company is simply providing choice across the AI journey.

That said, I also believe that AWS is going to see success with Trainium like it has with Graviton. This means a first-generation part that played more as a proving ground than anything else. Then a second generation that delivered significant price-performance improvements, and a third (and fourth) generation that continued to build adoption. And one day we will all take note that about half of the workloads run on AWS silicon.

Trainium will be a little harder to grow as aggressively as Graviton, however, because migrating a workload to a new virtualized CPU is considerably easier than moving an AI stack onto new accelerator silicon. Still, I believe that success will come. With that said, there is no loser—and NVIDIA’s astronomical growth in the training space will not be slowed.

Salesforce’s AgentForce platform has made a strong initial market entry, as shown by contracts with prominent companies such as FedEx and IBM within its first quarter since launch. CEO Marc Benioff projects significant growth, estimating that Salesforce customers will deploy one billion AI agents within the next year. This expected expansion in AI integration has led to increased revenue forecasts for Salesforce.

The platform’s success will likely depend on Salesforce’s ability to provide reliable model training and address potential issues such as hallucinations or memory inconsistencies, which it seems to be managing effectively. However, the widespread adoption of AgentForce involves more than just technological reliability. Organizations face the complex challenge of integrating a new digital workforce into their existing structures and workflows. Successfully implementing a digital workforce requires a considerable change-management effort. This entails a shift in mindset, organizational culture, and operational processes. Organizations must be ready to invest the necessary time, resources, and leadership commitment to navigate this transition and unlock the full potential of AI agents.

I look forward to seeing more progress from Salesforce and AgentForce during the AgentForce 2.0 event in mid-December.

Data has always been essential for businesses, but it’s not enough to just collect it—you need to act on it. At re:Invent, AWS CEO Matt Garman said, “The next big leap in value is not just about getting great data, but about taking actions and doing something with it.” That idea stuck with me because it gets to the heart of what businesses need today.

AWS just introduced the next generation of Amazon SageMaker, which brings data, analytics, and AI into one platform. It includes tools such as SageMaker Unified Studio for accessing and working with data, SageMaker Catalog for managing and finding data, and SageMaker Lakehouse for combining analytics and AI. The existing SageMaker service has been renamed SageMaker AI, focusing on building, training, and deploying AI and ML models. It can still be used on its own, but it’s also part of the larger integrated platform for those who need everything in one system.

These updates show that AWS is innovating to simplify things for businesses. By unifying these tools, AWS looks to guide enterprises in turning data into results.

As we all know, data is essential for innovation and business transformation. This fall, I attended events hosted by Teradata, AWS, Infor, and LogicMonitor, each offering different approaches to data management. Although all four had different methods, they shared a consistent perspective on the importance of data management—which is often overlooked in discussions dominated by AI. Read my latest Forbes article, where I highlight these companies’ views on how data management supports IT integration, analytics, real-time monitoring, and AI-based processes.

Broadcom’s acquisition of VMware and subsequent price increases have introduced significant challenges for enterprises reliant on VMware’s virtualization solutions. The financial and technical concerns associated with these changes dictate a reassessment of IT strategies, especially to avoid vendor lock-in. Deep data and network observability can serve a critical function in managing hybrid workloads, optimizing resources and utilization, and ensuring seamless operations across public and private clouds.

Over time, Broadcom’s pricing strategy for VMware may spur innovative approaches as its current customers adopt competitive offerings and diversify their virtualization solutions. As businesses adjust to a new normal, I believe this evolution will shine a light on the importance of network observability solutions within a migration journey for those customers that wish to explore alternatives. For more insights on this, take a look at my recent Forbes contribution on the role of deep observability in VMware migrations.

At AWS re:Invent 2024, SAP introduced GROW with SAP on AWS, designed to make it easier for enterprises to adopt SAP S/4HANA Cloud ERP. As you may know, ERP transformations can be challenging. The goal of GROW with SAP is to reduce the upfront costs and complexity of adopting cloud ERP, enabling businesses to complete deployments in months instead of years. This solution integrates SAP data with AWS’s generative AI tools—especially Amazon Bedrock—to improve operations. It also uses SAP’s Joule AI copilot to work across SAP applications to make processes more efficient.

I’m interested to see how GROW with SAP on AWS plays out when it is released soon. I understand the promise of easier access to resources, but knowing how challenging ERP transformations can be, especially in terms of both data management and change management, I’m cautious about how well it will deliver.

PayPal and Venmo are updating their platforms to remain competitive in the digital payment market. PayPal is introducing features such as money pooling to enhance user experience and attract a broader audience. This strategy also indirectly promotes the PayPal brand to younger Venmo users, potentially fostering future loyalty to the larger PayPal ecosystem.

At the same time, PayPal is facilitating online holiday shopping with its new “Fastlane by PayPal” feature. Launched in August 2024 and now available throughout the U.S., Fastlane aims to simplify guest checkout by using e-mail recognition and one-time passcodes to fill in shipping and payment information automatically. This reduction in manual entry could lead to faster checkouts and increased sales. Currently, Fastlane is available to U.S. merchants and shoppers using U.S. dollars and integrates with various e-commerce platforms, including Adobe Commerce, BigCommerce, and Salesforce Commerce Cloud. These developments indicate a two-pronged approach by PayPal, improving the user experience across different platforms and transaction types to boost engagement and expand market share.
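As a generic illustration of the checkout pattern Fastlane describes, and not PayPal’s actual implementation, the flow looks roughly like this: recognize the e-mail, verify with a one-time passcode, then prefill the stored profile.

```python
# Generic sketch of an email-plus-OTP guest checkout; the profile store,
# passcode delivery, and field names are all hypothetical.
import secrets

profiles = {  # hypothetical vaulted shopper profiles keyed by e-mail
    "shopper@example.com": {"shipping": "123 Main St", "payment_token": "tok_abc"},
}
pending_otps = {}

def start_checkout(email):
    """If the e-mail is recognized, issue a one-time passcode."""
    if email not in profiles:
        return False  # fall back to the normal guest checkout form
    pending_otps[email] = f"{secrets.randbelow(1_000_000):06d}"
    print(f"(stand-in for SMS/e-mail) OTP for {email}: {pending_otps[email]}")
    return True

def verify_and_prefill(email, otp):
    """Return the stored profile if the passcode matches, else None."""
    return profiles[email] if pending_otps.get(email) == otp else None

if start_checkout("shopper@example.com"):
    otp = pending_otps["shopper@example.com"]  # the shopper would type this in
    print("Prefilled:", verify_and_prefill("shopper@example.com", otp))
```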

SoFi Invest has expanded its offerings through a partnership with Templum, providing accredited investors with access to private-market investments. This move includes new funds such as those focused on SpaceX, Pomona Investment Fund, and StepStone Private Markets Fund. The partnership leverages Templum’s technology to facilitate these alternative investments, aligning with SoFi’s strategy to enhance its investment options and cater to the growing demand from retail investors interested in privately held companies.

There are rumors that Sony is working with Apple to bring games from PlayStation VR to the Vision Pro. This would be a huge win for both companies, since Sony ultimately doesn’t care about the hardware as much as it does selling games and Apple desperately needs more-immersive and better-quality games for its headset. A big part of this rumored partnership is that Apple would ensure that Sony’s PSVR Controllers would work with the Apple Vision Pro. This would also mean that developers could develop for the Vision Pro and PSVR using the same control scheme. I believe that this is partially an admission from Apple that shipping a headset without controllers and expecting people to game on it is a fool’s errand.

It appears that Valve is not only building a VR headset codenamed Deckard, but that it is also creating new VR controllers and a living room console to accompany the Steam Deck as part of its upcoming VR hardware offerings powered by SteamOS. I talked about Valve’s efforts in the XR space in my recent State of XR Part 2 report, but it seems that Valve’s hardware ambitions are both deeper and broader than originally anticipated. That said, “Valve time” is a very real thing, and these products could launch in a few months or a few years—it’s anyone’s guess.

AWS re:Invent attendees had to look behind the session titles to find IoT news. That’s because CSPs (including AWS) and enterprise ERP providers are following the money—using AI to extract significant business value from the operations data collected by IoT systems. Here are two examples from re:Invent: using AI to monitor and analyze IoT data, and using Greengrass to deploy machine learning on edge devices.

AWS announced the general availability of IoT SiteWise Assistant in November and demonstrated its capabilities at re:Invent. IoT SiteWise Assistant adds a layer of intelligence on top of the SiteWise Monitor Dashboard to simplify industrial data collection, organization, and monitoring. The new Assistant enables enterprises to gain actionable insights into complex operational situations involving disparate data sources and types. Operators can ask simple, natural-language questions to identify problems, troubleshoot root causes, and take corrective actions.

Digging deeper, here’s why this is a game-changer for IoT: Using AI to look for patterns across heterogeneous data lowers one of the most significant industrial IoT solution barriers—data ingestion and transformation. The trend is to use data pretty much as-is rather than remodeling it into a unified, centralized, managed data fabric. This approach fits with AWS’s “data as a product” strategy, combining the benefits of mesh and fabric data architectures.

The re:Invent workshop “Unleash edge computing with AWS IoT Greengrass on NVIDIA Jetson” demonstrated deploying ML models directly on edge devices, facilitating on-device intelligence with data paths connected to AWS services. The workshop used the NVIDIA Jetson Orin platform to accelerate complex AI workloads such as image recognition, Edge Impulse to streamline cloud-to-edge AI workflows, and Greengrass to connect the data and orchestrate software deployment on fleets of devices. I’m doubling down on my prediction that AI at the edge is a 2025 “megatrend.”
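Here is a minimal, dependency-free sketch of that cloud-connected edge pattern, assuming a placeholder model and ingest endpoint; the real workshop wires this through Greengrass components, Edge Impulse pipelines, and AWS IoT Core rather than a bare HTTP call.

```python
# Run the model on the device, forward only the interesting results upstream.
# classify_frame() and the ingest endpoint are placeholders.
import json
import time
import urllib.request

def classify_frame(frame_path):
    """Stand-in for the on-device model (e.g., running on a Jetson GPU)."""
    return {"label": "defect", "confidence": 0.91, "source": frame_path}

def publish(event):
    """Forward an event upstream; a real deployment would use MQTT/IoT Core."""
    req = urllib.request.Request(
        "https://example.com/ingest",  # hypothetical endpoint
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

for _ in range(10):  # bounded loop for the sketch; a device would run forever
    result = classify_frame("/camera/latest.jpg")
    if result["confidence"] > 0.8:  # send only high-confidence events upstream
        publish(result)
    time.sleep(1.0)
```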

Apple’s 5G modem ambitions are no secret to anyone; the company wants to ramp its 5G modem capabilities with low-cost and low-volume products. This is meant to protect the company from any potential defects or delays that could arise. I also believe it gives Apple time to refine its modem design and add capabilities while shrinking die space and power consumption. We could even see a 5G RedCap modem from Apple for the Apple Watch or other wearables like an AR headset. I would expect a more powerful modem inside new MacBooks or iPads that’s more in line with what Apple will use in the iPhone.

A comprehensive global study, the 2024 HP Work Relationship Index, reveals that only 28% of knowledge workers have a healthy relationship with their work. This represents a mere one-point increase from 2023. However, the study highlights two promising solutions to improve this dynamic: integrating artificial intelligence and offering personalized work experiences.

The study’s key findings include:

  • AI usage among knowledge workers has surged to 66% in 2024, up from 38% last year, with AI users reporting 11 points higher satisfaction with their relationship to work.
  • Approximately two-thirds of workers desire personalized work experiences, with 87% willing to forgo part of their salary to achieve this.
  • AI is crucial for making jobs easier, improving work-life balance, and opening new opportunities for enjoyment and career advancement.

For the study, HP surveyed 15,600 respondents across 12 countries. The results underscore the evolving expectations of employers and employees and the potential of smart technology, particularly AI, to drive better work relationships and overall job satisfaction. Learn more about how these trends are shaping the future of work in my recent Forbes article.

Microsoft finally pushed its Recall feature to x86 Copilot+ PCs with chips from Intel and AMD that have capable enough NPUs. This update came through the Windows Insider Dev channel, so it’s not quite a broad release, but it is good to see Microsoft deliver on its promise of bringing Copilot+ to x86 PCs. I did think it was a bit odd that Recall didn’t come as part of the November update that brought Copilot+ to x86 PCs, but at least it wasn’t a long wait. I believe that we’ll continue to see a feature delta between Snapdragon-based systems and x86 systems, given Qualcomm’s six-month head start.

IonQ recently announced IonQ Quantum OS, a quantum operating system to increase the efficiency and scalability of quantum computing. It reduces classical and cloud overheads, provides improved qubit calibration, and increases security for enterprise-level applications. IonQ is currently using the OS in the IonQ Forte system and plans to use it in the IonQ Forte Enterprise in Switzerland.

IonQ also announced the IonQ Hybrid Services suite, which is designed to blend quantum and classical computing. It contains a new tool called the Workload Management & Solver Service, which makes cloud integration of quantum tasks easier. It also announced “Sessions” for optimized QPU time management.

Nile Secure’s latest Trust Service leverages its core architecture to simplify zero trust access and policy management with continuous updates as well as monitoring and enforcement capabilities. I also like its SSE integrations with Microsoft Security, Palo Alto Networks, and Zscaler for extended cloud workload protection. The company’s approach to delivering secure networking as a service with SLA guarantees is somewhat unconventional, and this latest announcement has the potential to provide additional value for its customers.

At this year’s AWS re:Invent, there was of course a lot of great technology to discuss. But one thing that stands out to me is how these tools and solutions aren’t just geared toward enterprises—they’re also making an impact in sports. I attended an AWS Sports session that highlighted work being done with the National Football League, National Hockey League, PGA Tour, and Deutsche Fußball Liga.

This session gave context to our Moor Insights & Strategy Game Time Tech podcast. Melody Brue and I had a chance to catch up with Julie Neenan Souza, head of global sports strategy at AWS, to discuss how AWS technologies are impacting organizations, players, and fans. Check out this great conversation.

Renewable energy company RWE has chosen Hewlett Packard Enterprise’s Private Cloud AI to enhance its weather modeling and energy resource management. This collaboration is part of HPE’s NVIDIA AI Computing portfolio and will enable RWE to leverage advanced AI to improve forecast accuracy and optimize its global renewable energy operations.

RWE’s AI Research Laboratory will use the AI-optimized private cloud—which HPE says can be deployed in just a few clicks—to evaluate, fine-tune, and deploy weather models. The solution’s ability to handle large datasets and automate processes should streamline this work. It will integrate with the HPE GreenLake cloud, which should allow RWE’s researchers to concentrate on model development and accelerate their time-to-market.

This initiative aligns with RWE’s growth strategy, “Growing Green,” which aims to expand its renewable energy portfolio and achieve net zero emissions by 2040. The advanced AI capabilities should also give RWE a competitive advantage in the renewable energy market.

I had the opportunity to attend AT&T’s Analyst & Investor Day in Dallas. The company’s continued significant fiber investment serves as the bedrock for it to provide converged network services that include highly performant broadband and mobility at scale. In the process, it is bridging the digital divide and providing digital literacy through its network of connected learning centers. It’s a model to follow for other companies that have the capital and operational resources to marshal. One statistic shared at the event stood out for me: converged services are lifting the operator’s lifetime subscriber value by 15%. That flies in the face of the conventional wisdom that subscribers bundle services purely to get discounts. This positions AT&T to offer additional adjacent solutions that have great potential for lifting ARPU over time.

Research Papers Published

Research Notes Published

Citations

Avaya / Growth / Melody Brue / UC Today
What Might Be Avaya’s Best Path Forward? – UC Today

AWS / AI Models / Patrick Moorhead mentioned / Yahoo Finance
Amazon unveils new AI models, Trainium chips. AWS CEO discusses.

AWS / AWS & SageMaker / Jason Andersen / InfoWorld
Better together? Why AWS is unifying data analytics and AI services in SageMaker

AWS / AWS re:Invent / Matt Kimball / Fierce Network
Here’s what analysts make of AWS’ big re:Invent news

AWS / AWS re:Invent / Matt Kimball / NetworkWorld
AWS tries to lure users to its cloud via storage ease of use

Axiado / Series C Funding / Patrick Moorhead / Axiado
Axiado Raises $60M in Series C Funding to Boost AI Platform Security and Energy Efficiency

Axiado / Series C Funding / Patrick Moorhead / PR Newswire
Axiado Raises $60M in Series C Funding to Boost AI Platform Security and Energy Efficiency

Cadence / Racial Equity Fund / Melody Brue / Cadence Blog
Bridging Gaps with the Cadence Racial Equity Fund

Dell / Stock / Patrick Moorhead referenced/ Investor’s Business Daily
Amid Booming AI Server Sales, Dell Stock Rallies Furiously Ahead Of Results

Dell / Earnings / Patrick Moorhead / CNBC
Dell is a leader in preparedness for Trump’s potential policies and have potential upside: Analyst

HCI Modernization / Matt Kimball / StateTech
Law Enforcement Agencies Follow Different Paths to Optimize Their Data Centers

Intel / Intel CEO Departure / Patrick Moorhead / Barron’s
Intel’s Next CEO? Why Apple, TSMC, Marvell Executives Could Be in the Mix.

Intel / Intel CEO Departure / Patrick Moorhead / ComputerWorld
Intel CEO Pat Gelsinger retires

Intel / Intel CEO Departure / Patrick Moorhead / Fierce Electronics
Intel CEO Gelsinger is out, focus put on new Products group

Intel / Intel CEO Departure / Patrick Moorhead / Hot Hardware
Intel’s David Zinsner Discusses Core Chip Strategy And Hunt For Next CEO

Intel / Intel CEO Departure / Patrick Moorhead / Investor’s Business Daily
Intel Board Blasted For Handling Of CEO’s Sudden Exit As Stock Falls Again

Intel / Intel CEO Departure / Patrick Moorhead / The Register
Cost of Gelsinger’s ambition proves too much for Intel

Intel / Intel CEO Departure / Patrick Moorhead / VentureBeat
Intel CEO Pat Gelsinger resigns without a permanent successor

Qualcomm / Revenue & TAM / Patrick Moorhead / Fierce Electronics
Qualcomm expects 50% of revenue to be IoT and Automotive by 2030

TV APPEARANCES
Intel / Intel CEO Departure / Patrick Moorhead / CNBC
Patrick Moorhead on the search for Intel’s next CEO

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • XREAL One AR Glasses (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • IBM Strategic Analyst Event, December 9, Boston (Robert Kramer, Jason Andersen)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Lattice Developer Conference, December 9-10, San Jose (Patrick Moorhead) 
  • Marvell Industry Analyst Day, December 10, Santa Clara (Patrick Moorhead, Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • Zendesk Analyst Day, March 3-5, Las Vegas (Melody Brue)
  • Nutanix .NEXT, May 6-9, Washington DC (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending December 6, 2024 appeared first on Moor Insights & Strategy.

]]>
RESEARCH NOTE: Modernizing Your Datacenter? Take a Look at Your Storage https://moorinsightsstrategy.com/research-notes/modernizing-your-datacenter-start-with-storage/ Thu, 05 Dec 2024 01:42:05 +0000 https://moorinsightsstrategy.com/?post_type=research_notes&p=44236 When discussing modernizing the datacenter, storage is one of the foundational elements that, while critical to success, is often overlooked. Legacy storage infrastructure can and will impact the performance of data-driven environments. In fact, I’ll go so far as to say that storage must be the first consideration of any modernization effort. This research looks […]

The post RESEARCH NOTE: Modernizing Your Datacenter? Take a Look at Your Storage appeared first on Moor Insights & Strategy.

]]>

When discussing modernizing the datacenter, storage is one of the foundational elements that, while critical to success, is often overlooked. Legacy storage infrastructure can and will impact the performance of data-driven environments. In fact, I’ll go so far as to say that storage must be the first consideration of any modernization effort.

This research looks at the role of block storage in the cloud environment and how companies like Lightbits Labs deliver performance, scale, and cost savings realized by some of the largest organizations.

To Modernize Or Not: Storage Is Fundamental

The acquisition of VMware by Broadcom nearly a year ago kicked off a discussion around modernization. This highly disruptive act has caused many organizations to reconsider the future state of their datacenter, with or without VMware.

There are many different estimates for how many enterprise IT organizations are having these internal conversations. Based on the estimates I’ve seen, it’s safe to say the vast majority are at least considering significant datacenter modernization projects. In fact, I can say that every IT leader I’ve spoken with is considering what their move-forward plan in this area looks like. By this point, it’s less about VMware specifically and more about the broader need for modernization and cloud-native environments. It’s undoubtedly a healthy and necessary debate for internal IT organizations, as a sense of complacency and incrementalism seems to have crept in over the last 10 years or so.

There are two key questions facing the enterprise: What does our modernization plan look like? And what technologies should we deploy to meet the needs of today and tomorrow across the organization?

For those organizations that have decided to embark on the modernization journey, the first decision is whether to build a cloud or deploy a cloud. In other words, is it best to build a cloud from the ground up using OpenStack, or deploy a cloud environment on Nutanix, Red Hat OpenShift, or some other solution stack? In either case, virtualized and containerized environments are only as performant and responsive as the supporting infrastructure. In turn, infrastructure is only as performant as its storage environment.

Unfortunately, storage is often treated as a secondary consideration, and many organizations fail to realize the full potential of their modernization efforts because of slower spinning disks and the lack of a storage OS designed for performance and scale.

Lightbits Labs and Performant Block

While it is fairly obvious that storage performance can (and will) impact application performance, it’s important to consider whether the application in question is an e-commerce site performing tens of thousands of transactions per minute or an AI cloud delivering services in real time to its customer base. Scale matters, and performant scale is even more important.

Lightbits Labs is a software-defined storage (SDS) provider that powers some of the largest and most demanding environments—everything from e-commerce to cloud service providers. It achieves this through NVMe/TCP, a technology the company invented and has received several patents for. In this environment, the NVMe protocol is routed over Ethernet using the TCP/IP protocol suite. This allows high-performance clusters without the need for specialized networking and hardware.

Alternative approaches have their limitations. Direct-attached storage (DAS) and storage area networks (SAN) are popular models; however, each comes with a set of challenges. In the case of DAS, it’s an inflexibility that can lead to inefficiencies as applications become wedded to servers and storage. In the case of SANs, it’s a matter of cost, as proprietary hardware and specialized networking come at a premium.

These challenges are avoided with SDS in general and Lightbits in particular. Compared with DAS, Lightbits delivers higher resource utilization for a lower TCO and makes better use of flash, extending the endurance of QLC media. Compared with SANs, Lightbits NVMe/TCP delivers high performance without the proprietary hardware stack.

Performance is a big deal for Lightbits Labs. In fact, its claim of scaling up to 75 million IOPS (input/output operations per second) at sub-1ms latency puts it in a performance leadership position. This SDS solution outperforms Ceph, the more broadly deployed open-source block solution for data-intensive cloud environments. Like Ceph, Lightbits integrates seamlessly into OpenStack, where it can be consumed by Cinder, Nova, and Glance via the Cinder API.
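For context on what that 75 million IOPS figure implies, here is a quick calculation; the 4KB block size and node counts are assumptions for illustration, while the IOPS claim is Lightbits’ own.

```python
# Rough arithmetic around the 75M IOPS claim; block size is assumed.
iops = 75_000_000
block_size_bytes = 4 * 1024                      # assume 4KB random I/O

throughput_gb_s = iops * block_size_bytes / 1e9  # decimal GB/s
print(f"{iops / 1e6:.0f}M IOPS at 4KB ≈ {throughput_gb_s:,.0f} GB/s of block traffic")

for nodes in (8, 16, 32):                        # hypothetical cluster sizes
    print(f"  spread over {nodes} nodes: ~{iops / nodes / 1e6:.1f}M IOPS per node")
```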

Even for legacy virtualization, there is a Lightbits play. The company’s certified solution supports VMware and KVM environments as the back-end SDS. In fact, Lightbits can even run alongside vSAN and be used as a vMotion target in vSphere. Whether this support will continue as the changes in Broadcom’s portfolio impact existing implementations is not yet known.

Meeting the Needs of Data-Driven Workloads

The enterprise IT infrastructure market is constantly changing. However, the confluence of several factors is causing organizations to consider how best to support the business’s needs today and in the future. The cloud operating model is still how organizations can achieve the agility required to meet the needs of the data-driven workloads populating the datacenter. How that cloud operating model is constructed—what the underlying compute, networking, and storage environments are composed of—matters.

Storage, in particular, is the building block upon which everything depends—performance, resilience, and both of these at scale. Fast, resilient storage can help deliver results faster, be it for AI inferencing or tens of thousands of financial transactions per second.

Companies like Lightbits Labs are, in many ways, the innovation engines that drive change in the industry. While they may not have the brand awareness of some of the bigger players in the market, they nonetheless power some of the largest and most performant clouds and enterprise datacenters in the market. In other words, the most performance- and scale-sensitive organizations deploy Lightbits because of its performance, scale, and cost. Which means it’s probably worth taking a look at for any organization with these high-end needs.

The post RESEARCH NOTE: Modernizing Your Datacenter? Take a Look at Your Storage appeared first on Moor Insights & Strategy.

]]>
Datacenter Podcast: Episode 33 – Talking Microsoft, Supercomputing 2024, Atom Computing, Microsoft Ignite 2024 https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-33-talking-microsoft-supercomputing-2024-atom-computing-microsoft/ Mon, 25 Nov 2024 15:03:31 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=45070 On Episode 33 of the Datacenter Podcast, hosts Matt and Paul talk Microsoft, Supercomputing 2024, Atom Computing, and more!

The post Datacenter Podcast: Episode 33 – Talking Microsoft, Supercomputing 2024, Atom Computing, Microsoft Ignite 2024 appeared first on Moor Insights & Strategy.

]]>
On this week’s episode of the MI&S Datacenter Podcast, hosts Matt and Paul analyze the week’s top datacenter and datacenter edge news. This week they are talking Microsoft, Supercomputing 2024, Atom Computing, and more!

Watch the video here:

Listen to the audio here:

2:41 Little AI Guys
7:10 Supercomputing Goes Mainstream
13:20 2 Dozen Logical Qubits
22:43 Chips & Chips & Chips At Microsoft Ignite
32:18 Getting To Know Us – The Thanksgiving Edition

Little AI Guys
https://www.microsoft.com/en-us/research/articles/magentic-one-a-generalist-multi-agent-system-for-solving-complex-tasks/

Supercomputing Goes Mainstream
https://www.linkedin.com/feed/update/urn:li:activity:7264651754347085825/
https://www.linkedin.com/feed/update/urn:li:activity:7265018456003997696/

2 Dozen Logical Qubits
https://atom-computing.com/high-fidelity-gates-and-the-worlds-largest-entangled-logical-qubit-state/
https://www.forbes.com/sites/moorinsights/2024/01/25/microsoft-uses-ai-and-hpc-to-analyze-32-million-new-materials/

Chips & Chips & Chips At Microsoft Ignite
https://www.datacenterknowledge.com/cloud/microsoft-ignite-2024-new-azure-data-center-chips-unveiled

Disclaimer: This show is for information and entertainment purposes only. While we do discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 33 – Talking Microsoft, Supercomputing 2024, Atom Computing, Microsoft Ignite 2024 appeared first on Moor Insights & Strategy.

]]>
Digging Into The Ultra Accelerator Link Consortium https://moorinsightsstrategy.com/digging-into-the-ultra-accelerator-link-consortium/ Sat, 23 Nov 2024 21:56:42 +0000 https://moorinsightsstrategy.com/?p=44495 The newly formed UALink Consortium brings together major tech companies to address the vital technical challenge of GPU-to-GPU connectivity in datacenters

The post Digging Into The Ultra Accelerator Link Consortium appeared first on Moor Insights & Strategy.

]]>
Digging Into The Ultra Accelerator Link Consortium
(Photo from Adobe Stock)

The Ultra Accelerator Link Consortium has recently incorporated, giving companies the opportunity to join, and it has announced that the UALink 1.0 specification will be available for public consumption in Q1 2025. The Consortium’s “Promoter” members include AMD, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta and Microsoft.

The UALink Consortium aims to deliver specifications and standards that allow industry players to develop high-speed interconnects for AI accelerators at scale. In other words, it addresses the GPU clusters that train the largest of large language models and solve the most complex challenges. Much like Nvidia developed its proprietary NVLink to address GPU-to-GPU connectivity, UALink looks to broaden this capability across the industry.

The key to the UALink Consortium is the partnership among the biggest technology companies—many of whom compete with one another—to better enable the future of AI and other accelerator-dependent workloads. Let’s explore this initiative and what it could mean for the market.

How We Got Here — The CPU Challenge

High-performance computing was perhaps the first workload classification that highlighted that CPUs were not always the best processor for the job. The massive parallelism and high data throughput of GPUs enable tasks like deep learning, genomic sequencing and big data analytics to perform far better than they would on a CPU. These architectural differences and programmability have made GPUs the accelerator of choice for AI. In particular, the training of LLMs that double in size every six months or so happens far more efficiently and much faster on GPUs.

However, in a server architecture, the CPU (emphasis on the “C”—central) is the brain of the server, with all functions routing through it. If a GPU is to be used for a function, it connects to a CPU over PCIe. Regardless of how fast that GPU can perform a function, system performance is limited by how quickly a CPU can route traffic to and from it. This limitation becomes glaringly noticeable as LLMs and datasets become ever larger, requiring a large number of GPUs to train them in concert in the case of generative AI. This is especially true for hyperscalers and other large organizations training AI frontier models. Consider a training cluster with thousands of GPUs spread across several racks, all dedicated to training GPT-4, Mistral or Gemini 1.5. The amount of latency introduced into the training period is considerable.
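A rough transfer-time calculation shows why this matters; the link rates below are approximate per-direction figures and the payload size is a placeholder, so treat this as an order-of-magnitude sketch rather than a benchmark.

```python
# Order-of-magnitude sketch: time to move one synchronization payload.
pcie_gen5_x16_gb_s = 64        # ~64 GB/s per direction (approximate)
direct_fabric_gb_s = 450       # assumed direct GPU-to-GPU fabric rate

payload_gb = 350               # hypothetical gradient/parameter payload

for name, bandwidth in (("PCIe Gen5 x16", pcie_gen5_x16_gb_s),
                        ("direct GPU fabric", direct_fabric_gb_s)):
    print(f"{name}: ~{payload_gb / bandwidth:.1f} s to move {payload_gb} GB")
```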

This is not just a training issue, however. As enterprise IT organizations begin to operationalize generative AI, performing inference at scale is also challenging. In the case of AI and other demanding workloads such as HPC, the CPU can significantly limit system and cluster performance. This can have many implications in terms of performance, cost and accuracy.

Introducing UALink

The UALink Consortium was formed to develop a set of standards that enables accelerators to communicate with one another (bypassing the CPU) in a fast, low-latency way—and at scale. The specification defines an I/O architecture that enables speeds of up to 200 Gbps (per lane), scaling up to 1,024 AI accelerators. This specification delivers considerably better performance than that of Ethernet and connects considerably more GPUs than Nvidia’s NVLink.

To better contextualize UALink and its value, think about connectivity in three ways: front-end network, scale-up network and scale-out network. Generally, the front-end network is focused on connecting the hosts to the broader datacenter network for connectivity to compute and storage clusters as well as the outside world. This network is connected through Ethernet NICs on the CPU. The back-end network is focused on GPU-to-GPU connectivity. This back-end network is composed of two components: the scale-up fabric and the scale-out fabric. Scale-up connects hundreds of GPUs at the lowest latency and highest bandwidth (which is where UALink comes in). Scale-out is for scaling AI clusters beyond 1,024 GPUs—to 10,000 or 100,000. This is enabled using scale-out NICs and Ethernet and is where Ultra Ethernet will play.

When thinking about a product like the Dell PowerEdge XE9680, which can support up to eight AMD Instinct or Nvidia HGX GPUs, a UALink-enabled cluster would support well over 100 of these servers in a pod where GPUs would have direct, low-latency access to one another.
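The sizing math behind that statement is straightforward; a short sketch, assuming eight accelerators per chassis:

```python
# Scale-up pod sizing under the UALink 1.0 accelerator limit.
max_accelerators_per_pod = 1024
gpus_per_server = 8                      # e.g., a PowerEdge XE9680-class system

servers_per_pod = max_accelerators_per_pod // gpus_per_server
print(f"Up to {servers_per_pod} eight-GPU servers in one scale-up pod")  # 128
```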

As an organization’s needs grow, Ultra Ethernet Consortium-based connectivity can be used for scale-out. In 2023, industry leaders including Broadcom, AMD, Intel and Arista formed the UEC to drive performance, scale and interoperability for bandwidth-hungry AI and HPC workloads. In fact, AMD just launched the first UEC-compliant NIC, the Pensando Pollara 400, a few weeks ago. (Our Moor Insights & Strategy colleague Will Townsend has written about it in detail.)

Getting back to UALink, it is important to understand that this is not simply some pseudo-standard being used to challenge the dominance of Nvidia and NVLink. This is a real working group developing a genuine standard with actual solutions being designed.

In parallel, we see some of the groundwork being laid by UALink Promoter companies like Astera Labs, which recently introduced its Scorpio P-Series and X-Series fabric switches. While the P-Series switch enables GPU-to-CPU connectivity over PCIe Gen 6 (which can be customized), the X-Series is a switch aimed at GPU-to-GPU connectivity. Given that the company has already built the underlying fabric, one can see how it could support UALink sometime soon after the specification is published.

It is important to understand that UALink is agnostic about accelerators and the fabrics, switches, retimers and other technology that enable accelerator-to-accelerator connectivity. It doesn’t favor AMD over Nvidia, nor does it favor Astera Labs over, say, Broadcom (if that company chooses to contribute). It’s about building an open set of standards that favors innovation across the ecosystem.

While the average enterprise IT administrator, or even CIO, won’t care much about UALink, they will care about what it will deliver to their organization: faster training and inference on platforms that consume less power and can be somewhat self-managed and tuned. Putting a finer point on it—faster results at lower cost.

What About Nvidia And NVLink?

It’s easy to regard what UALink is doing as an attempt to respond to Nvidia’s stronghold. And at some level, it certainly is. However, in the bigger picture this is less about copying what Nvidia does and more about ensuring that critical capabilities like GPU-to-GPU connectivity don’t fall under the purview of one company with a vested interest in optimizing for its own GPUs.

It will be interesting to watch how server vendors such as Dell, HPE, Lenovo and others choose to support both UALink and NVLink. (Lenovo is a “Contributor” member of the UALink Consortium, but Dell has not joined as yet.) NVLink uses a proprietary signaling interconnect to support Nvidia GPUs. Alternatively, UALink will support accelerators from a range of vendors, with switching and fabric from any vendor that adheres to the UALink standard.

There is a real and significant cost to these server vendors—from design to manufacturing and through the qualification and sales/support process. On the surface, it’s easy to see where UALink would appeal to, say, Dell or HPE. However, there is a market demand for Nvidia that cannot and will not be ignored. Regardless of one’s perspective on the ability of “the market” to erode Nvidia’s dominance, we can all agree that its dominance will not fade fast.

Cooperating For Better Datacenter Computing

The UALink Consortium (and forthcoming specification) is a significant milestone for the industry as the challenges surrounding training AI models and operationalizing data become increasingly complex, time-consuming and costly.

If and when we see companies like Astera Labs and others develop the underlying fabric and switching silicon to drive accelerator-to-accelerator connectivity, and when companies like Dell and HPE build platforms that light all of this up, the downmarket impact will be significant. This means the benefits realized by hyperscalers like AWS and Meta will also benefit enterprise IT organizations that look to operationalize AI across business functions.

Ideally, we would have a market with one standard interconnect specification for all accelerators—all GPUs. And maybe at some point that day will come. But for now, it’s good to see rivals like AMD and Intel or Google and AWS coalesce around a standard that is beneficial to all.

The post Digging Into The Ultra Accelerator Link Consortium appeared first on Moor Insights & Strategy.

]]>
Digging Into The Ultra Accelerator Link Consortium https://moorinsightsstrategy.com/digging-into-the-ultra-accelerator-link-consortium-2/ Fri, 22 Nov 2024 20:25:19 +0000 https://moorinsightsstrategy.com/?p=44715 The newly formed UALink Consortium brings together major tech companies to address the vital technical challenge of GPU-to-GPU connectivity in datacenters.

The post Digging Into The Ultra Accelerator Link Consortium appeared first on Moor Insights & Strategy.

]]>
The newly formed UALink Consortium brings together major tech companies to address the vital technical challenge of GPU-to-GPU connectivity in datacenters. Adin – stock.adobe.com

The Ultra Accelerator Link Consortium has recently incorporated, giving companies the opportunity to join, and it has announced that the UALink 1.0 specification will be available for public consumption in Q1 2025. Included in the Consortium are its “Promoter” members, including AMD, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta and Microsoft.

The UALink Consortium aims to deliver specifications and standards that allow industry players to develop high-speed interconnects for AI accelerators at scale. In other words, it addresses the GPU clusters that train the largest of large language models and solve the most complex challenges. Much like Nvidia developed its proprietary NVLink to address GPU-to-GPU connectivity, UALink looks to broaden this capability across the industry.

The key to the UALink Consortium is the partnership among the biggest technology companies—many of whom compete with one another—to better enable the future of AI and other accelerator-dependent workloads. Let’s explore this initiative and what it could mean for the market.

How We Got Here — The CPU Challenge

High-performance computing was perhaps the first workload classification that highlighted that CPUs were not always the best processor for the job. The massive parallelism and high data throughput of GPUs enable tasks like deep learning, genomic sequencing and big data analytics to perform far better than they would on a CPU. These architectural differences and programmability have made GPUs the accelerator of choice for AI. In particular, the training of LLMs that double in size every six months or so happens far more efficiently and much faster on GPUs.

However, in a server architecture, the CPU (emphasis on the “C”—central) is the brain of the server, with all functions routing through it. If a GPU is to be used for a function, it connects to a CPU over PCIe. Regardless of how fast that GPU can perform a function, system performance is limited by how quickly a CPU can route traffic to and from it. This limitation becomes glaringly noticeable as LLMs and datasets become ever larger, requiring a large number of GPUs to train them in concert in the case of generative AI. This is especially true for hyperscalers and other large organizations training AI frontier models. Consider a training cluster with thousands of GPUs spread across several racks, all dedicated to training GPT-4, Mistral or Gemini 1.5. The amount of latency introduced into the training period is considerable.

This is not just a training issue, however. As enterprise IT organizations begin to operationalize generative AI, performing inference at scale is also challenging. In the case of AI and other demanding workloads such as HPC, the CPU can significantly limit system and cluster performance. This can have many implications in terms of performance, cost and accuracy.

Introducing UALink

The UALink Consortium was formed to develop a set of standards that enables accelerators to communicate with one another (bypassing the CPU) in a fast, low-latency way—and at scale. The specification defines an I/O architecture that enables speeds of up to 200 Gbps (per lane), scaling up to 1,024 AI accelerators. This specification delivers considerably better performance than that of Ethernet and connects considerably more GPUs than Nvidia’s NVLink.
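
To put rough numbers on those two figures, here is a minimal back-of-the-envelope sketch in Python. The per-lane speed and the 1,024-accelerator ceiling come from the paragraph above; the number of lanes ganged per accelerator port is an assumption added purely for illustration and is not part of the published figures.

```python
# Rough bandwidth math for a UALink scale-up pod, using only the per-lane
# speed and accelerator ceiling cited above. Lanes per accelerator port is
# a hypothetical value for illustration.

LANE_SPEED_GBPS = 200            # per-lane signaling rate for UALink 1.0
MAX_ACCELERATORS = 1024          # maximum accelerators in one scale-up pod
ASSUMED_LANES_PER_PORT = 4       # assumption, purely for illustration

per_accelerator_gbps = LANE_SPEED_GBPS * ASSUMED_LANES_PER_PORT
pod_aggregate_tbps = per_accelerator_gbps * MAX_ACCELERATORS / 1000

print(f"Per accelerator: {per_accelerator_gbps} Gbps "
      f"(~{per_accelerator_gbps / 8:.0f} GB/s)")
print(f"Aggregate injection bandwidth across a full pod: "
      f"~{pod_aggregate_tbps:.0f} Tbps")
```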

To better contextualize UALink and its value, think about connectivity in three ways: the front-end network, the scale-up network and the scale-out network. Generally, the front-end network connects the hosts to the broader datacenter network, providing access to compute and storage clusters as well as the outside world; it is connected through Ethernet NICs on the CPU. The back-end network handles GPU-to-GPU connectivity and is composed of the other two pieces: the scale-up fabric and the scale-out fabric. Scale-up connects hundreds of GPUs at the lowest latency and highest bandwidth (which is where UALink comes in). Scale-out is for scaling AI clusters beyond 1,024 GPUs—to 10,000 or 100,000. This is enabled using scale-out NICs and Ethernet and is where Ultra Ethernet will play.
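
One way to keep the three tiers straight is the tiny illustrative sketch below. It assumes the simple rule described above: traffic that is not GPU-to-GPU rides the front-end network, GPU pairs within a pod use the scale-up fabric, and GPU pairs in different pods cross the scale-out fabric. The function and its labels are illustrative only, not drawn from the UALink or Ultra Ethernet specifications.

```python
# Toy classifier for the three connectivity tiers described above.
# The pod boundary mirrors UALink's 1,024-accelerator scale-up ceiling.

def pick_fabric(gpu_to_gpu: bool, src_pod: int, dst_pod: int) -> str:
    """Return which network tier a given flow would traverse."""
    if not gpu_to_gpu:
        # Host traffic to storage, other clusters, or the outside world
        return "front-end network (Ethernet NICs on the CPU)"
    if src_pod == dst_pod:
        # Lowest latency, highest bandwidth, up to 1,024 accelerators
        return "scale-up fabric (UALink)"
    # Pod-to-pod traffic for clusters of 10,000+ GPUs
    return "scale-out fabric (Ultra Ethernet)"

print(pick_fabric(gpu_to_gpu=False, src_pod=0, dst_pod=0))
print(pick_fabric(gpu_to_gpu=True, src_pod=0, dst_pod=0))
print(pick_fabric(gpu_to_gpu=True, src_pod=0, dst_pod=3))
```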

Consider a product like the Dell PowerEdge XE9680, which can support up to eight AMD Instinct or Nvidia HGX GPUs. A UALink-enabled cluster could connect well over 100 of these servers in a pod in which the GPUs have direct, low-latency access to one another.
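
A quick sanity check on that pod claim, using nothing beyond the figures already in the paragraph above (eight GPUs per XE9680 and UALink's 1,024-accelerator ceiling):

```python
# Simple pod-sizing arithmetic from the figures above; not a validated
# reference design, just division.

GPUS_PER_SERVER = 8              # eight-GPU Dell PowerEdge XE9680
MAX_ACCELERATORS_PER_POD = 1024  # UALink scale-up ceiling

max_servers_per_pod = MAX_ACCELERATORS_PER_POD // GPUS_PER_SERVER
print(f"Up to {max_servers_per_pod} eight-GPU servers in one scale-up pod")
# -> 128 servers, consistent with "well over 100" above
```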

As an organization’s needs grow, Ultra Ethernet Consortium-based connectivity can be used for scale-out. In 2023, industry leaders including Broadcom, AMD, Intel and Arista formed the UEC to drive performance, scale and interoperability for bandwidth-hungry AI and HPC workloads. In fact, AMD just launched the first UEC-compliant NIC, the Pensando Pollara 400, a few weeks ago. (Our Moor Insights & Strategy colleague Will Townsend has written about it in detail.)

Getting back to UALink, it is important to understand that this is not simply some pseudo-standard being used to challenge the dominance of Nvidia and NVLink. This is a real working group developing a genuine standard with actual solutions being designed.

In parallel, we see some of the groundwork being laid by UALink Promoter companies like Astera Labs, which recently introduced its Scorpio P-Series and X-Series fabric switches. While the P-Series switch enables GPU-to-CPU connectivity over PCIe Gen 6 (which can be customized), the X-Series is a switch aimed at GPU-to-GPU connectivity. Given that the company has already built the underlying fabric, one can see how it could support UALink sometime soon after the specification is published.

It is important to understand that UALink is agnostic about accelerators and the fabrics, switches, retimers and other technology that enable accelerator-to-accelerator connectivity. It doesn’t favor AMD over Nvidia, nor does it favor Astera Labs over, say, Broadcom (if that company chooses to contribute). It’s about building an open set of standards that favors innovation across the ecosystem.

While the average enterprise IT administrator, or even CIO, won’t care much about UALink, they will care about what it will deliver to their organization: faster training and inference on platforms that consume less power and can be somewhat self-managed and tuned. Putting a finer point on it—faster results at lower cost.

What About Nvidia And NVLink?

It’s easy to regard what UALink is doing as an attempt to respond to Nvidia’s dominance. And at some level, it certainly is. However, in the bigger picture this is less about copying what Nvidia does and more about ensuring that critical capabilities like GPU-to-GPU connectivity don’t fall under the purview of one company with a vested interest in optimizing for its own GPUs.

It will be interesting to watch how server vendors such as Dell, HPE, Lenovo and others choose to support both UALink and NVLink. (Lenovo is a “Contributor” member of the UALink Consortium, but Dell has not joined as yet.) NVLink uses a proprietary signaling interconnect to support Nvidia GPUs. Alternatively, UALink will support accelerators from a range of vendors, with switching and fabric from any vendor that adheres to the UALink standard.

There is a real and significant cost to these server vendors—from design to manufacturing and through the qualification and sales/support process. On the surface, it’s easy to see where UALink would appeal to, say, Dell or HPE. However, there is a market demand for Nvidia that cannot and will not be ignored. Regardless of one’s perspective on the ability of “the market” to erode Nvidia’s dominance, we can all agree that its dominance will not fade fast.

Cooperating For Better Datacenter Computing

The UALink Consortium (and forthcoming specification) is a significant milestone for the industry as the challenges surrounding training AI models and operationalizing data become increasingly complex, time-consuming and costly.

If and when we see companies like Astera Labs and others develop the underlying fabric and switching silicon to drive accelerator-to-accelerator connectivity, and when companies like Dell and HPE build platforms that light all of this up, the downmarket impact will be significant. This means the benefits realized by hyperscalers like AWS and Meta will also benefit enterprise IT organizations that look to operationalize AI across business functions.

Ideally, we would have a market with one standard interconnect specification for all accelerators—all GPUs. And maybe at some point that day will come. But for now, it’s good to see rivals like AMD and Intel or Google and AWS coalesce around a standard that is beneficial to all.

The post Digging Into The Ultra Accelerator Link Consortium appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending November 22, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-22-2024/ Fri, 22 Nov 2024 20:02:20 +0000 https://moorinsightsstrategy.com/?p=44170 MI&S Weekly Analyst Insights — Week Ending November 22, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending November 22, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.


Last week was a great mix of time in the field and time back home in Austin. Melody Brue and I traveled to New York City to moderate a panel discussion at the launch of Solidigm’s impressive new 122TB solid-state drive, which answers the need for higher-density storage for AI datacenters that’s also more power-efficient. (You can read Mel’s writeup about the event—and the growing importance of energy efficiency in datacenters—here.)

 

Solidigm header
MI&S CEO Patrick Moorhead and MI&S analyst Melody Brue (far right) moderate a panel of ecosystem partners at the launch event for Solidigm’s new 122TB SSD for datacenters. Photo: Solidigm

As I mentioned last time, the firm took a week off from publishing this Analyst Insights roundup so we could devote a whole day to an all-company strategy session held in Austin. It was a great opportunity for us to review our projects and performance in 2024 and to hone our focus for 2025. We capped off the day in a private room at a downtown restaurant where we welcomed spouses, family members, and friends of the firm for a celebratory dinner.

This week, I was at Microsoft Ignite, where, in addition to my regular analyst duties—including an exclusive briefing with Microsoft chairman and CEO Satya Nadella—I filmed several Six Five videos with key executives to break down the Ignite news. Anshel was in New York for Qualcomm’s Investor Day, Matt was at SC24 in Atlanta, and Will was in Tokyo at the NTT R&D Forum. 

Read more about significant tech trends and events in this week’s Analyst Insights, including insights on happenings from the week when we didn’t publish, such as BoxWorks (Mel) and the Veeam Analyst Summit in Scottsdale (Robert).

For those of you in the U.S., the team at MI&S wishes you a happy Thanksgiving! And a wonderful week to all!

Patrick Moorhead

———

Our MI&S team published 19 deliverables:

Over the last two weeks, MI&S analysts have been quoted in multiple syndicated top-tier international publications, including VentureBeat and Wired, with our thoughts on Amazon, Dell, Intel, Nvidia, Microsoft, low-code/no-code, semiconductors, chips, AI, and more. Patrick Moorhead appeared on Yahoo! Finance and CNBC to discuss Nvidia’s Q3 2025 earnings.

MI&S Quick Insights

This week Microsoft hosted its Ignite event in Chicago and, as expected, it was very heavy on AI agents in terms of announcements. This makes a lot of sense, and I intend to do a deeper research piece on all of the announcements very soon. What stands out so far is the breadth of the announcements. Microsoft is a massive technology company providing value to many different stakeholders. So, it was not a surprise to see agentic aspects throughout its developer tools such as Copilot Studio, as well as embedded into standard products such as Office 365. What I wonder is how quickly we will see the agent category start to fracture into sub-categories. This will likely be a healthy thing, since my conversations with sales and marketing leaders tell me that education around agents and AI in general is seriously lacking. One click down into how this stuff works—and the implied business value of different types of agents—might do wonders for everyone.

This week I got the chance to meet the founding team at Zavvis Technologies to discuss their approach to agentic applications. The discussion provided a cool perspective on startups in the age of AI. The access to LLMs via hyperscalers and horizontal tooling for data science and application development enables new approaches to innovating. What Zavvis is doing is digging into how agents can improve the CFO function in a company. To be specific, it’s not really about automating financial operations and processes, but more a set of agents to help the CFO understand options and see opportunities in a mass of data. To me, it’s a very interesting approach for a couple of reasons. First is simply the fact that a company can now use technology to take AI deep into a very specific functional area. Second, the agents should help blend together structured and unstructured data, which may in fact enhance the CFO’s role. That said, this approach is a bit tricky, given that the CFO role in particular is known for risk aversion—which suggests a different sort of go-to-market strategy for Zavvis. I am excited to hear what the company learns as it gets moving.

In October, IBM announced the availability of its Granite 3.0 models; since then it has been engaging many different ecosystems talking about the open source value proposition for LLMs. While IBM is not the only player here, it does have an evolved take on how transparent vendors can be about LLMs. But one item that got lost in the shuffle was IBM introducing one Granite variant specifically to implement guardrails on speech and bias. IBM has a notion of this LLM sitting in front of other LLMs. For an analogy, think of when someone in a TV studio “bleeps” out a word they are not allowed to broadcast. I was talking to another client about this today and I started to think about the utility of purpose-built LLMs being used as a front-end. It’s an interesting notion, and whether it’s going to win out against other rules-based guardrails and security measures is unknown. But it also suggests that we are about to enter into some really serious architectural discussions for AI in 2025.

Microsoft just released Magentic-One, a multi-agent AI system that can handle open-ended tasks common to daily life. The system’s multiple agents each have specialized functions. These agents are controlled by an orchestrator agent that acts as a monitor and supervisor. The open source system is new from the standpoint that it is active instead of passive and can provide recommendations and execute tasks. According to Microsoft, Magentic-One excels at software development, data analysis, and navigating the internet.

My biggest takeaway from the Microsoft Ignite conference is how much the company has invested in its infrastructure. It has added an HSM for crypto management and a DPU for networking and storage acceleration to complement its Cobalt CPU, Maia AI accelerator, and existing security platform. Effectively, Microsoft has joined AWS and Google Cloud in developing custom silicon to deliver a full compute experience.

In addition to this, the company worked with AMD to develop a custom chip—the EPYC 9V64H—to support virtualized HPC workloads. This chip will be outfitted with HBM3 memory and double the Infinity Fabric bandwidth. While this is an incredibly powerful compute platform, what is perhaps more interesting is to see the dominant position AMD has taken in the cloud space. Custom chip work for the CSPs was once the domain of Intel and Intel only.

Finally, Microsoft has expanded its partnership with Oracle by activating an additional 12 regions for Oracle Database@Azure and making the environment managed and governed through Azure Resource Manager and Purview, respectively. Effectively, Oracle Database@Azure is now fully integrated as a native service.

Part 1: SC24 came and went, and boy was it a ride! There are so many storylines to trace from the big supercomputing conference, so let me just share a few.

My biggest takeaway is a little bit esoteric. When I attended SC15 in Austin in 2015, it felt like I was walking through a science fiction magazine because the technology was so disconnected from what was happening in the enterprise datacenter at that moment. In particular, it was very focused on the big national and academic labs. At that event, I saw and talked about topics that would eventually become common in the enterprise—but not for years.

By contrast, walking through the show floor this week in Atlanta, I could immediately tie all of the innovations I saw to uses by the average enterprise trying to operationalize AI. Put more succinctly, over the past decade the pace of innovation has drastically accelerated from the innovators to the deployers.

Here’s another big takeaway: holy liquid cooling, Batman! I have been watching the liquid cooling market for some time. Some of the earlier players such as Vertiv, LiquidStack, CoolIT, GRC, and Motivair are now part of a much larger market peppered with logos—both the familiar and the new. I saw a total of 50 companies listed on the exhibitor list, including 22 for liquid cooling. And this doesn’t count the likes of Delta, Schneider Electric, and some of the power and infrastructure companies that have either already joined the market or are looking to enter it. The (smart) acquisition of JetCool by Flex is a good example of this.

Overall, I’m quite impressed with the amount of liquid cooling I’ve seen from cooling vendors and OEMs alike. Lenovo has been pushing Neptune for a long time, and we saw HPE start aggressively telling its liquid cooling story this summer at its Discover conference. Now Dell is really starting to jump in the game (for instance, the XE9712 racks they are shipping to CoreWeave are liquid-cooled).

With this said, I think we are still very early in this cooling game, and what we are seeing in today’s market is kind of like the days of discovering fire and inventing the wheel. As warm-water cooling is starting to find a place in the market, look for two-phase direct-to-chip (D2C) cooling to play a bigger role, as it is far better able to address the heat density we see on chips. Longer term, I think immersion cooling will be niche in application and will eventually bridge to cooling technologies we aren’t even covering today.

The last big takeaway is about the silicon innovation going into this market across the entirety of the data journey. I have been in the tech industry for over 30 years and I’ve never seen so much innovation in the silicon space. Most of us see NVIDIA’s biggest threat coming from AMD, Intel, and Arm—or maybe even a Qualcomm or Marvell. However, don’t overlook the many, many innovation engines in the chip industry like NeuReality, Tenstorrent, Cerebras, Untether, or others.

Part 2: The line from supercomputing to enterprise computing has become short and straight.

We’ve seen HPC-like requirements creeping into the enterprise for some time. First it was the larger enterprise organizations with workloads like crash simulation and high frequency trading. Big data, EDA, and data analytics really pushed this requirement for accelerated compute and more bespoke storage and networking to populate the enterprise datacenter. But AI has totally disrupted the game and, yes, it has brought supercomputing into the enterprise in a major way. And to the edge—and wherever else there’s data. This is why we see such big market sizing and CAGRs associated with AI. It’s not just about the chips, servers, storage, and networking; it’s about the cost of deploying, tuning, and managing these environments. And because of its nascency, there is so little knowledge to share—certainly no institutional knowledge or “muscle memory.” Because of this, I see the consulting companies playing a big role in the AI journey.

Like I said when I was talking about the SC15 conference earlier, the line from supercomputing to mainstream enterprise used to be long and crooked. That line is now very short and very straight. No longer is technology being developed and then kind of iterated on for eventual broader consumption. The ability for technology to be broadly adopted (and used) in a commercial way is now a primary concern for any startup playing in this space.

Canva recently appointed Kelly Steckelberg, former Zoom CFO, to the same position within its organization. Steckelberg brings a wealth of experience, having successfully steered Zoom through its IPO and a period of rapid growth. Canva is currently valued at approximately $32 billion, with more than $2 billion in annual recurring revenue. It has seen significant success in expanding into the enterprise market, with 95% of Fortune 500 companies as users. Although Canva states there are no immediate plans for an IPO, Steckelberg’s appointment and the company’s strong financial performance suggest a public offering could be on the horizon. Steckelberg says she sees tremendous opportunity at Canva, and I believe the company is very fortunate to have her. At a recent Zoom event, she expressed confidence in her successor, Michelle Chang. Chang joins Zoom from Microsoft, where she served as corporate vice president and CFO of the Commercial Sales and Partner Organization. Chang will be front and center next week as Zoom presents its earnings. Analysts project Zoom to deliver year-over-year earnings growth driven by higher revenues when it releases its Q3 2025 financial results for the quarter ending October 2024. Chang will be a critical part of Zoom’s next growth phase as the company moves from a video conferencing company to an AI-driven collaboration and productivity platform.

Microsoft Fabric announced significant updates to its unified data platform at Ignite 2024. Fabric Databases, initially including SQL Database, now include transactional capabilities with built-in security, vector search, RAG support, and Azure AI integration, enabling the development of AI-optimized applications. OneLake, the platform’s multi-cloud data lake, now has enhanced multi-cloud and on-premises data integration with Azure SQL DB Mirroring. Several workload-specific updates were also announced, including sustainability data solutions, AI functions for text analysis in notebooks, and a GraphQL API for simplified data access. AI capabilities expanded with conversational AI tools, Azure Event Hubs KQL database support, and integration with Azure AI Foundry.

These updates strengthen Microsoft’s position against competitors, enhancing Fabric’s appeal as a unified platform for data management and AI development. By addressing enterprise requirements, Fabric reinforces Microsoft’s ability to compete with other major players in data management and AI such as Google Cloud and AWS.

IBM has modernized Db2 with a new AI-powered database assistant. As data demands grow, database systems must evolve to keep pace, and last week IBM released Db2 12.1, incorporating a slew of AI features. The new release addresses key challenges faced by database administrators and introduces the Database Assistant, developed using IBM watsonx. This assistant delivers instant answers, real-time monitoring, and intelligent troubleshooting. Explore these innovations further in my latest Forbes article, which features Miran Badzak, IBM’s program director for databases.

LogicMonitor is advancing in the hybrid observability industry with an $800 million investment to integrate AI into datacenter operations. Led by CEO Christina Kosmowski, the company focuses on helping businesses reduce costs and scale AI while improving efficiency and meeting sustainability goals. This funding looks to strengthen LogicMonitor’s role in supporting modern datacenter management and AI-driven operations.

Cloudera announced the acquisition of Octopai’s data lineage and catalog platform, expanding its data catalog and metadata management capabilities. With this move, Cloudera will be able to provide customers with visibility across data solutions, allowing them to use trusted data for AI, predictive analytics, and decision-making tools. Key benefits include improved data discoverability, quality, governance, and migration support.

Change is in the air as we approach the new year: Amazon Web Services has brought in Julia White as CMO. White was recently the Chief Marketing and Solutions Officer at SAP. Before that, she was at Microsoft for two decades, including as corporate vice president for Azure product marketing. Her expertise spans cloud services, AI, and product messaging, making her well-suited to AWS’s strategic needs. With this move, AWS looks to strengthen its position in the competitive cloud market against Microsoft Azure and Google Cloud. With AWS quarterly profits surpassing $10 billion for the first time, White’s leadership is expected to enhance AWS’s focus on cloud computing and AI innovation. This leadership change follows recent executive departures at AWS, including former CEO Adam Selipsky and VP of AI Matt Wood. Wood has since joined PwC as the firm’s first Commercial Technology and Innovation Officer.

Microsoft launched Flight Simulator 2024 with significantly enhanced features and new flying jobs that enable you to pilot virtually any kind of flying craft, from hot air balloons to helicopters and dirigibles. Unfortunately, Microsoft didn’t adequately anticipate the demand for the game, and servers crashed; the company apologized to gamers for not being prepared for the launch to be such a hit. I believe part of the success came from the game being included with Game Pass. Flight Simulator is the second big title this month to launch on Game Pass after Call of Duty launched earlier in November.

Qualcomm Investor Day — Nakul Duggal (Qualcomm’s group GM for automotive, industrial, and cloud) presented Qualcomm’s IIoT deployment model for AI at the edge. The company foresees a $50 billion market opportunity for edge intelligence by 2029 and has defined a path to achieve that goal. The path includes new edge computing chips (the Qualcomm IQ series) designed to support a comprehensive edge deployment architecture. The architecture aligns with industry-wide trends that make IIoT much more scalable.

Here’s how it works. Qualcomm customers develop AI-powered applications in the cloud for deployment on both cloud and local platforms. AI-accelerated on-premises “AI edge boxes” run cloud-native computing software environments on appropriately scaled compute platforms. The development model is “build in the cloud, deploy on the edge” using the same software infrastructure. However, IT-managed, cloud-native platforms do not extend all the way down to the chaotic world of OT (operational technology) devices. These small embedded platforms are often highly customized and optimized for specific tasks. OT devices require unique software stacks, device management services, and connectivity schemes. The result is a line of demarcation that separates small OT platforms from large IT systems running distributed cloud environments. Mr. Duggal explained the Qualcomm IQ Series processors and subsystems in terms of this model, enabling a new generation of on-premises compute platforms with the power and scale to address a wide range of vertical industries.

MY TAKE: AI at the edge is the new IIoT north star. I’ve advocated this three-tier architecture (cloud, distributed cloud, device) for years, and it’s great to see Qualcomm and other big suppliers follow the same pattern.

Microsoft Ignite — Not surprisingly, Ignite focused on AI in the cloud and at the edge. From an IIoT standpoint, Azure’s Adaptive Cloud approach was the star of the show, enabling AI to work across sites, clouds, distributed computing, and devices. The Azure IIoT model is consistent with my “fractal” view of IIoT intelligence, with cloud-native environments scaling from global clouds to local on-premises servers. Microsoft’s AI enablement products, data fabric, event grid, event hubs, storage, Power BI interfaces, and other services run on the whole range of platforms, while Azure IoT Operations (enabled by Azure Arc) implements device data interfaces and manages the OT data at the edge. The IIoT devices are, essentially, peripherals communicating via standards-based protocols.

MY TAKE: Azure IoT Operations is emerging as the interface between the chaotic world of OT devices and the structured world of Microsoft’s AI-enhanced IT systems. Other hyperscalers and platform suppliers are moving in this same direction, allowing enterprise applications (e.g., ERP) to immediately scale up OT-enabled AI-powered solutions with minimal dependencies on IIoT device systems. Even though Ignite didn’t have many IoT-specific sessions, the industrywide trend to separate chaotic device development from high-growth, AI-driven business transformation came across loud and clear.

Shure, a company known for high-quality audio equipment, has partnered with Microsoft by integrating its products with the Microsoft Device Ecosystem Platform (MDEP). This collaboration allows Shure to develop new audio solutions for Android devices for Microsoft Teams Rooms. What this means for Shure is enhanced security measures that meet Microsoft’s high standards, improved compatibility with Microsoft Teams Rooms, and the opportunity to tap into new markets. For example, government agencies that rely on Microsoft Teams for secure communication could now outfit their conference rooms with Shure microphones and audio processors that integrate with their existing systems. This partnership paves the way for Shure to deliver enhanced audio experiences to a broader range of users who depend on Microsoft Teams for collaboration.

Box held its annual BoxWorks event in person for the first time in several years and announced a suite of new AI-powered tools focused on helping businesses unlock the value of their content. These include AI Studio for building custom AI agents, Box Apps for automating workflows, and enhanced security features. Box also supports nonprofits using AI for social good through its Impact Fund. I provide more detail about BoxWorks and the company’s strategy for addressing organizations’ growing need to manage and extract value from increasing content volumes in this Forbes contribution.

Amazon announced the new Echo Show 21, its biggest and most capable smart home display with a built-in smart home hub. This new smart home hub includes support for both Matter and Thread wireless standards, making it an anchor for your smart home and putting Amazon at the center of that experience. While I’m not sure I have the need or space for this, there is also a 15-inch version which may be easier to fit into smaller kitchens. Amazon has designed the Echo Show 21 as a wall-mounted display; I may have to test it myself to understand the utility of having such a large display in the kitchen.

At Microsoft Ignite, the company announced a few updates to Copilot and Windows 365, including a thin client called the Link. The Link appears to be Microsoft’s way of satisfying the market need for low-cost products while driving Windows 365 virtual PC usage. While many of Microsoft’s partners like HP, Dell, and Lenovo already offer thin clients, this appears to be Microsoft’s own approach focused around Windows 365 only. Microsoft also announced the new Copilot Actions, which provides prompt templates to help automate repetitive tasks. While this isn’t quite a way to enable scripting, I do think this should help to improve Copilot usage.

Microsoft and Atom Computing successfully entangled 24 logical qubits using neutral atoms under control of Atom Computing lasers. This is the record for the highest number of entangled logical qubits. Logical qubits are constructed from multiple physical qubits, and they allow complex quantum algorithms to be run. The system demonstrated high gate fidelities: 99.963% for single-qubit gates and 99.56% for two-qubit gates, making this the highest neutral-atom two-qubit gate fidelity in a commercial system.

A characteristic of neutral atom quantum computers is the tendency of atoms to disappear during operations. The team developed a method to detect and replace lost atoms without disrupting computations. As a benchmark, the researchers ran the Bernstein-Vazirani algorithm, which identifies a hidden binary string. The 20 logical qubits (created from 80 physical qubits) found the secret code in a single attempt, outperforming classical counterparts that must run the search many times to find all the bits.
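
For readers unfamiliar with Bernstein-Vazirani, a minimal classical sketch makes the comparison concrete. The hidden 20-bit string below is arbitrary (chosen to match the 20 logical qubits mentioned above); the point is that a classical solver needs one oracle query per bit, whereas the quantum algorithm recovers the whole string in a single query.

```python
# Classical baseline for the Bernstein-Vazirani problem: the oracle returns
# the dot product (mod 2) of a query with a hidden bit string. Classically,
# recovering the string takes one query per bit.

import random

n = 20
secret = [random.randint(0, 1) for _ in range(n)]

def oracle(query):
    """Return s . x mod 2 for the hidden string s."""
    return sum(s * x for s, x in zip(secret, query)) % 2

# Query with each unit vector e_i; the answer is bit i of the secret.
recovered = []
queries = 0
for i in range(n):
    unit = [1 if j == i else 0 for j in range(n)]
    recovered.append(oracle(unit))
    queries += 1

assert recovered == secret
print(f"Recovered the {n}-bit secret with {queries} classical oracle queries "
      "(vs. a single query for the quantum algorithm)")
```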

This is good news for quantum computing. Great progress is being made with logical qubits; in fact, the Microsoft-Atom Computing team has doubled its entangled logical qubit count in just a few short weeks.

Qualcomm held its investor day—the first in three years—and updated analysts and investors on the progress it has made over the last three years in its product diversity strategy. The company also stated that it expects its smartphone business to be just 50% of revenue by 2030. This is a significant goal because the company currently generates 75% of its revenue from its handset business. In addition to that lofty goal, the company also revealed that it is planning an even more affordable PC processor soon, which will enable $600 Copilot+ PCs. Additionally, it said that the next generation of its X Elite processors will be powered by the third generation of its Oryon CPU cores, which have done exceptionally well in benchmarks against Apple.

The whole industry eagerly awaited NVIDIA’s earnings, and the company beat on revenue and profit and guided slightly above expectations . . . yet still got punished in after-hours trading. The reality is that people’s expectations of NVIDIA are simply unrealistic and heavily tainted by retail hype, even though the company is now generating $35 billion in revenue per quarter and almost $20 billion in profit per quarter. NVIDIA is basically printing money compared to almost everyone else in the industry and dwarfs many of its nearest competitors. I believe that we are still very much in the early phases of AI, and while some of the AI model builders may be hitting a bit of a wall with training, NVIDIA says that demand for training chips remains high — and it is convinced that it is also well-positioned with chips for inference.

IBM has partnered with the Ultimate Fighting Championship to become UFC’s Official Global AI Partner. The UFC Insights Engine, built with IBM watsonx, utilizes data and AI to analyze live bouts, fighter tendencies, projected match outcomes, and methods of victory. This provides fans with detailed information to deepen their interest in the sport. It is another great example of sports technology enriching a sport and driving fan engagement.

Solidigm has launched a new 122TB SSD designed to reduce energy consumption in data centers, which are facing increasing demand and costs due to the rise of AI. This new drive offers significantly higher storage density and efficiency, which should lead to lower energy bills and a smaller physical footprint. This is crucial for sustainability and allows companies to invest more in AI development. Read more about Solidigm’s 122TB drive in my recent Forbes article.

Research Notes Published

Citations

Amazon / AI Chip / Patrick Moorhead / ARS Technica
Amazon ready to use its own AI chips, reduce its dependence on Nvidia

Amazon / AI Chip / Patrick Moorhead / Financial Times
Amazon steps up effort to build AI chips that can rival Nvidia

Amazon / AI Chip / Patrick Moorhead / Nasdaq
Amazon Steps Up Effort to Rival Nvidia in AI Chip Market

Amazon / AI Chip / Patrick Moorhead / SM Bom
Amazon to Push Custom AI Chips to Cut NVIDIA Reliance

Dell / Dell Tech World / Patrick Moorhead / Network Computing
Dell, Deloitte, NVIDIA Roll Out New AI Factory Infrastructure

Intel / Dow Jones Industrial Average / Patrick Moorhead / Business Insider
What needs to go right for Intel, and what happens if it doesn’t

Microsoft / Company Shares / Patrick Moorhead / The Business Standard
Microsoft revenue beats as remote work boosts Teams

No-Code / Melody Brue / KissFlow
What is No-Code? A Complete Guide to No-Code Development

Microsoft / AI / Robert Kramer / Venture Beat
Microsoft brings transactional databases to Fabric to boost AI agents

NVIDIA / Blackwell Chip / Anshel Sag / NetworkWorld
Nvidia Blackwell chips face serious heating issues

NVIDIA / Blackwell Chip / Patrick Moorhead / Wired
Nvidia Says Its Blackwell Chip Is Fine, Nothing to See Here

Trump & Semiconductors & Chips / Patrick Moorhead / Business Insider
Trump’s trade restrictions could be good for American semiconductor jobs


TV APPEARANCES

NVIDIA / Patrick Moorhead / Yahoo Finance 
NVIDIA handily beat Q3 estimates, but ‘investors want more’: Analyst

NVIDIA / Patrick Moorhead / CNBC 
NVIDIA beats on Q3 revenue and earnings

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)

  • Google Pixel Buds 2 Pro (Anshel Sag)

  • Google Pixel Watch 3, 41mm (Anshel Sag)

  • Cisco Desk Pro (Melody Brue)

  • OnePlus Buds Pro 3 (Anshel Sag)

  • Insta360 Link2 4K AI Webcam (Anshel Sag)

  • Google Pixel 9 Pro Fold (Anshel Sag)

  • Google TV streamer – Matter and Thread features (Bill Curtis)

  • Various Matter devices (Bill Curtis)

  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)

  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Microsoft Ignite, November 18-22, Chicago (Patrick Moorhead, Robert Kramer – virtual, Will Townsend – virtual, Melody Brue – virtual, Jason Andersen – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Patrick Moorhead, Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • IBM Strategic Analyst Event, December 9, Boston (Robert Kramer, Jason Andersen)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Lattice Developer Conference, December 9-10, San Jose (Patrick Moorhead) 
  • Marvell Industry Analyst Day, December 10, Santa Clara (Patrick Moorhead, Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • RingCentral Analyst Summit, February 24-26, Napa (Melody Brue)
  • Zendesk Analyst Day, March 3-5, Las Vegas (Melody Brue)
  • Nutanix .NEXT May 6-9, Washington DC (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending November 22, 2024 appeared first on Moor Insights & Strategy.

]]>
RESEARCH NOTE: Pure Storage Comes Downmarket with FlashArray//C20 https://moorinsightsstrategy.com/research-notes/pure-storage-comes-downmarket-with-flasharray-c20/ Wed, 13 Nov 2024 16:17:54 +0000 https://moorinsightsstrategy.com/?post_type=research_notes&p=44092 When Pure Storage was founded in 2009, it made its mark by focusing on flash as the only storage medium for the enterprise. It did so at a time when flash storage was still limited in adoption, primarily due to cost. Fast-forward 15 years, and the company’s strategy has proven wise. Flash is dominant in […]

The post RESEARCH NOTE: Pure Storage Comes Downmarket with FlashArray//C20 appeared first on Moor Insights & Strategy.

]]>

When Pure Storage was founded in 2009, it made its mark by focusing on flash as the only storage medium for the enterprise. It did so at a time when flash storage was still limited in adoption, primarily due to cost. Fast-forward 15 years, and the company’s strategy has proven wise. Flash is dominant in enterprise primary storage, and adoption continues to grow. Further, even as the storage market has been fairly flat over the last couple of years, Pure has continued to see double-digit growth quarter after quarter.

Although the company has firmly established itself in the enterprise, it has not gained the same momentum in the small and mid-market segments. In response to this lack of market penetration, the company has just launched its FlashArray//C20 platform. This research note will look at Pure’s push into the mid-market and what the company needs to do to penetrate this segment.

The Mid-Market Challenge for Storage

Every IT organization—regardless of company size—wants to extract as much value as possible from the solutions it deploys, especially on the storage front. While “extracting value” can mean different things to different organizations, cost, capacity, and performance are three consistent elements of the value equation.

When it first hit the market, flash storage (NAND flash) was exclusive to performance-sensitive workloads due to its high cost per gigabyte. However, as this cost started to curve down over time, flash storage became more affordable for broad use across the enterprise. Yes, different types of flash—QLC versus SLC versus TLC—have different price points, so some are more expensive than others. And yes, the price of flash is somewhat volatile, given the glut/scarcity cycles that impact this market. Still, if one were to plot an average cost per gigabyte over time, there would be a significant downward trend.

As flash has come down in price per gigabyte, its capacity has increased. For example, Pure’s largest-capacity flash storage—its DirectFlash Module—is 150TB and will ship by the end of this year. Further, the company intends to ship a 300TB module by 2026.

Based on the above, you can see how even at a very low price point—say 7 cents per gigabyte—flash-based storage solutions can still be price-prohibitive for a mid-sized company. This is unfortunate because the ease of deploying and managing Pure’s storage solution is ideal for a typical mid-sized company that probably doesn’t have the depth of technical expertise that many enterprise IT organizations have.
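
To make that concrete, here is a quick illustrative calculation. The 7-cents-per-gigabyte figure is the hypothetical price from the sentence above, and the capacity points are arbitrary round numbers, not Pure configurations or list prices.

```python
# Illustrative media-cost math only; excludes controllers, software,
# support and everything else that goes into a storage array's price.

PRICE_PER_GB = 0.07  # dollars, the hypothetical figure from the text

for capacity_tb in (50, 150, 300):
    media_cost = capacity_tb * 1_000 * PRICE_PER_GB  # decimal TB -> GB
    print(f"{capacity_tb:>4} TB of raw flash ~ ${media_cost:,.0f} in media alone")
# At a few hundred TB, the media alone runs well into five figures,
# meaningful money for a mid-sized IT budget before anything else is added.
```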

FlashArray//C20 Delivers Enterprise Storage to the Mid-Market

In an attempt to bridge this gap between the needs of the mid-market and the economics of storage, this week Pure announced its FlashArray//C20. This storage solution comes with lower capacity to enable a lower overall price point for mid-market IT organizations. However, this is the same Pure Storage architecture that has benefitted the enterprise, with features such as:

  • Cyber resilience — Mid-market IT organizations may be more at risk of compromise than their enterprise peers because they lack the tools and protections that deliver greater resilience. The //C20 includes built-in security features such as encryption (without performance penalties), ransomware remediation, and immutable snapshots.
  • Unified file and block — Most solutions on the market don’t offer file and block support from a single platform. But this is a hallmark of Pure and is enabled in the //C20. One underlying storage platform with dynamic allocation—automatically.
  • Architectural consistency — The underlying architecture for the //C20 is exactly the same as the enterprise-grade //C50, //C70, and //C90. So, as a company’s requirements grow, upgrades are simple and non-disruptive.

With this launch, Pure is bringing the entire enterprise storage experience to the mid-market. For example, mid-market customers that deploy the //C20 can still benefit from Pure’s Evergreen architecture. This guarantees that the customer’s storage infrastructure is always the most modern through non-disruptive upgrades. This effectively brings a white-glove upgrade experience to the mid-market.

The //C20 uses the same Pure-designed flash modules as its enterprise offerings. While it would likely be cheaper to drop in commodity flash to drive down system costs, Pure is willing to sacrifice a little bit of margin to deliver enterprise quality and performance.

The last point on this enterprise experience theme is integration into the Pure storage platform. The //C20 uses the same management plane used across the portfolio. Want to use Pure1 management features or Pure Fusion services? The entirety of Pure’s control plane for managing data and consolidation is available for all customers.

Pure Storage offers a single control plane for enterprise storage.

If I were still an IT leader, I might find this simple, somewhat automated approach to managing my storage environment the most significant benefit for a mid-market organization. The modern mid-sized business may not have the breadth of an enterprise, yet it still struggles with many of the same challenges as the enterprise. The workloads being deployed are complex, and the hybrid environments where they are deployed can be a challenge, as is the relentless focus on data—data generation, data collection, data management, data utilization. Pure has enabled feature parity of its storage solution to account for this reality—simply at a lower capacity.

There are three consistent elements of value in storage: performance, price, and capacity. Pure is delivering on all of them.

ROBO and Edge — Two Enterprise Use Cases

Mind you, there’s also a play for the //C20 in the enterprise. For remote office/branch office (ROBO) or edge deployments, this lower-capacity and more affordable storage box can be ideal for powering something like a retail location or a bank branch that requires local storage but needs to be managed centrally. This is another example of how the architectural consistency and single control plane of Pure enable flexibility that empowers IT architects and administrators.

Playing in the Mid-Market Is Different

Building a great product for a target market is only half of a winning equation. The other half is go-to-market. In other words, how do you find your target audience, tell the right story, and create a frictionless buying experience?

The selling model for the mid-market segment is indirect. These companies tend to buy through channels and have little loyalty to specific technology vendors. CDW, Connections, SHI, and the like are all common resellers that serve this market.

Pure is a channel-friendly company, and its positioning, messaging, and overall GTM machine are well-suited for this market segment. However, given the transactional nature of the mid-market, the company will have to double down on its channel engagement and enablement efforts, ensuring that those reseller account reps are quick to suggest the //C20 whenever a customer calls needing storage.

Overall, I believe the company has the assets and ability to effectively come downmarket with its messaging for the //C20 and for Pure Storage itself in short order.

A Welcome Solution for Mid-Market Storage

The FlashArray//C20 is a fairly significant expansion of Pure Storage’s reach. This enterprise storage company is bringing the power and efficiency of its all-flash technology to a new market segment. In doing so, it is also competing with a company (NetApp) that has been established in this segment for a while.

I am a big fan of the parity in features and capabilities between the //C20 and its enterprise siblings up the stack. It makes for an easy marketing campaign, but more importantly it delivers much-needed capabilities to a segment that is sometimes an afterthought for IT solutions vendors.

Stay tuned for updates on the company’s mid-market penetration in upcoming quarters.

The post RESEARCH NOTE: Pure Storage Comes Downmarket with FlashArray//C20 appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending November 8, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-8-2024/ Tue, 12 Nov 2024 17:47:55 +0000 https://moorinsightsstrategy.com/?p=44035 MI&S Weekly Analyst Insights — Week Ending November 8, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending November 8, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

Moor Insights & Strategy is headquartered in Austin because that’s where I’ve lived for a long time; although more of our staff lives in the Austin area than anywhere else, we are a fully virtual firm with members who live in California, Florida, Massachusetts, and Virginia as well as Texas. For as much as I believe in the power of remote and hybrid work, there’s also no substitute for spending time with each other face to face. That’s why I was glad to attend the annual Dell Tech Analyst Summit last week with my colleagues Matt Kimball, Paul Smith-Goodson, and Anshel Sag, analysts from different generations who represent a wealth of experience, and who each bring their own perspectives to the work we do.

MI&S analysts Matt Kimball, Paul Smith-Goodson, and Anshel Sag
MI&S analysts Matt Kimball, Paul Smith-Goodson, and Anshel Sag. Photo: Patrick Moorhead

This week, Mel and I will be in New York for a BIG event we are sworn to secrecy about for now, but that we will let you in on soon. We’ll also be catching the BoxWorks conference virtually. Robert will be in Scottsdale for the Veeam Analyst Summit.

As touched on above, last week, Matt, Anshel, Paul, and I attended the Dell Tech Analyst Summit in Austin. After the Dell event, Anshel hopped on a plane across the pond to spend some time with Qualcomm in Manchester, England, to get an update on Manchester United’s Snapdragon partnership. Meanwhile, Jason was in sunny San Diego for the Apptio TBM Conference (discussed below) and some tacos.

At the end of this week, our entire team is convening in Austin for a company-wide strategic planning session focused on the new year. (Because of this, it will be two weeks until our next installment of Analyst Insights.) As we grow, we’re committed to delivering the highest caliber research and advisory services in the industry. In the coming months, we’re excited to introduce you to new team members and share updates on our progress. We always value your feedback and look forward to your continued partnership.

Read more about strategically significant tech trends and events in this week’s Analyst Insights. I hope you have a rewarding week!

Patrick Moorhead

———

Our MI&S team published 11 deliverables:

Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with their thoughts on Broadcom, Celona, Freshworks, Intel, Nvidia, AI, earnings, and the Dow Jones Industrial Average change in the semiconductor segment. Patrick appeared on Yahoo! Finance Morning Brief to discuss Qualcomm Q3 2024 earnings.

MI&S Quick Insights

Last week I attended the annual TBM Council Conference in San Diego. TBM stands for Technology Business Management, which is a movement to demystify and optimize IT spending—leading to improved business results. While that may sound like it’s strictly FinOps, in fact it goes well beyond FinOps and covers the business and resource metrics to facilitate a transparent and predictable digital supply chain. What makes it so great is that TBMC is a user conference that keeps the focus on the present state and the solution space—it is not a product and technology extravaganza. What stood out was the common origin story of how companies adopted TBM solutions. It seems to commonly start with a surprise budget-busting bill that nobody can quite account for. However, once the right level of discovery happens, not only are financial surprises minimized, but other areas also emerge such as cross-project resource conflicts or smarter product architecture. The takeaway was twofold: For users, TBM as a framework is worth checking out. For vendors like Apptio, by establishing a very clear view and driving engagement on a solution space, they are able to more effectively position their solutions in the TBM space.

Anthropic is having its moment in the sun with developers. Last week I mentioned that both GitHub and AWS made announcements around their respective AI-powered IDEs. But within those announcements, it was also revealed that Anthropic’s Claude 3.5 Sonnet was being used as either the LLM or a major new LLM option. Sonnet is a deeper-thinking model without a degradation in performance from the previous generation. It’s another example of LLMs being packaged to meet specific situations; for developers, it should be a welcome offer since devs are prioritizing accuracy over speed. But for those who need speed, Anthropic happens to have a different model available.

JFrog has been a key partner of GitHub for a while. The combination of curating source code and binaries tees up what is a very complementary partnership. But at GitHub Universe last week, JFrog and GitHub announced they are working together on a more security-driven GitHub Copilot IDE. JFrog has launched integration with GitHub Autofix so devs can scan their code pulls as part of the development workflow using JFrog’s own SAST security capabilities. This is a great way to enable devs to intercept problems even earlier in the process, reducing the cost and complexity of future remediation. For more on this, you can read JFrog’s blog post about the integration here.

Konecta and Google Cloud have established a three-year strategic partnership to integrate Google Cloud’s AI and cloud technologies into Konecta’s customer experience solutions. This collaboration seeks to enhance Konecta’s Digital Unit with AI-powered tools and services, including implementing Google Cloud’s Customer Engagement Suite and generative AI solutions. As part of the partnership, Konecta will transition its workforce to Google Workspace and certify up to 500 engineers in Google Cloud technologies. These initiatives are designed to improve operational efficiency, enable the development of advanced CX offerings for clients, and strengthen Konecta’s position as a provider of AI-driven customer service solutions. Konecta and Google Cloud expect this alliance to facilitate more personalized and efficient customer interactions for businesses and contribute to their digital transformation objectives.

Twilio has partnered with Google Cloud to integrate generative AI into its customer engagement platform. This enables Twilio users to deploy AI-powered solutions such as virtual agents and interactive chatbots to enhance their customer service capabilities. By leveraging Google Cloud's AI tools, including Dialogflow, businesses can automate responses to common inquiries, provide 24/7 support, and efficiently escalate complex issues to human agents. Early results from this collaboration are promising. Google says that a major automotive manufacturer using its "Destination Assist" feature has reported an 18% to 20% reduction in agent call times. While Google did not include resolution rates for this particular use case, this kind of reduction in call times, combined with always-on support, will likely lead to higher resolution rates and happier customers.
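
To make the mechanics concrete, here is a minimal sketch of sending a customer utterance to a Dialogflow virtual agent with Google's Python client and deciding whether to escalate to a human. The project ID, session ID, and the confidence-based escalation rule are hypothetical placeholders for illustration, not Twilio's actual integration.

```python
# Minimal sketch: query a Dialogflow agent and decide whether to escalate.
# Project/session IDs and the escalation threshold are hypothetical.
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def ask_virtual_agent(project_id: str, session_id: str, text: str) -> None:
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result

    print("Matched intent:", result.intent.display_name)
    print("Agent reply:   ", result.fulfillment_text)

    # Hypothetical escalation rule: hand off to a human on low confidence.
    if result.intent_detection_confidence < 0.5:
        print("Low confidence -- escalating to a human agent.")

if __name__ == "__main__":
    ask_virtual_agent("my-gcp-project", "customer-123", "Where is my order?")
```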

Freshworks has announced a significant restructuring that will result in layoffs affecting approximately 13% of its global workforce, equating to about 660 employees. This move surprised many, particularly since the company recently reported strong fourth-quarter financials for 2024, demonstrating a 22% year-on-year increase in revenue. Most of the positions affected by these layoffs are based in India and the United States. CEO Dennis Woodside described this decision as a difficult but necessary step to streamline operations and focus on the company’s key strategic areas: employee experience, artificial intelligence, and customer experience. Following the announcement, the company’s share price surged by 15%, and Freshworks unveiled a substantial $400 million stock buyback program, indicating a strong financial position despite the layoffs.

While Freshworks is framing this as a strategic realignment, the increasing role of AI could be a contributing factor to these layoffs. Overall, this decision reflects a broader trend in the tech industry, where companies are optimizing their workforces in response to changing business objectives and the rise of AI. It will be interesting to observe how this restructuring impacts Freshworks’ growth trajectory in its key strategic areas moving forward.

I write this as I am finishing up the final day of Dell's analyst event—Dell Tech Summit. Because this was an NDA event, the details must stay under wraps, but there are a few general takeaways I want to share:

  1. If you thought AI hype had peaked—think again. However, unlike some other hype cycles, AI is driving a lot of revenue and completely disrupting enterprise IT organizations.
  2. Dell’s AI journey is an amazing story. I’m convinced this experience puts the company in a stronger position for helping its customers evolve.
  3. Partners are critical to successfully delivering AI transformation projects to the market.
  4. We are still very early in the AI game. Very, very early.
  5. There is opportunity for every company in the AI supply chain. Dell partners. Dell suppliers. Frankly, Dell competitors.


2025 is going to be a fun year in tech and AI. If you are an IT solutions vendor, better understand where you fit in the enterprise AI journey and start aligning portfolios and GTM efforts. If you are an enterprise organization, don't start this AI journey on your own. Learn from those who have gone through this before.

Most of us in tech are familiar with software-defined storage, software-defined networking—even software-defined infrastructure. How about software-defined silicon? NextSilicon, a semiconductor company based in Tel Aviv and Minnesota, has launched its Maverick-2 intelligent compute accelerator (ICA). This silicon, earlier versions of which have been powering HPC clusters for a number of years, has been tuned to also support AI and vector database operations, and the company claims a 4x performance-per-watt advantage over GPUs while cutting operational costs in half.

Isn't this just a fancy name for an FPGA, you say? Not really. With Maverick-2, real-time application telemetry feeds back into the silicon, which tunes its own characteristics for optimal performance. If you are familiar with Kalman filtering for voltage/current regulation, a very crude comparison can be drawn.
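
For readers unfamiliar with that analogy, below is a toy one-dimensional Kalman filter in Python. It is purely illustrative of the predict-measure-correct feedback loop the comparison invokes and says nothing about how Maverick-2 is actually implemented.

```python
# Toy 1-D Kalman filter: estimate a noisy signal (e.g., a supply voltage)
# by repeatedly blending a prediction with each new measurement.
# Illustrative only -- not NextSilicon's telemetry mechanism.
import random

def kalman_1d(measurements, process_var=1e-4, measurement_var=0.25):
    estimate, error = measurements[0], 1.0        # initial guess and uncertainty
    estimates = []
    for z in measurements:
        error += process_var                       # predict: uncertainty grows
        gain = error / (error + measurement_var)   # how much to trust the reading
        estimate += gain * (z - estimate)          # correct toward the measurement
        error *= (1 - gain)                        # uncertainty shrinks after correction
        estimates.append(estimate)
    return estimates

if __name__ == "__main__":
    true_voltage = 5.0
    readings = [true_voltage + random.gauss(0, 0.5) for _ in range(50)]
    print("Last filtered estimate:", round(kalman_1d(readings)[-1], 3))
```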

My thoughts: There is a really interesting play here with Maverick-2. As datacenters struggle with power and space constraints, such a piece of silicon enables a bit of platform flexibility that can optimize itself in real time for a variety of workloads. NextSilicon is not going to use Maverick-2 (or -3, -4, etc.) to displace the likes of NVIDIA or AMD for AI training. Nor is it going to replace those companies or Qualcomm, Untether, or others for AI inference. However, NextSilicon has found a niche for itself. Further, the way the company is approaching what I would call autonomous programmability will have a longer-term impact on the silicon market in general.

The Ultra Accelerator Link (UALink) Consortium was officially incorporated last week. This group, which includes some of the biggest tech companies in the world, is focused on establishing standards and specifications for GPU-to-GPU connectivity. What problem is this solving? In larger AI and HPC environments where GPUs are used to perform complex computations, the design of legacy infrastructure can force GPUs to rely on CPUs that can become bottlenecks. As you can imagine, this introduces a lot of latency into the equation.

To solve this, NVIDIA developed a fabric called NVLink to enable its GPUs to bypass the CPU as a controller or head node. This allows NVIDIA-based environments to be more performant. However, NVLink was developed by NVIDIA for NVIDIA. UALink aims to take this concept and make it open for all: a universal set of standards and specifications that enable accelerator-to-accelerator connectivity at scale.

My thoughts: The broad support for UALink (from AMD, Intel, Meta, HPE, Astera Labs, AWS, Microsoft, and others) tells me that this consortium has legs and will find adoption among the customers that have immediate needs. I expect other big players such as Dell and Broadcom to join as contributors at some point. See my full analysis on Forbes.

Should enterprises be concerned about cyberattacks from AI agents? AI’s advanced capabilities can enhance the sophistication and scale of malicious attacks, making them more difficult to detect and defend against. This is especially concerning given AI’s ability to automate and personalize attacks on a large scale. Potential threats include AI-powered phishing, deepfakes, and malware that can bypass traditional security measures.

Here are some key reasons why AI poses a cyberthreat to enterprises:

  1. Advanced Attack Vectors — AI can analyze large datasets to exploit vulnerabilities and deliver highly targeted attacks.
  2. Automated Attacks — AI enables rapid, large-scale attacks that are difficult to contain due to automation.
  3. Evasion of Detection Systems — AI can create and modify malware to bypass traditional security measures, complicating detection.
  4. Data Manipulation and Poisoning — Attackers can use AI to corrupt training data, impacting security models and resulting in missed threats or false positives.
  5. Accessibility to Malicious Actors — The availability of AI tools allows even less-skilled attackers to deploy AI-powered attacks.


In response, here are some mitigation strategies for enterprises:

  1. Develop AI-powered defenses to detect and respond to sophisticated threats in real time.
  2. Monitor AI systems closely, particularly the behavior of AI agents, to detect potential malicious activity.
  3. Implement employee awareness training to help staff recognize and report AI-driven threats.
  4. Strengthen data protection and privacy measures to safeguard sensitive information.

It’s important to stay updated on this topic, as the integration of AI agents into various applications will continue to expand, bringing with it new challenges.
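
As a concrete illustration of the first mitigation strategy above, here is a minimal sketch of AI-assisted anomaly detection using scikit-learn's IsolationForest; the login-telemetry features, values, and thresholds are hypothetical and chosen only to show the pattern of learning a baseline and flagging deviations.

```python
# Minimal sketch: flag anomalous login events with an unsupervised model.
# Features (hour of day, MB transferred, failed attempts) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of "normal" activity: business-hours logins, modest transfers.
normal = np.column_stack([
    rng.integers(8, 18, 500),        # hour of day
    rng.normal(50, 10, 500),         # MB transferred
    rng.integers(0, 2, 500),         # failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new events: one ordinary, one that looks like automated exfiltration.
events = np.array([[10, 52.0, 0],
                   [3, 900.0, 7]])
for event, label in zip(events, model.predict(events)):
    verdict = "anomalous -- investigate" if label == -1 else "normal"
    print(event, "->", verdict)
```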

By all measures, Juniper Networks is executing well financially, beating expectations for both top-line revenue and profitability with its recent 3Q 2024 earnings announcement. Chief executive Rami Rahim credits the company’s support of front-end and back-end AI networking for its recent success. Juniper’s reinvigorated focus on enterprise networking and its depth in AI help explain why HPE is acquiring the company, with an expected close of the deal anticipated soon. The combination of both companies’ engineering resources and complementary portfolio coverage of campus, branch, and data center is powerful, which should allow the combined entity to compete more effectively with Cisco and others.

NetSuite has examined emerging trends and made some predictions about ERP as we head into the new year. Enterprises are increasingly prioritizing ERP modernization, with the cloud ERP market projected to expand from $72.2 billion in 2023 to $130.5 billion by 2028. Because ERP systems are so essential for so many enterprise processes, modernizing them can reshape how enterprises operate.

Many enterprises face the challenge of modernizing their operations by transitioning to cloud ERP solutions. Software provider IFS saw strong performance in Q3 2024, with a 30% increase in ARR, driven by a 71% rise in IFS Cloud usage and a 46% growth in cloud revenue. IFS added 90 new clients, introduced AI features in IFS Cloud 24R2, and launched a new module for sustainability management.

Matter, the smart home automation standard from the Connectivity Standards Alliance, just released version 1.4. The new version has some much-anticipated enhancements, including support for many new device types. When the CSA released Matter 1.0 in October 2022, the organization's leadership promised a regular six-month release cadence, and that's exactly what they have done. The team has shipped four releases in two years, each with significant improvements in device coverage, ease of use, and functionality. The cumulative expansion of device types is impressive, with version 1.4 adding support for energy management products including solar power, energy storage, heat pumps, water heaters, electric car charging, and time-based orchestration. The CSA was smart to focus device development on a single high-level use case—energy—rather than adding a hodgepodge of random devices.

But the most significant version 1.4 enhancement is automated "multi-admin" configuration. Here's why. Matter's value proposition rests upon interoperability, and that means enabling devices from any manufacturer to work with any combination of home automation ecosystems—i.e., HomeKit, Google Home, Alexa, or SmartThings. Although previous Matter versions support this feature, the configuration process has been complicated and often confusing. Version 1.4 automates multi-admin configuration by allowing the user to grant permission just once; Matter then adds new ecosystems automatically. Thus, "Alexa" and "Hey Google" can both turn down your thermostat. This fulfills a fundamental brand promise for Matter. Between now and CES, I'll write more about Matter's progress. I'm also doubling down on my prediction that Matter will hit its tipping point in early 2026, becoming the leading smart home connectivity standard for new designs.

A new security feature in iOS 18.1 has made it more difficult for police and other agencies to snoop inside iPhones they’ve confiscated for investigations. This feature is an inactivity timer that reboots iPhones into a more secure state when they haven’t been unlocked for a while. This is a welcome feature for many privacy and security advocates because so many warrantless searches have been conducted on people’s phones. And Apple isn’t the first to implement this, as GrapheneOS for Google Pixel phones already offers this capability.

Workvivo, an employee experience platform acquired by Zoom in 2023, has launched a new suite of tools called “Employee Insights.” This suite is designed to measure and improve employee engagement. Integrated directly into the Workvivo platform, Employee Insights enables organizations to deploy pulse surveys, monitor engagement across 12 key drivers, and analyze results using real-time dashboards.

This solution provides a centralized way to gather employee feedback, particularly from frontline workers who may have limited access to e-mail. Workvivo emphasizes that the platform facilitates actionable insights based on the “listening” data collected, aiming to foster a cycle of continuous improvement. Future updates are expected to include integration with Zoom AI Companion for enhanced industry data analysis and benchmarking capabilities. This launch comes at a significant time for Workvivo, as it was recently named Meta’s preferred migration partner for the discontinued Workplace platform.

I served as a judge for UC Today’s inaugural UC Leaders Awards 2024, which aims to highlight the top professionals in the unified communications (UC) and collaboration space. The judging process was enjoyable, and I learned a great deal about the individual and team contributions to the companies and industries I cover. There were many outstanding applicants this year.

UC Today has announced the UC Leaders finalists, including Craig Walker from Dialpad and Eric Yuan from Zoom for the Innovator of the Year award. Eric Yuan is also nominated alongside Vlad Shmunis from RingCentral for the UC Leader of the Year award. The finalists for the Women in UC Leadership Award include Smita Hashim, chief product officer at Zoom; Christina Hyde, VP of revenue at SkySwitch; Kira Makagon, chief innovation officer at RingCentral; and Aruna Ravichandran, SVP and chief marketing and customer officer at Cisco Webex—quite a powerhouse lineup!

Additional award categories include Industry Influencer of the Year, Rising Star of the Year, The Editor’s Choice Award, and the UC Team of the Year. The award ceremony will be held on November 21, 2024, and will be streamed live on the UC Today website, LinkedIn, and X at 4:00 p.m. GMT / 11:00 a.m. EST. I will be presenting the awards in an Oscars-style format, and I hope you will join us!

People are really loving Apple's new M4 Mac Mini thanks to its extremely compact size and reasonable price. With an M4 chip, it may be one of the most inexpensive yet powerful computers on the market. One of its biggest downfalls, however, is that memory upgrades are extremely expensive, so much so that in some cases it's cheaper to buy a second Mac Mini than it is to upgrade the RAM. The one upside of this new design is that the storage sits on a user-replaceable module; however, there are no known third-party storage modules for the device yet, although I expect OWC will offer one fairly soon.

North Korean threat actors have successfully used a combination of phishing emails and social engineering schemes to target cryptocurrency-related businesses. Cybersecurity solution provider SentinelOne has named the campaign "Hidden Risk"; this complex attack also employs a seemingly benign PDF with fake cryptocurrency news headlines and offers of employment to infect Apple macOS users with a malicious payload. It is a highly sophisticated campaign, one that is difficult to defend against, especially given the extensive grooming of targets over social media. The silver lining in this latest cyberattack is that security operators have access to an incredible amount of real-time threat intelligence shared by Infoblox, Microsoft, Palo Alto Networks, SentinelOne, Zscaler, and others to enable stronger security postures.

AMD's new 9800X3D CPU is officially on the market and has already sold out. This new CPU is the first to use AMD's second-generation 3D V-Cache, which enables faster CPU clock speeds, better memory bandwidth, and significantly improved thermals because the CPU cores now sit atop the cache rather than the other way around. While I haven't had a chance to test this chip fully, the consensus is that its gaming performance is industry-leading, and its handling of multitasking and productivity workloads isn't far behind—which wasn't necessarily the case with past processors in this family.

Globant has opened a new office within Intuit Dome, home of the Los Angeles Clippers. This move is part of the two organizations’ ongoing partnership to enhance the fan experience through digital transformation initiatives. The new office includes a “Digital Playground” technology showcase, which the company says it hopes will foster creativity and attract talent while strengthening Globant’s presence in Southern California. This strategy aligns with a growing trend of businesses creating experience centers to demonstrate the potential of their products and services in an engaging environment. These centers provide tangible, interactive experiences beyond traditional marketing, allowing consumers to interact directly with technology and see various real-life use cases.

IBM is partnering with Ferrari’s racing team starting in 2025. This partnership will leverage IBM’s advanced data analytics to improve engagement for fans of Scuderia Ferrari through personalized content and insights that “bring racing enthusiasts closer than ever to the racing team.” IBM will also support Ferrari by using cutting-edge data analysis to enhance performance on and off the track. This newly announced partnership shares several similarities with IBM’s long-standing collaborations with the US Open, Wimbledon, and The Masters. All of these partnerships focus on enhancing fan engagement with premium sporting events—now including Formula 1—through digital platforms, leveraging advanced AI and data analytics to process vast amounts of data for real-time insights. These partnerships also serve as high-profile demonstrations of IBM’s capabilities on a global stage.

The National Football League’s data-driven approach to reducing injuries is showing progress. By analyzing player data with Amazon Web Services, the NFL provides insights to help teams improve training and safety. The league and its teams have also used computer vision and sensors to track head impacts, helping coaches implement injury-prevention strategies.

“When you can integrate and aggregate data across all 32 [teams] for all 53 [players], you have more power in the data that you are generating to model,” said Jennifer Langton, NFL Player Health & Safety Innovation Advisor. With AWS, the NFL can track injuries in real time, automatically linking incidents to specific plays. The league is also working on new tech for full-body tracking to better prevent injuries in the future.

As one example, the redesigned kickoff rule, created in collaboration with AWS, has reduced injury risk. The rule change has led to 32% fewer injuries overall on kickoffs, including no ACL or MCL tears, as player speeds have decreased.

US Cellular informed the FCC earlier this fall that T-Mobile’s acquisition of its wireless operations was essential in ensuring uninterrupted service for its dwindling subscribership. A sticking point is regulatory approval of the transfer of 30% of US Cellular’s spectrum assets to T-Mobile. However, that may be less of an issue now, given the breaking news that AT&T has agreed to purchase over $1 billion worth of US Cellular’s spectrum licenses.

It is an unfortunate situation for US Cellular, signaling the company’s inability to keep pace with necessary infrastructure deployments to support its primarily rural customer base. It is extremely challenging to provide mobility services in remote areas, given lower population densities and longer timelines to recoup investment in radio access network infrastructure. T-Mobile’s acquisition makes sense in many ways, especially given its success in connecting rural America with 5G access that leverages its lower-band spectrum assets.

Research Notes Published

Citations

AI & Enterprise / Jason Andersen / BizTech
Can AI Agents Ease Workloads for Enterprises?

Intel / AI Stocks /  Patrick Moorhead / Insider Monkey
15 Trending AI Stocks on Latest Analyst Ratings and News

Broadcom / VeloRAIN / Matt Kimball / NetworkWorld
Broadcom launches VeloRAIN, using AI/ML to improve network performance

Celona / 5G / Will Townsend / The Fast Mode
Celona Launches Aerloc to Secure Private 5G for Industrial IT & OT Systems

CHIPS Act in New Administration / Moor Insights & Strategy Analysts  / Digitimes Asia
Trump teases shutting down CHIPS Act: what does it mean?

Freshworks / Layoffs / Melody Brue / CIO
Freshworks lays off 660 — about 13 percent of its global workforce — despite strong earnings, profits

Nvidia / Dow Jones Industrial Average Representative / Patrick Moorhead / Data Center Planet
Nvidia Replaces Intel on Dow Amid AI Frenzy

Nvidia / Dow Jones Industrial Average Representative / Patrick Moorhead / The Register
Dow swaps Intel for Nvidia leaving no index free from wild AI volatility

TV APPEARANCES

Qualcomm / Earnings / Patrick Moorhead / Yahoo Finance – Morning Brief
Fed meeting outlook, Qualcomm, Trump tariffs: Morning Brief

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)

  • Google Pixel Buds 2 Pro (Anshel Sag)

  • Google Pixel Watch 3, 41mm (Anshel Sag)

  • Cisco Desk Pro (Melody Brue)

  • OnePlus Buds Pro 3 (Anshel Sag)

  • Insta360 Link2 4K AI Webcam (Anshel Sag)

  • Google Pixel 9 Pro Fold (Anshel Sag)

  • Google TV streamer – Matter and Thread features (Bill Curtis)

  • Various Matter devices (Bill Curtis)

  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)

  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Works / Analyst Summit, November 12-13, San Francisco (Melody Brue – virtual)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual, Melody Brue – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • IBM Strategic Analyst Event, December 9, Boston (Robert Kramer)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Zendesk Analyst Day, March 35, Las Vegas (Melody Brue)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending November 8, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending November 1, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-1-2024/ Tue, 05 Nov 2024 01:18:26 +0000 https://moorinsightsstrategy.com/?p=43834 MI&S Weekly Analyst Insights — Week Ending November 1, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending November 1, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

Each autumn, Qualcomm hosts its Snapdragon Summit on the island of Maui to showcase its newest technologies. As longtime Moor Insights & Strategy analyst Will Townsend noted in his writeup of Qualcomm’s new mobile networking chips, “It’s an awe-inspiring setting for a technology conference.” And yet the gorgeous tropical environment makes for a wonderful backdrop rather than a distraction because Qualcomm and the attendees are all so engaged with the business at hand.

Qualcomm CEO Amon at Snapdragon Summit
Qualcomm CEO Cristiano Amon presents at the Snapdragon Summit in Maui. Photo: Will Townsend

This time of year, the seemingly endless sequence of tech conferences can feel like a grind at times. How could it not, with so many different flights, hotels, and restaurant meals? But I wouldn’t trade it, because that’s where the action is for the newest technology just coming to market. And because sometimes it’s precisely when you’re in some distant metropolis (or Hawaiian island) that you can look at technology—new and old—with fresh eyes.

This week, Matt, Anshel, Paul and I will attend the Dell Tech Analyst Summit in Austin, while Jason will attend the Apptio TBM Conference in San Diego. 

Last week, I was at AWS HQ in Seattle for an exclusive re:Invent preview, then spent some time in Los Angeles for the analyst session at Cisco’s Partner Summit. (Don’t miss the Six Five On the Road interviews with Cisco executives from the event.) Mel tuned into SAP SuccessConnect Virtual to dig into everything happening in HCM. Jason (virtually) and Matt attended the Red Hat Analyst Day in Boston while Will was in Riga at 5G Techritory. Will moderated two panels at the event on topics ranging from road connectivity to bridging the gap between industry, research, and public understanding of 5G innovation. Robert hosted a webinar with Sam Gupta, principal consultant at ElevatIQ, to discuss 2025 ERP trends and how businesses can prepare for digital transformation. 

Read more about these events in the respective analysts’ insights.

Have a great week!
Patrick Moorhead

———-

Our MI&S team published 14 deliverables:

Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with their thoughts on Amazon, AMD, AT&T, Celona, Google, Intel, SAP, agentic AI, earnings, wearables, and more. Patrick appeared on CNBC Asia and Yahoo! Finance to discuss Intel Q3 2024 earnings. 

MI&S Quick Insights

Last week AWS announced inline editing for Amazon Q Developer, among a series of other new capabilities. Inline editing is very much in line (ha!) with AWS’s commitment to an efficient pro developer experience. Instead of the AI working in a window next to the coding workspace, you can now inject AI recommendations right into your work. What’s nice is that it needs to be invoked via a command key instead of just looming there all the time. Initial findings suggest that developers like using inline editing when they are truly stuck on something where they may lack familiarity. Think of something like a whole routine versus just something at the line level. The intended benefit is that devs will get richer and more verbose assistance, leading to even greater productivity. It will be very interesting to see how this approach moves the needle in metrics such as accepted changes, which is something that AWS is tracking and publishing regularly.

GitHub hosted its Universe event last week and made two major announcements. The first is that GitHub Copilot will now support the use of multiple LLMs. While this is not a new concept, given that many other developer tools do this, it's interesting since GitHub has been all-in on using parent company Microsoft's Copilot LLM until now. Additionally, GitHub announced a new no-code tooling and runtime offering called Spark, which is geared towards citizen and business developers. My initial thought is that this tool is geared towards business users who want to build simple forms-based applications, as opposed to power users who want to incorporate workflows and processes. Again, this is a capability that many other companies already have. And to be honest, it's interesting to see GitHub branching out to attract new users, especially since a shared set of repositories and services could be of value to enterprises. What will be more interesting is to see how this introduction shapes Microsoft's low-code and no-code strategy, which currently spans multiple projects.

Red Hat hosted an analyst event in which its leadership walked through its AI strategy and other product updates. However, just like at IBM's Analyst Summit a couple weeks ago, Red Hat could not resist talking about its major push to move customers from VMware to its OpenShift Virtualization offering. To be candid, it's not cool, sexy, or even new, since Red Hat has been in the virtualization game for decades now. But, frankly, it's a sneaky-brilliant monetization play. Red Hat has a very strong history of executing pricing disruptions in commoditized and semi-commoditized markets (Unix to RHEL, BEA to JBoss). And Red Hat typically needs a lot of time to spin up in new markets (like AI). This is very typical of low-cost players in any marketplace. So while Red Hat talks the AI talk, my sense is that continued financial results will come from ripping and replacing hypervisors for a while. Red Hat will then attempt to parlay that customer savings into consideration for more modern solutions from Red Hat or parent IBM down the road.

Osmo.ai is able to capture and digitally recreate a scent. That means a scent can be teleported from one location to another. The company uses gas chromatography-mass spectrometry (GCMS) to analyze a given scent so it can be uploaded to Osmo’s cloud-based Primary Odor Map, which uses AI to predict the molecular composition. Next, a formulation robot creates the scent by mixing different molecules. Osmo produces high-fidelity scent replication, and it can capture even the subtlest nuances of a scent. The system is almost fully automated, and human assistance is needed only at the input and output stages. As a result of its experimentation, Osmo has accumulated the largest AI-compatible scent databank, which is now used for training the AI.

Osmo has the opportunity to create a number of new use cases. Perfume retailers could digitally transmit a perfume’s scent to potential customers. Because scent has an important role in memory and emotion, Osmo believes the technology could be used for PTSD or dementia therapy by recreating comforting or significant scents. All in all, the technology could blend the physical and digital worlds and offer a new way to experience and share sensory information.

I’ve been thinking through the recent announcement regarding the partnership between NTT Data and Oracle Cloud Infrastructure. In essence, NTT will be using (or OEM-ing) the Oracle Alloy cloud infrastructure platform to deliver sovereign cloud services to its customers—initially in Japan, with plans to expand over time.

Can we stop and take a moment to look at the novel ways Oracle is building OCI’s relevance? Through Alloy, Database@[CSP], HeatWave, Cloud@[customer], and a number of other avenues, OCI is becoming an indispensable part of the cloud marketplace. Rather than solely focusing on delivering better services at a lower price point than the competition (which it also does), Oracle is looking at embracing traditional competitors to better meet the needs of enterprise customers.

What a unique and smart approach.

I attended Red Hat’s analyst summit in Boston last week, and there were two themes front and center: AI and VMware. While focusing on these may sound somewhat boring to analysts, this is a smart move as these are the top two infrastructure focus areas for any IT organization of size across the industry.

On the VMware front, Red Hat has a solid competitive story with OpenShift. In fact, right now the post-VMware reality for many IT organizations comes down to a two-horse race—Nutanix or Red Hat. I think Red Hat’s depth in cloud-native/containerization combined with its virtualization makes for a compelling story. Further, its long reach into the enterprise datacenter perhaps gives it a little bit of an advantage in pursuing the larger opportunities. With that said, I don’t believe that Red Hat has been aggressive enough in establishing and amplifying its dominant position. My expectation is that we will see the volume turned up in 2025.

Looking at AI, Red Hat seems to be in a little bit of a muddled place in the market as it tries to focus on RHEL AI and OpenShift as foundational elements of the enterprise AI equation. While I can see the story after spending a day with the company, I believe this is another marketing opportunity for Red Hat to educate and engage with enterprise IT. It is critical to be a part of the early AI discussion and project planning if Red Hat hopes to be successful.

It was earnings week for AMD and Intel. Both companies demonstrated growth and progress. However, the markets certainly treated each of them differently.

In the case of AMD, the company showed strong datacenter performance for its fiscal third quarter with revenue of $3.5 billion, representing a 122% year-over-year increase, and operating income of $1.04 billion, a whopping 240% YoY increase. Overall, AMD beat expectations for revenue and met them for EPS. Interestingly, the market was down on AMD (about 15% since earnings) due to the company’s conservative guidance and its weakness in other contributions to revenue (e.g., gaming). Realistically, I don’t believe that investors are viewing AMD negatively; rather, they are normalizing after a hype period that followed the company’s very aggressive take on the AI market.

As an ex-AMD employee, I don’t think there is a single person in Austin or Santa Clara (or Markham) who is disappointed with a $141 share price—save Dr. Lisa Su and the CFO. However, I do believe we will continue to see a little less stability in the stock price as the AI chip market continues to evolve, even though I believe that AMD is making all the right long-term bets for the AI future.

Intel also showed growth in its datacenter segment on revenues of $3.3 billion, representing YoY and sequential growth. Its $300 million in operating income represents margins of 10.4%—also up sequentially. Following the company’s earnings, the market showed favorable coverage as the stock moved up about 2%. Further, a number of institutional investors have raised their target buy price for Intel’s stock.

Intel’s financials were a lot more nuanced than what we saw from AMD. The company is going through a major restructuring, with many of the restructuring costs impacting bottom-line numbers. It seems investors are accounting for this and showing confidence in the work that CEO Pat Gelsinger and team are doing to right the ship.

Stock prices are about expectations: meeting, beating, or failing to meet. Intel has done a good job of managing the investor community as it continues its multi-year turnaround.

Cisco recently held its partner summit, using it to announce a revamped compensation and training initiative. What I really like about the Cisco 360 Partner Program is the long-lead-time deployment, which should ensure a smooth introduction in 2026. The company is also making a substantial $80 million investment in training to facilitate meaningful opportunities for the company’s channel sellers to expand their capabilities for providing expertise across AI, networking, security, observability, and more.

As regulatory demands for environmental accountability grow, enterprises need to track and manage carbon footprints across supply chains. SAP’s new Sustainability Data Exchange (SDX) offers a platform for standardized data sharing and improved emissions accounting. Ahead of the COP29 meeting in November 2024, SDX provides enterprises with tools to support international climate goals, particularly in sharing Scope 3 emissions data across the value chain. Integrated with SAP S/4HANA Cloud ERP, SDX enables precise data exchange of carbon emissions among businesses and their suppliers and customers. It helps address the issues created by outdated tools, data inaccuracies, and inconsistent calculations, helping enterprises move from estimations to actual emissions data provided by suppliers. Read more in my latest Forbes article on SAP’s Sustainability Data Exchange.

AI agents are becoming integral to ERP systems, creating a balance between automation and human oversight. Leading ERP providers are embedding these agents as a core component of their recent innovations. AI agents offer real benefits by supporting task management and improving operational efficiency in areas such as forecasting, customer satisfaction in sales and marketing, and collaboration across the supply chain. By managing routine tasks, these agents simplify ERP interactions, making the systems easier for users to navigate and apply across various business functions. As I’ve suggested in the past, strong data management strategies are necessary for these benefits to be attainable—AI agents rely on accurate, up-to-date data to be effective.

Blue Yonder’s Q3 2024 results show that the company added 31 new customers, including BJ’s Wholesale Club, PepsiCo’s Latin American unit, and Sainsbury’s. Its recent acquisition of One Network Enterprises expands Blue Yonder’s capabilities with real-time collaboration and data sharing across supply chains. Blue Yonder’s latest features include the Intelligent Rebalancer for real-time order adjustments, the Fulfillment Sourcing Simulator for optimizing fulfillment, and tools for automating warehouse and yard tasks. Updates to Cognitive Demand Planning improve forecasting and planning, providing clients with more accurate, flexible supply chain management solutions.

Blue Yonder has identified some key SCM industry trends this quarter, including rising food costs pushing grocery retailers toward value-focused inventory strategies, increasing demand for traceability due to U.S. and EU regulations, and labor and logistics challenges leading companies to cross-train staff and relocate distribution centers closer to U.S. markets. Additionally, manufacturers are adopting generative AI tools to enhance supply chain efficiency, customer service, and cost management. I see these trends as critical real-world challenges that Blue Yonder addresses to support its clients’ complexities and to align with today’s operational and regulatory demands.

As announced last week at the Money 20/20 conference, NVIDIA has introduced an AI-powered workflow for fraud detection running on AWS. With financial losses from credit card fraud projected to reach $43 billion by 2026, this solution is critical. The workflow leverages advanced algorithms and accelerated data processing to identify fraudulent transactions more accurately than traditional methods, potentially improving detection by up to 40%. It takes advantage of NVIDIA’s AI Enterprise software platform, GPUs, and tools including RAPIDS AI libraries and the Morpheus application framework to enhance fraud detection models and streamline their deployment.

For context, financial institutions are increasingly adopting AI and accelerated computing to combat fraud. This new workflow aims to provide a comprehensive solution for fraud use cases beyond credit card transactions, including new account fraud, account takeover, and money laundering. While fraudsters can also exploit AI to develop new and sophisticated schemes, AI equips the good guys with powerful tools to analyze vast amounts of data, detect subtle patterns, and adapt to evolving threats to fight back.
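
To ground the idea, here is a generic sketch of training a fraud classifier on synthetic transactions with scikit-learn. It is not NVIDIA's workflow, which pairs GPU-accelerated RAPIDS libraries and the Morpheus framework with far larger datasets; the features and the labeling rule below are entirely made up for illustration.

```python
# Minimal sketch of a supervised fraud classifier on synthetic transactions.
# Generic illustration only -- not NVIDIA's GPU-accelerated workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.exponential(50, n),      # transaction amount
    rng.integers(0, 24, n),      # hour of day
    rng.integers(0, 2, n),       # card-present flag
])
# Hypothetical labeling rule: large, late-night, card-not-present transactions.
y = ((X[:, 0] > 150) & (X[:, 1] < 6) & (X[:, 2] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=1)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```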

Across three days last week I attended multiple SAP SuccessConnect virtual sessions, during which several key trends in human capital management surfaced. I noted a strong emphasis on strategic workforce planning, particularly the need for proactive approaches to address skills gaps, enhance employee competencies, and support career transitions. Speakers frequently highlighted the importance of integrating HR and financial data to drive growth. Many sessions showcased and acknowledged the potential of AI in HCM, but also emphasized the importance of responsible AI development and deployment to prevent bias. I observed a recurring theme around the critical role of data analytics in understanding workforce trends and informing effective HCM strategies. SAP’s acquisition of WalkMe appears to generate positive results within SuccessFactors, with reports of improved user experience, increased task completion rates, and higher overall satisfaction.

Amal Clooney’s keynote address was a highlight of the event for me. She offered a compelling look at real-world use cases of AI in human rights and the workforce, providing valuable insights into the broader societal impact of these technologies.

I often write about the increasing practicality of using AI and machine learning techniques in small, low-power edge devices. NXP’s new AI-enhanced chips and high-productivity edge AI software development tools align with this trend.

NXP i.MX RT700 — NXP is doubling down on intelligence for small devices with AI-enhanced SoCs, new AI development tools, and eIQ Neutron, an internally developed family of NPUs (neural processors). The company recently introduced the i.MX RT700 “crossover” SoC, designed for ultra-low-power smart devices. (In NXP parlance, crossover MCUs are simple, low-power processors with MPU-like performance.) I don’t usually dive into the technical details of SoC designs in these weekly summaries, but this chip is noteworthy because of its heterogeneous design. There are two Cortex-M33 compute subsystems, each with a DSP (Tensilica HiFi). One M33 is the main processor for the chip, and the other is a low-power subsystem for always-on applications such as keyword recognition. The chip also has a modest graphics subsystem, a dedicated I/O processor (RISC-V), and an advanced memory architecture optimized for multiprocessor partitioning. The chip’s most disruptive compute subsystem is its eIQ Neutron N3-64 NPU. NXP claims the NPU provides a 172x performance boost and 119x per-inference power decrease compared with “general purpose processors.”

NXP eIQ tools — This week, NXP announced two new software enablement tools for the eIQ NPU family: eIQ Time Series Studio (TSS) and eIQ GenAI Flow. TSS automates machine learning workflows, streamlining time-series-based machine learning model development and deployment across MCU-class devices such as the i.MX RT700. Applications include anomaly detection, classification, and regression for many types of sensor data. The development model is BYOD (bring your own data), and the TSS development flow simplifies model tuning, optimization, and deployment. GenAI Flow provides building blocks for large and small language models (LLMs, SLMs) that power applications for generation and reasoning. These large models run on NXP's i.MX MPU processor families. GenAI Flow makes generative AI applications accessible on these devices and supports retrieval-augmented generation (RAG), a technique for securely grounding a model's responses in domain-specific knowledge and private data without retraining the model.
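
Since GenAI Flow leans on RAG, here is a toy sketch of the retrieve-then-augment pattern itself, using simple TF-IDF retrieval over a tiny in-memory document store. It is meant only to show where private, domain-specific data enters the prompt; the documents, query, and prompt format are invented, and this is not how NXP's tooling implements RAG.

```python
# Toy retrieval-augmented generation (RAG) flow: retrieve the most relevant
# private document, then prepend it to the prompt sent to a language model.
# Illustrative only -- document store, query, and prompt format are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The low-power domain handles always-on keyword detection.",
    "Warranty claims must be filed within 24 months of purchase.",
    "Factory line 3 runs preventive maintenance every 500 operating hours.",
]

def retrieve(query: str, docs: list[str]) -> str:
    vec = TfidfVectorizer().fit(docs + [query])
    doc_vecs, query_vec = vec.transform(docs), vec.transform([query])
    best = cosine_similarity(query_vec, doc_vecs).argmax()
    return docs[best]

def build_prompt(query: str) -> str:
    context = retrieve(query, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # The augmented prompt would then be passed to an on-device SLM/LLM.
    print(build_prompt("How often is maintenance scheduled on line 3?"))
```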

Efficient application development for complex chips such as the i.MX RT700 requires more than a board support package (BSP) on GitHub. These devices are complete computer platforms, so developers need hardware, software, and tools that “just work” with mainstream OS distributions right out of the box, including support for all the accelerators and specialized function blocks on the chip—AI, DSPs, low-power compute subsystems, I/O, graphics, security, memory management, connectivity, and networking. I’m pleased to see NXP enabling application developers to focus on applications rather than system code.

Apple has expanded its relationship with Globalstar via a $1.1 billion investment, which gives Apple a 20% stake in the company and guarantees it a new satellite constellation for satellite messaging and other satellite services. This new constellation will have 85% of its capacity dedicated to Apple, and Globalstar can allocate the remaining 15% for its own customers. This effectively makes Apple its own service provider, a milestone I expected it to reach with a 5G network slice well before it did so over satellite. That said, this new network won't replace terrestrial connectivity, but it does guarantee iPhone users nearly global coverage and likely enhanced satellite services once the new constellation goes up.

The OnePlus 13 has been announced in China and is already showing some stellar specs and performance, including a 6,000 mAh battery, an under-screen fingerprint sensor, and a Snapdragon 8 Elite SoC. Chinese reviewers who have already gotten their hands on the OnePlus 13 are reporting it as the new king of performance on tests in the AnTuTu benchmarking tool. OnePlus did this all while making the phone thinner and adding IP69 water resistance, which is higher than the previous IP65 rating and even higher than devices that proudly proclaim IP68 water resistance. I will be attending the company’s U.S. launch later in December and will give my thoughts after I’ve seen this device for myself.

RingCentral has announced that its AI Assistant is now included at no additional cost for RingEX users. This provides access to features like live transcription, closed captioning, meeting summaries, real-time call note capture, text and chat message writing, editing, and translation across different tiers. This strategic move allows RingCentral to demonstrate the value of its AI capabilities, potentially driving wider adoption of AI-powered solutions across its product portfolio. By giving users a taste of the efficiency gains and communication enhancements possible through AI, RingCentral encourages further exploration and integration of these technologies within its ecosystem.

Google’s Q3 2024 results indicate the growing influence of AI on its modern work solutions and overall Google Cloud business. Introducing the Customer Engagement Suite and its adoption by prominent clients such as Volkswagen of America show a strategic focus on enhancing AI-driven customer experience. The company reported improvements in work quality among Google Workspace users leveraging Gemini AI—and as a user myself, I would agree with this sentiment. While Google does not provide specific revenue breakdowns for these segments, the company’s overall solid revenue growth for Google Cloud (35% year-over-year, reaching $11.4 billion) and its attribution of this growth to increased generative AI adoption indicate that AI is a significant driver of its Cloud business. This data suggests that Google’s AI-powered tools have attracted new customers, facilitated more significant deals, and increased product adoption among existing users.

Apple announced a series of new Mac computers based on the new M4, M4 Pro, and M4 Max chips. These include the Mac Minis, MacBook Pros, and iMacs. Apple's M4 series now starts with a 10-core CPU/10-core GPU configuration and ranges up to an M4 Max with 16 CPU cores and 40 GPU cores. Apple also finally admitted defeat over its claim from last year that 8GB of memory is adequate for AI applications; it simply isn't enough. Copilot+ PCs and now almost all new Macs include a minimum of 16GB of memory, including the 15-inch M3 MacBook Air and 13-inch M2 MacBook Air. This is a huge win for consumers.

The new Kindle Colorsoft has officially launched and it’s everything you would’ve hoped for from a color Kindle. It still has all the great advances in the latest Kindle Paperwhite, including great water resistance and a bright screen; it just has a color screen that looks great—and it still delivers an enjoyable Kindle experience.

Microsoft has unfortunately delayed Recall once again, this time pushing the feature’s release out to December. Realistically, this means that Recall won’t reach most users until next year, given that the initial release will be via the Preview channel; Microsoft said the feature would be there for a while as the company works out bugs and other issues. This is truly disappointing, and I believe these delays are hurting the entire Copilot+ PC narrative and the rollout of AI PCs in general. Microsoft took the narrative by the horns with Copilot+ PCs, but seems to have tripped over itself repeatedly since then.

The U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) has announced $30 million in funding for the Quantum Computing for Computational Chemistry (QC3) program. The objective is to develop quantum algorithms that are better than existing chemistry and materials science simulations from classical computing. The ultimate goal is to design new industrial catalysts, discover superconductors for efficient electricity transmission, and improve battery chemistries.

Classical computing is unable to handle the full complexity of simulating the chemistry and materials needed for energy research and development. However, quantum computing has the potential to overcome these limitations. Project teams will be assigned specific problems in chemistry or materials science where quantum computing can be applied to reduce greenhouse gas emissions. The objective is to obtain a 100x improvement over classical methods or a scalable approach to achieving this, validated on existing quantum hardware.

CrowdStrike and Fortinet recently announced a partnership that aims to integrate AI-native endpoint security protection from the CrowdStrike Falcon platform into Fortinet’s FortiGate next-generation firewall portfolio. It seems like an unconventional partnership, given that both companies compete for the same set of customers. However, the collaboration has the potential to marry best-of-breed cybersecurity protection capabilities, enhancing threat protection and delivering optimized security outcomes.

AT&T recently announced a new fiber and 5G fixed wireless access gateway. On the surface, the solution could be attractive for branch operations where downtime equates to immediate loss of revenue and goodwill. Other connectivity infrastructure products offer cellular redundancy, but this converged gateway provides automatic failover to 5G in the rare event of a fiber cut or outage. Marrying the company’s depth in both fiber and FWA broadband in a single form factor that’s easy to provision and deploy could be a game changer—especially for small and medium-sized businesses that have limited IT staff resources.

Citations

Google / Pixel Watches / Anshel Sag / Android Central
https://www.androidcentral.com/wearables/wear-os/upcoming-pixel-watches-could-play-it-safe-and-i-dont-like-it

AI / Jason Andersen / CIO
Is now the right time to invest in implementing agentic AI?

Amazon / GenAi / Jason Andersen / InfoWorld
Amazon rolls out a genAI-powered inline chat function for Amazon Q Developer

AMD / Stock / Patrick Moorhead / Yahoo! Finance
Analyst updates AMD stock forecast before earnings

AT&T / 5G / Will Townsend / TeckNexus
AT&T Launches First Device Integrating Fiber and 5G for Business Connectivity

AT&T / 5G / Will Townsend / Telecomlead
AT&T intros fiber internet and 5G wireless backup solution

AT&T / 5G / Will Townsend / YCharts
AT&T Launches Industry-First Seamless Integration of Fiber and 5G Networks with Single-box Solution

Celona / 5G / Will Townsend / Celona
Celona Aerloc Brings Private 5G Zero Trust to OT Networks for Industrial IoT

Celona / 5G / Will Townsend / Voice Data
Celona launches Aerloc to secure industrial IoT with private 5G

Intel / Q3FY24 Earnings / Patrick Moorhead / Benzinga
Analyst Throws Support Behind Intel CEO Gelsinger After Better-Than-Expected Q3 Results: ‘If You’re Questioning Pat As The Leader, Then Who Else?’

Intel / Q3FY24 Earnings / Patrick Moorhead / Fierce Electronics
Intel sees revenue drop 6% in 3Q, but AI revenues jump 9%

SAP / AI / Robert Kramer / CIO
SAP ups AI factor in its SuccessFactors HCM suite

US Chips / Patrick Moorhead / CNBC
Trump accuses Taiwan of stealing U.S. chip industry. Here’s what the election could bring

WordPress / Legal Battle / Melody Brue / CIO
As the WordPress saga continues, CIOs need to figure out what it might mean for all open source

TV APPEARANCES

Intel / Q3FY24 Earnings / Patrick Moorhead / CNBC Asia
Analyst: Optimistic about Intel’s future provided there is no ‘bumps on the road’ throughout 2025

Intel / Q3FY24 Earnings / Patrick Moorhead / Yahoo! Finance
2025, 2026 will be a ‘proving ground’ for Intel: Analyst

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)

  • Google Pixel Buds 2 Pro (Anshel Sag)

  • Google Pixel Watch 3, 41mm (Anshel Sag)

  • Cisco Desk Pro (Melody Brue)

  • OnePlus Buds Pro 3 (Anshel Sag)

  • Insta360 Link2 4K AI Webcam (Anshel Sag)

  • Google Pixel 9 Pro Fold (Anshel Sag)

  • Google TV streamer – Matter and Thread features (Bill Curtis)

  • Various Matter devices (Bill Curtis)

  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)

  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson, Patrick Moorhead)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Works / Analyst Summit, November 12-13, San Francisco (Melody Brue – virtual)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual, Melody Brue – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • IBM Strategic Analyst Event, December 9, Boston (Robert Kramer)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Zendesk Analyst Day, March 35, Las Vegas (Melody Brue)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending November 1, 2024 appeared first on Moor Insights & Strategy.

Datacenter Podcast: Episode 32 – Talking Juniper, AMD & Intel, IBM, Cisco, Oracle, Google https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-32-talking-juniper-amd-intel-ibm-cisco-oracle-google/ Mon, 04 Nov 2024 20:15:17 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=45075 On episode 32 of the Datacenter Podcast, hosts Matt, Will, & Paul talk Juniper, AMD & Intel, IBM, Cisco, and more

The post Datacenter Podcast: Episode 32 – Talking Juniper, AMD & Intel, IBM, Cisco, Oracle, Google appeared first on Moor Insights & Strategy.

On this week’s episode of the MI&S Datacenter Podcast, hosts Matt, Will, and Paul analyze the week’s top datacenter and datacenter edge news. This week they are talking Juniper, AMD & Intel, IBM, Cisco, and more!

Watch the video here:

Listen to the audio here:

2:36 Juniper’s Mining For GenAI Gold
10:49 What To Make Of Semiconductor Earnings?
18:42 Granite 3.0 Rocks
26:43 Open The Cisco AI POD Bay Doors Hal
34:59 OCI’s Unique Path To Growth
45:01 Dry Watermarks For AI
50:14 Our Top 3 List

Juniper’s Mining For GenAI Gold
https://x.com/WillTownTech/status/1852296047806066826

What To Make Of Semiconductor Earnings?
https://www.linkedin.com/feed/update/urn:li:activity:7257130388483919872/
https://www.intc.com/news-events/press-releases/detail/1716/intel-reports-third-quarter-2024-financial-results

Granite 3.0 Rocks
https://www.forbes.com/sites/moorinsights/2024/10/25/ibms-new-granite-30-ai-models-show-strong-performance-on-benchmarks/

Open The Cisco AI POD Bay Doors Hal
https://x.com/WillTownTech/status/1851493750461075694

OCI’s Unique Path To Growth
https://www.linkedin.com/feed/update/urn:li:activity:7255962768703434752/

Dry Watermarks For AI
https://www.nature.com/articles/d41586-024-03462-7

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 32 – Talking Juniper, AMD & Intel, IBM, Cisco, Oracle, Google appeared first on Moor Insights & Strategy.

Analyzing AMD’s Next-Generation CPU, GPU And DPU https://moorinsightsstrategy.com/analyzing-amds-next-generation-cpu-gpu-and-dpu/ Tue, 29 Oct 2024 20:41:03 +0000 https://moorinsightsstrategy.com/?p=44383 With its new releases at the Advancing AI 2024 event, AMD has positioned itself as a legitimate competitor to Nvidia in enterprise and hyperscaler AI in the datacenter

The post Analyzing AMD’s Next-Generation CPU, GPU And DPU appeared first on Moor Insights & Strategy.

Lisa Su, chairwoman and CEO of Advanced Micro Devices (AMD), delivers the opening keynote speech at Computex 2024 in Taipei on June 3, 2024. (Photo by I-HWA CHENG/AFP via Getty Images)

AMD held its Advancing AI 2024 event last week, where it launched its latest datacenter silicon—the 5th Generation EPYC processor (codenamed “Turin”) and the MI325X AI accelerator. On the networking front, the company introduced Pensando Salina and Pensando Pollara to address front-end and back-end networking, respectively. As the silicon market gets hotter and hotter, AMD’s launches have become increasingly anticipated. Let’s dig into what AMD launched and what it means for the industry.

AI Is Still Top Of Mind

For those who thought the AI hype cycle was at its peak, guess again. This trend is stronger than ever, and with good reason. As the AI market starts to move from frontier models and LLMs to operationalizing AI in the enterprise, virtually every IT organization is focused on how to best support these workloads. That is, how does IT take a model or models, integrate and tune them using organizational data and use the output in enterprise applications?

Further, organizations that have already operationalized AI to some degree are now exploring the concept of agentic AI, where AI agents learn from each other and become smarter. This trend is still a bit nascent, but we can expect it to grow rapidly.

The point is that AI in the enterprise is already here for many companies and right around the corner for many more. With this comes the need for compute platforms tailored for AI’s unique performance requirements. In addition to handling traditional workloads, CPUs are required to handle the AI data pipeline, and GPUs are required to perform the tasks of training and inference. (CPUs can also be used to perform the inference task.)

Because of this, AI silicon market leader Nvidia has designed its own CPU (Grace) to tightly integrate and feed its GPUs. While the company’s GPUs, such as Hopper and Blackwell, will run with any CPU, their tight integration with Grace is designed to deliver the best performance. Similarly, Intel has begun to enter the AI space more aggressively as it builds tight integration among its Xeon CPUs, Gaudi AI accelerators and forthcoming GPU designs.

For AMD, the integration of CPU with GPU (and GPUs connected by DPUs) is the company’s answer to the challenges faced by enterprise IT and hyperscalers alike. This integration accelerates the creation, cleansing, training and deployment of AI across the enterprise.

5th Gen EPYC Scales Out And Scales Up

To meet the entire range of datacenter needs, AMD designed two EPYC Zen 5 cores—the Zen 5 and Zen 5c. The Zen 5, built on a 4nm process, is the workhorse CPU designed for workloads such as database, data analytics and AI. The Zen 5c is designed with efficiency in mind. This 3nm design targets scale-out cloud and virtualized workloads.

Zen 5 and Zen 5c address the range of datacenter workloads. (Image: AMD)

AMD has held a performance leadership position in the datacenter throughout the last few generations of EPYC. There are more than 950 cloud instances based on this CPU, and the reason is quite simple. Thanks to AMD’s huge advantages in terms of number of cores and performance of those cores, cloud providers can put more and more of their customers’ virtual machines on each server. Ultimately, this means the CSP can monetize those servers and processors in a much more significant way.

In the enterprise, even though servers are a budget line item instead of a contributor to revenue (and margin), the math still holds: those high-core-count servers can accommodate more virtual machines, which means less IT budget goes to infrastructure so that more can go to other initiatives like AI.
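To make the consolidation math concrete, here is a back-of-the-envelope sketch. The core counts, VM sizes, and overcommit ratio below are illustrative placeholders I chose, not vendor sizing guidance.

```python
# Illustrative only: all numbers below are hypothetical, not vendor guidance.
vms_needed = 2_000        # total virtual machines to host
vcpus_per_vm = 4          # vCPUs allocated per VM
overcommit = 2.0          # vCPU-to-physical-core overcommit ratio

def servers_required(cores_per_server: int) -> int:
    vms_per_server = int(cores_per_server * overcommit // vcpus_per_vm)
    # Round up: a partial server is still a server you have to buy.
    return -(-vms_needed // vms_per_server)

for cores in (64, 128, 192):   # hypothetical per-server core counts
    print(f"{cores}-core servers needed: {servers_required(cores)}")
```

The point of the sketch is simply that raising cores per socket shrinks the server count, and with it the attached power, cooling, licensing, and rack space, roughly in proportion; that is where the freed-up budget comes from.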

Having lots of cores doesn’t mean anything if they don’t perform well. In this regard, AMD has also delivered with Turin. Instructions per cycle (IPC) measures how many instructions a chip can process in each clock cycle, which makes it a useful gauge of how performant and efficient a CPU core is. The fact that Turin has been able to deliver large, double-digit percentage increases in IPC over its predecessor is significant.
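A quick, simplified model shows why that matters: single-core throughput is roughly IPC multiplied by clock frequency, so an IPC gain lifts performance even when clocks stay flat. The figures below are placeholders for illustration, not measured Turin numbers.

```python
# Rough model: instructions per second ≈ IPC x clock frequency.
# All values are illustrative placeholders, not measured EPYC results.
def throughput_ips(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz * 1e9

baseline = throughput_ips(ipc=5.0, clock_ghz=4.5)
new_gen  = throughput_ips(ipc=5.0 * 1.15, clock_ghz=4.5)  # +15% IPC, same clock

print(f"Generational uplift: {new_gen / baseline - 1:.0%}")  # -> 15%
```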

Turin delivers strong generational performance growth. (Image: AMD)

How Does 5th Gen EPYC Stack Up Against Xeon 6?

Because the new EPYC launched a couple of weeks after Intel’s Xeon 6P CPU (see my deep analysis on Forbes), we haven’t yet seen head-to-head comparisons in terms of performance. However, we can do a couple of things to get a feel for how EPYC and Xeon compare. The first is to look at the side-by-side “billboard” specifications. When comparing these chips for scale-out workloads, the 5c CCD-based CPUs have up to 192 cores with 12 DDR5 memory channels (6,400 MT/s) and 128 lanes of PCIe Gen 5.

By comparison, Intel’s Xeon 6E (efficiency core) scales up to 144 cores with 12 DDR5 memory channels and 96 lanes of PCIe Gen 5. However, in the first quarter of 2025, Intel will launch its second wave of Xeon 6E, which will scale up to 288 cores.

It’s clear that on the performance side of the equation, EPYC and Xeon are close on specs—128 cores, 12 channels of memory and lots of I/O (128 lanes of PCIe for EPYC, 96 for Xeon). Here are some of the differences between the two:

  • EPYC now supports AVX-512 natively, boosting its use of this advanced vector extension, which will improve its HPC performance considerably.
  • Xeon 6P supports multiplex ranked memory (MRDIMM) that can boost memory throughput to 8,800 MT/s. So far, I have not seen that AMD is supporting this. To be clear, MRDIMM will not be used for traditional datacenter workloads.
  • EPYC can reach clock speeds of up to 5 gigahertz—a big boost for some HPC and AI workloads.
  • Xeon 6P has discrete accelerators integrated into its compute complex to speed up workloads such as AI and database.

Below are the many benchmarks that AMD provided to demonstrate Turin’s performance. I show this because the SPEC suite of benchmarks most closely and objectively measures a CPU’s core performance. In this test, the 5th Gen EPYC significantly outperforms the 5th Gen Xeon.

SPECrate 2017 comparison (Image: AMD)

As I always say with any benchmark a vendor provides, take these results with a grain of salt. In the case of this benchmark, the numbers themselves are accurate. However, Xeon’s performance took a significant leap between 5th Gen and Xeon 6P, making it hard to truly know what the performance comparison looks like until both chips can be independently benchmarked. Mind you, AMD couldn’t test against Xeon 6P, so I do not fault the company for this. However, I’d like to see both companies perform this testing in the very near future.

Is The Market Responding To EPYC?

The market is responding positively to EPYC, no doubt about it. In fact, in the five generations that EPYC has been on the market, AMD’s datacenter CPU share has climbed from less than 2% to about 34%. Given the slow (yet accelerating) growth of EPYC in the enterprise, this tells me that the CPU’s market share just for the cloud and hyperscale space must be well north of 50%. Indeed, Meta recently disclosed that it has surpassed 1.5 million EPYC CPUs deployed globally—and that’s before we get to the CSPs.

I expect that Turin will find greater adoption in the enterprise datacenter, further increasing EPYC’s market share. In the last couple of quarters, I’ve noticed AMD CEO Lisa Su saying that enterprise adoption is beginning to accelerate for EPYC. Additionally, the rising popularity of the company’s Instinct MI300X series GPUs should help EPYC deepen its appeal. Which brings us to our next topic.

Instinct MI325X And ROCm 6.2 Close The Gap With Nvidia

While we look to the CPU to perform much of the work in the AI data pipeline, the GPU is where the training and inference magic happens. The GPU’s architecture—lots of little cores that enable parallelism, combined with high-bandwidth memory and the ability to perform matrix multiplications at high speeds—delivers efficiency. Combined with optimized libraries and software stacks, these capabilities make for an entire AI and HPC stack that developers and data scientists can employ more easily.
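To picture the kind of work being described, the snippet below runs a single large matrix multiplication, the core primitive behind training and inference, on whatever accelerator PyTorch can see, falling back to the CPU otherwise. It is a generic illustration of the workload shape, not AMD- or Nvidia-specific code.

```python
import torch

# Use a GPU if PyTorch can see one; otherwise run the same math on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# One transformer-style projection: (tokens, hidden) @ (hidden, hidden).
x = torch.randn(4096, 8192, device=device)
w = torch.randn(8192, 8192, device=device)

y = x @ w   # a dense matmul -- the operation GPU cores and HBM are built to feed
print(y.shape, "computed on", device)
```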

While Nvidia has long been the leader in the HPC and AI space, AMD has quietly made inroads with its Instinct MI300 Series GPUs. Launched at the inaugural Advancing AI event in 2023, the MI300X presented the first legitimate alternative to the Nvidia H100 and H200 GPUs for AI training through a combination of its hardware architecture and ROCm 6.0 software stack (competing with Nvidia’s CUDA).

Over the following few quarters, AMD went on to secure large cloud-scale wins with the likes of Meta, Microsoft Azure, Oracle Cloud Infrastructure and the largest independent cloud provider, Vultr, to name a few. This is important because these cloud providers modified their software stacks to begin the effort of supporting Instinct GPUs out of the box. No more optimizing for CUDA and “kind of” supporting ROCm—this is full-on native support for the AMD option. The result is training and inference on the MI300 and MI325 that rival Nvidia’s H100 and H200.

Introducing the Instinct MI325X is the next step for closing the gap on Nvidia. This GPU, built on AMD’s CDNA 3 architecture and boasting 256GB of HBM3E memory, claims to deliver orders of magnitude better performance over the previous generation as well as leadership over Nvidia.

MI325X specifications (Image: AMD)

As mentioned, hardware is only part of the equation in the AI game. A software stack that can natively support the most broadly deployed frameworks is critical to training models and operationalizing AI through inference. On this front, AMD has just introduced ROCm 6.2. With this release, the company is making bold claims about performance gains, including a doubling of performance and support for over a million models.
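For developers, “native support” largely means existing framework code runs unchanged. As one small illustration (a sketch of how PyTorch’s ROCm builds generally behave, not anything specific to ROCm 6.2), AMD GPUs are exposed through the same device API used for Nvidia hardware, so a portability check can be as simple as:

```python
import torch

# On a ROCm build of PyTorch, AMD Instinct GPUs appear through the familiar
# torch.cuda interface, so most framework-level code needs no changes.
if torch.cuda.is_available():
    print("Accelerator:", torch.cuda.get_device_name(0))
    print("HIP runtime:", torch.version.hip)  # a version string on ROCm builds, None on CUDA builds
else:
    print("No GPU visible; running on CPU.")
```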

ROCm 6.2 improvements (Image: AMD)

AMD Pensando Salina DPU And AMD Pensando Pollara 400 NIC

Bringing it all together is networking, which requires both connecting AMD’s AI cluster to the network and connecting all of this AI infrastructure on the back end. First, the company introduced its third-generation DPU—the Pensando Salina. Salina marries high-performance network interconnect capabilities and acceleration engines aimed at providing critical offload to improve AI and ML functions. Among the new enhancements are 2x400G transceiver support, 232 P4 match processing units, 2x DDR5 memory and 16 Arm Neoverse N1 cores.

Combined, these features should facilitate improved data transmission, enable programming for more I/O functions and provide compute density and scale-out—all within a lower power-consumption envelope—for hyperscale workloads. AMD claims that Salina will provide a twofold improvement in overall performance compared to its prior DPU generations; if it delivers on this promise, it could further the company’s design wins with public cloud service providers eager to capitalize on the AI gold rush.

Second, the AMD Pensando Pollara 400 represents a leap forward in the design of NICs. It is purpose-built for AI workloads, with an architecture based on the latest version of RDMA that can directly connect to host memory without CPU intervention. AMD claims that this new NIC, which employs unique P4 programmability and supports 400G interconnect bandwidth, can provide up to 6x improvement in performance when compared to legacy solutions using RDMA over Converged Ethernet version 2. Furthermore, the Pollara 400 is one of the industry’s first Ultra Ethernet-ready AI NICs, supported by an open and diverse ecosystem of partners within the Ultra Ethernet Consortium, including AMD, Arista, Cisco, Dell, HPE, Juniper and many others.

AMD’s new NIC design could position it favorably relative to Broadcom 400G Thor, especially since the company is the first out of the gate with a UEC design. Both the Salina DPU and Pollara 400 NIC are currently sampling with cloud service and infrastructure providers, with commercial shipments expected in the first half of 2025.

Putting It All Together

One of the understated elements of AMD’s AI strategy is its acquisition of Silo AI. This Finnish company, the largest private AI lab in Europe, is filled with AI experts who spend all their time helping organizations build and deploy AI.

When looking at what AMD has done over the last year or so, it has built an AI franchise by bringing all of the critical elements together. At the chip level, the company delivered 5th Gen EPYC for compute, MI325X for GPU and Salina and Pollara for front-end and back-end networking. ROCm 6.2 creates the software framework and stack that enables the ISV ecosystem. The acquisition of ZT Systems last month delivers rack-scale integration that Silo AI can use to deliver the last (very long) mile to the customer.

In short, AMD has created an AI factory.

What Does All This Mean?

As I say again and again in my analyses of this market, AI is complex—and even that is an understatement. Different types of compute engines are required to effectively generate, collect, cleanse, train and use AI across hyperscalers, the cloud and the enterprise. This translates into a need for CPU, GPU and DPU architectures that are not only complementary, but indeed optimized to work with one another.

Over time, AMD has acquired the pieces that enable it to deliver this end-to-end AI experience to the market. At Advancing AI 2024, the company delivered what could be called its own AI factory. It is important to note that this goes beyond simply providing an alternative to Nvidia. AMD is now a legitimate competitor to Nvidia.

At the same time, AMD demonstrated a use for all of this technology outside of the AI realm, too. With the new EPYC, it has delivered a generation of processors that demonstrates continued value in the enterprise. And in the MI325X, we also see excellent performance across the HPC market.

Here is my final takeaway from the AMD event: The silicon market is more competitive than ever. EPYC and Xeon are both compelling for the enterprise and the cloud. On the AI/HPC front, the MI325X and H100/H200/B200 GPUs are compelling platforms. However, if I were to create a Venn diagram, AMD would be the only company strongly represented in both of these markets.

Game on.

The post Analyzing AMD’s Next-Generation CPU, GPU And DPU appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending October 25, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-25-2024/ Mon, 28 Oct 2024 18:16:58 +0000 https://moorinsightsstrategy.com/?p=43721 MI&S Weekly Analyst Insights — Week Ending October 25, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending October 25, 2024 appeared first on Moor Insights & Strategy.


Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.

AI Sidekick in Miro Innovation Workspace
A screenshot of the AI sidekick in Miro Innovation Workspace.

Our analyst Melody Brue joined the firm in 2020 to cover fintech, but her versatility means that she is now responsible for other areas of enterprise software, plus our “Modern Work” practice—the tools, policies, and strategies that affect things like working from home (or being required to RTO). Last week she wrote a detailed analysis of the new AI-driven Innovation Workspace product from Miro. Fittingly, to round out her coverage of this collaboration app, she worked with our enterprise software development analyst, Jason Andersen, who put Innovation Workspace through its technical paces. We’re always on the lookout for opportunities like this to draw on the different flavors of expertise of Moor Insights & Strategy analysts.

Last week, Patrick and Will were in gorgeous Maui at Qualcomm’s Snapdragon Summit, Melody was back in Florida for Cisco’s WebexOne, and Matt attended the RISC-V Summit virtually.

This week, Patrick will be at AWS HQ in Seattle for an exclusive re:Invent preview, then off to Los Angeles for the analyst session for Cisco’s Partner Summit. Mel will be tuning into SAP SuccessConnect Virtual. Jason (virtually) and Matt will be attending the Red Hat Analyst Day in Boston. Will is off to Riga to spend Halloween at 5G Techritory.

After several weeks of back-to-back travel, our team is looking forward to a brief respite at our desks and home offices as we head into the holidays. We’d love to connect with you over the next several weeks to touch base and discuss your plans for 2025 and beyond. Reach out to schedule some time!

Our MI&S team published 21 deliverables:

Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with our thoughts on IBM, Qualcomm & Arm, Google, Nvidia, the CHIPS Act, and more. Patrick was a guest on Prof G Markets to break down the state of play in the chip industry. He also made appearances on CNBC Power Lunch and Yahoo! Finance.

MI&S Quick Insights

Last week IBM hosted its TechXchange event in Las Vegas. While there has been a lot of attention on the new Granite models (which was the big announcement), the show itself was pretty interesting. Now in its second year, TechXchange focuses on IBM technology practitioners. So by intention, it’s not going to have major product announcements. It also had two other notable differences. First was an incredibly broad span of technologies. There were sessions on the latest AI innovations and DevOps tools, but also quite a bit on IBM mainframes and older technologies such as DB2. Second, IBM very intentionally managed multiple programs to better engage this important part of the IBM community. For instance, IBM has a program to name IBM Champions, who are peer-nominated customer ambassadors. The Champions were easy to find because they had special blue sweatshirts, and there was programming just for them. As an ex-IBM salesperson, I always found that the internal technical sponsor was a big key to growing IBM footprint at a given client. So, I think IBM increasing its focus on that community is a really smart move.

While at TechXchange, I also met Dave Nielsen from IBM, who is a leader of the AI Alliance that IBM co-chairs with Meta. As two of the biggest open source players in AI, I was quite interested to hear how IBM and Meta were collaborating on how to make AI more transparent and safe. I’m looking forward to a follow-up conversation on this soon.

I also had an opportunity to check out UiPath’s Forward and TechEd shows, which ran concurrently last week. UiPath has been a leader in business process automation and robotic process automation for some time now. CEO Daniel Dines is not unique in pitching generative AI as a way to expand the UiPath universe. What is unique, though, is how many pieces its existing platform already has that could truly accelerate agentic programming. In fact, I was introduced to UiPath after publishing my article on AI Agents last month in Forbes. UiPath is certainly a candidate for the first-mover status I mention in the piece. Also, I was quite taken with the leadership at UiPath. After meeting SVP and general manager Mark Geene and CTO Ragu Malpani, I saw a team that was open, collaborative, and in many ways humble in its approach. This is not something I get to see every day.

As mentioned in the headnote for this week’s updates, last week my colleague Melody Brue (with some help from me) published this great piece on Miro Innovation Workspace on Forbes. I got to try out the product, using an application development process as a test run of its functionality. Quite frankly, I was impressed with how much the GenAI helped me create a better set of deliverables and how Miro could help overall. While methods like agile and extreme programming push a code-first agenda, I find that in many cases there is still value in taking some time for product owners within the organization to develop a cohesive set of documents to help drive purpose and customer sentiment to the development teams. Now, I am not talking about hundreds of pages of requirements like we see in older methods, but something in between—maybe even just a sprint timeframe for POs and architects to collaborate and get the team on board. What’s nice is that Miro can improve and accelerate that Sprint 0 activity and then feed the data into familiar project management tools like Jira or Monday. It’s worth a look if you are encountering teams that are overly focused on tickets and not the journey itself.

IBM unveiled the third generation of Granite LLMs—the Granite 3.0 models—featuring Granite 3.0 2B Instruct and Granite 3.0 8B Instruct. These open-source models were trained on 12 trillion tokens in 12 human languages and 116 programming languages. The 3.0 models can be used for RAG, summarization, entity extraction, and editing. According to IBM, by the end of 2024 Granite 3.0 models will be capable of understanding documents, interpreting charts, and answering questions about a GUI or product screen.
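Because the weights are open, trying a Granite 3.0 model locally takes only a few lines with Hugging Face Transformers. The snippet below is a minimal sketch; the repository identifier and generation settings are my assumptions for illustration, so confirm the exact names on IBM’s model cards.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id -- check IBM's model card for the exact name.
model_id = "ibm-granite/granite-3.0-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Extract the company names: IBM and Meta co-chair the AI Alliance."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```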

Agentic use cases are a new class of capability. Agents can proactively identify needs, utilize tools, and initiate actions without human intervention. This advancement marks a significant step forward in the functionality and autonomy of IBM’s language models.

Anthropic announced some important model upgrades—Claude 3.5 Sonnet and Claude 3.5 Haiku, plus a new “computer use” feature. Anthropic has improved Claude 3.5 Sonnet’s coding and tool use. Claude 3.5 Haiku has better performance, but at the same cost and speed as the previous version. With “computer use,” AI can control computer interface actions such as cursor control and typing information with human-like precision. These improvements show even more ways that AI has the potential to automate complex tasks and improve productivity.

I shared a car service last week with a gentleman from a competing analyst firm. This man, who is based in Latin America, was talking about how he would be on the road for four straight weeks covering Huawei (yes, that Huawei) events. This got me thinking a little bit. I remember that when I first started at MI&S, Huawei was a company that most analyst firms covered—and for good reason. It had solutions that fit in every technology category: silicon, hardware, software, cloud; client, server, networking, storage, and phone; consumer and commercial; and even consulting services. I was invited to the company’s analyst meeting and walked away fully in awe of the breadth and depth of its portfolio.

Fast forward seven or so years and Huawei is everywhere—except the U.S. and Canada. If you chat with U.S. government officials or major U.S. OEMs, you’ll hear that this all has to do with the company being under the control of the Chinese government. If you ask U.S. IT executives, you’ll hear much of the same.

Is this concern real? Or is there more to the story? I think it’s the latter. First, there was the controversy about—and banning of—China-based ZTE Systems in the United States after the company was found to be illegally shipping technology to Iran and North Korea. After that, there was a very strong anti-Chinese sentiment in the United States (and some Western European countries) with regards to technology. In that vein, we eventually saw server manufacturer Inspur get placed on the U.S. Entity List and get hit with a patent infringement lawsuit.

Second, I believe U.S.-based server vendors saw a real threat from Huawei and put a lot of money into lobbying against the company in D.C. What better way to protect against the market threat of this company than to amplify a “national security threat” concern with lawmakers and policy folks?

I am not an advocate of Huawei, and I’m not a geopolitical expert. However, when a product or technology is used pretty much everywhere except in one or two countries—I figure something is amiss.

Can’t we all just get along? There is a lot of noise around the Arm and Qualcomm licensing dispute. And there are a lot of opinions. The licensing issue originates with Qualcomm’s acquisition of Nuvia—a company that included many ex-Apple chip designers focused on developing Arm-based chips to compete with x86. Effectively, they wanted to create a commercial version of the M-Series chips that we see in the Apple MacBook. Here’s the rub: Arm says that the Qualcomm acquisition nullified the Nuvia architectural license. So, in effect, those Oryon cores that are inside the Snapdragon CPU should not be shipping to Dell, HP, Lenovo, Microsoft, and others. And oh yeah, there is more Nuvia-derived IP that hasn’t yet come to market but could also be considered in violation of the architectural license.

Qualcomm says “no way” and that Arm is employing anti-competitive practices.

It’s easy for pundits to point fingers and pick a side. But these architectural licenses are very complex—and very specific. Enough so that they are far beyond my (and most folks’) cursory understanding of IP law and the specifics of this agreement.

Here’s what I do know. Regardless of what the outcome of a court case may be, nobody wins here. If Arm follows through with its termination of the agreement—nobody wins. If Qualcomm prevails on its own terms—nobody wins. There has to be a settled agreement between the two parties, one in which both walk away feeling good about the relationship and in which neither feels overly emboldened. This isn’t about only Arm Holdings or only Qualcomm winning. It’s about Arm maintaining a strong market position relative to x86. This licensing issue dragging out and becoming overly burdensome on one side or not profitable enough on the other side is going to have a long-lasting negative impact across the entire Arm market.

McAfee and Yahoo News are teaming up to fight deepfakes in the news with an AI-powered detection tool. This tool, driven by McAfee Smart AI, analyzes images for signs of AI manipulation and flags them for review by Yahoo’s editors. This effort is similar in spirit to Adobe’s Content Authenticity Initiative, which allows creators to attach “nutrition labels” to their digital content, providing details about how it was created and edited. While Adobe’s initiative promotes transparency across various digital media, McAfee focuses on protecting the integrity of news media, a critical area of concern as deepfakes become increasingly sophisticated. This collaboration could prove essential to preserving trust and credibility in the news, especially during critical events where misinformation can have significant consequences.

Both McAfee and Adobe are utilizing AI to combat misinformation and foster trust in digital content, but their approaches differ. McAfee’s new partnership with Yahoo News focuses specifically on detecting deepfakes in news media, while Adobe’s Content Authenticity Initiative aims for broader transparency across various digital content through “nutrition labels” detailing creation and editing history. This difference reflects their specific priorities and target audiences. However, with growing support for content authenticity across the board, I suspect that a collaboration between the two tech companies could be on the horizon, potentially leading to even more robust solutions for verifying digital content.

Jira, long known for helping software developers manage projects, has been branching out to support marketing teams. This is happening at a time when marketing workflows are becoming increasingly complex, with higher-velocity campaigns, more stakeholders, and a wider range of channels to manage. This complexity makes staying organized and on track more challenging than ever. Jira’s tools should help marketers streamline their work, improve communication, and track progress more effectively. This push to be more than a dev tool is evident in its new “Jira for Marketers” series, a live learning session set to show marketing professionals how to use Jira to manage campaigns, content creation, and events. It’s a smart move by Jira, recognizing that many teams, not just software engineers, can apply agile development and project management principles.

Zoho announced a partnership with Nvidia to boost its AI capabilities, specifically in developing and deploying LLMs for its business software. This significant move for Zoho shows its commitment to providing robust, business-focused AI solutions. Zoho plans to use Nvidia’s AI Enterprise software and accelerated computing platform to build LLMs tailored for various business needs, focusing on privacy and providing contextually relevant information. The company aims to help businesses see a fast return on their investment by using AI to speed up operations and reduce delays. This partnership allows Zoho to accelerate its LLM deployment and optimize performance. It’s clear that Zoho is serious about AI and is making strategic moves to become a leader in enterprise AI for its market.

IBM has acquired Prescinto, a company that makes software for renewable energy asset management. The acquisition will enhance IBM’s Maximo Application Suite by enabling it to manage renewable assets such as solar panels and wind turbines. This expansion adds to Maximo’s existing enterprise asset management (EAM) capabilities for managing physical assets such as buildings and infrastructure, inventory, work orders, and maintenance. Prescinto’s AI-driven tools should improve Maximo’s analytics and predictive maintenance features, enabling users to optimize the management of renewable energy assets. By consolidating renewable energy asset management into Maximo, IBM aims to simplify customers’ operations by eliminating the need for separate systems.

Spirent recently published a report highlighting the opportunity for Ethernet to benefit from the growing adoption of next-generation AI applications. It’s not a surprising conclusion, especially given recent efforts by AMD in productizing its silicon for Ultra Ethernet Consortium network interface cards that utilize RoCEv2 to power back-end data center interconnect fabrics. (For more details on that, see my recent research brief that covers the AMD NIC.)

SAP has launched its Sustainability Data Exchange, a SaaS application designed to help enterprises achieve their net-zero goals by enabling standardized carbon data sharing across supply chains. Gunther Rothermel, chief product officer and co-general manager of SAP Sustainability, said, “Managing carbon to accelerate a net-zero future makes measurability critically important. That is where technology and innovation can make a real difference. With SAP Sustainability solutions and our ERP-centric, cloud-based, AI-enabled approach, we support our customers to use integrated sustainability data and embed it holistically into their core business processes.”

One specific advantage of this SAP application is that it assists enterprises in transitioning from estimates to actual emissions data. The platform ensures accurate carbon footprint tracking by integrating with SAP’s ERP ecosystem and supporting industry standards such as Catena-X and PACT. With sustainability being a key focus for many businesses in 2025, this solution demonstrates SAP’s commitment to providing an intelligent sustainability platform for enterprises. You can read more on this in my upcoming article on sustainability practices enabled by ERP.

SAP also reported strong financial Q3 2024 results, demonstrating growth with its cloud-based ERP solutions. Total revenue increased 9% year-over-year to €8.47 billion, with cloud revenue rising 25% to €4.35 billion. The Cloud ERP Suite saw a 34% revenue increase, indicating its continuing appeal for helping businesses manage operations more efficiently in the cloud. The company’s operating profit and free cash flow also grew, improving by 29% and 44%, respectively. SAP has made a big commitment to supporting customers’ digital transformation efforts, an approach enhanced by its acquisition of WalkMe (which has already begun to contribute to the company’s backlog of cloud business). Meanwhile, SAP’s overall results show the success of its strategic efforts to shift customers towards cloud-based ERP systems.

Microsoft has launched autonomous agents for Microsoft Copilot Studio and Microsoft Dynamics 365, the company’s ERP/CRM platform. Microsoft has introduced ten new AI agents for Dynamics 365, focusing on sales, customer service, finance, and supply chain operations. These agents are designed to automate routine tasks, improve workflows, and increase efficiency. Of course, there are challenges, including ensuring data security and privacy, integrating with current systems, and maintaining accuracy.

Agents are a key advancement for Microsoft Dynamics 365 Finance and Supply Chain, bringing flexibility to data management and task execution. The company says that the supply chain agents can help identify bottlenecks and disruptions, suggest improvements, and optimize order fulfillment. AI agents can support payment processing and compliance in finance and provide real-time data for better financial planning. More to come on this topic.

Call of Duty: Black Ops 6 is out, and I’ve already had a chance to play it. What makes this year’s version stand out isn’t that it’s a particularly new or exciting game—it’s the fact that the game is available for the first time via Microsoft’s monthly game subscription service, Xbox Game Pass. This service is how Microsoft plans to continue to grow its gaming business into a recurring revenue source, and in that context Call of Duty was a big part of why Microsoft paid $69 billion for Activision Blizzard, which makes the game.

That said, the game still has to be attractive and exciting enough to get people to keep subscribing, and I believe that Microsoft could ultimately convince gamers of the value of Game Pass—with its many titles—rather than just buying an individual game. I haven’t played the new Call of Duty enough to decide whether it’ll be a success or a flop, but I do know that a lot of people are playing it right now. If they still are in a few weeks, that’ll be a good sign for Microsoft.

AWS announced the end-of-life of AWS IoT Device Management Fleet Hub, a service that sits on top of AWS IoT Device Management and overlaps some features in AWS IoT Console. Conceptually, it’s a dashboard generator that creates applications (“single panes of glass”) through which customers can monitor large numbers of IoT devices. AWS will permanently decommission Fleet Hub a year from now, on October 18, 2025.

I see two reasons for this abrupt EOL. (1) Application generators have evolved considerably since Fleet Hub’s launch four years ago, so it’s time to refresh or rewrite. (2) Extreme device diversity requires extensive (and brittle) interface customization, which often costs more than the benefits of orchestration. The EOL press release says it this way: “As technology and customer needs continue to evolve, we have made a decision to discontinue the feature.” In other words, the solution needs to be rewritten, and doing so doesn’t make sense because the device interface diversity problem remains unsolved.

This is the right decision for AWS. Device diversity should be managed and simplified in IoT middleware, not the user-facing application layer. Smarter middleware with simple APIs that enable high-level device management makes fleet management practical. My take: Efficient IoT device orchestration requires a consistent model for middleware. A few visionary companies are already working on that, and I’ll offer a deeper analysis in an upcoming paper.
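To make that middleware argument concrete, here is a minimal sketch of the kind of device-agnostic abstraction I have in mind; the class and method names are entirely hypothetical, not an AWS (or anyone else’s) API. Per-device quirks live in adapters, and the fleet-level application only ever sees a uniform interface.

```python
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Hides one device family's protocol quirks behind a uniform interface."""

    @abstractmethod
    def read_telemetry(self) -> dict: ...

    @abstractmethod
    def apply_config(self, config: dict) -> None: ...

class LegacyModbusSensor(DeviceAdapter):
    def read_telemetry(self) -> dict:
        # Translate vendor-specific register reads into normalized fields (stubbed).
        return {"temp_c": 21.5, "status": "ok"}

    def apply_config(self, config: dict) -> None:
        pass  # map normalized settings onto device-specific registers

def fleet_health(devices: list) -> dict:
    # The fleet application never touches device-specific details.
    readings = [d.read_telemetry() for d in devices]
    return {"total": len(readings),
            "healthy": sum(r["status"] == "ok" for r in readings)}

print(fleet_health([LegacyModbusSensor(), LegacyModbusSensor()]))
```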

This week, Honeywell and Google Cloud announced a collaboration project focused on developing AI-based solutions to two big problems common to many industries: (1) A looming talent shortage and skills gap in the industrial sector, and (2) on-device AI for autonomous operation. Honeywell and Google aim to address the talent gap by making industrial processes increasingly autonomous, leveraging Google’s Vertex AI to customize, train, and deploy ML models and AI applications.

Suresh Venkatarayalu, Honeywell’s CTO and president of Honeywell Connected Enterprise, says, “We’re moving from automation to autonomy. Our goal is to equip companies with AI agents that assist workers in real time—on factory floors and in the field. With AI running both in the cloud and at the edge, we’re making sure that systems work smarter and more efficiently.” In addition to using autonomy to reduce dependencies on scarce talent and skills, the companies are extending Honeywell Forge, a massive database of industrial knowledge, with Vertex AI and LLMs. The idea is to create AI “coaching” agents that deliver helpful information when and where employees need it.

Industrial processes require continuous operation, even when the Internet is down or cloud services are unavailable. Google’s Gemini Nano addresses this problem by providing AI services at the edge of the network, enabling devices like scanners, cameras, sensors, and controllers to operate autonomously. Honeywell’s first solutions built with Google Cloud AI will hit the market in 2025.

My take: Last year, Siemens and Microsoft made a similar deal. Big industrial suppliers are pairing up with CSPs to accelerate the development of advanced AI-based solutions. This is how AI gets a blue-collar job and starts working for a living.

At the same time Qualcomm launched its newest mobile and automotive platforms in Hawaii last week, Bloomberg reported that Arm decided to terminate Qualcomm’s v8 architectural license to escalate the two companies’ ongoing IP dispute. I consider this to be the nuclear option, and it seems like a very odd move considering that the two are expected to be in court in less than 60 days. I believe that this is a mistake on Arm’s part, especially since Qualcomm is one of its biggest partners, and won’t bode well for how other vendors see Arm. Additionally, the entire RISC-V ecosystem is salivating at the prospect of having a company like Qualcomm backing their efforts, especially considering China’s appetite for RISC-V. I believe that Arm and Qualcomm are mostly fighting over egos rather than a few million dollars here or there for either company. This will hurt the ecosystem that Arm claims to be protecting.

AT&T and Verizon both say they are seeing reduced excitement around the iPhone 16 series, even with generous trade-in offers. People just aren’t sold on Apple Intelligence as a reason to upgrade, especially since Apple’s AI product hasn’t properly launched yet and won’t be fully available until next year. While Apple will absolutely continue to market these features, the reality is that consumers won’t be convinced that they are real until they are all available and functioning outside of beta. That might not be until Q2 of next year, which is why analyst Ming-Chi Kuo has said that Apple has reduced its orders of iPhone 16 by 10 million over the next few quarters.

AST SpaceMobile continues to be on a roll with the news that it has won a contract with the U.S. Government that qualifies it as a Prime Contractor for the US DoD, which enables it to win more federal contracts. Additionally, the company has successfully unfolded its first five commercial satellites, which it had recently launched into LEO with SpaceX. AST SpaceMobile may soon be a viable alternative to Starlink from SpaceX, which also recently announced that it would be delivering commercial direct-to-cell service with T-Mobile by the end of the year.

Cisco launched its new Ceiling Microphone Pro at WebexOne last week, a device designed to enhance audio quality and flexibility in meeting rooms. The microphone uses beamforming technology to capture sound from a specific area, minimizing background noise to ensure clear audio for both in-room and remote participants. It offers unidirectional and omnidirectional modes, adapting to different room sizes and configurations. It is designed for plug-and-play installation, integrates with Cisco’s Room and Board Series endpoints, and can be managed remotely by IT through an administrator portal.
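For readers curious about what beamforming actually does, the classic delay-and-sum form is easy to sketch: each microphone’s signal is delayed so that sound from the chosen direction lines up in time, and the aligned channels are averaged, which reinforces the target and dilutes everything else. This is a generic textbook illustration, not Cisco’s implementation.

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, steer_delays: np.ndarray) -> np.ndarray:
    """signals: (n_mics, n_samples); steer_delays: integer sample delays chosen
    so that sound arriving from the look direction lines up across channels."""
    n_mics, n_samples = signals.shape
    aligned = np.zeros_like(signals)
    for m, d in enumerate(steer_delays):
        aligned[m, d:] = signals[m, : n_samples - d]
    return aligned.mean(axis=0)   # coherent average reinforces the target source

# Toy example: a 1 kHz tone reaches four mics with 0, 1, 2, 3 samples of lag.
fs = 48_000
tone = np.sin(2 * np.pi * 1_000 * np.arange(480) / fs)
mics = np.stack([np.roll(tone, lag) for lag in range(4)])

# Steering delays cancel the arrival lags, so the summed output stays near full amplitude.
out = delay_and_sum(mics, np.array([3, 2, 1, 0]))
print(round(float(np.abs(out).max()), 2))
```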

I saw this microphone firsthand at Cisco’s WebexOne conference in the Miami area. The ceiling-mounted design minimizes clutter and provides a clean aesthetic that looks very modern but not cold. Notably, this microphone is the first product to showcase Cisco’s new design language, which prioritizes sustainability with a soft, organic shape contrasted with sharp, defining lines. It’s constructed entirely of aluminum, a highly recycled and recyclable material, and even the speaker grille is a structural element, not just a cosmetic cover. Cisco also considered the manufacturing process in its design, placing a seam between components on the top surface to make assembly more forgiving. Furthermore, it’s the first Cisco product to ship with zero plastic packaging, reflecting Cisco’s increasing commitment to environmentally conscious product development.

Cisco also launched Workspace Designer, an online tool that uses Cisco’s collaboration technology to simplify the process of planning and equipping meeting rooms. Users can choose from various room sizes and layouts, experiment with different device configurations and furniture placements, and even receive recommendations on the best technology for their needs. The tool aims to reduce the complexity often associated with designing adequate meeting spaces, which previously might have required extensive consultations and a lot of guesswork. Workspace Designer also provides helpful warnings and tips, such as flagging potential speaker or camera placement issues that could impact audio or video quality. This allows users to proactively address potential problems and optimize their meeting rooms for effective communication and collaboration. Cisco’s goal with all of its collaboration devices and technology is to reach what Cisco refers to as “Distance Zero”—where everyone feels no distance from meeting participants, regardless of their location.

Zoom has launched AI Companion 2.0, a significant upgrade to its AI assistant, with expanded capabilities for summarizing meetings and chat threads, generating content like emails and meeting agendas, and automating tasks. AI Companion 2.0 works across various Zoom products, including Team Chat, Whiteboard, Mail, and Meetings. While Zoom’s AI strategy is generally strong, it’s often underestimated. Many users may not fully grasp the sophistication of Zoom’s AI capabilities, even though they use features such as AI summaries and noise cancellation regularly—and those features are outstanding.

In my opinion, Zoom needs to better articulate the value of its AI features, even those offered for no additional charge. By clearly demonstrating the advantages of its AI-powered tools, Zoom can increase user appreciation for these capabilities and potentially drive the adoption of more advanced, paid AI features. This clearer communication is essential for Zoom to fully capitalize on its AI investments and remain competitive.

Apple has announced that this week will be full of Mac news, which I believe will be Apple’s way of releasing the M4 chip and all its variants across desktop, laptop, and mini form factors. This will also give Apple an opportunity to (try to) reassert the M4’s performance leadership over Intel, AMD, and Qualcomm—which I believe that it could do, considering the M4 iPad’s thermal and power limitations. It will likely be an interesting week of cherry-picked benchmarks with questionable scaling and no labels on any graphs. Nevertheless, we’ll get plenty of talk about AI and Apple Intelligence, I’m sure.

IBM has opened its first quantum datacenter in Europe, located in Ehningen, Germany. The datacenter has two quantum computers to support the growing demand from European businesses, research institutions, and government agencies.

The establishment of this datacenter is part of IBM’s broader strategy to advance quantum computing technology and foster a robust quantum ecosystem in Europe. The datacenter will also facilitate compliance with European data sovereignty requirements, ensuring that sensitive data remains within the region. See my full writeup on Forbes for more details.

At its Oktane 2024 conference, Okta announced new capabilities tied to securing generative AI applications. GenAI is poised to reimagine consumer and enterprise applications, but it creates security risks given the use of personal and shared data, underlying algorithms, large language models, API calls, and more. To address these challenges, Okta announced a new product within its Customer Identity Cloud portfolio: Auth for GenAI.

Auth for GenAI enables developers to build next-generation AI agents and applications securely while not introducing unnecessary constraints that could stifle innovation or create a cumbersome customer experience. Okta’s ability to facilitate security by design for GenAI developers is potentially powerful, anchored by its leadership in identity and access management. See my Analyst Insight piece for more details.

Last week Qualcomm announced products in the mobile and automotive categories at its annual Snapdragon Summit. All the announced products leveraged Qualcomm’s new second-generation Oryon CPU-based SoCs. These new chips significantly improve upon the first generation and deliver mind-melting performance improvements north of 40% on CPU and GPU while also bringing real competition to Apple. Qualcomm is also the first vendor to hit 4 GHz on an Arm-based chip for Android. The new Snapdragon 8 Elite mobile processor features the new Oryon CPU cores, as do the new Snapdragon Ride Elite and Cockpit Elite. These products set an entirely new standard for mobile and automotive compute that will heat things up against Apple—and likely find their way into the next generation of Snapdragon X Elite platforms for Windows PCs.

The world of sports technology is constantly evolving, and we knew early on that our Game Time Tech podcast and sports technology advisory practice had to stay ahead of the curve. These days, it’s not just the tech conferences buzzing about sports technology—I’m also starting to see dedicated sports tech tracks emerge at finance conferences. This signals a shift in how these innovations are perceived, especially as operations teams demonstrate how technology can drive operational efficiencies and contribute directly to the bottom line. With CFOs increasingly recognizing the financial benefits, things are about to get even more interesting. The days of marketing teams lobbying for sports sponsorship dollars or IT teams justifying tech spend might be over. I’m excited to see how finance and accounting teams drive this next wave of sports innovation as the ROI of these technologies becomes increasingly apparent. Stay tuned for a finance-focused GTT pod coming soon!

Meta’s new Meta Quest back-of-shirt partnership with Wrexham AFC—both the men’s and women’s teams—should bring some exciting opportunities for fans. Through virtual reality, supporters can enjoy virtual stadium tours, behind-the-scenes views, and interactive gameplay. This collaboration opens new avenues for fans to engage and connect online and potentially in person using Meta’s cutting-edge VR technology.

“We’re so excited to welcome Meta Quest as our back-of-shirt sponsor,” said Wrexham AFC co-chairmen Rob McElhenney and Ryan Reynolds. “Meta Quest allows you to immerse yourself in new worlds and experiences and is all about connection—something that resonates with us at Wrexham AFC.”

Meta’s involvement further enhances Wrexham’s visibility, aligning with the club’s growing fanbase following the popular Welcome to Wrexham series on FX. Wrexham supporters may also get to benefit from special offers, such as discounts on Meta Quest headsets.

AT&T and T-Mobile published their respective 3Q 2024 earnings this week. AT&T continues to build momentum for its fiber franchise with an impressive 19 consecutive quarters of 200,000 or more net adds. Broadband continues to be a bright spot for the company, balancing flat mobility top-line revenue. I also expect that AT&T’s relationship with AST SpaceMobile will facilitate monetization of new rural mobility applications in agriculture technology as that commercial low earth orbit (LEO) satellite constellation matures.

T-Mobile continues its impressive financial performance, buoyed by significant net income growth. A key contributor is its 5G fixed wireless access business. I also believe the company will enjoy continued revenue upside in broadband services as it readies an aggressive push with fiber.

Citations

IBM / Granite 3.0 / Patrick Moorhead / Fierce Network
IBM’s new generation of models carves a path for open-source AI

IBM / Granite 3.0 / Patrick Moorhead / Info World
IBM works to address the developer skills gap with AI

IBM / Granite 3.0 / Patrick Moorhead / TechTarget
IBM launches new generation Granite language model

Intel / EU Fine / Anshel Sag / Computer World
Billion-dollar fine against Intel annulled, says EU Court of Justice

CHIPS Act / Patrick Moorhead / Investor’s Business Daily
Uncle Sam Wants Semiconductors Made In America. The CHIPS Act May Fall Short

Qualcomm / Snapdragon 8 Elite / Patrick Moorhead / Fierce Electronics
How Qualcomm mobile AI busts out as Snapdragon 8 Elite

Qualcomm & Arm / Licensing Feud / Patrick Moorhead / Fierce Electronics
Arm threatens to end Qualcomm license as ongoing spat heats up

Qualcomm & Arm / Licensing Feud / Anshel Sag / Serve The Home
Arm Moves to Cancel its Design License with Qualcomm

_____

TV APPEARANCES

CNBC Power Lunch / Google / Patrick Moorhead
Google has nice growth this year and expect the same next year, says FBB Capital’s Mike Bailey

Prof G Markets / Podcast Guest / Patrick Moorhead
Nvidia’s Rise, Intel’s Fall, and the Chips in Between — ft. Patrick Moorhead | Prof G Markets

Yahoo! Finance / Arm and Qualcomm Licensing Feud & NVIDIA Blackwell / Patrick Moorhead
Nvidia’s Blackwell woes revealed ‘drama’ between chip partners and
How did Arm and Qualcomm’s ‘symbiotic’ partnership go south?

New Gear or Software We Are Using and Testing

  • Kindle Colorsoft (Anshel Sag)
  • Google Pixel Buds 2 Pro (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Matt Kimball, Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • IBM Strategic Analyst Event, December 9, Boston (Robert Kramer)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)
  • ZohoDay25, February 3-5, Austin (Robert Kramer, Melody Brue)
  • Zendesk Analyst Day, March 3-5, Las Vegas (Melody Brue)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending October 25, 2024 appeared first on Moor Insights & Strategy.

VAST Data Deepens Its AI Enablement With InsightEngine https://moorinsightsstrategy.com/vast-data-deepens-its-ai-enablement-with-insightengine/ Tue, 22 Oct 2024 18:52:32 +0000 https://moorinsightsstrategy.com/?p=43969 VAST hopes that its new InsightEngine—developed with Nvidia—will remove some of the complexity of deploying enterprise AI, especially at the upper end of the market.

The post VAST Data Deepens Its AI Enablement With InsightEngine appeared first on Moor Insights & Strategy.

VAST hopes that its new InsightEngine will remove some of the complexity of deploying enterprise AI, at least at the upper end of the market. VAST Data

VAST Data launched its Data Platform about a year ago, aiming to unify storage, compute and data. The company’s bigger goal is to remove the complexity of connecting all of an enterprise’s data to the applications and tools that turn that data into intelligence.

In its latest move, the company and AI giant Nvidia have partnered to announce InsightEngine, which is designed to deliver real-time retrieval-augmented generation. Let’s take a deeper look at this announcement and consider what this means for enterprise IT organizations and the industry as a whole.

AI Is Complex, And Data Is The Problem

First, it’s worth revisiting the underlying problem that VAST addresses. Saying that AI is complex is not original or controversial. It’s complex for many reasons, including technical, operational and organizational aspects. One of the biggest challenges comes from the data used for AI. Data resides everywhere, from the edge to on-premises datacenters to the cloud. Data also resides in the applications that power the business—ERP, CRM, HRM and the like. Finally, data exists in many different formats, both structured (e.g., database tables) and unstructured (documents, pictures, etc.).

Here’s the long-running challenge: how does an enterprise that wants to extract value from its data do that easily? Historically, the answer has been: it doesn’t. That’s what VAST has tried to address with its Data Platform, which has resolved many of the challenges in this area through a number of components:

  • DataStore: A scalable solution for storing unstructured data without data tiering—meaning that all data is “hot” and critical
  • DataBase: A scalable transactional and analytical database that combines the capabilities of traditional databases, data warehouses and data lakes
  • DataEngine: The intelligence that quickly takes data and makes it AI-ready through triggers and functions
  • DataSpace: A global namespace that makes it simple to get to data wherever it resides across the enterprise—on the edge, in a co-lo, in the cloud or in an enterprise’s datacenter.
The VAST Data Platform is designed to support the AI pipeline. VAST Data

So to recap, the introduction of the VAST Data Platform was aimed directly at the challenge of how IT organizations can more easily collect, prepare and train large amounts of data that feed large language models for use in AI applications.

But the challenge continues to evolve. As AI matures, we have started to see the discussion shift from frontier models to enterprise inference. As the discussion shifts, so does the challenge: how do we put trained models, and the enterprise data behind them, to work beyond simple chatbots and the like? How does inference drive business outcomes? And is RAG the answer? To the latter question, VAST would argue: not in its current state.

InsightEngine Delivers Real-Time RAG With Nvidia NIM

InsightEngine is where VAST has trained its focus to help enterprises extract full value from AI inference. Working with Nvidia, InsightEngine delivers more accurate, more contextualized responses to the queries that a user or another application may initiate. NIM (short for “Nvidia inference microservices”) is Nvidia’s framework for packaging trained models so that an enterprise can deploy and use them more precisely and efficiently in each application.

By working with NIM, InsightEngine can create vector and graph embeddings in VAST’s DataBase product. Whenever new data is generated, vector embeddings are generated to update the database in real time. These vectors, graphs and tables are then used in RAG. The result is an implementation of RAG that is highly accurate and delivered in real time from VAST’s vector database, which can scale up to trillions of embeddings.
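
To make the real-time angle concrete, here is a minimal, self-contained sketch of the ingest-then-retrieve loop that real-time RAG implies. This is not VAST’s or Nvidia’s actual API; the toy embedding function, in-memory vector store, and record format are stand-in assumptions. The point is simply that vectorizing data the moment it arrives keeps retrieval, and therefore the model’s answers, current.

```python
import numpy as np

# In-memory stand-ins for the pieces of a real-time RAG loop. In production, the embedding
# model, vector database, and LLM would be external services (for example, a NIM endpoint
# and a scalable vector store); everything named here is an illustrative assumption.

VECTOR_STORE: list[tuple[np.ndarray, str]] = []  # (embedding, source text) pairs

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash the text to seed a pseudo-random unit vector.
    A real pipeline would call an embedding model service instead."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def ingest(new_record: str) -> None:
    """Called whenever new enterprise data lands: vectorize it immediately so retrieval stays current."""
    VECTOR_STORE.append((embed(new_record), new_record))

def retrieve(query: str, k: int = 3) -> list[str]:
    """Cosine-similarity top-k retrieval; retrieved snippets would be prepended to the LLM prompt."""
    q = embed(query)
    ranked = sorted(VECTOR_STORE, key=lambda item: float(item[0] @ q), reverse=True)
    return [text for _, text in ranked[:k]]

ingest("Q3 churn rose 4% in the EMEA region.")   # new data is retrievable on the very next query
print(retrieve("What happened to churn in EMEA?"))
```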

InsightEngine delivers real-time RAG with Nvidia NIM. VAST Data

Depending on how inference is used, real-time RAG’s benefit may not be as critical to a specific organization. However, for mission- and business-critical applications that are driven by AI agents—and interact with other AI agents—a lack of real-time data can be a serious issue. If you think this agentic model (i.e., one in which AI agents interact with one another across the enterprise) is a little futuristic, it’s not. Or maybe more precisely put, it is futuristic—but the future is now.

How is all of this possible? VAST employs a disaggregated, shared-everything (DASE) architecture. This takes a standard storage architecture and makes it broad and shallow. This removes the notion of data tiering, so essentially all data is “hot.” Because of this, InsightEngine can quickly ingest data from enterprise applications and vectorize it in the VAST DataBase. Object, file, table, graph—all of it gets stored in this transactional/analytical database for retrieval. And whenever real-time RAG is enabled, InsightEngine also fine-tunes your large language models.

Cosmos, Because It Takes A Village

The less-covered element of VAST’s announcement is arguably the most valuable to enterprise IT today. Cosmos is a community where VAST directly connects AI practitioners with AI experts. While every organization would love to hire 20 Ph.D.s to design and deploy AI across the enterprise, the reality is that AI talent is scarce—and pricey. While the VAST Data Platform and InsightEngine are intended to simplify the process of deploying and operationalizing AI, the term “simplify” is relative. For many IT organizations, it’s still going to be really hard—and the skills gap is real.

With Cosmos, IT professionals can join a forum and interact with each other and with experts to better understand best practices and work through challenges that may otherwise seem impossible to tackle. This isn’t simply connecting a user to a VAST support person; it connects them to other users facing the same challenges, along with folks from the big consulting firms and the hardware and software vendors.

Of course, communities like Cosmos are constrained by how much they are used and how well they are moderated. If this community becomes nothing but a sales vehicle for Accenture, Deloitte and others, it will quickly lose its appeal. However, there is real potential here.

Is VAST’s Offering Unique?

When VAST announced the Data Platform last year, it was the only vendor bringing such data management to storage. With InsightEngine, it has further differentiated itself. However, NetApp recently announced its storage and data management platform ONTAP with an AI engine that performs the functions of much of the AI data pipeline.

Perhaps VAST’s biggest competitor in the high-performance storage space is Weka, which has its own data platform for generative AI. Weka’s cloud-native architecture might be the closest to VAST’s, in that the company has designed its solution from the ground up for high performance.

The addition of InsightEngine with Nvidia to VAST’s architecture delivers an advantage for VAST because it expands coverage along not just the AI data pipeline but the whole AI journey, from training to inference. VAST’s customers are a Who’s Who of data- and performance-driven organizations, such as Zoom, NASA, Pixar and GPU cloud provider CoreWeave.

Implications For VAST’s Market Approach

VAST is a data management company. Though its early years were spent designing high-performance storage, that was clearly done to build a foundation for its data management play. Further, the company has successfully built out its storage and data management platforms—otherwise, it would not have a valuation of over $9 billion.

Here are two things to consider about VAST. The first is that it caters to the needs of companies with significant data management challenges—the cream of the crop, if you will. VAST will undoubtedly continue to find success in this space, but there are questions about whether its technology can successfully come downmarket to find a larger addressable market. For that matter, does VAST even want to?

The second consideration is where I imagine myself as a VAST customer. Deploying the VAST Data Platform is a deep engagement. Once I jump in, it’s not easy to move away from it. This isn’t a bad thing, but it is undoubtedly a consideration for any enterprise IT organization considering vendors to support its AI journey.

InsightEngine Plays Into VAST’s Long-Term Focus

VAST’s evolution has been fun to watch. From a storage company that took the HPC world by storm to claiming the AI OS title, it has been a bold company that hasn’t been afraid to be the first mover.

When the company introduced the Data Platform last year, conceptualizing it was a little hard. This was partly because the company was out in front of the market, talking about AI pipelines, global namespaces, DASE and DataEngine while everybody else was talking about LLMs and ChatGPT. InsightEngine brings the VAST Data Platform into sharper focus and shows how the company is making itself an integral part of the entire AI journey—from finding and preparing data to training and inference.

The one bit of advice I would leave you with is this: AI is still complex. While VAST has removed a lot of the complexity, the AI market has seen far more failures than successes to date. Look to Cosmos and other communities to engage with experts and ensure you lay down the right foundation.

The post VAST Data Deepens Its AI Enablement With InsightEngine appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending October 18, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-18-2024/ Mon, 21 Oct 2024 16:47:42 +0000 https://moorinsightsstrategy.com/?p=43554 MI&S Weekly Analyst Insights — Week Ending October 18, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending October 18, 2024 appeared first on Moor Insights & Strategy.

MI&S Logo_color

Welcome to this edition of our Weekly Analyst Insights roundup, which features key insights our analysts have developed based on the past week’s events.

Intel CEO Pat Gelsinger and AMD CEO Lisa Su

Last week the CEOs of AMD (Lisa Su, right) and Intel (Pat Gelsinger, left) did something unexpected: they joined forces to launch the x86 Ecosystem Advisory Group. The new entity aims to boost interoperability, smooth out integration, and generally simplify life for developers, ISVs, OS makers, and OEMs in the x86 space. Our own CEO and chief analyst, Patrick Moorhead, has known Gelsinger and Su for years. He had the opportunity to conduct a 1-on-2 interview with the two of them a day ahead of the public announcement, which led to this full-length analysis on Forbes.

The busy autumn conference season continues for our team. Last week, Melody was at AdobeMAX in Miami, and Patrick, Matt, Paul, and Anshel were in Bellevue, Washington, for Lenovo’s Global Analyst Summit & Tech World. Will was at BlackBerry’s Analyst Day, and Matt, Robert, and Jason participated in IBM’s Analyst Day—all in the Big Apple.

On Thursday, October 17, Melody joined the RingCentral team on the webinar “Revealing the AI Communications Strategies That Work,” where she shared her vision for the future of AI in UC. If you missed it, you can watch it here on demand.

This week, the team continues its tech event travels. Patrick and Will are set to attend Qualcomm’s Snapdragon Summit in Maui, Melody returns to Florida for WebexOne in Ft Lauderdale, and Matt will attend the RISC-V Summit virtually.

Our MI&S team published 21 deliverables:

Over the last week, our analysts have been quoted in numerous top-tier international publications with our thoughts on AMD, Nvidia, Intel, Apple Vision Pro, Pure Storage, UL Solutions, the smartphone market, and of course, chips and AI.

MI&S Quick Insights

Last week I attended the IBM Analyst Summit in New York. As typical at these events, much of what was covered was under embargo. The good news is that the embargo will be lifted this week in concert with the IBM TechXChange event in Las Vegas. However, I was pleasantly surprised to hear what was going on in IBM Consulting. This was in terms of both the nature of their work and how they are delivering projects for clients. I will be doing more in-depth research on this topic in the coming weeks.

Over the past couple of weeks, I have gotten a great deal of feedback and dialogue regarding my recent Forbes column on Agents. The conversations have led to two very interesting pieces of feedback. First, there seems to be a bifurcation between the agent development method and the focus of tooling vendors. To wit, low- and no-code tool makers are pumping out hundreds of general-purpose agents to help knowledge workers empower themselves. This is exemplified by recent announcements from Oracle, Salesforce, ServiceNow, and others. Meanwhile, the vendors more aligned to pro-code tools are more focused on specific problems. A good example of this is AWS with application modernization. AWS is not alone, as other pro-code vendors are lining up around more specific use cases and will unveil their visions this year. This recent turn of events is leading to the second point of feedback, which is how customers should approach starting to use agents. This seems like a great piece of future research, so stay tuned.

On a more personal note, I am experimenting with how I do my research. I have always been a note-taker, carrying my pads, pens, and pencils wherever I go. The physical writing process helps my brain cement ideas in place, and my constant doodling helps me see patterns in my research. I have never been able to type my notes, and while Patrick Moorhead (among others) is a big fan of meeting transcription, I think I still need the physical act of writing things down.

However, after decades of using this method, I might be making a change—or at least an evolution. After trying to go digital for years, AI-empowered notebooks may be the catalyst to push me over the top. The challenge of using pen and paper is retention over longer periods. Locating things in old notebooks is a pain, and not available when I am on the road. The idea of adding LLMs to digital notebooks means I can combine my notes with other artifacts and start to really dig into areas of interest and be more efficient as I develop research. I must be clear: I am not using LLMs to do my writing in any way. But they are a good way to organize thoughts and possibly prompt me to look at some areas in a more nuanced way. So far, so good on making notes; there is a learning curve, but I am starting to get it. I am still testing out what will become my digital notebook, and I’ll keep you posted here. My setup is in the “New Gear” section below.

Dario Amodei, CEO of Anthropic, has written a very long (but interesting) paper on what he believes will be the ultimate gifts of AI. His idea is that after powerful AI is developed, we will, within a few years, make all the progress in biology and medicine that we would have made in the entire 21st century without AI. Amodei said, “I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.” Here’s a list of what he believes AI-enabled biology and medicine will give us in five to 10 years that would otherwise take 50 to 100 years without AI:

  1. Reliable prevention and treatment of nearly all infectious disease
  2. Elimination of most cancer
  3. Very effective prevention and effective cures for genetic disease
  4. Prevention of Alzheimer’s
  5. Improved treatment of most other ailments
  6. Biological freedom (To explain this, he wrote, “I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control.”)
  7. Doubling of the human lifespan

The list seems doable based on the work being done with AI, medicine, and healthcare. If you want to read the entire paper, you’ll find it here.

Robots seem to be a hot topic now, probably thanks to Elon Musk. Boston Dynamics and Toyota Research Institute announced a robotics research partnership to combine their expertise in AI and robotics. The partnership plans to accelerate the development of humanoids by integrating TRI’s large behavior models with Boston Dynamics’ Atlas robots. The robots will be the platform for implementing TRI’s advanced AI systems. TRI has expertise in computer vision; LLM training will also be important to develop a multitasking foundation model for robotic manipulation. Once we develop AGI and incorporate it into a humanoid, we might be approaching the danger zone. Then again, that might be a lot of fun.

At the GITEX Global conference in Dubai, Avaya showcased its latest AI-powered solutions aimed at enhancing customer experience and streamlining operations. These included “Amna,” an AI virtual assistant developed in partnership with Sestek and Cognigy for Dubai Police to handle public inquiries. Avaya also introduced a “Virtual Operations Manager” concept, demonstrating how AI can analyze contact center data and provide actionable insights to improve performance and customer journeys. Furthermore, Avaya highlighted a real-time translation solution created with Transcom and Sabio, leveraging Avaya Experience Platform’s open APIs to enable agents to communicate with customers in more than 100 languages. This solution aims to improve scalability and reduce business costs by up to 65% in specific markets and use cases.

I attended GITEX with Avaya last year, and I’ve seen firsthand how the company leverages this event to highlight its commitment to innovation in the CX space. While I couldn’t be there in person this year, I have observed how these announcements reinforce Avaya’s focus on delivering solutions that address real-world challenges. It’s particularly noteworthy to see the company’s emphasis on practical applications of AI, such as the virtual assistant for Dubai Police and the real-time translation solution, which have the potential to significantly impact customer service and operational efficiency.

Although I had to sit in my hotel room and watch the Lenovo Tech World 2024 livestream due to bronchitis, there was a lot of news to digest from this company, which I believe doesn’t get enough credit for its AI programs and the other strong work it has done in the market. One of those areas is an enabling technology: Neptune liquid cooling. Although this technology really has its roots in the IBM era, Lenovo has done a lot to advance it—and indeed has been on the forefront of the liquid cooling trend. In this vein, Lenovo made two major announcements at Tech World:

  1. The new Lenovo N1380 Neptune chassis is designed for 100% heat removal on a greater than 100kW rack consumption, without any specialized air conditioning.
  2. The ThinkSystem SC777 v4 Neptune server supports the NVIDIA Blackwell GPU and platform.

It used to be that a 15kW power budget for a whole server rack was high. Thanks to the accelerated adoption of higher-power-consuming CPUs and GPUs, it is not unusual anymore to see 15kW being consumed by a single server. Liquid cooling is quickly moving from niche usage in the datacenter to much broader adoption. In response, Lenovo has played to its strength in liquid cooling quite effectively.
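
To put the greater-than-100kW rack figure above into thermal terms, here is a rough, back-of-the-envelope calculation of the coolant flow needed to carry away that much heat. The 10°C coolant temperature rise and the use of water are assumptions for illustration, not Lenovo’s published design parameters.

```python
# Heat removal: P = m_dot * c_p * dT  ->  m_dot = P / (c_p * dT)
# Assumptions (illustrative only): water coolant, 10 C temperature rise across the rack.
P_watts = 100_000          # ~100 kW rack, per the figure cited above
c_p = 4186                 # specific heat of water, J/(kg*K)
delta_t = 10               # assumed coolant temperature rise, K

mass_flow = P_watts / (c_p * delta_t)       # kg/s
liters_per_min = mass_flow * 60             # 1 kg of water is roughly 1 liter
print(f"~{mass_flow:.2f} kg/s, or roughly {liters_per_min:.0f} L/min of water flow")
# ~2.39 kg/s, about 143 L/min - far beyond what air movement alone handles efficiently.
```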

Speaking of liquid cooling, infrastructure giant Schneider just secured a controlling interest in liquid cooling player Motivair for $850 million. Schneider does a lot in the datacenter market—electrical distribution, UPS kits, racks, enclosures, etc. This investment is a natural and smart expansion for the company. What makes it more interesting to me is the amount of money pouring into the liquid cooling market—and the innovation that is coming out of these startups.

One of the more interesting liquid cooling companies I’ve seen is JetCool out of Maynard, Massachusetts. This company, founded by a scientist from the MIT Lincoln Laboratory, has been securing more and more partnerships and recently secured tens of millions in investment from Bosch.

To think of liquid cooling as mere plumbing is silly. It has moved from low-tech to high-tech seemingly overnight as it has gone from pumping fluids to advanced physics. Look for a research note from me on this topic in the near future.

Are we living in the era of real-life science fiction? When Oracle CEO Larry Ellison talked about using nuclear power to light up his datacenters, people kind of laughed and thought of it as “Larry being Larry.” Fast-forward a month or two, and Microsoft is wanting to reactivate Three Mile Island to power its Azure datacenters, while Google and AWS have committed to acquiring and deploying small modular reactors (SMRs) that can deliver up to 300 megawatts of power per datacenter.

The power crunch is very real—and very limiting. If SMRs can be deployed and managed properly, they could deliver highly reliable and very clean energy for next-generation datacenters. Hopefully, U.S. regulators will come to their collective senses and catch up with the rest of the world in enabling nuclear power.
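
For a rough sense of scale, the arithmetic below connects a 300-megawatt SMR feed to rack counts. The PUE figure and the 100 kW rack density are assumptions carried over from the liquid-cooling discussion above, not numbers from any specific deployment.

```python
# How many high-density racks could a 300 MW feed support, very roughly?
site_power_mw = 300     # per-datacenter SMR capacity cited above
pue = 1.3               # assumed power usage effectiveness (cooling and power-delivery overhead)
rack_kw = 100           # assumed per-rack draw for dense AI racks

it_power_mw = site_power_mw / pue
racks = it_power_mw * 1000 / rack_kw
print(f"~{it_power_mw:.0f} MW of IT load, or roughly {racks:,.0f} racks at {rack_kw} kW each")
# ~231 MW of IT load, roughly 2,300 racks under these assumptions.
```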

Adobe announced GenStudio for Performance Marketing at its AdobeMAX conference in Miami last week. This generative AI-powered application aims to help brands and agencies accelerate the creation and delivery of personalized marketing campaigns, allowing marketers to quickly generate variations of on-brand content for channels such as paid social, display ads, and e-mail. The platform integrates with Adobe Experience Cloud and with popular advertising platforms such as Google, Meta, and TikTok, offering performance insights and streamlined workflows.

This was just one of several announcements at AdobeMAX, including new Adobe Express integrations with popular enterprise apps such as Box and Miro. I’ll soon provide a more detailed analysis of these announcements and my experience at the conference.

Cloudera’s partnership with Snowflake provides enterprises with an open, unified hybrid data lakehouse powered by Apache Iceberg. The goal of this collaboration is to enable enterprises to consolidate their data, analytics, and AI workloads into a single platform, eliminating data silos. This combination could give enterprises a single source of truth for their data, enabling faster queries, real-time insights, and streamlined workflows while maintaining data integrity. “By extending our open data lakehouse capabilities through Apache Iceberg to Snowflake, we’re enabling our customers to not only optimize their data workflows but also unlock new opportunities for innovation, efficiency, and growth,” said Abhas Ricky, chief strategy officer of Cloudera.

Last week I attended IBM’s Analyst Summit in New York, which provided valuable insights into IBM’s vision for the future of data and AI, with an emphasis on accessing enterprise data. Beyond exploring IBM’s impressive new offices, I had the opportunity to hear from key leaders, starting with CEO Arvind Krishna, who outlined IBM’s strategy for AI adoption, sustainability, partnerships, and data management. SVP of software and CCO Rob Thomas detailed IBM’s software approach to leveraging data and AI, while SVP and director of IBM Research Dr. Dario Gil highlighted innovative AI research from IBM labs. I was especially interested to learn more about how IBM’s consulting services are helping clients navigate digital transformation and customer readiness, get the most out of structured and unstructured enterprise data using IBM’s data fabric solutions, and adopt sustainable data practices in anticipation of future regulations.

AMD’s recent announcements of its new DPU and Ultra Ethernet Consortium-ready NIC represent a one-two punch supporting front- and back-end networking that’s optimized for AI workloads. The AMD Pensando Salina DPU marries high-performance network interconnect capabilities and acceleration engines aimed at providing critical offload to improve AI and ML functions. AMD claims that Salina will provide a twofold improvement in overall performance over its prior DPU generations, and if it delivers on this promise, it could further the company’s design wins with public cloud service providers eager to capitalize on the AI gold rush.

The Pensando Pollara 400 NIC is purpose-built for AI workloads, with an architecture based on the latest version of RDMA that can directly connect to host memory without CPU intervention. AMD’s new NIC design could position it favorably relative to Broadcom 400G Thor, especially since the company is the first out of the gate with a design optimized for UEC performance. Both the AMD Pensando Salina DPU and AMD Pensando Pollara 400 NIC are currently sampling with cloud service and infrastructure providers; commercial shipments are expected in the first half of 2025.

SAP and UiPath have formed a partnership to integrate the UiPath enterprise automation platform with SAP Build Process Automation. Among other benefits, this move aims to enable customers to automate more of their business processes, plus it should make things easier for enterprises that are transitioning to SAP S/4HANA Cloud. UiPath touts the collaboration, offered as an SAP Solution Extension starting this month, as providing a holistic view of process automation across both SAP and non-SAP environments to enhance operations and efficiencies.

This is an option worth exploring for enterprises transitioning to SAP S/4HANA Cloud and automating processes across their IT landscape. Both partners have a focus on enabling enterprises to carry out successful business transformation projects while improving data management and reducing risks.

NXP introduced the S32J family of 80 Gbps Ethernet switches, which share a common switch core (NETC) with the NXP S32 automotive processing platform. Designed for high-speed in-vehicle networks, the switch integrates with NXP’s CoreRide platform to provide production-grade network solutions pre-integrated with software and tooling.

Sonatus, a leader in software-defined vehicle technologies, won Autotech Breakthrough’s “Connected Vehicle Innovation of the Year” award for the Sonatus Collector data collection system. Only a fraction of the massive amount of vehicle-generated data is relevant for optimizing customer experiences, improving quality, managing fleets, and ensuring safety. The Collector is a policy-based system that reduces data processing overhead and upload costs by gathering, storing, and uploading only targeted information. This solution is truly innovative, and other industrial applications should use similar design patterns.

Blecon, a new startup out of Cambridge, England, punched above its weight class last week at Embedded World NA with a simple middleware solution that connects Bluetooth Low Energy devices to cloud services—without pairing. The company just closed a $4.6 million seed round led by U.K.-based MMC Ventures. I like simple connectivity schemes, and this is a good one.

Agtonomy closed its $32.8 million Series A round, positioning the company to accelerate AI-driven agriculture automation and expand into autonomous industrial equipment. Agtonomy’s Sonatus-like business model combines advanced software with OEM partnerships to rapidly develop various autonomous, software-defined offroad products.

In recent conversations with various tech vendors, it’s become clear to me that while enterprises are eager to adopt AI, they face many of the same key challenges. IBM recently highlighted five truths about AI adoption, emphasizing the need for:

  1. Targeted AI solutions
  2. Hybrid cloud flexibility
  3. Robust governance
  4. A focus on value-driven use cases
  5. High-quality data

These points resonate with my own observations and are further validated by a recent Cisco study that revealed a disconnect between what tech companies think their customers need and the customers’ actual challenges and needs.

This misalignment is particularly evident in infrastructure scalability, data security, and access to skilled talent. While partners are understandably enthusiastic about the growth potential of the AI market, they need to better understand and address these customer pain points to capitalize on this opportunity. It’s not just about selling the “shiny new object” of AI, but about providing practical solutions that deliver real business value and foster trust in AI systems.

Lenovo held its Tech World 2024 event in conjunction with its global analyst conference. At the event, the company had a Who’s Who of tech executives on stage, including Intel’s Pat Gelsinger, AMD’s Lisa Su, and NVIDIA’s Jensen Huang. (Qualcomm’s Cristiano Amon and Microsoft’s Satya Nadella joined via video.) It was absolutely a tour de force for Lenovo to remind its partners of the company’s influence as the world’s undisputed #1 PC maker. While Lenovo didn’t announce any new consumer products, it did show off many concepts and prototypes. It also announced its foray into automotive electronics in partnership with NVIDIA and Qualcomm.

Amazon overhauled its entire Kindle lineup with new and improved models and the first-ever color Kindle, which it claims will operate in full color with zero impact on battery life. I am glad to see a color Kindle because it improves the reading experience for graphic novels. There are also a bunch of updated models of Kindle Paperwhite, Kindle Scribe and Kindle with new colors and faster page loading. These new Kindles have phased out the previous generation, including the last model with physical buttons.

Quantum Computing Inc. has won its fifth project from NASA. The company is developing quantum remote sensing for space-based lidar imaging. By using QCI’s technology, NASA will lower the cost of lidar missions. This is an important step for QCI that allows it to provide an innovative quantum solution using remote sensing for climate change investigations. Two active NASA climate change projects are (1) ICESat-2, which uses lidar to measure thickness changes in polar ice sheets and sea ice, and (2) GEDI, a test project on the International Space Station that measures forests around the world.

BlackBerry recently held its investor day at the New York Stock Exchange. The company has made management changes, as well as divided its cybersecurity and IoT business into what the company calls “virtually autonomous business units”—an unconventional move. However, the strategy is yielding significant operational cost savings, as well as newfound visibility for optimizing investment into its more profitable solutions within both portfolios. Time will tell if BlackBerry can improve shareholder value. However, its QNX IoT platform continues to be a bright spot, especially in automotive, as evidenced by more than 100 design wins over the last 18 months, coupled with support commitments from MediaTek, NVIDIA, NXP, Qualcomm, and other silicon providers.

At Lenovo’s Tech World 2024 event, not only did AMD and Intel announce their joint effort to create the x86 Ecosystem Advisory Group (described in the introduction to this weekly update), but they appeared together in photos with our CEO Patrick Moorhead after recording an episode of Moorhead’s podcast. Both chip CEOs spoke highly of the partnership. The advisory group includes a long list of very influential companies, and I believe it serves as a hedge against the growth of Arm in both client and server. Regardless, nobody could have imagined the day when Intel and AMD would really collaborate outside industry standards groups.

Intel CEO Pat Gelsinger also came on stage at Lenovo Tech World to show the world one of the first Panther Lake chips. Panther Lake, which is expected to ship at the end of 2025, is the first Intel product to leverage the company’s 18A process node and feature all of its latest CPU, GPU, and NPU cores. Many people are quite pleased with the just-launched Lunar Lake processor, which shares many design elements with Panther Lake.

T-Mobile’s recent partnership with McLaren Racing is a strategic move aimed at connecting with business decision-makers, who make up 54% of the U.S. Formula 1 fanbase. This collaboration goes beyond branding on McLaren’s race cars and garage headsets; it’s about leveraging a shared passion for technology and performance to showcase T-Mobile’s 5G business solutions.

T-Mobile CMO Mo Katibeh highlights the partnership’s focus on data-driven decision-making and innovation, mirroring the real-time data analysis that’s crucial to both F1 racing and modern business operations. By aligning with McLaren, T-Mobile aims to tell a compelling story that resonates with business leaders and positions it as a critical player in the future of 5G connectivity. This partnership should serve as a platform for showcasing how T-Mobile’s advanced 5G network can enhance business operations and drive innovation.

The partnership is just one example of the growing trend of technology companies investing in F1 sponsorships. I look forward to discussing T-Mobile and other prominent partnerships, such as Google’s Pixel collaboration with McLaren, with Anshel Sag and Robert Kramer on an upcoming Game Time Tech Pod. We’ll investigate how these technologies impact the sport and the vendors’ bottom line.

Globant has been given the #6 spot on Fortune’s “Change the World” list for its work on social and environmental issues. The company supports programs that bring cleaner cookstoves to Peru and help farmers in India switch to green energy. That’s making a real difference in those communities while shrinking carbon footprints. I have followed Globant’s sustainability journey and am pleased to see its efforts acknowledged on a global platform. This recognition underscores the positive impact that technology companies can have when they prioritize social and environmental responsibility alongside business growth.

The FCC has passed a series of new rules, one of which says that all hearing aids must be Bluetooth-compatible in the future. The FCC also says that all smartphones must be compatible with hearing aids for accessibility reasons. Manufacturers will have a couple of years to comply with these new rules, which I think are a step in the right direction, especially now that Apple is bringing hearing aid support to its AirPods Pro 2.

The State Fair of Texas is another example of an event needing to embrace modern connectivity improvements, including private 5G networking. During the most recent Texas-Oklahoma football game in Dallas, concession ticket kiosks were inoperable and wireless point-of-sale terminals used around the Cotton Bowl facility malfunctioned. This all led to a less than desirable experience for football fans and attendees of the Fair, plus the venue lost significant revenue as a result. Certainly, there are challenges for wireless network propagation at the site given the age, construction, and lack of fiber backhaul at the Fair Park and Cotton Bowl venues. However, my personal experience highlights an opportunity for management to consider a private 5G network deployment to not only delight attendees, but also maximize revenue potential. The cost for deploying improved connectivity infrastructure would be significant, but an innovative solution such as T-Mobile’s recently announced 5G on Demand offering could be a cost-effective consideration.

Research Papers Published

Research Notes Published

Citations

AMD / MI325X AI Accelerator / Patrick Moorhead / Guru Focus
AMD Introduces Instinct MI325X AI Accelerator to Challenge NVIDIA’s Dominance

AMD & Intel / Partnership X86 / Matt Kimball / Data Center Knowledge
What AMD and Intel’s Alliance Means for Data Center Operators

AMD & Intel / Partnership X86 / Patrick Moorhead / Digital Experience Live
AMD and Intel Unite to Strengthen Future of x86 Architecture

AMD & Intel / Partnership X86 / Patrick Moorhead / Runtime
Why Intel and AMD buried their differences to make life easier for software developers, and hold off a common enemy

Apple / Vision Pro / Anshel Sag / Tech News World
Apple Vision Pro Ecosystem Shows Sluggish Growth

Astera Labs / Scorpio Smart Fabric Switches / Patrick Moorhead / Yahoo! Finance
Astera Labs Inc (ALAB) Unveils Industry’s First PCIe 6 Switch, Revolutionizing AI Infrastructure with Scorpio Smart Fabric Switch Portfolio

Astera Labs / Stock / Patrick Moorhead / Investing.com
Astera Labs director Jack Lazar sells $139,900 in stock

Commvault / Cyber Resilience / Patrick Moorhead / Gestalt IT
Commvault Shift’s Cyber Resilience for the AI Era | The Gestalt IT Rundown: October 16, 2024

NVIDIA & Accenture / AI Partnership / Patrick Moorhead / The Ticker
Nvidia and Accenture partnership to scale corporate AI adoption

NVIDIA & Apple / Stock / Patrick Moorhead / Watcher.Guru
Nvidia or Apple: Which Stock To Buy Today to Make Profits?

NVIDIA / Stock / Patrick Moorhead / Yahoo Finance
Nvidia notches record high, looks to unseat Apple as world’s most valuable company

Pure Storage / Storage / Matt Kimball / CDO Trends
Pure Storage Declares War on Storage Complexity

Smartphones / Anshel Sag / Tech News World
Global Smartphone Shipments Rise in Q3 as Growth Streak Continues

UL Solutions / GenAI / Paul Smith-Goodson / CIO
UL’s leap into the genAI evaluation business raises key questions

New Gear or Software We Are Using and Testing

  • Google Pixel Buds 2 Pro (Anshel Sag)
  • Google Pixel Watch 3, 41mm (Anshel Sag)
  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Google Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Matt Kimball, Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending October 18, 2024 appeared first on Moor Insights & Strategy.

Xeon 6P And Gaudi 3 — What Did Intel Deliver? https://moorinsightsstrategy.com/xeon-6p-and-gaudi-3-what-did-intel-deliver/ Wed, 16 Oct 2024 17:26:15 +0000 https://moorinsightsstrategy.com/?p=43947 Intel's new chips boost its position in the datacenter at a time when Intel's chief competitor, AMD, has been steadily claiming datacenter market share with its EPYC CPU

The post Xeon 6P And Gaudi 3 — What Did Intel Deliver? appeared first on Moor Insights & Strategy.

Intel’s new Xeon 6 processor Intel

Intel just continued the execution of its aggressive “five nodes in four years” strategy with the launch of the Xeon 6P (for performance) CPU and the Gaudi 3 AI accelerator. This launch comes at a time when Intel’s chief competitor, AMD, has been steadily claiming datacenter market share with its EPYC CPU.

It’s not hyperbolic to say that a successful launch of Xeon 6P is important to Intel’s fortunes, because the Xeon line has lagged in terms of performance and performance per watt for the past several generations. While Xeon 6E set the tone this summer by responding to the core density its competition has been touting, Xeon 6P needed to hit AMD on the performance front.

Has Xeon 6P helped Intel close the gap with EPYC? Is Gaudi 3 going to put Intel into the AI discussion? This article will dig into these questions and more.

Didn’t Intel Already Launch Xeon?

Yes and no—and it’s worth taking a moment to explain what’s going on. Xeon 6 represents the first time in recent history that Intel has delivered two different CPUs to address the range of workloads in the datacenter. In June 2024, Intel launched its Xeon 6E (i.e., Intel Xeon 6700E), which uses the Xeon 6 efficiency core. This CPU, codenamed “Sierra Forest,” ships with up to 144 “little cores,” as Intel calls them, and focuses on cloud-native and scale-out workloads. Although Intel targets these chips for cloud build-outs, I believe that servers with 6E make for great virtualized infrastructure platforms in the enterprise.

In this latest launch, Intel delivered its Xeon 6P CPU (Intel Xeon 6900P), which uses the Xeon 6 performance core. These CPUs, codenamed “Granite Rapids,” are at the high end of the performance curve with high core counts, a lot of cache and the full range of accelerators. Specifically, Xeon 6P utilizes Advanced Matrix Extensions to boost AI significantly. This CPU is Intel’s enterprise data workhorse supporting database, data analytics, EDA and HPC workloads.

The company will release the complementary Xeon 6900E and 6700P series CPUs in Q1 2025. The 6900E will expand on the 6700E by targeting extreme scale-out workloads with up to 288 cores. Meanwhile, the 6700P will be a lower-performance Xeon with fewer cores and a smaller cache. It is still well suited for enterprise workloads, just without the extreme specs of the 6900P.

Effectively, Intel has launched the lowest end of the Xeon 6 family (6700E) and the highest end (6900P). In Q1 2025, it will fill in the middle with the 6700P and 6900E.

Comparing the Xeon 6P and 6E cores Intel

The other part of Intel’s datacenter launch was Gaudi 3, the company’s AI accelerator. Like Xeon 6, Gaudi 3 has been talked about for some time. CEO Pat Gelsinger announced it at the company’s Vision conference in April, where we provided a bit of coverage that’s still worth reading for more context. At Computex in June, Gelsinger offered more details, including pricing. The prices he cited suggest a significant advantage for Intel compared to what we suspect Nvidia and AMD are charging for their comparable products (neither company has published its prices). However, for as much as Gaudi 3 has already been discussed, it has only now officially launched.

What’s Inside Xeon 6P?

Xeon 6P is a chiplet design built on two processes. The compute die, consisting of cores, cache, mesh fabric (how the cores connect) and memory controllers, is built on the Intel 3 process, the company’s 3nm-class node. The chip’s I/O dies are built on the older Intel 7 process; these dies contain PCIe, CXL and Xeon’s accelerator engines (more on those later).

The result of the process shrink in Xeon 6 is a significant performance-per-watt advantage over its predecessor. When looking at the normal range of average utilization rates, Xeon 6P demonstrates a 1.9x increase in performance per watt relative to the 5th Gen Xeon.

Intel’s testing compared its top-of-bin (highest-performing) CPU—the 6890P with 128 cores and a 500-watt TDP—against the Xeon Platinum 8592+ CPU, a top-of-bin 5th Gen Xeon with a TDP of 350 watts. Long story short, Intel has delivered twice the cores with a roughly 7% increase in per-core performance and a considerably lower power draw per core.
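
A quick back-of-the-envelope calculation makes the per-core power point concrete. The 64-core figure for the 8592+ is implied by “twice the cores” above; TDP is a design ceiling rather than measured draw, so treat these numbers as illustrative.

```python
# Per-core power at TDP for the two top-of-bin parts compared above.
# TDP is a thermal design ceiling, not measured power under a given workload.
chips = {
    "Xeon 6890P (Xeon 6P top bin)":   {"cores": 128, "tdp_w": 500},
    "Xeon Platinum 8592+ (5th Gen)":  {"cores": 64,  "tdp_w": 350},  # 64 cores implied by "twice the cores"
}
for name, c in chips.items():
    print(f"{name}: {c['tdp_w'] / c['cores']:.2f} W per core at TDP")
# ~3.91 W/core for the 6890P vs. ~5.47 W/core for the 8592+, before the ~7% per-core performance gain.
```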

Xeon 6P shows a 1.9x PPW advantage over its predecessor Intel

It’s what’s inside the Xeon 6P that delivers a significant performance boost and brings it back into the performance discussion with its competition. Packed alongside those 128 performant cores is a rich memory configuration, lots of I/O and a big L3 cache. Combine these specs with the acceleration engines that Intel started shipping two generations ago, and you have a chip that is in a very competitive position against AMD.

Xeon 6P specifications Intel

When looking at the above graphic, it may seem strange to see two memory speeds (6400 MT/s and 8800 MT/s). Xeon 6P supports MRDIMM technology, or multiplexed rank DIMMs. With this technology, memory modules can operate two ranks simultaneously, effectively doubling how much data the memory can transfer to the CPU per clock cycle (128 bytes versus 64 bytes). As you can see from the image above, the bandwidth increases dramatically when using MRDIMM technology, meaning that more data per second can be fed to those 128 cores. Xeon 6P is the first CPU to ship with this technology.
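
To translate those two data rates into raw numbers, the quick calculation below assumes the standard 64-bit (8-byte) DDR5 data path per DIMM and ignores channel counts and real-world efficiency, so these are theoretical ceilings only.

```python
# Peak theoretical bandwidth per DIMM = transfers per second x 8 bytes per transfer
# (a standard DDR5 DIMM presents a 64-bit data path). Sustained bandwidth is lower in practice.
def peak_gb_per_s(mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * 1e6 * bytes_per_transfer / 1e9

ddr5_6400 = peak_gb_per_s(6400)    # ~51.2 GB/s
mrdimm_8800 = peak_gb_per_s(8800)  # ~70.4 GB/s
print(f"DDR5-6400: {ddr5_6400:.1f} GB/s, MRDIMM-8800: {mrdimm_8800:.1f} GB/s "
      f"({mrdimm_8800 / ddr5_6400:.2f}x per DIMM)")
```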

I point out this memory capability to give an example of the architectural design points that have led to some of Intel’s performance claims for Xeon 6P. Despite what some may say, performance is not just about core counts. Nor is it simply about how much memory or I/O a designer can stuff into a package. It’s about how quickly a chip can take data (and how much data), process it and move on to the next clock cycle.

Does Xeon 6P Deliver?

When I covered Intel’s launch of its 4th Gen Xeon (codenamed “Sapphire Rapids”), I talked about how I thought the company had found its bearings. This was not because of Xeon’s performance. Frankly, from a CPU perspective, it fell short. However, the company designed and dropped in a number of acceleration engines to deliver better real-world performance across the workloads that power the datacenter.

The design of Xeon 6P, building on what Intel introduced with Sapphire Rapids, sets it up to handle AI, analytics and other workloads well beyond what the (up to) 128 Redwood Cove cores can handle. And frankly, the Xeon 6P delivers. The company makes strong claims in its benchmarking along the computing spectrum—from general-purpose to HPC to AI. In each category, Intel claims significant performance advantages compared to AMD’s 4th Gen EPYC processors. In particular, Intel focused its benchmarks on AI and how Xeon stacks up.

Xeon 6P AI inference performance versus AMD EPYC Intel

As I say with every benchmark I ever cite, these should be taken with a grain of salt. These are Intel-run benchmarks on systems configured by its own testing teams. When AMD launches its “Turin” CPU in a few weeks, we’ll likely see results that contradict what is shown above and favor AMD. However, it is clear that Intel is back in the performance game with Xeon 6P. Further, I like that the company compared its performance against a top-performing AMD EPYC of the latest available generation, instead of cherry-picking a weaker AMD processor to puff up its own numbers.

One last note on performance and how Xeon 6P stacks up. In a somewhat unusual move, Intel attempted to show its performance relative to what AMD will launch soon. Based on AMD’s presentations at the Hot Chips and Computex conferences, AMD has made some bold performance claims relative to Intel. In turn, Intel used this data to show Xeon 6P’s projected performance relative to Turin when the stack is tuned for Intel CPUs.

Xeon 6P versus Turin for AI inference. (Image: Intel)

Again, I urge you to take these numbers and claims with a grain of salt. However, Intel’s approach with these comparisons speaks to its confidence in Xeon 6P’s performance relative to the competition.

Does Gaudi 3 Put Intel In The AI Game?

As mentioned above, we covered the specifications and performance of Gaudi 3 in great detail in an earlier research note. So, I will forgo recapping those specs and get straight to the heart of the matter: Can Gaudi compete with Nvidia and AMD? The answer is: It depends.

From an AI training perspective, I believe Nvidia and to a lesser extent AMD currently have a lock on the market. Their GPUs have specifications that simply can’t be matched by the Gaudi 3 ASIC.

From an AI inference perspective, Intel does have a play with Gaudi 3, which shows a significant price/performance advantage (up to 2x) versus Nvidia’s H100 GPU on a Llama 2 70B model. On the Llama 3 8B model, that advantage narrows to 1.8x performance per dollar.

This means that, for enterprise IT organizations moving beyond training and into inference, Gaudi 3 has a role, especially given the budget constraints many of those IT organizations are facing.
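
As a purely illustrative aside, price/performance claims of this kind reduce to a simple ratio. The numbers below are made up solely to show how a "2x" figure is derived; they are not Intel's or Nvidia's actual throughput or pricing.

```python
# Toy perf-per-dollar comparison with hypothetical numbers (not real benchmarks or prices).
def perf_per_dollar(tokens_per_s: float, price_usd: float) -> float:
    return tokens_per_s / price_usd

accelerator_a = perf_per_dollar(tokens_per_s=1_000, price_usd=15_000)   # hypothetical
accelerator_b = perf_per_dollar(tokens_per_s=1_200, price_usd=36_000)   # hypothetical
print(round(accelerator_a / accelerator_b, 1))   # 2.0 -> a "2x price/performance advantage"
```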

More importantly, over the next year or so Gaudi 3 will give way to “Falcon Shores,” the first Intel GPU aimed squarely at the AI (and HPC) market. All of Intel’s important work in software will move along with it. Why does that matter? Because organizations that have spent time optimizing for Intel won’t have to start from scratch when Falcon Shores launches.

While I don’t expect Falcon Shores to bring serious competition to Nvidia or AMD, I do expect it will lead to a next-generation GPU that will properly put Intel in the AI training game. (It’s worth remembering that this is a game in its very early innings.)

Xeon 6P Gives Intel Something It Needed

Intel needed to make a significant statement in this latest datacenter launch. With Xeon 6P, it did just that. From process node to raw specs to real-world performance, the company was able to demonstrate that it is still a leader in the datacenter market.

While I expect AMD to make a compelling case for itself in a few weeks with its launch of Turin, it is good to see these old rivals on more equal footing. They make each other better, which in turn delivers greater value to the market.

The post Xeon 6P And Gaudi 3 — What Did Intel Deliver? appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending October 11, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-11-2024/ Mon, 14 Oct 2024 13:00:18 +0000 https://moorinsightsstrategy.com/?p=43391 MI&S Weekly Analyst Insights — Week Ending October 11, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending October 11, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our analyst insights roundup, collecting some of the key insights our analysts have developed based on the past week’s news.

Pixel 9 Night Sight panorama of Padres game

Anshel Sag, our principal analyst for mobile devices and personal computing—and a terrific photographer—made this glorious panoramic photo of Petco Park in San Diego using Google’s new Pixel 9 phone. (You can read his review here.) Unfortunately, his beloved Padres—also the hometown team of our VP and principal analyst Melody Brue—lost their National League Division Series to the Dodgers.

It’s another busy week for our team! 

This week, the team is attending various tech events nationwide. Melody is at AdobeMAX in Miami, while Matt, Paul, and Anshel are in Bellevue, Washington for Lenovo’s Global Analyst Summit & Tech World. New York is home to two significant events: Will is at Blackberry’s Analyst Day, and Matt, Robert, and Jason are participating in IBM’s Analyst Day.

On Thursday, October 17, Melody will join the RingCentral team on the webinar “Revealing the AI Communications Strategies That Work” where she’ll share her vision for the future of AI in UC. It’s free to attend!

Last week was very productive, with team members covering multiple events. Robert visited Los Angeles for Teradata, and Melody attended Zoomtopia in San Jose and SAP TechEd virtually. Bill was in Austin for Embedded World NA. Will traveled to Las Vegas for MWC Americas and the T-Mobile for Business Unconventional Awards. Patrick, Anshel and Matt took part in AMD’s Advancing AI Event in San Francisco, while Jason and Robert were in Seattle for the AWS GenAI Summit.

Looking ahead to next week, the team continues its tech event travels. Patrick and Will are set to attend Qualcomm’s Snapdragon Summit in Maui, Melody returns to Florida for WebexOne in Ft Lauderdale, and Matt will attend the RISC-V Summit virtually.

Our MI&S team published 15 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Adobe Express, AI networking, AMD, Astera Labs, AI, Marriott, cybersecurity, NVIDIA, Samsung, the 5G Americas Summit, and T-Mobile.

MI&S Quick Insights

The other day I had a great talk with Diya Wynn from Amazon Web Services. Wynn has been a key evangelist for setting up guardrails for generative AI. AWS’s own Amazon Bedrock Guardrails is a very interesting service that enables responsible AI spanning multiple LLMs in an enterprise. However, Wynn recently took on an expanded role in AWS’s advocacy for responsible AI, in which she is helping educate both federal and state governments in shaping good AI policies. What stuck out from the conversation is that AI has some unique properties when it comes to government policy. The first is that the pace of innovation for GenAI has been faster than that of many new technologies, which is tough for governments to handle since they tend to move much more slowly. The second is the set of concerns associated with possible future AI outcomes, including job losses or civil upheaval. The best part was a discussion about the role the government plays in innovation and the potential for providing the right infrastructure (such as an updated power grid) so that AI can continue to grow. Technologists don’t always appreciate this type of collaboration, but I think it’s great that AWS is taking this on.

A hot topic last week was the pricing models used for AI agents. There are many different approaches out there. For instance, with its Agentforce offering, Salesforce will be charging a fee for every time an agent runs. Others will still use a capacity-based model or a per-user subscription. While the merits of each of these can be debated, a more critical nuance in all of this is whether vendors will be able to execute these strategies from a systems or relationship management perspective. It’s going to be a challenge for everyone moving forward.

Evaluating large language models is important for determining their capabilities and effectiveness. Traditionally, though, this evaluation has required a reliance on human judgment or expensive manual annotations. A group of academics has now published a research paper that addresses these challenges with a process called TICK (for “Targeted Instruct-evaluation with Checklists”). TICK is an automated and interpretable evaluation protocol that utilizes LLMs to generate instruction-specific checklists that break down complex instructions into yes/no questions, making the evaluation process more structured and objective.

The checklist format provides a clear and understandable breakdown of the evaluation criteria. TICK has been shown to significantly increase the agreement between LLM judgments and human preferences. It also streamlines the evaluation process by automating checklist generation. Having a structured checklist format reduces subjectivity, improves consistency in evaluations, and provides insights into the LLM’s reasoning and understanding of instructions.
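
A minimal sketch of what a TICK-style evaluation loop could look like is below. The call_llm function is a placeholder for whatever chat-completion client you use, and the prompts are my own simplified paraphrases of the idea rather than the paper's exact templates.

```python
# Sketch of checklist-based LLM evaluation in the spirit of TICK (not the authors' code).

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM judge and return its text response."""
    raise NotImplementedError

def generate_checklist(instruction: str) -> list[str]:
    # Ask the judge model to decompose the instruction into yes/no questions.
    raw = call_llm(
        "Break the following instruction into a short checklist of yes/no "
        f"questions that a correct response must satisfy:\n{instruction}"
    )
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]

def score_response(instruction: str, response: str) -> float:
    checklist = generate_checklist(instruction)
    passes = 0
    for question in checklist:
        verdict = call_llm(
            f"Instruction: {instruction}\nResponse: {response}\n"
            f"Question: {question}\nAnswer strictly YES or NO."
        )
        passes += verdict.strip().upper().startswith("YES")
    return passes / max(len(checklist), 1)   # fraction of checklist items satisfied
```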

Tesla’s We Robot event finally showed to the world where Tesla is going with its autonomous vehicles and its robotics Optimus platform. Based on the market’s reception on Friday, it seems that people are not convinced of Tesla’s timelines or the viability of its autonomous vehicles, especially since the vehicles will be two-seaters—compared to Waymo’s five-seaters. Additionally, Waymo is already delivering 100,000 rides per month and continues to scale up every month at an even higher pace. I believe that Tesla’s offering is too little too late, and that a two-seater is not a great fit for many applications. That said, pricing will be important. Additionally, Tesla neglected to mention that most of the Optimus robot demos it showed people were not powered by AI, but instead teleoperated by pilots offsite. Robotics has a long way to go, but it’s quite disingenuous of Tesla to present its robots that way.

AWS and Salesforce have teamed up to offer a new contact center solution that integrates Salesforce Contact Center with Amazon Connect. This partnership aims to make it easier for businesses to implement and manage their contact center operations, with a focus on faster deployment, reduced complexity, and improved AI capabilities. Essentially, it combines the strengths of Salesforce’s CRM with Amazon’s cloud-based contact center technology. This move also reflects a broader trend of closer integration between CCaaS (contact center as a service) and CRM platforms, driven by customer demand for more unified and efficient solutions.

At its Advancing AI 2024 Event, AMD officially launched the 5th Generation EPYC processor, codenamed “Turin.” Turin launches at a time when AMD has seen its share of the server CPU market increase to 34% while its chief rival, Intel, looks to find its footing. 5th Gen EPYC will launch in two variants. One of them, labeled 5c, targets scale-out and cloud workloads with up to 192 cores; the other, labeled 5, addresses traditional scale-up workloads with up to 128 cores. As expected, the chip will ship with a richness of capabilities: 12 channels of memory, 128 lanes of PCIe, enhanced security, and up to 5GHz clock speed. Also as expected, the OEM community was lined up to talk about their partnerships with AMD.

Having gone through the initial launch of EPYC back in 2017, I think it’s incredible to see how the tides have turned. In 2017, Opteron (EPYC’s predecessor) had less than 2% market share. It had secured only a handful of OEM platforms, and enterprise customers wouldn’t even take a meeting with the company. Fast-forward seven years, and that 34% market share means AMD’s datacenter business now contributes half of the company’s revenue. More than 950 cloud instances are powered by EPYC, and more than 350 OEM platforms are built on this processor. And EPYC is just starting to penetrate the enterprise, as most of its success so far has been found in the hyperscale space.

The wind is at AMD’s back. Congratulations to the team—from the design engineers to the marketeers.

In addition to EPYC, AMD also launched its Instinct MI325X GPU, targeting the AI training and inference space as well as the HPC market. Along with the MI325X comes ROCm 6.2—the company’s software stack that enables customers and ecosystem partners to build on top of the Instinct GPU. Like 5th Gen EPYC, the MI325X ships with lots of memory (256GB HBM3E), lots of memory throughput (6TB/s), and incredible performance. So much so that the company is able to demonstrate inference advantages over the market leader, NVIDIA. Additionally, the company is showing near parity on the training front.

Even though we always view benchmark and performance claims with a tinge of cynicism, the fact that AMD is able to demonstrate leadership in some applications is a big deal.

I think that what the company is doing with ROCm is perhaps the biggest enabler for Instinct MI325X’s success. With ROCm 6.2, the company has not only simplified the process of developing software for AMD GPUs, but also greatly increased performance. In fact, when comparing against ROCm 6.0, the company is claiming a 1.8x improvement in training performance and a 2.4x improvement in inference.

At the event, AMD brought out partners such as Oracle and Meta to demonstrate the growth of Instinct in the market. It is clear this GPU is making both performance and market share gains against NVIDIA.

At the Commvault SHIFT event in London, the company made several announcements, including the launch of Cloud Rewind, a cyber resilience solution built on technology from its acquisition of Appranix. This feature gives organizations enhanced, automated recovery capabilities, allowing them to quickly rebuild cloud applications after an attack. Commvault also introduced enhanced solutions for Amazon Web Services users, offering direct support for AWS environments to improve the protection of Amazon S3 data, as well as protection for Google Workspace, including Gmail, Google Drive, and shared drives. Additionally, Commvault’s partnership with Pure Storage adds an extra layer of security for enterprises using Pure’s storage solutions, while the company’s recent acquisition of Clumio further strengthens its capabilities in AWS environments. For more details, check out my latest Forbes article, co-authored with Patrick Moorhead, CEO and chief analyst of Moor Insights & Strategy: Commvault Enhances Cyber Resilience With Cloud-First Focus.

Marriott Hotels suffered three significant data breaches between 2014 and 2020, affecting over 344 million customers, partially due to its acquisition of Starwood Hotels & Resorts. The company has since settled with the Federal Trade Commission and nearly all U.S. states. However, some cybersecurity experts are raising concerns over the terms of these settlements. Check out the linked article, which includes my thoughts on the impact and broader implications of the Marriott breaches.

At SAP TechEd 2024, SAP announced updates to its AI capabilities, focusing on its generative AI copilot, Joule. Joule will now include AI agents that can collaborate to automate complex tasks such as dispute resolution and financial accounting. This move towards increased automation aligns with the broader trend of AI impacting entry-level jobs; McKinsey estimates that 12 million jobs may be affected by 2030. While SAP emphasizes increased efficiency and employee focus on less repetitive tasks, the potential for job displacement due to AI, even in white-collar roles, should be considered. SAP is also introducing a Knowledge Graph solution to link data with business context, aiming to improve decision-making and AI development. These changes and new AI features for developers in SAP Build show SAP’s ongoing efforts in business AI.

In addition, SAP has already achieved its goal of upskilling 2 million people worldwide by 2025. This milestone suggests a commitment to addressing the digital skills gap and preparing the workforce for a future where AI plays a more significant role in various jobs, potentially mitigating some of the displacement caused by AI-driven automation.

Adobe has introduced a new tool to increase transparency and trust in digital content. The Content Authenticity web app, scheduled for public beta release in Q1 2025, allows creators to attach Adobe’s Content Credentials to their work, providing verifiable information about the content’s origin and edit history. With this initiative, Adobe seeks to address concerns surrounding misinformation and unauthorized content use, particularly in the context of rising AI-generated content and deepfakes. The app also offers creators greater control over how their work is used, including the ability to specify whether it can be used for AI model training. Additionally, Adobe is releasing a Content Authenticity extension for Chrome (available in beta now) to enable users to view these credentials easily. While this tool’s full impact and uptake remain to be seen, the tool represents a significant step towards fostering a more accountable and transparent digital media landscape. Currently, creators can utilize Content Credentials within existing Adobe Creative Cloud applications.

Smartsheet has updated its work management platform with a focus on improving user experience and adding new features such as “collections” for secure file sharing and a “file library” to simplify collaboration. The platform has a new look, with better data visualization tools and an improved table view for working together in real time. These changes align with Smartsheet’s focus on growing subscription revenue and expanding its customer base. By making the platform more user-friendly and efficient, the company should attract new users and encourage existing ones to upgrade or renew their subscriptions to access advanced features.

Oracle announced new AI features for its Fusion Cloud Service and Field Service, emphasizing a shift in service organizations. Jeffrey Wartgow, VP of product management for the Oracle CX Service, stated that these AI tools will transform, not replace, service teams. “Workers will curate knowledge, optimize automation, and address AI failures,” Wartgow explained, highlighting the need for human intervention in complex situations. This marks a shift towards proactive service design, demanding more strategic and analytical service teams.

Oracle also affirmed its commitment to accessible AI, including these advancements in existing service licenses. “We want service costs to go down,” Wartgow said. These new capabilities empower organizations to balance automation with a human touch, which should provide efficient customer service.

Last week I attended Teradata’s Possible 2024 event in Los Angeles as well as the AWS Analyst Summit in Seattle. Part of the focus was on managing the challenges that AI and data present across different industries. AI is projected to contribute $15.7 trillion to the global economy by 2030. At the same time, 65% of executives prioritize sustainability, emphasizing the need to align AI’s growth with environmental goals. Effective data management is huge, as 80% of businesses report revenue increases from real-time analytics. While many vendors claim to offer sustainability solutions, the question remains whether these solutions address the full scope of customer needs for end-to-end carbon footprint transformation. This involves the entire production cycle—from sourcing raw materials to operational processes, transportation, and waste management—affecting all departments, suppliers, partners, employees, and customers. Additionally, companies must navigate the external factors of regulations and public reputation. I’ll be providing further analysis on sustainability’s impact on industries in my areas of specialty.

The AWS Analyst Summit was a great preparation for the upcoming AWS re:Invent conference. There was an informative discussion on Amazon Q, AI, data, ERP, SCM, and industries (specifically automotive). More to come on this in December when re:Invent rolls around.

Cloudera has announced its AI Inference service, powered by NVIDIA NIM microservices as part of the NVIDIA AI Enterprise platform. This service enables enterprises to efficiently deploy and manage large-scale AI models for both on-prem and cloud workloads to deliver on the potential of GenAI from pilot phases to production. Key features include auto-scaling, high availability, real-time performance monitoring, and integration with CI/CD pipelines via open APIs. The service also ensures strong enterprise security with access control and auditing and supports controlled updates through A/B testing and canary rollouts, providing a scalable and secure AI deployment solution.

Qualcomm recently announced its Networking Pro A7 Elite platform, which infuses GenAI and Edge AI with Wi-Fi 7. Users stand to benefit from performance improvements as well as personalized application and service delivery. What stands out for me is the ability to use the Edge AI feature to support privacy controls on infrastructure, potentially enhancing security outcomes by complementing endpoint protection.

XBOX Cloud gaming will let users stream their own games starting in November. This means that users will be able to stream games beyond the XBOX Game Pass Library, making the service even more useful to gamers who might have quite a broad library of titles. I believe that this is a sensible continuation of Microsoft’s expansion of capabilities for its XBOX gaming services. It also comes right on the heels of a court ruling in an Epic Games case that forces Google to stop requiring Google Play billing for apps in the Play Store starting on November 1.

EWNA — I attended the inaugural Embedded World North America conference in Austin last week. Embedded World is now international, with 2024 conferences in Nuremberg, Shanghai, and Austin. With about 3,500 attendees and 180 exhibitors, the inaugural EWNA offshoot was much smaller than its parent Nuremberg conference (32,000 attendees). Still, I was impressed with the coverage and quality of EWNA presentations, exhibitors, and attendees. In engineering terms, the conference’s signal-to-noise ratio was excellent. The second EWNA conference is slated for next year in Anaheim, California, and I plan to be there.

Silicon Labs CEO Matt Johnson and CTO Daniel Cooley delivered the opening keynote at EWNA. I agree with Johnson’s list of four developments that determine IoT’s potential: (1) robust platforms, (2) business models with significant ROI, (3) connectivity (with Matter and Sidewalk as examples), and (4) symbiosis between AI and IoT. This analysis set the stage for the introduction of the company’s Series 3 SoCs. Series 1 optimized embedded processing, Series 2 added connectivity, and Series 3 is a complete IoT platform built for inferencing, with post-quantum security and extensible memory and storage. Cooley gave us one of the best quotes from EWNA: “You can’t scale IoT on bare metal.” To show that Silicon Labs is all-in on platform-based IoT, he held up a sample of a new 22nm Series 3 chip. Embedded product companies that use off-the-shelf RTOSes (and OSes) pre-integrated with silicon platforms can concentrate on writing application code and minimize (or eliminate) the cost, time, security risks, and technical debt of creating custom system software. The economic benefits of this strategy outweigh the additional hardware cost for all but the most cost-constrained, power-limited, or air-gapped products. The company published technical details about Series 3, and I’ll provide insights in future posts and papers.

Qualcomm hosted an Embedded World NA event to introduce “The Age of Industrial Intelligence.” Nakul Duggal, general manager of the company’s automotive, industrial, and cloud business, walked the audience through the company’s industrial IoT strategy in detail—architecture, technologies, connectivity, processors, and AI platforms (Qualcomm IQ series). I was impressed with the company’s sharp focus on key vertical industries. IoT is a large set of horizontal technologies that are customized and sold into vertical markets. Most of the ingredient technologies are mature, but not the customization step. Customization is responsible for most of the cost and complexity of IoT deployments. To address this shortcoming, Mr. Duggal introduced the “chassis” concept—a set of use cases, products, enabling technologies, development tools, and system software unique to each vertical industry. The catchphrase “Industrial chassis for every vertical” means that each chassis supports customer-specific adaptation and differentiation, much like a car chassis supports multiple bodies. This approach reduces the need for extensive industry-specific and customer-specific development, and Qualcomm’s impressive list of “scaling partners” confirms the attractiveness of this approach. I’ll have much more to say about this in a future article.

Qualcomm and STMicroelectronics announced a strategic collaboration agreement that combines STM’s microcontrollers with Qualcomm’s wireless connectivity solutions. STM plans to start with a modular approach, integrating Qualcomm Wi-Fi/Bluetooth/Thread combo SoCs with various STM32 microcontrollers. While STM’s existing portfolio offers Thread and Bluetooth combinations, Qualcomm integrates all three into a single solution with coexistence logic. The first wave of collaborative products hits the market early next year, and STM aims to extend the roadmap “over time” to include cellular connectivity for industrial IoT applications. The combined products fill STM’s connectivity gaps and add mature microcontroller options to Qualcomm’s portfolio.

NXP recently hosted a Smart Home Innovation Lab tour on the company’s Austin campus. NXP has long recognized the importance of multi-vendor interoperability; it sponsored Thread and Matter from the start, and is now funding the hard work required to break down the deployment and usability barriers that impede growth in smart home technology.

Google has launched NotebookLM, an experimental AI tool that converts documents into engaging podcasts, offering a new way to consume information. The AI technology summarizes documents and generates discussions hosted by AI voices, making even complex texts such as legal briefs and academic papers more accessible for those who prefer auditory learning or have limited time. However, users should be aware of potential inaccuracies and biases in AI-generated summaries. Inaccuracies can range from subtle misinterpretations of the original text to outright hallucinations of information, particularly with nonfiction content. Beyond addressing these concerns, it seems like a fun tool, and I’m looking forward to trying it out.

Many in the tech industry, including at Google itself, seem to think that the breakup of the company is coming due to actions by regulators in the U.S. and Europe. This breakup would force Chrome and Android to be set apart from the company’s search business to avoid anticompetitive behavior where the company may prefer its own services above others. While it remains to be seen how this would work, I have been getting a sense that the company is already compartmentalizing certain apps and services in a way that would prepare it for such a split. Although Google would become a smaller company if this did come to pass, I also think it would potentially allow the company to focus on other businesses and give it a chance for more growth.

To demonstrate the superior speed and circuit quality of its Qiskit software stack for quantum computing, IBM recently conducted extensive tests against leading quantum software development kits. Qiskit was the clear overall winner; it was faster, successfully completed more tests than any other SDK, and created circuits with fewer two-qubit gates. More specifically, Qiskit was 13x faster and 24% more efficient than TKET, the second-best-performing SDK. Even better, IBM is releasing an open-source benchmarking suite called Benchpress that will allow users to perform their own performance evaluations and gain important insights into how other SDKs perform relative to Qiskit. (For more on IBM’s work with Qiskit, you can check out my recent article in Forbes.)
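
As a small illustration of the kind of metric being compared here, the sketch below builds a toy circuit, transpiles it, and counts two-qubit gates; fewer two-qubit gates generally means a higher-quality compiled circuit. It assumes a recent Qiskit release and is not part of IBM's Benchpress suite.

```python
from qiskit import QuantumCircuit, transpile

# Toy 4-qubit GHZ-style circuit
qc = QuantumCircuit(4)
qc.h(0)
for i in range(3):
    qc.cx(i, i + 1)

# Compile to a constrained basis, roughly what an SDK benchmark exercises
compiled = transpile(qc, basis_gates=["rz", "sx", "x", "cx"], optimization_level=3)

# Two-qubit gate count is a common quality metric for compiled circuits
two_qubit_gates = sum(1 for inst in compiled.data if len(inst.qubits) == 2)
print(compiled.count_ops())
print("two-qubit gates:", two_qubit_gates)
```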

Quantinuum researchers have developed a method for gradient computation of quantum algorithms implemented on linear optical quantum computing platforms. Photonic quantum computers use photons to perform calculations, and it is difficult to compute the gradient values needed to optimize their performance. The methods normally used for calculating gradients in gate-based quantum computers don’t work with photonic computers because of the particular properties of the light being used. Quantinuum researchers used a photonic parameter-shift rule to overcome this limitation and provide gradient computation for linear optical quantum processors.

The new method is efficient because the amount of work required is directly proportional to the number of photons being used. It also works well with variational quantum algorithms (VQAs), which are optimized using gradients. The researchers tested the new method on quantum-chemistry and generative-modeling tasks and determined that it performed better than other gradient-based and gradient-free methods. Although Quantinuum’s primary interest is trapped-ion quantum computers, it is possible it could be interested in using photonics for transmitting quantum information over long distances. Quantum computers can act as powerful nodes in a quantum network.
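
For context, the general parameter-shift idea is that a gradient can be obtained by evaluating the same circuit at shifted parameter values, as in the toy example below. This shows the standard rule used on gate-based hardware; Quantinuum's photonic parameter-shift rule is a different, photonics-specific construction described in the paper, so treat this only as an illustration of the concept.

```python
import numpy as np

def expectation(theta: float) -> float:
    # Toy stand-in for a measured expectation value, e.g. <Z> after an RY(theta) rotation
    return np.cos(theta)

def parameter_shift_grad(f, theta: float, shift: float = np.pi / 2) -> float:
    # Standard two-term parameter-shift rule for gates generated by Pauli operators
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
print(parameter_shift_grad(expectation, theta))   # ~ -sin(0.7)
print(-np.sin(theta))                             # analytic derivative, for comparison
```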

Intel is leaning into its extensive research efforts, silicon depth, and strong ecosystem and partnerships to deliver silicon-level secure AI at scale. In doing so, the company is providing enterprises with the ability to extend protection for datacenters and clients from cloud to network edge with both hardware and software. The success of Intel’s efforts can be measured by significant improvements in security controls, as well as the higher resilience of new AI PCs and datacenter applications. I recently published a Moor Insights & Strategy research paper that goes into more depth on Intel’s secure AI efforts.

MediaTek’s new Dimensity 9400 has adopted the latest Arm v9.2 CPU cores as well as GPU IP from Arm. The new chip also follows the Dimensity 9300 in abandoning the “little” cores and going with all-“big”-core designs. This does have a small impact on battery life, but because the big cores have become so power-efficient, the difference is negligible and results in better CPU benchmark performance, with MediaTek clocking up to 3.63 GHz on TSMC’s N3E process node. The result is a staggering increase over the Dimensity 9300 of up to 35% in single-core performance and 28% in multicore performance. The GPU is also expected to be up to 41% faster, while also boosting ray tracing performance by 40%. Although it isn’t built on Arm IP, the NPU is also improved, with 35% better energy consumption. Overall, the Dimensity 9400 looks to be yet another competitive flagship offering from MediaTek, and I expect we’ll see designs from Chinese OEMs using this chip very soon.

AMD launched the new Ryzen AI Pro 300 series at its Advancing AI event, following a soft launch with HP last month that stopped short of a full unveiling. At the event, AMD also announced a new design with Lenovo for the storied ThinkPad line, which is a huge win for AMD in building the enterprise credibility of the Ryzen Pro line. AMD also says that it has more than 100 design wins with the Ryzen AI Pro line through the end of 2025, which could take a chunk of market share from Intel if units move in volume next year, aligned with the end of Windows 10 support.

Amazon Web Services has announced that it will remain the Seattle Seahawks’ official cloud provider, as well as its partner for machine learning, artificial intelligence, and generative AI. As the world of sports continues to embrace tech innovation, the Seahawks can take advantage of AWS’s breadth and depth of technologies. The Seahawks will use AWS’s Bedrock AI-powered system to automate content distribution by transcribing, summarizing, and distributing press conferences to millions of fans across online, social, and mobile channels in English, German, and Spanish.

T-Mobile recently announced a 5G on Demand solution that is designed to make it easier to deploy cellular infrastructure for portable use cases. The applications are limitless, including pop-up retail, special events, and more. The company claims that the private cellular network platform can be deployed in under 48 hours, and it is expected to be commercially available by the end of the year. T-Mobile has already leveraged the core components of 5G on Demand to support recent PGA men’s and women’s events, and at MWC Las Vegas 2024, T-Mobile for Business awarded CBS and Sony a first-place prize at its Unconventional Awards event to recognize the accomplishment.

Research Papers Published

Citations

Adobe / Adobe Express / Melody Brue / CFO Tech
Adobe Express expands with AI & key integrations

Adobe / Adobe Express / Melody Brue / Mi3
Adobe Express expands enterprise capabilities, delivers new integrations with Slack, Box, Google and more

AI Networking / Will Townsend / SDX Central
Exploring the new world of AI networking

AMD / AI Chip / Patrick Moorhead / The Rio Times
AMD’s New A.I. Chip Announcement Leads to $11 Billion Market Value Drop

AMD / AI Chip / Patrick Moorhead / OpiniPublik
AMD reveals AI-infused chips within Ryzen, Intuition and Epyc manufacturers

AMD / AI Chip / Patrick Moorhead / Venture Beat
AMD unveils AI-infused chips across Ryzen, Instinct and Epyc brands

Astera Lab / AI / Patrick Moorhead / Street Insider
Astera Labs Introduces New Portfolio of Fabric Switches Purpose-Built for AI Infrastructure at Cloud-Scale

Marriott / Cybersecurity / Robert Kramer / CSO
Do the Marriott cybersecurity settlements send the wrong message to CISOs, CFOs?

NVIDIA / Stock / Patrick Moorhead / Yahoo! Finance
Nvidia stock eyes record high as AI boom continues

Samsung / 5G Americas Summit / Anshel Sag / Samsung
Samsung to Participate in 5G Americas Industry Analyst Forum

T-Mobile / 2024 Unconventional Awards / Will Townsend / Aventiv
T‑Mobile Celebrates Innovative Customers at Third Annual Unconventional Awards

TV Appearances: 

AMD / AI Chip / Patrick Moorhead / Yahoo! Finance
Why Wall Street wasn’t excited by AMD’s new AI chip

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Pixel Watch 3 (Anshel Sag)
  • Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag, Patrick Moorhead)
  • Blackberry Analyst Day, October 16, New York City (Will Townsend)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer, Jason Andersen)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Matt Kimball, Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson, Matt Kimball)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)
  • Acumatica Summit, January 26-29, Las Vegas (Robert Kramer)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending October 11, 2024 appeared first on Moor Insights & Strategy.

]]>
Datacenter Podcast: Episode 31 – Talking AMD UEC NIC & EPYC & MI300, OpenAI, Qualcomm, IBM https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-31-talking-amd-uec-nic-epyc-mi300-openai-qualcomm-ibm/ Fri, 11 Oct 2024 17:02:18 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=43280 Join the Datacenter team for episode 31, as they talk AMD UEC NIC & EPYC & MI300, OpenAI, Qualcomm and IBM

The post Datacenter Podcast: Episode 31 – Talking AMD UEC NIC & EPYC & MI300, OpenAI, Qualcomm, IBM appeared first on Moor Insights & Strategy.

]]>
On this week’s edition of “MI&S Datacenter Podcast,” hosts Matt, Will, and Paul analyze the week’s top datacenter and datacenter edge news. This week we are talking AMD UEC NIC & EPYC & MI300, OpenAI, Qualcomm, and IBM.

Watch the video here:

Listen to the audio here:

2:22 AMD Is Bringing Sexy Back To Networking
10:49 OpenAI o1 Is PhD Smart
20:38 Mo’ Cores Mo’ Cache – AMD PI
28:26 Qualcomm Gets Edgy With Campus & Branch Connectivity Infrastructure
32:48 Europe Gets A Quantum Data Center
39:55 Mind The (AI) Gap – AMD PII

AMD Is Bringing Sexy Back To Networking
https://x.com/WillTownTech/status/1844465726226301209

OpenAI o1 Is PhD Smart
https://openai.com/index/learning-to-reason-with-llms/

Mo’ Cores Mo’ Cache – AMD PI
https://www.amd.com/en/products/processors/server/epyc/9005-series.html

Qualcomm Gets Edgy With Campus & Branch Connectivity Infrastructure
https://x.com/WillTownTech/status/1843375274781749430

Europe Gets A Quantum Data Center
https://www.ibm.com/quantum/blog/europe-quantum-datacenter-software

Mind The (AI) Gap – AMD PII
https://www.amd.com/en/products/accelerators/instinct/mi300.html

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 31 – Talking AMD UEC NIC & EPYC & MI300, OpenAI, Qualcomm, IBM appeared first on Moor Insights & Strategy.

]]>
Pure Storage Keeps Removing Complications From Enterprise Data Storage https://moorinsightsstrategy.com/pure-storage-keeps-removing-complications-from-enterprise-data-storage/ Fri, 11 Oct 2024 14:43:31 +0000 https://moorinsightsstrategy.com/?p=43674 At its London event this week, Pure Storage released a number of updates across its portfolio aimed at driving improvements to performance, cost and simplicity.

The post Pure Storage Keeps Removing Complications From Enterprise Data Storage appeared first on Moor Insights & Strategy.

]]>
Pure Storage keeps pursuing its mission to uncomplicate data storage for enterprise IT organizations. (Image: Getty Images)

This week’s Pure Accelerate London event kicked off with a bang. Pure Storage released a number of updates across its portfolio aimed at driving improvements to performance, cost and simplicity. And, of course, AI had to be part of this update release lest the tech gods be upset.

There was quite a bit in this release cycle to unpack and explore—and the following few sections will do precisely that.

A Primer On Pure

Since its founding in 2009, Pure Storage has been focused on modernizing the enterprise storage environment. It was the first storage company to support only flash storage, and it pioneered storage-as-a-service and the cloud operating model. The company has also been at the forefront of the shifting economics of storage consumption with its Evergreen program.

In a nutshell, Pure is the embodiment of the modern storage company. For folks in the IT business for a while, some of the changes Pure has driven can seem to be borderline heresy: No spinning media? No tape? By Zeus, what will we do?

Yet Pure’s approach is how IT consumes storage now—cloud connectivity, cloud operating models and cloud economics (the way cloud economics is supposed to operate). The days of dedicated IT teams performing very specific functions in the datacenter are firmly in the rearview mirror. When an embedded development team in a business unit requires a development environment to be created, they want it now—not in four weeks after six different specialists meet to spin up the environment. Otherwise, they will simply go to the public cloud.

Adding to this tension is a modern IT workforce that consumes and interacts with technology differently than the generation that precedes them. These are smart IT folks that grew up on apps and the cloud.

Pure seems almost singularly focused on abstracting all the complexity away from storage management. This is critical, as storage is a foundational building block for our IT environments. And Pure attacks this challenge from every angle.

Given this context, it’s no surprise that Pure’s latest updates cover hardware, software and services.

Real-Time Enterprise File Removes Barriers

Here’s the setup. The legacy way of file storage is really legacy—like 20-plus years old. Teams would design and build a storage architecture and grow it over time. In this scenario, what inevitably happens is that silos grow and are managed independently of one another. One day, IT realizes just how inflexible this is.

In today’s world, storage has to be more flexible. An enterprise’s AI and analytics apps want access to all of the data that exists across the organization, regardless of where it resides and regardless of where the apps using it are running. What’s needed is a single architecture that accesses data around the enterprise with a single control plane. This, in a nutshell, is what Pure’s Real-time Enterprise File does.

Modern file systems benefit from the Pure storage platform. (Image: Pure Storage)

With Real-time Enterprise File, all storage is seen as a global pool (think clustering with no limitations). This is all managed as a single architecture from a single control plane. What the company has introduced is a realization of its cloud vision for storage—only it’s sitting on your premises.

As new workloads and applications are introduced into the environment, Pure’s implementation of zero-move tiering will be extremely helpful in improving resource utilization and efficiency. So, what is zero-move tiering? It’s better to start with what tiering is.

Storage tiering is a way of prioritizing your storage, data and applications so that the most mission-critical applications have access to the fastest storage, while less critical applications use appropriately performant storage. For example:

  • Tier 0 data storage is for workloads that require the fastest, lowest-latency storage available. Think real-time stock trading and other financial workloads.
  • Tier 1 would be for hot data, such as customer transactional data. A good example of this would be the retail platform that handles checking out of a store.
  • Tier 2 is for warm data where performance is required but not in real time. A good example is a back-office ERP application that employees access.
  • Tier 3 is for cold data—i.e., data that is archived.

Tiering can vary from organization to organization, but the concept is the same: connect the most important workloads and applications to the fastest and best storage. In the past, doing this required a lot of work from IT. With zero-move tiering, that work disappears.
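
As a simple illustration of what the traditional approach looks like, the sketch below pins workload classes to tiers up front; this is my own generic example, not Pure's implementation. Zero-move tiering is attractive precisely because it makes this kind of static mapping, and the data migrations behind it, unnecessary.

```python
# Classic static tier map (illustrative only).
TIER_POLICY = {
    "real_time_trading": 0,   # lowest latency, highest cost
    "retail_checkout": 1,     # hot transactional data
    "back_office_erp": 2,     # warm data
    "archive": 3,             # cold data
}

def tier_for(workload: str) -> int:
    # Unknown workloads default to warm storage.
    return TIER_POLICY.get(workload, 2)

print(tier_for("retail_checkout"))   # 1
```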

Thanks to the single-layer architecture and global storage pool, all data is already together. In other words, there are no data store tiers. In this case, Pure’s FlashBlade product intelligently prioritizes mission-critical workloads (and data) for processing, and no data moves from one storage class to another. Instead, the compute and networking resources dictate the tiering class.

Zero-move tiering on FlashBlade. (Image: Pure Storage)

To make this a little easier to deploy and manage, Pure has extended its AI copilot (announced at Accelerate in Las Vegas) to manage file services. This goes more directly to the earlier point about the modern IT organization consisting of a lot of smart people and not just specialists. With Pure’s AI copilot, IT folks can manage their Pure storage environment through natural language, not strange semantics. I am a fan of the copilot concept in general and how Pure has developed its own. It makes everybody a specialist and can turn specialists into experts through prompt-level engineering.

VM Assessment Tool

Pure also announced a VM assessment tool to help admins better manage their virtualized environments. Virtualized environments have forever promised to drive up utilization and overall datacenter efficiency. For many organizations, the reality is far different: too many virtual machines run on servers that are not even close to being utilized to their full extent. This tool, when available, will be a good way for organizations to become more efficient.

Given the recent VMware turbulence, this could be a great help for organizations in the midst of figuring out their go-forward strategy. Not necessarily for moving away from VMware’s VCF offering, but certainly for rationalizing licensing and deployments.

Universal Credits

Finally, Pure has introduced Universal Credits to the market. Here’s the scenario: I oversubscribe to one service and undersubscribe to another as an IT organization. This happens all the time. In one scenario, I’ve got to shake the couch cushions to find a budget. In the other, I’m throwing money out the window. With this service, I can use my credits across the Pure portfolio—Evergreen//One, Pure Cloud Block and Portworx. Further, if I end my subscription term and have extra credits, I can carry those credits forward (with some conditions). This is pretty cool.

Here’s what I would like to see at some point. For some organizations, IT budget centers come from different funding buckets and are managed separately. A good example of this is when I was an IT executive in state government. There were 39 or so agencies with 39 or so IT budgets. What would be great is if I could share my credits with a sister agency to leverage Pure’s services even better. But hey, that’s just wishful thinking on my part.

What To Make Of Pure’s Announcements?

At the bottom of every Pure PowerPoint deck is “Uncomplicate Data Storage, Forever.” From my perspective, this is exactly what the company is doing in every release of updates and services across its portfolio: making life easier for IT. While the majority of my words here have described Pure’s Real-time Enterprise File solution, it’s the combination of all of these services (plus the launch of the entry-level FlashBlade//S100) that delivers a lot of value to IT across operations, organization and finances.

There is a reason why Pure’s revenue was up significantly year over year while others (apart from NetApp) saw down quarters in their storage portfolio. And that reason is simple: IT wants its storage consumption to be like its cloud consumption—frictionless and easy. Further, it wants to do so with the promise of cloud-style economics.

It is fair to say that Pure’s strategy is spot-on, and its message is landing with the market. The only question is, what’s next?

The post Pure Storage Keeps Removing Complications From Enterprise Data Storage appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending October 4, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-4-2024/ Mon, 07 Oct 2024 18:52:06 +0000 https://moorinsightsstrategy.com/?p=43114 MI&S Weekly Analyst Insights — Week Ending October 4, 2024. A wrap up of what our team published during the last week.

The post MI&S Weekly Analyst Insights — Week Ending October 4, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

Welcome to this edition of our analyst insights roundup, collecting some of the key insights our analysts have developed based on the past week’s news.

Quantinuum Model H2 chip

This is a quantum computer chip from Quantinuum, one of Microsoft’s partners in its Azure Quantum project. This collaborative effort brings together quantum, AI, and high-performance computing to accelerate breakthroughs in quantum computing as well as other areas such as chemistry and materials science. Our own Paul Smith-Goodson has been covering this area for years, from household names such as Microsoft and IBM to startups like Quantinuum and Atom Computing.

As usual, our team is busy this week! Robert is in Los Angeles at Teradata. Melody is attending SAP’s TechEd event virtually and will be in San Jose for Zoomtopia. Bill is in Austin for Embedded World NA. Will is attending the MWC Americas and serving as a judge for the T-Mobile for Business Unconventional Awards event in Las Vegas. Patrick and Matt are attending AMD’s Advancing AI Event in San Francisco, and Jason and Robert will be at the AWS GenAI Summit in Seattle.

Last week, Robert attended the Infor Annual Summit in Las Vegas and LogicMonitor’s event in Austin. Melody was at the Cadence Fem.AI Summit in Menlo Park, California, and Microsoft’s Industry Analyst Event in Burlington, Massachusetts.

Next week, Melody will be at AdobeMAX in Miami. Matt, Paul, and Anshel will be attending Lenovo’s Global Analyst Summit & Tech World in Bellevue, Washington. Will is headed to New York for Blackberry’s Analyst Day, while Matt, Robert, and Jason will be in NYC for IBM’s Analyst Summit. Stay tuned for updates from these events!

Our MI&S team published 24 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Accenture, Nvidia, China’s AI breakthrough, Meta, Microsoft, Pure Storage, Vast, and the WordPress and WP Engine lawsuit.

MI&S Quick Insights

Last week I got to spend some time with John Capobianco from Selector AI. Selector is a company that is developing a number of AI-based network monitoring and management tools. In particular the Selector team has been creating AI agents and embeddings. Notably, they can show you how a network ops person can use conversational AI to fix network problems from Slack. I was very impressed since what’s being done is very job-contextual and easy to understand. If you are managing networks, you should check it out. But if you don’t manage networks and want to see how someone builds and hacks on agents, you really need to see Capobianco’s YouTube Channel. What’s great is that the videos do a better job of showing how agents actually work than the more polished vendor versions you might see at a show or a demo pod.

Also last week I published a piece on CodeSignal and its developer benchmark. One of the items that stuck out to me was how OpenAI’s new Strawberry model has similar performance across the larger and mini sizes. It was an outlier versus the competition, whose smaller models did not perform as well. After digging into Strawberry a bit, I learned that the model is being positioned as a deeper reasoning model. This does mean it’s moving more slowly at times, but it’s also “thinking” more. The underlying action driving the reasoning is that the model runs chain-of-thought prompts on itself as it performs a task, in effect prompting itself to look for other answers. It’s an interesting departure from what we all have been seeing in the model space. Model size used to be a determining factor in response accuracy—but if the model can reason with itself, what will be the response speed? This is something to keep an eye on, because smaller models are gaining momentum thanks to their lower costs.
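
To make the self-prompting idea a bit more concrete, here is a minimal sketch of a model critiquing and revising its own draft. The call_llm function is a placeholder client, and the loop is my own illustration of the general pattern, not a description of how OpenAI's o1/Strawberry actually works internally.

```python
# Illustrative self-review loop (generic pattern, not OpenAI's implementation).

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a model and return its text response."""
    raise NotImplementedError

def answer_with_self_review(task: str, rounds: int = 2) -> str:
    draft = call_llm(f"Think step by step, then answer:\n{task}")
    for _ in range(rounds):
        critique = call_llm(
            f"Task: {task}\nDraft answer: {draft}\n"
            "List any mistakes or alternative answers worth considering."
        )
        draft = call_llm(
            f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
            "Produce an improved final answer."
        )
    return draft
```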

A recent study centers on an AI system called Future You, which may reduce anxiety by helping people feel better about how they might look and talk at a future age. A chatbot lets subjects hold realistic conversations with a future version of themselves. The researchers concluded that interacting with their Future You reduced test subjects’ anxiety about getting older.

While there are positive aspects of the Future You, the researchers also have some cautions:

  • It is possible that the AI Future You won’t represent the real person and may alter the real, present-day person’s behavior.
  • Some personalities may become overly dependent on AI for decision-making, causing them to ignore their own judgment and intuition.

The scientists believe further research is needed to study these potential downsides and to promote ethical AI development.

Many readers here will know that OpenAI’s long-range goal is to develop AGI. Although it has already made amazing progress with ChatGPT, the company continues to create more capable models, such as its recent o1-preview model, which benchmarks show has improved reasoning. Developing even larger models with increased capabilities requires huge amounts of funding. Toward that end, OpenAI just secured a staggering $6.6 billion in funding, which puts the company’s valuation at around $157 billion. Look for OpenAI to build more powerful models over the next 12 to 18 months. That staggering amount of funding will also no doubt set the stage for more AI companies to bring in extraordinary funding rounds of their own.

Waymo is adding the Hyundai Ioniq 5 to its fleet of self-driving vehicles. This suggests that Waymo is seeing continued demand and likely wants a more modern EV platform to work with in Hyundai/Kia’s E-GMP architecture. The new Ioniq 5 also brings NACS charging, self-closing doors, and 800-volt charging, all desirable features in an EV that could make running a self-driving fleet easier. NACS charging also means that Hyundai’s cars could theoretically use Tesla’s Supercharger network without adapters—potentially expanding Waymo’s charging options and operational reach.

VAST Data announced InsightEngine, a solution aimed at delivering real-time retrieval augmented generation (RAG) in collaboration with NVIDIA NIM. InsightEngine builds on the company’s previously released Data Platform, which is designed to streamline the AI pipeline. By delivering a disaggregated, scalable architecture with a global namespace, the Data Platform removes data tiering and enables fast access. InsightEngine embeds vectorized data in the Data Platform’s scalable DataBase every time new data is inserted, which ensures that RAG happens in real time and the data is current.

Why is this important? For functions like support chatbots and other customer-facing interactions, is this “real-time” RAG—down to milliseconds—as important? Probably not. However, in the agentic era of AI, where application-specific agents are working together for more critical functions, this real-time nature is an absolute must. And VAST is unique in delivering this capability in conjunction with NVIDIA.
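
As a thought experiment, here is a generic sketch of the embed-on-insert pattern described above—not VAST’s actual implementation. Every write is vectorized immediately rather than in a nightly batch, so retrieval always reflects the latest data; embed() is a placeholder for a real embedding model, and the in-memory list stands in for a real vector store.

```python
# Generic embed-on-insert sketch; embed() is a placeholder, not a real model.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # placeholder embedding
    return rng.standard_normal(8)

index = []  # list of (vector, document) pairs; a real system would use a vector database

def insert(doc: str):
    index.append((embed(doc), doc))  # vectorize at write time, not in a batch job

def retrieve(query: str, k: int = 2):
    q = embed(query)
    def score(item):
        v, _ = item
        return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))
    return [doc for _, doc in sorted(index, key=score, reverse=True)[:k]]

insert("Order 1234 shipped today.")       # immediately retrievable by an agent
print(retrieve("Where is order 1234?"))
```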

Equally important is VAST’s announcement of its Cosmos community. Cosmos is where AI practitioners can connect with peers, VAST, and industry experts to help plan and drive AI projects. If Cosmos realizes its potential, it could be a big win for customers—and for VAST.

MongoDB is the fifth most popular database distribution on the market, and by far the most popular NoSQL distribution. It is used by some of the largest organizations on the planet, and the company just released MongoDB 8.0. Yet many still view it as not ready for mission-critical duty. Is this a fair argument to make? Or is it just the traditional players sowing doubt to protect their market positions?

The challenges center on scale and reliability—exactly what the sharding capabilities built into v8.0 are meant to address. A document database with a loose schema and no normalization typically does not serve an organization’s transactional needs the way a relational database does. These databases are flexible and very good for mobile and content-centric use cases—less so for entrenched OLTP workloads. While MongoDB might argue otherwise and point to a customer or two, I find it difficult to see, say, an Oracle customer migrating away—especially as Oracle has opened up its database to support document, graph, and key-value models.
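
For readers less familiar with the document model, here is a minimal pymongo sketch of the loose, denormalized schema described above. The connection string, collection, and fields are made up for illustration and assume a local MongoDB instance.

```python
# Minimal sketch of MongoDB's denormalized document model (hypothetical data).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
orders = client["shop"]["orders"]

# One document carries the customer and line items inline -- no joins, no fixed schema.
orders.insert_one({
    "customer": {"name": "Ada", "tier": "gold"},
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
    "total": 54.20,
})
print(orders.find_one({"customer.tier": "gold"}))
```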

Maybe MongoDB challenges the OLTP giants at some point—but the market isn’t yet ready.

Barcelona-based liquid cooling vendor Submer has secured $55 million in funding as the hype—and genuine need—for alternative cooling methods has exploded in the AI era. In fact, studies by the International Energy Agency and other organizations show that datacenter energy consumption will more than double between now and 2030. However, for organizations looking to employ liquid cooling in their datacenters, the path is not so simple. There are multiple ways to cool infrastructure with varying degrees of efficiency.

Submer delivers a single-phase immersion cooling solution to the market. With this method, infrastructure is fully immersed in tanks of dielectric fluid that is circulated over the surface of the equipment by pumps.

Power usage effectiveness (PUE) for single-phase immersion cooling averages roughly 1.1 (1.0 is optimal). For reference, air cooling delivers a PUE of roughly 1.5. While that PUE number is attractive, immersion cooling is disruptive. From deployment to IT operations, utilizing immersion cooling forces major changes for facilities teams, datacenter architects, and IT organizations.

Direct liquid cooling (DLC), otherwise known as direct-to-chip (D2C), is far less disruptive for datacenter operators to deploy. It doesn’t require reinforced floors, immersion tanks, or the specialized equipment needed to lower infrastructure into and lift it out of those tanks. The flip side is that DLC’s PUE isn’t quite as good as immersion cooling’s, averaging between 1.15 and 1.2 depending on the subtype.
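
A quick back-of-the-envelope calculation shows what those PUE differences mean in practice. PUE is total facility power divided by IT power, so the sketch below—assuming a hypothetical 1 MW IT load and the midpoint of the DLC range—compares the overhead each approach implies.

```python
# PUE = total facility power / IT power; the 1 MW IT load is an assumption.
def facility_power(it_load_kw: float, pue: float) -> float:
    return it_load_kw * pue

it_load_kw = 1000.0  # assume a 1 MW IT load
for label, pue in [("air cooling", 1.5), ("DLC", 1.175), ("immersion", 1.1)]:
    total = facility_power(it_load_kw, pue)
    print(f"{label}: {total:.0f} kW total, {total - it_load_kw:.0f} kW of overhead")
```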

What’s the message from all of this? Despite the difficulty in deploying immersion cooling (and the smaller market opportunity), Submer secured $55 million. The cooling market is real, and datacenter operators and architects are still looking for the right solution—and vendor.

At its Dreamforce 2024 conference, Salesforce introduced Agentforce, a platform for AI agents designed to automate business tasks. Salesforce CEO Marc Benioff emphasized, “This is about humans and robots driving customer success together.” The success of AI depends not just on data but on having the right data, making effective data management critical. Agentforce, built on Salesforce’s Data Cloud, integrates data from internal and external sources, including ERP and SCM systems, to improve workflows—while also presenting unique benefits and challenges for businesses. Read more in my latest Forbes article.

The LogicMonitor Analyst Conference 2024 took place last week. It was an intimate gathering of analysts and customers, offering a closer look at the company’s strategies, innovations, and market directions. It felt more personal, with candid discussions covering key topics such as hybrid cloud monitoring/observability, platform vision, DevOps, AI and ML, and security. Customer success stories from McKesson, TopGolf, and AppDirect really brought these concepts to life, showing how LogicMonitor’s solutions make an impact. I also had some valuable face time with the executive team. LogicMonitor operates at the infrastructure level, providing monitoring and observability for IT environments. While not directly associated with ERP systems, LogicMonitor’s technology plays a complementary role by monitoring the infrastructure that ERP systems rely on. In other words, it ensures the uptime and performance of the underlying systems that support ERP platforms. There is also potential for future integration with ERP environments.

Cisco recently announced that it is planning to wind down its support for LoRaWAN by the end of 2029. The news comes on the heels of the company’s restructuring plans and is likely an effort to direct more resources to shore up the recent decline in its networking business. IoT is a tricky segment to monetize, and the growing momentum for 5G RedCap—given its reduced power and ability to support industrial sensors—may also factor into Cisco’s decision to end its investment in and eventual support for the LoRaWAN standard.

I attended Infor’s Velocity Summit in Las Vegas last week, where the company introduced several updates and features to its industry-specific CloudSuite platform. The updates focused on refining core functionalities and adding tools such as AI-powered assistants and process mining. In a personal briefing, the company also walked me through the details of the soon-to-be-released sustainability modules intended to support production, inventory, operational, compliance, and environmental goals. Infor also emphasized the importance of helping clients understand the business impact of adopting these new technologies.

Many ERP customers face the challenge of running legacy systems while wanting to transform to the vendor’s modern cloud-based version. This transition takes time and requires careful planning, updated processes, and the right team. Effective change management is essential to help employees adapt. Trusting your vendor and improving data quality is also key. Without clean data and a good partnership with the vendor, it’s hard to fully benefit from the new features offered by modern ERP systems.

Acumatica has released its Acumatica Cloud ERP 2024 R2 update with 350 new features based on feedback from over 26,000 users. The update includes a new user interface, AI integration, automation features, and industry-specific improvements that apply to the construction, distribution, and manufacturing industries as well as general business.

In a conversation I had with Acumatica’s chief product officer, Ali Jani, he said, “We prioritize understanding customer problems and align those requirements with our product strategy. We have built a vibrant customer community through communication and collaboration so that customers can engage with us and vote on features. Many of our product managers visit customers on-site to learn more about their needs.”

Change is challenging, but with transparency and trust, it can be managed. In my discussion with Acumatica, I emphasized how these elements are critical for customers to adapt to new or updated systems.

Oracle aims to transform Imperial College London with its Oracle Cloud ERP and Oracle Cloud HCM. By shifting from a legacy on-premises system to Oracle’s solutions, Imperial hopes to eliminate manual tasks, reduce costs, and improve employees’ overall experience. This change is necessary for Imperial College and other organizations in similar situations. Though the transformation may be challenging, modernizing these systems is crucial for maximizing an ERP solution and improving overall operations.

There is a lot of talk about the university researchers that used Meta’s Ray-Ban smart glasses to dox people in real time in public spaces. This is, first and foremost, well outside of Meta’s ToS—clearly a way to hack the glasses to enable a use case that isn’t authorized. That said, these privacy issues will continue to arise as wearable cameras on smart glasses become more prominent. In this context, we as a society should have more discussions about how and where they are used.

Hasbro is working with Epic Games to bring classic board games to Fortnite. The first game launching is Clue, which should be one of the most fun board games to play as a 3-D character. This extends what Epic Games discussed earlier in the week at Epic Games Fest in Seattle, where it talked about unifying the Unreal Engine and Fortnite development environments to make it easier to ship games on both platforms. I also believe this is how Epic Games plans to build up its Launch Everywhere with Epic program, under which developers pay a lower royalty (3.5% instead of 5%) for launching a title on the Epic Games Store alongside other platforms.

Every week brings a wave of new AI agent announcements, and Workday is the latest to join the trend. The company says that its new AI agents are designed to revolutionize HR and finance departments across various industries. These agents aim to automate routine tasks, such as generating onboarding materials and drafting financial reports, to free professionals for more strategic work. Workday also reports that AI can provide valuable insights to improve decision-making, like predicting employee attrition or identifying potential budget issues by analyzing data. The company claims that the agents can enhance employee experiences by personalizing communications, answering questions, and offering career guidance.

Workday believes that industries with complex HR and finance needs, including healthcare, financial services, and education, are poised to benefit significantly. According to Workday, with a focus on streamlining processes and improving efficiency, these AI agents can potentially transform how HR and finance departments operate. HR is a probable place for companies to start testing agents. Although there is some risk with compliance issues, HR workflows are typically very defined around a clear set of rules, policies, and procedures, with access only given to approved roles—a good fit for how AI agents work.

Cisco LoRaWAN EOL — On October 1, Cisco announced an abrupt exit from the LoRaWAN space. Sales end January 1, 2025, and maintenance stops in 2026. The company offers no product migration path for any LoRaWAN products, including gateways. Fortunately, Cisco customers can easily find alternate suppliers, and replacement products are not expensive. I advise our clients not to read too much into this announcement. It’s most likely a cost-saving move as Cisco doubles down on faster-growing markets. Although LoRaWAN faces more competition from 5G RedCap, Bluetooth Class 1, LEO satellite constellations, and low-power mesh networks (e.g., Thread), the technology is still expanding in low-bandwidth use cases where low cost and long range are deciding factors.

RPi, Sony AI camera — I’m impressed with Raspberry Pi’s new $70 AI camera. It uses Sony’s IMX500 intelligent vision sensor with on-board inferencing. The camera connects to any RPi board with a standard flat cable and uses the well-known libcamera vision stack. Sony’s AI tools can convert TensorFlow or PyTorch models to run on the camera.
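
As a rough sketch of how simple the developer experience is, the snippet below captures a frame and its metadata through the Picamera2 library that sits on top of libcamera. On the AI Camera, the IMX500’s on-sensor inference results are surfaced alongside frame metadata; the exact helper classes for loading converted models are an assumption here, so treat this as a starting point and consult the Picamera2 documentation.

```python
# Minimal Picamera2 capture sketch; IMX500 inference details are an assumption.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

metadata = picam2.capture_metadata()  # on the AI Camera, inference output is expected to ride along here
picam2.capture_file("frame.jpg")      # ordinary still capture works as usual
print(sorted(metadata.keys()))
```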

Honeywell — At the company’s user group meeting last week, Jason Urso, CTO of Industrial Automation, described the confluence of process digitization and AI as “digital cognition.” More sensors (10x, he reckons), coupled with AI, more processing power, and 5G connectivity, let customers see what they could not see before. It’s a compelling vision that aligns with my observation that AI is IoT’s killer app.

It appears that Apple’s first iPhone with its own 5G modem will indeed be the iPhone SE 4, which should sport the same processor as the iPhone 16 but with an all-new Apple Silicon 5G modem. While details on the modem’s specs are fairly limited, it is very unlikely to match the Snapdragon X71 modem in the current iPhone 16. It will probably support fewer bands and, like its iPhone SE predecessors, very likely won’t have mmWave support. Having the new 5G modem launch on Apple’s cheapest and lowest-stakes product is a good move for the company and gives it a much lower risk profile for testing out the new chip. If this launch is successful, I expect that over the next year or two we’ll see Apple phase out Qualcomm’s modems in favor of its own.

October is International Women In AI month, and I was fortunate to attend Cadence Design Systems’ Fem.AI event in Palo Alto last week. As we talk about bringing more women into the field of AI, Cadence is speaking loud and clear through a $20 million investment and leading the Cadence Fem.AI initiative. There was an incredible lineup of women and allies in AI who spoke at the conference about the challenges and opportunities for gender parity in STEM and AI degrees, what can happen if students are supported and mentored in their AI journeys, intentionality, responsible AI, venture funding for women in AI, and more. I will publish a complete analysis of the event and initiative shortly. I will also have Nicole Johnson, president of the Cadence Giving Foundation, on a Six Five podcast in the coming weeks to dig into the program and what’s next, including some great new founding partner companies that have joined Cadence in supporting women in AI. Stay tuned for a great discussion!

Microsoft has announced significant improvements to Copilot’s capabilities and a redesign to make it more user-friendly and better looking. Microsoft also talked about Copilot+ improvements to Windows 11 PCs that have compatible hardware, including the launch of Recall, which I believe is one of Microsoft’s most compelling AI features. While Copilot is getting many improvements, Microsoft is also making lots of adjustments to Windows 11 as well, basically rebuilding the operating system while removing some apps and updating and enhancing others.

Europe is one of the key global players in quantum computing. IBM recently announced the establishment of a quantum datacenter in Ehningen, Germany, to provide easier access to cutting-edge quantum computing resources for the ecosystem of more than 80 European organizations using quantum computing and almost 1,000 Europeans with IBM Quantum learning badges. This center will provide companies, researchers, and governments the capability to run their workloads on utility-scale Eagle QPUs, which are planned to be upgraded to 156-qubit Heron processors later this year. It is important to note that client user workflow data (circuit inputs and outputs) will stay in the EU for regional services. The new datacenter is part of IBM’s long-term worldwide plan for quantum.

As IT infrastructure vendors investigate nuclear power to feed hungry next-generation AI applications, that shift potentially creates a new cyberthreat surface. Recently, the U.K. nuclear site Sellafield was fined nearly half a million dollars for inadequate cybersecurity controls, and penalties for other sites could follow. Nuclear energy is a promising power alternative for datacenters given its clean energy footprint, but the obvious danger in disrupting operations will require stringent protection and possibly new cybersecurity tools.

WNS, a global business process management provider, and Uniqus Consultech, a consulting firm specializing in accounting, ESG, and technology, have partnered to offer clients a comprehensive suite of sustainability and technical accounting services. This collaboration leverages WNS’s expertise in finance and accounting, including AI capabilities, and combines it with Uniqus’s areas of specialized knowledge. The partnership aims to address the growing demand for integrated sustainability reporting and complex accounting solutions. This includes services ranging from ESG compliance and decarbonization strategies to technical accounting advisory and financial system integration. The joint offering is designed to provide clients with a one-stop solution for streamlining data management, optimizing decision-making, and achieving sustainability and accounting goals. The companies report that the alliance has already yielded successful outcomes, such as assisting a biopharma company with post-acquisition integration of financial and accounting systems across multiple countries.

Oura finally announced the new Oura Ring 4—after two years—and it’s a little underwhelming. Yes, the company has introduced slimmer sensors, increased accuracy, and more sizes. But after trying the Samsung Galaxy Ring, I believe that Oura should have targeted a slimmer ring. Samsung’s ring is noticeably thinner and lighter than the Oura Ring 3, and based on the images of the Oura Ring 4, there doesn’t seem to be much of an improvement on thickness other than for the sensors, which never bothered me.

Ericsson announced the integration of Cradlepoint into its overall private 5G network portfolio on September 16 with the creation of a new business unit dubbed Ericsson Enterprise Wireless Solutions. It is a smart move, one intended to provide a broad set of services spanning neutral host; wireless WAN for fixed locations, IoT, and vehicles; and cellular-optimized zero trust, SASE, and SD-WAN. The consolidation should also improve Ericsson’s route to market, leveraging Cradlepoint’s established channel sales footprint and access to enterprise customers.

Research Papers Published

Citations

Accenture-NVIDIA Deal / Gen AI / Jason Andersen / CIO
Accenture-Nvidia deal: A first peek into the new world of gen AI-centric strategies

China AI Breakthrough / Patrick Moorhead / Baseline Magazine
China achieves breakthrough in AI training

China AI Breakthrough / Patrick Moorhead / Techopedia
China Makes Breakthrough in AI Training Across Multiple Data Centers

China AI Breakthrough / Patrick Moorhead / Tom’s Hardware
China makes AI breakthrough, reportedly trains generative AI model across multiple data centers and GPU architectures

China AI Breakthrough / Patrick Moorhead / Windows Central
China is “the first to train a single generative AI model across multiple data centers” with an innovative mix of “non-sanctioned” GPUs forced by US import blocks on AI tech

Meta / Orion Glasses / Anshel Sag / Fast Company
Meta’s Orion glasses show that consumer AR wearables are almost here

Microsoft / HoloLens 2 Headset / Anshel Sag / Computer World
It’s a wrap for the HoloLens 2 headset

NVIDIA / Stock / Patrick Moorhead / Yahoo Finance
Nvidia stock slips on China trade fears

NVIDIA / Stock / Patrick Moorhead / MoneyCheck
Nvidia’s AI Dominance Undeterred: Stock Poised for Further Gains

Pure Storage / Data Storage / Matt Kimball / IT Brief UK
Pure Storage unveils innovations to enhance data storage

Vast / Enterprise AI / Matt Kimball / Tech Target
Vast unveils InsightEngine, a move to support enterprise AI

WordPress.org & WP Engine / Lawsuit / Melody Brue / CIO
Things get nasty in lawsuit between WordPress.org and WP Engine

TV APPEARANCES: 

CNBC Closing Bell: Overtime / NVIDIA & Accenture Partnership / Patrick Moorhead
Nvidia and Accenture partnership is ‘watershed’ moment, says Wedbush’s Dan Ives

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Pixel Watch 3 (Anshel Sag)
  • Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • SAP TechEd, October 8 (Melody Brue – virtual)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event (judge), October 8-10, Las Vegas (Will Townsend)
  • AMD Event, San Francisco, October 8-10 (Matt Kimball)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • Blackberry Analyst Day, October 16, New York City (Will Townsend)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer, Jason Andersen)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Matt Kimball, Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending October 4, 2024 appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending September 27, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-september-27-2024/ Tue, 01 Oct 2024 01:28:45 +0000 https://moorinsightsstrategy.com/?p=42979 MI&S Weekly Analyst Insights — Week Ending September 27, 2024

The post MI&S Weekly Analyst Insights — Week Ending September 27, 2024 appeared first on Moor Insights & Strategy.

MI&S Logo_color

Welcome to this week’s edition of the Moor Insights & Strategy analyst insights roundup. Conference season is heating up, which means we are crisscrossing the country to see and hear the latest from Microsoft, SAP, Teradata, and more, on top of our usual briefings and advisory sessions. If you’re wondering where we’ll be, check out the event listing toward the bottom of this update — and please don’t hesitate to reach out if you’d like to book a meeting, or just to arrange a face-to-face hello.

This week, we’re going to start you off with AST SpaceMobile’s satellite telecom technology before we take our usual tour of the many industry segments we cover. Enjoy!

Last week, Will Townsend (with content partner and podcast editor Anshel Sag behind the scenes) hosted Chris Sambar, president of Network for AT&T, and Abel Avellan, CEO at AST SpaceMobile, for a standalone “G2 on 5G” podcast: AT&T and AST SpaceMobile’s Vision to Bridge the Digital Divide.

Get a front-row view into the vision for a world where broadband connectivity is accessible to everyone, everywhere, through a revolutionary network of large satellites. Discover how AST SpaceMobile is pushing the boundaries of space-based connectivity, aiming to bridge the digital divide and bring high-speed internet access to even the most remote corners of the globe.

Don’t miss this opportunity to hear from true visionaries in the field!

This week, Robert will be at the Infor Annual Summit in Las Vegas, and LogicMonitor’s event in Austin. Melody will be attending the Fem.AI Summit in Menlo Park, and Microsoft’s Industry Analyst Event in Burlington, Massachusetts.

Last week, Patrick and Anshel attended HP Imagine in Palo Alto while Melody attended virtually. Anshel also traveled to San Jose for Meta Connect, and Melody attended Verint Engage in Orlando and SAP CX Live virtually. 

Our team will be busy next week! Robert will be in Los Angeles at Teradata. Melody will be attending SAP’s TechEd event virtually and in San Jose for Zoomtopia. Bill will be in Austin for Embedded World NA. Will is attending the MWC Americas and serving as a judge for the T-Mobile for Business Unconventional Awards event in Las Vegas. Matt will be in San Francisco for AMD’s event, and Jason and Robert are attending the AWS GenAI Summit in Seattle. Stay tuned for updates from all of those exciting events!

Our MI&S team published 23 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, Box, HP, Hybrid Cloud Infrastructure, IBM, Infoblox, Intel, Meta, and Pure Storage.

MI&S Quick Insights

This week I published a primer on AI agents. This is already an area of intense activity for many of our clients which I believe is ushering in a new generation of AI capabilities in the enterprise. As opposed to training an AI model, an agent actually constrains it to follow a specific set of rules or processes. While that may sound limiting, it’s actually the opposite because it allows an enterprise to dictate aspects of how it wants AI to behave and execute a process. That’s important because then a business can figure out and measure the business impact that AI will provide. ROI is still the key to all technology decisions in the business world, and agents may be the key to building an ROI-centric narrative.
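
Here is a minimal sketch of that “constrained” idea: the model can only act through a whitelisted set of tools, so the business, not the model, defines the process. call_model() and the two tools are hypothetical stand-ins, not any particular vendor’s agent framework.

```python
# Toy constrained-agent loop; call_model() and the tools are hypothetical stand-ins.
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; replace with a real client.
    return "FINAL Task acknowledged."

ALLOWED_TOOLS = {  # the agent may only act through these approved functions
    "lookup_invoice": lambda invoice_id: {"id": invoice_id, "status": "paid"},
    "draft_email": lambda text: f"DRAFT: {text}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = call_model("\n".join(history) +
                           "\nRespond as: TOOL <name> <arg> or FINAL <answer>")
        if reply.startswith("FINAL"):
            return reply[len("FINAL "):]
        _, name, arg = reply.split(" ", 2)
        if name not in ALLOWED_TOOLS:  # the constraint: refuse off-policy actions
            history.append(f"Tool '{name}' is not permitted.")
            continue
        history.append(f"Observation: {ALLOWED_TOOLS[name](arg)}")
    return "No answer within the step budget."

print(run_agent("Check whether invoice 1042 was paid."))
```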

One of my favorite things about IBM is how committed it has been to open source. That commitment is demonstrated by mountains of contributions before and after acquiring Red Hat. Eclipse, Tomcat, and Redshift are all examples of how IBM has contributed to open source in a non-commercial way for the benefit of the entire industry. This week I took a look at AI Fairness 360, which IBM recently committed to the Linux Foundation’s LF AI projects. It’s an open source toolkit designed to help place better guardrails on bias and hate speech. It’s intriguing for three key reasons. First, it’s open source so anyone can contribute and use it, which provides a common shareable platform for this important aspect of AI. Second, it goes beyond words by using over 70 fairness metrics to understand if there is bias in underlying machine learning processes like credit scoring or fraud detection. And third, unlike a lot of AI tech, it’s not a black box, which means that its workings are open to public scrutiny. This should be a welcome aspect for privacy advocates. While many companies—such as AWS with its Bedrock Guardrails service—are also doing work in this area, the notion of a common cross-industry capability is quite interesting.
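
For a sense of what “fairness metrics” means in practice, here is a minimal sketch using AI Fairness 360’s Python package. The toy loan-approval dataframe and the choice of “sex” as the protected attribute are invented for illustration, and the calls assume the toolkit’s documented interfaces.

```python
# Minimal AI Fairness 360 sketch; the dataset and attribute choices are made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":      [0, 0, 1, 1, 1, 0],  # 0 = unprivileged group, 1 = privileged group
    "approved": [0, 1, 1, 1, 1, 0],  # loan decision used as the label
})
ds = BinaryLabelDataset(df=df, label_names=["approved"],
                        protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(ds,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])
print("disparate impact:", metric.disparate_impact())
```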

Recently, CodeSignal released one of the most interesting AI developer benchmark studies that I have seen. Like many great studies, it not only informs the reader but also prompts more questions for further research. CodeSignal sells a skills framework that many enterprises use to evaluate developers during the hiring process. The company now has more than 500,000 test results, so it has a very good feel for a wide range of developers and their relative skills. Now CodeSignal has let a bunch of different LLMs take the test to see what happened. I have a piece on this coming out next week, but the two big takeaways are that (1) AI is keeping up pretty well with humans and (2) the selection of LLM has a big impact on the results. Stay tuned for more on this one.

Meta AI announced the Llama 3.2 model family with new 1B and 3B sizes, which will be absolutely crucial for wearables and other consumer products that want to leverage LLMs but don’t have the memory or processing footprint to run 70B- or 90B-parameter models. The company also announced an 11B multi-modal version of Llama 3.2, which Qualcomm says it already has running on its latest smartphone SoC. For the 1B and 3B models, Meta has already qualified them with Arm, Qualcomm, and MediaTek.
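
For developers who want to kick the tires, a small model like this loads with the standard Hugging Face transformers flow, sketched below. The model ID is gated behind Meta’s license acceptance, and the prompt and generation settings are arbitrary choices for illustration.

```python
# Sketch of loading a small Llama 3.2 model; access to the gated repo is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # gated repo; requires an accepted license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Summarize today's workout in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```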

It may seem too early to replace a CEO with an AI model, but three Harvard professors ran an experiment along those lines that involved 344 students and executives versus GPT-4o, a new LLM from OpenAI. In the simulation of the U.S. automotive industry, the people made strategic decisions that spanned several simulated years. The objectives were to maximize the company’s market cap and to remain employed.

GPT-4o performed well on most metrics and responded to the market efficiently by designing products. However, the model didn’t respond well to “black swan” events. Unpredictable situations like market collapses—the kind that require human intuition and foresight—got the AI CEOs dismissed by the virtual board faster than the top human players. The experiment nevertheless showed that AI is a strategic resource, though the researchers concluded that accountability remains a human requirement.

Considering the overall results, our executives are probably safe for another decade.

Intel held what may be the most unsurprising launch event ever with the release of Xeon 6P (for performance) and Gaudi 3. I say this jokingly, as these products have both been talked about and covered for some time. This launch was and is critical because the company desperately needs to re-establish itself in the datacenter. The question is, did Intel succeed?

  • Xeon 6P is a performance beast. While I take any company-produced benchmarking with a grain of salt, the spirit behind the advantages Intel demonstrated against its rival AMD hit the spot. From IPC to performance per watt to raw performance, Xeon 6P is a significant leap forward. This chiplet design includes components at 3nm (compute) and 7nm (I/O) to deliver a 1.9x performance-per-watt improvement over its previous generation. Further, the company did a good job of demonstrating Xeon 6P performance across the datacenter, from traditional virtualized workloads to HPC and analytics to AI. One more thing Xeon 6P does is to match AMD’s gaudy specs. Cores, memory, I/O—it’s all there and at or near parity with EPYC. This takes away one of the biggest sticks the EPYC marketing team has been using to beat Xeon.
  • Gaudi 3 is what we expected it to be. That means a good enterprise inference platform that delivers strong performance-per-dollar value. Gaudi is an ASIC, not a GPU, but it is an ASIC with a strong software toolchain and ecosystem that will grow over time. And when Gaudi gives way to Falcon Shores (Intel’s GPU), that software ecosystem will move with it, putting the company in a better position on the AI training front. However, Gaudi 3 will not compare with NVIDIA or AMD until that time comes.

Pure Storage announced a number of updates to its portfolio as it kicked off its Accelerate London event. These included Real-time Enterprise File (with zero-move tiering), a new entry level storage server (FlashBlade//S100), universal credits, and a VM assessment tool. I have a few thoughts on these.

  • I like how the company bundles its updates and releases them like a grouping of cloud services. It’s not just clever—it conditions customers to consume services like the cloud.
  • The company clearly still uses “simplicity” as a mantra and design principle. These updates really focus on abstracting complexity across three vectors: product, operations, and finances.
  • Zero-move tiering in particular is a great feature to incorporate, as it flips tiering from storage classes to compute and network resource allocation.
  • Copilot for File continues with this “remove complexity” theme by enabling natural language management of the storage environment. This means that smart people in IT organizations can focus on doing smart things and not focus on specific syntax and semantics.
  • The VM assessment tool is another understated gem, as it allows IT orgs to rationalize their virtualization deployments and fully explore the what-if scenarios every virtualization administrator is weighing right now.
  • Universal credits allows customers to spread their Pure Storage spend across services without leaving any budget on the table.

While other storage companies want to run away from their primary function to focus messaging and product on AI and nothing else, Pure continues to focus on solving the enterprise storage challenges that virtually every organization has. And when the AI craze has given way to the next big inflection point in tech, companies like Pure will still be relevant.

For more on this topic, see my detailed writeup of Pure Accelerate London on Forbes.

Cohesity plans to create a new data-security powerhouse through its business combination with Veritas. The global data protection and management sector is undergoing rapid change, driven by rising cyber threats, stricter regulations, and the increasing use of cloud services. In my new research paper, I explore how Cohesity has put itself at the forefront of this transformation. Its platform leverages AI and machine learning to detect threats, classify data, and protect critical workflows while utilizing RAG AI through its Gaia insights assistant.

Talking about the business combination, Cohesity’s president and CEO, Sanjay Poonen, noted, “This deal combines Cohesity’s speed and innovation with Veritas’ global presence and installed base.” The combined entity will serve more than 13,000 customers, including more than 85 of the Fortune 100, with projected revenues of around $2 billion for the 2025 fiscal year.

Data security ecosystems have been a key focus this year, with vendors in the space making strategic moves to enhance their technology and operations. Commvault has been active in this regard, acquiring Appranix, which offers technology for recovering cloud resources. Building on that move, Commvault has recently announced the acquisition of Clumio, strengthening its capabilities in cloud-based cyber resilience, particularly for AWS customers.

Clumio specializes in protecting AWS cloud data, including services such as Amazon S3, and will help Commvault improve its data protection and recovery offerings. The acquisition is expected to close in October 2024. The data backup and recovery market, valued at $12.9 billion in 2023, is expected to grow at a 10.9% annual rate.

I have followed the development of Box Hubs closely, and wrote about it when Box first announced the product, so I was glad to see Box Hubs become generally available recently. Hubs aims to address a common challenge enterprises face: organizing and publishing critical information so it’s easy to find and accessible to the right people inside and outside the company. The Box Hubs press release includes my thoughts on modern businesses’ challenges in managing and utilizing their growing volumes of data and content—and how AI-powered solutions like Box Hubs should improve content accessibility and value.

AWS unveiled the first-ever generative AI-inspired trophy at the Formula 1 AWS Grand Prix Du Canada. Engineers and creatives designed the trophy using the Amazon Bedrock managed service and Amazon Titan models, marking a pioneering instance of harnessing generative AI for trophy design. Inspired by the airflow dynamics of an F1 car, the design features a unique, wing-like shape that went through hundreds of iterations using GenAI. After the design was in place, a traditional silversmith in the U.K. crafted the silver trophy.

The associated PartyRock Sweepstakes, which invites participants to create their own trophy designs using a custom generative AI app, further highlights the innovative spirit of this endeavor. PartyRock is a broader initiative by AWS that seeks to democratize access to generative AI, enabling individuals and businesses to leverage its capabilities. The winner of this sweepstakes will receive a VIP trip to a 2025 F1 race. This initiative aims to showcase the transformative potential of generative AI in creative fields and actively engages the audience, inviting them to experience the possibilities of the technology in a fun and rewarding context.

Salesforce is acquiring Zoomin to enhance its Data Cloud by integrating new features and functionalities. Zoomin is known for organizing and delivering unstructured data across multiple platforms. By incorporating Zoomin, Salesforce looks to increase the use of unstructured enterprise data, which is often underutilized, to enhance the intelligence of its AI agents.

This acquisition has the potential to give businesses using Salesforce a deeper understanding of their enterprise data, leading to smarter interactions and better business outcomes, including improved customer experiences. The acquisition is expected to be finalized in the fourth quarter of Salesforce’s fiscal year 2025.

Infoblox recently announced its Universal DDI Product Suite; on the surface, it looks like it could deliver significant management simplification for hybrid multi-cloud services. It offers an orchestration capability that allows IT operators to streamline historically disparate DNS, DHCP, and IP address management processes across public cloud providers and on-premises deployments. It also has the potential to eliminate manual errors, lower operational cost, improve network availability, and reduce exposure to security risks through three new services.

Infoblox appears to be the first company to bring this level of consolidation to market, and it could provide the company with revenue upside in the near term as a first mover.

Microsoft announced that it is phasing out Microsoft Dynamics GP to make way for its successor, Microsoft Dynamics 365 Business Central. I appreciate how Microsoft has set clear, reasonable timelines for this transition. It’s also reassuring that many GP business partners are already well-versed in Business Central, which should ease the migration process for customers.

Dynamics GP product support ends on September 30, 2029, and security updates end on April 30, 2031. For most SMBs, Dynamics 365 Business Central is the logical next step because it offers a modern, cloud-based solution that enhances GP’s capabilities with advanced AI and seamless integration across the Microsoft ecosystem. For enterprises with more complex requirements, Microsoft also offers an alternative in Microsoft Dynamics Finance and Supply Chain Management. This option offers an extensive ERP/SCM platform capable of handling more intricate needs, ensuring that businesses of all sizes can find the right fit as they move forward.

Meta announced the Quest 3S, which returns Meta to the $299 price point but now allows the company to unify its low-cost offering with its high-end offering (Quest 3). While the Quest 3S doesn’t have the same optics or design as the Quest 3, it does have many of the same capabilities at a lower cost and using the same processor. This makes things much easier for developers when building for Horizon OS, Meta’s software platform for its headsets. Meta also announced that it would be opening up its passthrough cameras with API access—a much-requested capability for mixed-reality headsets.

Meta also announced Orion, its AR glasses prototype. Last week I had the pleasure of trying out these glasses, which have refined the augmented reality category with an incredible form factor and wide horizontal field of view of 60 degrees. I had the opportunity to demo many apps on Meta Orion, including the use of the EMG wearable for neural inputs combined with eye-tracking inside the glasses and hand-tracking. Meta has successfully combined many of the breakthroughs it has achieved through its research and trial and error in Orion. While Orion is not yet a consumer product—and still has some shortcomings in resolution and its chunky form factor—it has finally shown the industry and the world the level of functionality that’s coming soon. I suspect this product will reinvigorate the AR space.

Edge Impulse CEO Zach Shelby opened the company’s Imagine conference last week with a keynote covering the future of edge AI. He addressed the three big challenges holding companies back from shipping AI- and ML-based edge applications at scale: (1) generating industry-specific data, (2) optimizing AI and ML production workloads, and (3) deploying at very large scale (millions, not dozens or hundreds). While LLMs capture the headlines, it is domain-specific machine-learning techniques that are quietly revolutionizing edge application deployments. Shelby (and Gartner, by the way) predict that the majority of edge computing deployments (not devices) will use ML techniques by 2026, and I think that number is low.

Dave Kranzler, general manager of AWS IoT, joined Shelby on stage to emphasize the importance of edge intelligence and explain the cyclical nature of edge inference and cloud training. Edge ML provides detailed real-world data for training and updating domain-specific cloud models. Enhanced models improve edge inference, generate more high-quality data, and the cycle repeats. At the end of the talk, Kranzler and Shelby announced that Edge Impulse is now available in the AWS marketplace.

The Google TV Streamer is now available, and here’s my first take. At $100, it competes with the Apple TV 4K ($149 with Ethernet) and the $100 Roku Ultra. Compared with the Chromecast device it replaces, the new streamer offers a big step up in performance and capabilities—but for twice the price. In particular, it’s a Matter controller and a Thread border router (hence my interest in the box). I’m testing the Matter features now, but it’s too early to offer an analysis. So far, I haven’t encountered any big surprises. Installation is easy, the streamer supports all my subscribed apps, the UI is snappy and less cluttered than most other streamers, and the 4K video quality is comparable to its competitors. Also, it appears to convert surround sound formats to the ones your AV system supports, similar to the Apple TV.

I’m impressed with the streamer, even though the Apple TV 4K (still my favorite) has better usability, slightly faster performance, and TV apps with fewer bugs. Although the box is sleek and attractive, Google didn’t get the memo that the default color for AV equipment is black. Fortunately, I have a can of black spray paint in the garage. One more thing: I hear persistent rumors of a “pro” version of the streamer, but I can’t confirm them yet. Stay tuned!

The second annual HP Work Relationship Index (WRI), a global study examining how people feel about their work, reveals that, despite a slight improvement, most knowledge workers still don’t have a healthy relationship with their jobs. The survey, which involved over 15,000 individuals across various industries and countries, suggests that AI and personalized work experiences may offer solutions to improve this situation. The WRI findings offer valuable insights into the evolving needs and expectations of the workforce. In my upcoming write-up from last week’s HP Imagine conference, I will delve deeper into some critical points. You can also read my colleague Anshel Sag’s initial thoughts on the event in the “Personal Computing” section of this MI&S Weekly.

Also, at its Imagine event, HP announced the acquisition of Vyopta, an analytics and monitoring provider for unified communications and collaboration networks. It represents a strategic move aimed at enhancing HP’s Workforce Experience Platform. This acquisition has the potential to provide HP’s customers with a more comprehensive understanding of their collaboration ecosystem, thereby facilitating data-driven decision-making to optimize employee experiences and productivity. By incorporating Vyopta’s features, HP could offer enhanced fleet management, comprehensive insights into device and application usage, and AI-powered recommendations. Integrating Vyopta’s extensive dataset may further differentiate HP’s Workforce Experience Platform, contributing to its ability to provide intelligent and productive workplace solutions.

Microsoft is establishing a dedicated Security Skilling Academy to invest in its employees’ ability to stay ahead of evolving threats and prioritize security in their roles, regardless of their technical background. This emphasis on continuous learning acknowledges the rapidly changing landscape of cybersecurity and equips employees to make security-conscious decisions. I also appreciate the company tying senior leadership compensation to security performance. These initiatives demonstrate Microsoft’s investment in its employees’ security ownership and cultivating a workforce that is well-informed, empowered, and accountable for maintaining a secure environment.

HP announced a bunch of new AI PCs including the HP OmniBook Ultra Flip, which uses Intel’s latest Lunar Lake chipset and is a convertible version of the AMD-based OmniBook Ultra that HP announced during Imagine AI a little over a month ago. It also announced the EliteBook X, which slots in right below the EliteBook Ultra I reviewed as part of my Copilot+ PC roundup. The new model features the AMD Ryzen Pro processor, giving it a 55 TOPS NPU and helping to fill out HP’s consumer and enterprise notebook offerings. HP is demonstrating its ability to handle silicon diversity while keeping its new lineup coherent.

In addition to new PCs, HP also announced a new printer, the Envy 6100/6500, which launched alongside the company’s new Print AI feature. I believe that HP is innovating with this new Print AI feature and I think it will significantly improve the printing experience with its ability to understand what output the user is looking for even if the formatting is completely wrong. My biggest problem with it is that I believe HP still has to overcome printer driver issues and should prioritize the reliability of those drivers over enabling new AI features.

HP also announced a new software feature for its commercial clients called Z by HP Boost, which helps data scientists and other knowledge workers access otherwise idle GPUs from other workstations or laptops that might not have discrete GPUs. While HP currently supports only up to 4 GPUs per workstation, I believe that the full potential of Z by HP Boost is realized when many systems can be utilized together. This should be a very strong complement to HP’s other AI services it offers as part of its AI Studio.

Google Quantum scientists have created a new type of quantum memory that can reduce error rates in quantum computers. The research uses a surface code algorithm to correct errors, scaling the number of physical qubits in the code from 72 to 105. Adding even more qubits should improve error correction further, which could eventually yield error rates low enough for a practical quantum computer. The researchers also discovered that logical qubits in their system remained coherent longer than the physical qubits, creating the potential for quantum memory. This research brings us closer to a quantum computer that could outperform classical supercomputers.
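
For intuition on what an error-correcting code does, here is a toy three-qubit bit-flip repetition code in Qiskit. This is deliberately not the surface code Google used—just the simplest possible illustration of encoding one logical bit across several physical qubits and reading out a syndrome that pinpoints an injected error.

```python
# Toy bit-flip repetition code, NOT Google's surface code; illustration only.
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)   # 3 data qubits + 2 ancilla (syndrome) qubits
qc.cx(0, 1)
qc.cx(0, 2)                 # encode one logical bit across qubits 0-2
qc.x(1)                     # inject a single bit-flip error on qubit 1
qc.cx(0, 3)
qc.cx(1, 3)                 # parity of qubits 0,1 -> ancilla 3
qc.cx(1, 4)
qc.cx(2, 4)                 # parity of qubits 1,2 -> ancilla 4
qc.measure([3, 4], [0, 1])  # the syndrome bits identify which data qubit flipped
print(qc.draw())
```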

The governor of Illinois, JB Pritzker, announced that the state has made another quantum investment, this time in EeroQ Corporation, which is based in Chicago. EeroQ will be investing $1.1 million in its headquarters located in the Humboldt Park area. The State of Illinois will provide tax credits to support these efforts. EeroQ is developing a quantum computer based on electrons on liquid helium. This new technology is yet to be proven. Illinois previously struck a deal with PsiQuantum, which uses a quantum technology based on photonics. The state is counting on advanced companies like EeroQ to create more jobs that will grow the Illinois economy.

I continue to be impressed with what Microsoft is doing with its Secure Future Initiative. Prioritizing security over new features and functions and building accountability and measurement into the product development process is not a trivial undertaking. What the company is doing goes far beyond current CISA Secure by Design pledges (which are great first steps) and is a model for others to follow.

Microsoft’s initial SFI progress report shared recently demonstrates an incredible level of transparency for the company. The addition of a new cybersecurity governance council and security skilling for all employees has the potential to level the playing field against bad actors and put defenders in the cyber defense driver’s seat.

The Carolina Hurricanes’ home arena has been renamed “Lenovo Center” thanks to a 10-year naming rights agreement with Lenovo. This expanded collaboration, building on an existing relationship since 2010 when Lenovo served as the team’s helmet decal sponsor, underscores the growing role of technology in shaping the modern sports fan experience. This partnership extends beyond a simple name change; as the hockey team’s official technology partner, Lenovo will integrate its technology throughout the arena to enhance the fan experience across the 150 events the arena hosts annually, including major concerts, comedy tours, and family shows that cumulatively draw in about 1.5 million guests each year. Fans can anticipate upgraded digital signage, interactive displays providing real-time stats and replays, and potentially even immersive experiences like augmented reality incorporated into the gameday experience. The facility will also integrate Lenovo technology to help streamline arena operations, which should improve ticketing, concessions, and overall venue management.

If you’ve been watching a lot of baseball before the postseason like my family has, you may have noticed an increase in Google ads. Google Cloud AI is working to enhance the baseball fan experience by using advanced analytics and real-time data processing. Baseball produces an astonishing 15 million data points per game, which helps teams strategize and gives fans many different real-time statistics to better understand the game. The technology provides in-depth data analysis, including how weather affects player performance. It also offers personalized content on the MLB Film Room and Gameday 3D sites, and improves broadcasts with real-time insights during games. My colleague Robert Kramer and I are following along to see how these technologies are deployed leading up to the World Series; check out future installments of our Game Time Tech podcast for more.

PUMA Group is partnering with Google Cloud to enhance its digital shopping experience. Using Google Cloud’s Imagen 2 on Vertex AI, PUMA creates personalized product images based on customer locations, with the aim of improving engagement and accelerating digital campaign launches. PUMA plans to further explore Google Cloud’s AI tools to continue improving personalization and customer experience.

“Google Cloud is helping companies in every industry improve the customer experience with GenAI-powered agents, and our partnership with Puma is an excellent example of this. The creative agent Puma has built with our leading Imagen technology is taking personalization to a new level—and driving real business results,” says Thomas Kurian, CEO of Google Cloud.

This highlights how retailers can use AI to sharpen their understanding of consumer behaviors, allowing them to adjust their products to fit what customers really want, while also making sure they have the right stock available when needed.

T-Mobile recently held its first capital markets day event since the pandemic. The operator has accomplished a lot over the past three years, growing its 5G fixed wireless access business by nearly 3x since launch and announcing strategic partnerships with OpenAI and NVIDIA to improve its customer services and mobile network operations.

The company has also transformed itself from a consumer-oriented business to one that addresses enterprise and public sector mobility service needs with first-responder and security services anchored to slices of its public network. T-Mobile appears to be taking full advantage of its complete 5G Standalone network and continues to use it as a competitive differentiator.

Research Papers Published

Citations

Apple / AI / Anshel Sag / Get On News
Apple Weaves AI Into Its Latest Watch, AirPods, iPhone Models

Box / Cloud – Box Hubs / Melody Brue / Business Wire
Box Announces General Availability of Box Hubs to Revolutionize Content Publishing in the Enterprise

Box / Cloud – Box Hubs / Melody Brue / Silicon Angle
Box makes company documents easier to organize and find with Box Hubs

HP / AI PCs / Anshel Sag / Indian Express
At its Palo Alto event, HP sends out a strong message that it’s more than just a PC vendor

Hybrid Cloud – Infrastructure / Matt Kimball / BizTech
How Hybrid Cloud Ended the Infrastructure Debate for Good

IBM / Layoffs / Jason Andersen / Computerworld
IBM has reportedly laid off thousands

Infoblox / Multi-Cloud Management / Will Townsend / NetworkWorld
Infoblox tackles integrated DDI across multi-cloud environments

Intel / Stock / Patrick Moorhead / Yahoo Finance
Intel stock jumps after report of possible Apollo investment

Meta / Smart Glasses / Anshel Sag / Wired
Meta Missed Out on Smartphones. Can Smart Glasses Make Up for It?

Pure Storage / Storage / Matt Kimball / Pure Storage Press Office
Pure Storage reinvents file services, redefining standards for enterprise-grade agility, simplicity

Pure Storage / Storage / Matt Kimball / NetworkWorld
Pure Storage brings storage-as-a-service to files

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Pixel Watch 3 (Anshel Sag)
  • Pixel 9 Pro Fold (Anshel Sag)
  • Google TV streamer – Matter and Thread features (Bill Curtis)
  • Various Matter devices (Bill Curtis)
  • ASUS Zephyrus G16 Gaming Laptop (Anshel Sag)
  • iPhone 16 Pro (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue)
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • SAP TechEd, October 8 (Melody Brue – virtual)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AMD Event, San Francisco, October 8-10 (Matt Kimball)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • Blackberry Analyst Day, October 16, New York City (Will Townsend)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer, Jason Andersen)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Matt Kimball, Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending September 27, 2024 appeared first on Moor Insights & Strategy.

]]>
Datacenter Podcast: Episode 30- Talking Infoblox, PensionDanmark, Intel, HPE, Google, Pure Storage https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-30-talking-infoblox-pensiondanmark-intel-hpe-google-pure-storage/ Fri, 27 Sep 2024 21:26:48 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=42879 On this episode the Datacenter team talks Infoblox, PensionDanmark, Intel, HPE, Google and more!

The post Datacenter Podcast: Episode 30- Talking Infoblox, PensionDanmark, Intel, HPE, Google, Pure Storage appeared first on Moor Insights & Strategy.

]]>
Welcome to this week’s edition of the “MI&S Datacenter Podcast.” I’m Patrick Moorhead with Moor Insights & Strategy, and I am joined by co-hosts Matt, Will, and Paul. We analyze the week’s top datacenter and datacenter edge news. This week we cover Infoblox, PensionDanmark, Intel, HPE, and more!

Watch the video here:

Listen to the audio here:

2:07 Did Infoblox Crack The Code On Hybrid Multi-Cloud Management?
8:08 Qubits For Kroners
15:15 Intel Makes A Statement In The Datacenter
26:29 HPE Super Sizes AI With Aruba Central Updates
33:10 CAPTCHA If You Can
37:16 Making Storage Simple 101
46:49 The Top 3 List – Getting To Know Us

Did Infoblox Crack The Code On Hybrid Multi-Cloud Management?

https://x.com/WillTownTech/status/1839033352797495571

Qubits For Kroners

https://www.pensiondanmark.com/en/press/news/2024/pensiondanmark-invests-in-the-supercomputers-of-the-future/?AspxAutoDetectCookieSupport=1

Intel Makes A Statement In The Datacenter

https://www.intel.com/content/www/us/en/newsroom/news/next-generation-ai-solutions-xeon-6-gaudi-3.html#gs.f7uipm

HPE Super Sizes AI With Aruba Central Updates

https://x.com/WillTownTech/status/1838629239857320341

CAPTCHA If You Can

https://tik-db.ee.ethz.ch/file/7243c3cde307162630a448e809054d25/

Making Storage Simple 101

https://www.networkworld.com/article/3538618/pure-storage-brings-storage-as-a-service-to-files.html

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Datacenter Podcast: Episode 30- Talking Infoblox, PensionDanmark, Intel, HPE, Google, Pure Storage appeared first on Moor Insights & Strategy.

]]>
Oracle Cloud Infrastructure And AWS Form Strategic Partnership https://moorinsightsstrategy.com/oracle-cloud-infrastructure-and-aws-form-strategic-partnership/ Thu, 26 Sep 2024 21:50:56 +0000 https://moorinsightsstrategy.com/?p=42890 Considering its effects on enterprises & the nature of multicloud deployments, OCI-AWS partnership could have a big impact on customers

The post Oracle Cloud Infrastructure And AWS Form Strategic Partnership appeared first on Moor Insights & Strategy.

]]>
Oracle and AWS are teaming up to provide better access to Oracle’s databases and other products for AWS’s enterprise customers. 123RF

Oracle and AWS have entered into a strategic relationship, announced this week at the Oracle CloudWorld conference in Las Vegas, in which Oracle’s cloud infrastructure will be deployed and run in AWS datacenters. This partnership, modeled after Oracle’s existing relationships with Microsoft Azure and Google Cloud Platform (GCP), will see Oracle Autonomous Database and Exadata infrastructure physically reside in AWS datacenters and integrate with the entirety of the AWS portfolio of technologies and services.

This announcement is significant for enterprise IT organizations that consume both Oracle and AWS services—meaning virtually every large enterprise. However, it may be even bigger for the industry as a whole because it indicates a move toward native multicloud integration to better meet customers’ needs. Let’s dig into why this partnership between OCI and AWS is such a big deal for customers and the industry.

The Multicloud World Requires True Multicloud

We live in a multicloud world. This is so obvious—almost a cliché—that it is easy to lose sight of what this actually means. Unless, of course, you happen to be an IT pro responsible for connecting applications and data for the business, or an application developer tasked with building a cloud app fueled by data that resides everywhere.

In many ways, however, the multicloud that we’ve seen to date has meant nothing more than consuming services from multiple cloud providers. But shouldn’t it also mean cloud-to-cloud connectivity that is performant, secure and frictionless? Unfortunately, that really hasn’t been the case in practical terms. More than that, the cost of moving data from cloud to cloud can be prohibitive. In some cases, even moving data from region to region—within the same cloud!—can become prohibitively expensive.

Some CSPs have addressed this through dedicated interconnects. In the case of OCI, Oracle has already developed partnerships with Azure (which I covered here) and Google Cloud (which I wrote about here) to enable low-latency, highly secure connections between the cloud environments. This allows customers to move data from cloud to cloud and from app to database fast and without those dreaded egress costs.

Oracle Database@CSP Is Native Multicloud

The concept of Oracle Database@CSP has taken this multicloud enablement to new levels. Under this model, Oracle deploys its Exadata infrastructure and Autonomous Database in another CSP’s datacenter. This means that the database is fully connected to the CSP network and natively accessible by the portfolio of services in that datacenter.

In this model, customers buy, consume and manage Oracle database services through the console of the host CSP. It is effectively a first-party service that a consumer can spin up like any other service, so it is very simple. However, the Oracle Cloud team still maintains the Oracle environment.

Over the past few years, OCI has partnered with Azure and GCP to deliver this Database@CSP model (Oracle Database@Google Cloud was just made generally available at the time of this writing). In the case of Azure, we know that the partnership was delivered for enterprise customers that standardized on Oracle and Microsoft many years ago. While the GCP flavor of this was just recently released, I have no doubt this partnership will see similar success. That said, the GCP partnership differs from the Azure one because the GCP customer profile is different. While Azure is very popular with enterprise IT, GCP tends to be more attractive to smaller organizations. GCP also has a rich history in areas of advanced computing such as AI.

The one missing piece of the Database@CSP strategy has been the biggest CSP of all: AWS. While this may seem a little surprising on its surface, it really isn’t. AWS is the largest CSP on the planet by a considerable margin and is pretty strong in its opinions about having third-party infrastructure in its datacenters—especially infrastructure from a competitor, and even more so from a competitor as aggressive as Oracle.

But here’s the deal: the largest CSP and the largest database vendor are sure to have many customers in common. Those customers want to easily and cost-effectively marry AWS’s goodness with all the data in their Oracle environments. To take one example, imagine seamlessly feeding the AWS Bedrock development platform for generative AI with decades of your enterprise data residing in Oracle. This is what customers want, and this is what AWS and Oracle can uniquely deliver—but only through a thoughtfully constructed partnership.

Oracle Database@AWS — What Is It?

Oracle Database@AWS is precisely what was described previously for Azure and GCP, but tailored to AWS. Oracle’s Autonomous Database and Exadata infrastructure are deployed in AWS data centers and made available for AWS customers to consume just like they would any other AWS service. From selection to billing to monitoring, the Oracle database environment looks like every other AWS service from the customer’s perspective.

Oracle Database@AWS will be available as a first-party service. AWS

Once stood up, Database@AWS also integrates directly with other AWS offerings—as in the Bedrock example already given. Companies (mostly enterprises) that have invested in Oracle for their database needs will find this integration especially compelling, as they will be able to make that data available to AWS services, once again in a highly secure and low-latency environment. If a customer has technical issues with their instance, AWS handles first-level support. If the problem isn’t resolved, AWS and Oracle work together to resolve it.

I believe that enterprise IT organizations will find it compelling to be able to remove the extract-transform-load process when using tools such as AWS Analytics. This kind of streamlining is the very definition of speed and simplicity in our data-driven era. Likewise, the ability to connect AWS Bedrock with all that rich data sitting in the Oracle Database immediately makes GenAI in the enterprise easier, faster and more secure.
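
To illustrate what removing that extract-transform-load step could look like from a developer’s chair, here is a minimal, hypothetical sketch (my own illustration, not an Oracle or AWS reference implementation). It queries an Oracle database directly with the python-oracledb driver and hands the result to a model hosted on Amazon Bedrock through boto3. The connection details, table, and model ID are placeholders, and the request body format depends on which Bedrock model you choose.

```python
import json

import boto3        # AWS SDK for Python
import oracledb     # python-oracledb driver for Oracle Database

# Placeholder connection details for an Oracle database reachable from AWS.
conn = oracledb.connect(user="analytics", password="***", dsn="mydb_high")

with conn.cursor() as cur:
    # Pull summary data straight from the Oracle database -- no separate ETL pipeline.
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    summary = "\n".join(f"{region}: {total}" for region, total in cur)

# Ask a Bedrock-hosted model (placeholder model ID) for a natural-language readout.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user", "content": f"Summarize these sales by region:\n{summary}"}
        ],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

The point of the sketch is the shape of the workflow: one query against the operational database, one model call, and no intermediate copy of the data to shepherd between clouds.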

How OCI Works In AWS

It’s important to restate that this setup is not simply Oracle’s database running in AWS as a service. Rather, this is Oracle Cloud Infrastructure residing and running in AWS datacenters, with Autonomous Database and supporting services (networking etc.) along for the ride—a cloud region running inside a cloud region. Like with Azure and GCP, Oracle’s play with AWS is completely differentiated from any other vendor. No other cloud provider deploys a region in another cloud provider’s datacenter.

This is a crucial detail to tease out because it speaks to a couple of things. First and foremost, it delivers guarantees for performance, reliability and resiliency that are aligned with Oracle’s standards. This is not to imply that AWS is a less reliable cloud. However, Exadata and the Autonomous Database infrastructure are designed and tuned specifically for the Oracle Database environment and, as such, deliver better performance than third-party hardware ever could.

The second thing to note is that these OCI deployments build plumbing between clouds. Oracle Database@Azure and Oracle Database@GCP are OCI regions. These OCI regions can distribute data among themselves, effectively enabling organizations to move data easily from one cloud to another—with, let me remind you once again, low latency and strong security.

What Does This Partnership Mean For Oracle?

This is a significant win for Oracle for several reasons. First, it allows the company to meet its customers on their own terms. For any enterprise that made AWS its primary CSP years ago and now wants to migrate its Oracle environment, this partnership finally enables it to happen. AWS has tons of enterprise applications and Oracle has tons of enterprise data; as previously mentioned, this move allows customers to bring all that data to all those applications.

This partnership is also important for Oracle because it enables the company to drive toward market expansion of its database platform. Many existing Oracle customers are large enterprises that have been using the platform for decades—very many of them since the 20th century. Some of those cited in the Oracle press release are Vodafone, Fidelity and State Street Bank. While these sizeable organizations are on the leading edge of technology, Oracle is trying to educate and bring a new generation of companies and developers into its community as well. The partnership with AWS (like the existing GCP partnership) should help Oracle drive this market expansion strategy.

Why Would AWS Do This?

If one were to draw a Venn diagram of AWS and Oracle customers, its intersection would be large. Given that Oracle seems to be in virtually every Fortune 1000 company, it is fair to say that AWS’s biggest customers are also overwhelmingly Oracle customers. This partnership enables AWS to better meet the needs of these customers that want to take advantage of an Oracle Autonomous Database but consume it through AWS—and the budget already allotted to AWS.

Not just incidentally, I believe this could also be a good defensive tack for AWS. Azure has established a strong “enterprise cloud” position thanks to Microsoft’s legacy in on-prem enterprise IT environments. This new partnership with Oracle enables AWS to maintain parity with Microsoft from an enterprise serviceability perspective.

Oracle Is Playing The Long Game

Oracle has been quite aggressive with OCI since it launched its Gen 2 back in 2018—and the company has seen considerable success with it. In fact, in its latest earnings, Oracle saw its cloud revenue grow 21% year over year and its IaaS revenue grow a staggering 45% YoY. That is partly tied to the company’s footprint in the enterprise.

Oracle has been building what I call a native multicloud offering for some time. It started with building dedicated interconnects with Azure and Google and has expanded to deploying its cloud within the CSPs to deliver performance, security and value on customers’ terms. This kind of cooperation makes today’s version of Oracle hardly recognizable as the company I used to write checks to when I was in enterprise IT leadership.

How will this all play out? Will Oracle succeed in turning the next generation of app developers and businesses into customers? Will AWS, Azure and Google aggressively position their Oracle offering?

Time will tell. It’s very early in the game, and I expect the first surge of business will come from existing customers migrating Oracle databases to the cloud. The real work begins after that, with Oracle’s outreach efforts wrapped in awareness and education campaigns.

One thing is for certain: Oracle has positioned itself well.

The post Oracle Cloud Infrastructure And AWS Form Strategic Partnership appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending September 20, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-september-20-2024/ Tue, 24 Sep 2024 13:00:56 +0000 https://moorinsightsstrategy.com/?p=42662 MI&S Weekly Analyst Insights — Week Ending September 20, 2024

The post MI&S Weekly Analyst Insights — Week Ending September 20, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a great weekend!

This week, Patrick, Anshel, and Melody (virtually) will be in Palo Alto at HP Imagine, Anshel will be in San Jose at Meta Connect, and Melody will be at Verint Engage in Orlando.

Last week, Anshel attended the Snap Partner Summit in Santa Monica and Patrick, Jason, Melody, and Robert attended Salesforce Dreamforce in San Francisco (and virtually).

If you missed Will Townsend’s webinar with Zayo, “What’s Next for Your Network’s Foundation?”, it is now available on demand!

Our MI&S team published 16 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, IBM, Intel, and Nokia.

Patrick was on Yahoo! Finance with the Morning Brief team to talk about Intel’s AI chipmaking partnerships, and joined CNBC to discuss recent reports that Qualcomm approached Intel about a takeover.

MI&S Quick Insights

Microsoft, BlackRock, Global Infrastructure Partners, and MGX have partnered to raise $100 billion to build AI infrastructure; the group will invest in datacenters and energy infrastructure to support demand for AI computing power, primarily in the United States. The group’s initial objective is to raise $30 billion, with long-term expectations of expanding it to $100 billion with additional debt financing. The partnership’s main focus will be on datacenters and the power supply needed to run giant AI applications.

A group of Chinese researchers published a paper exploring memory in large language models. The scientists believe that LLMs have a unique type of memory similar to Schrödinger’s cat. The memory can only be observed when a question is asked. The universal approximation theorem (UAT) was used to explain how LLMs can dynamically fit inputs to outputs, making it appear to remember information.

Experiments were run on LLMs by training them on poems, then testing the LLM’s ability to recall the poems based on very little information. It surprised me that the LLMs could remember entire poems based only on titles and authors, even though LLMs don’t store information in a traditional memory structure. The scientists wrapped up the experiment by comparing LLM memory to human cognition. They highlighted similarities and differences and emphasized the potential of the dynamic fitting capability for creativity and innovation.

Salesforce hosted its Dreamforce event last week, and the big story was Agentforce—its portfolio of tools and capabilities that enable business users to create highly productive AI agents. Agents are pretty exciting AI technology in that they can leverage AI and deterministic programming to let an AI drive a business process and minimize human intervention. Salesforce was able to articulate a set of existing technologies (Mulesoft, Prompt Builder) and new ones (Data Cloud) as well as no-code tools that enable users to easily build agents. While this is very promising, I will caution that, like many developer toolsets embedded in application platforms, Agentforce will still need more work when it comes to enterprise or external deployment. We will need to look closely at how testing and maintenance will function in the new world of agents—and figure out what the right business model is.

A few days prior to Dreamforce, ServiceNow announced its latest AI capabilities in its Xanadu release. In addition to a raft of new features similar to what we are now seeing from Agentforce, ServiceNow is releasing a new database to improve performance and scaling, plus a host of new features in its Integration Hub. This aligns well with the thoughts on ServiceNow’s AI aspirations that Melody Brue, Robert Kramer, and I published in June.

IBM is continuing its strategy to cultivate a broad and deep IT automation portfolio. This week it announced its intention to acquire Kubecost, aligning with its Apptio acquisition of 2023. IBM is betting big on FinOps, and Kubecost has the ability to deliver optimized insights to improve the efficiency and costing of Kubernetes infrastructure.

Salesforce has launched Agentforce, an AI-powered suite designed to automate various tasks across an enterprise. Agentforce utilizes autonomous agents to improve efficiency in sales, service, marketing, and commerce. Salesforce emphasizes the platform’s ease of use, accuracy, and ability to deliver immediate results. The AI agents within Agentforce can perform tasks such as drafting e-mails, scheduling meetings, and offering recommendations based on customer data. Salesforce provides pre-built agents such as Service, Sales Development Representative, Sales Coach, and Campaign, while also allowing users to configure their own custom agents. The overarching goals of Agentforce are to empower sales teams, elevate customer experiences, optimize marketing campaigns, and streamline commerce operations through AI-driven automation. Robert Kramer and I talked about Agentforce on the latest episode of the Hot Desk Podcast, and I’ll have more to say in an upcoming analysis article.

The semiconductor space is about as hot as I’ve ever seen it, and it’s only getting hotter. We should be seeing new server CPUs from the two x86 giants hitting the market soon, and of course the AI accelerator market seems to have new, well-funded startups jumping in the game every day. Finally, Arm has driven a new dynamic through its penetration into the cloud (CSP) market that I believe will move downmarket to tier-2 cloud providers and eventually the enterprise.

The CPU is not commoditized. However, the server market is overserved from a scalar compute perspective. Core counts are ridiculously high and the integer performance of chips is beyond what traditional datacenter workloads require. Yes, more is good. And yes, faster is better. But for the enterprise IT organization, we have seen this “cores war” and billboard-style specification comparisons giving way to real value markers such as performance per watt (sustainability, datacenter capacity) and performance per dollar (ROI, TCO).

CSPs have very specific requirements around performance and power which translate into the very specific core counts and performance levels that the CPU makers tout. These are often not CPUs that will be found on price sheets. Furthermore, CSPs require a multi-vendor market. By having more than one supplier, prices are more competitive and different services can be offered.

CPU vendors need to focus marketing spend on real differentiation if they hope to play and win in the enterprise. That differentiation can be virtual machine density or it could be from application acceleration or something else—but the discussion needs to move beyond core counts and memory.

Finally, it is critical to understand that the IT consumer has little faith in published benchmarks from vendors, be it CPU vendors or server vendors. When comparisons are made between your latest technology and a technology that is a generation (or often two) behind—buyers see this. Or when a company publishes a benchmark that shows them with two, three, four orders of magnitude better performance than the competition, their audience realizes it’s synthetic. Let’s move beyond the “benchmarketing” era and into some truth in advertising, so to speak.

Interesting numbers from the last quarter’s financial reporting would indicate that there is softness in storage for some of the major OEMs, despite incredible revenue increases for server sales. Why is this? First, it’s worth picking at those numbers a little more closely. While server revenue numbers were up dramatically across the board, these increases are attributed to AI sales. Non-AI business continues to be flat for most.

What we’ve seen in the market over the last few quarters is storage companies such as Pure and NetApp growing their business as companies like Dell and HPE have seen a flat market. I believe we can thank AI, even if indirectly, for this growth. The focus on AI has led to a focus on data, and this has led organizations to re-examine their storage environments and move toward storage solutions from companies that solely focus on storage and data management. Want more proof? Look at the incredible growth of companies like VAST and Weka—companies that don’t even put an emphasis on the storage element of their solutions.

Lenovo has been the outlier and has seen strong growth. This is due in part to its relatively small customer base and its footprint in the hyperscalers. While I don’t have specific insights, I suspect its enterprise storage business is in line with what we’ve seen from HPE and Dell.

I am certain the OEMs will regain their footing in the storage market. But I don’t believe it will happen until each company examines the way companies like Pure position their products and message to the market.

Veeam Software has acquired AI-powered startup Alcion, which focuses on cyber resilience for Microsoft 365. Alcion’s co-founder, Niraj Tolia, who previously played a key role in Veeam’s Kubernetes data resilience solution Veeam Kasten, has been appointed as Veeam’s new CTO. Tolia will lead the company’s product strategy for Veeam’s new Data Cloud, integrating Alcion’s AI and security features to enhance data resilience. This acquisition is part of Veeam’s broader expansion, which also includes a recent partnership with Lenovo to provide the TruScale Backup Service.

A recent Adobe study highlights the escalating concerns of U.S. consumers regarding misinformation in the lead-up to the 2024 presidential election. The findings reveal that most respondents are worried about the impact of misinformation on the election and have become less trusting of online content.

The study also found a growing demand for transparency in how digital content is created and edited, with a large majority (93%) of consumers emphasizing the importance of understanding content origins and modifications. This demand is particularly strong for election-related content: 95% of respondents said they wanted to see attribution details attached to such information. Nearly half of respondents (48%) have reduced their social media usage due to the prevalence of misinformation, with 89% believing social media platforms should take more decisive action. Most (74%) feel the U.S. government’s efforts to combat online misinformation are inadequate.

Adobe has done a nice job of calling attention to the need for more transparency in digital content—particularly for the company whose tools are designed to manipulate images (yet not in a harmful way). These types of studies are a good way to educate people about the rise in misinformation while promoting the Adobe-led Content Authenticity Initiative.

ServiceNow has introduced AI Agents for automation and intelligent problem-solving to change customer and employee experiences. ServiceNow’s vision for AI Agents is not entirely unique. It is to leverage increasingly powerful AI models to create agents capable of independently identifying and resolving problems. These agents are built to operate within predefined company parameters and with human oversight, ensuring a mix of autonomy and control. It is the human oversight part that I think sets ServiceNow apart from competitors in these early days of AI agent announcements.

Ultimately, ServiceNow envisions a future where humans act as supervisors, guiding teams of AI agents that proactively manage workflows across departments. This represents a significant shift in the human-AI relationship, with AI agents taking on—not taking over—a more active and collaborative role in driving business productivity and transformation. ServiceNow’s initial focus is on customer service management and IT service management.

Introduced last week, Salesforce’s Agentforce is a suite of AI-powered agents designed to enhance business functions. Let’s review a few benefits and challenges of integrating Agentforce with ERP and SCM systems. First, the benefits:

  • Automation — AI agents can handle repetitive ERP and SCM tasks such as order processing, inventory management, customer service, procurement processes, etc.
  • Data Integration — Tools such as Salesforce’s MuleSoft allow data to flow between systems, although this can also pose challenges.
  • Scalability — Agentforce supports increased ERP and SCM workloads without the need for additional human resources.
  • Predictive Analytics — Salesforce can enable AI-driven insights drawn from enterprise data that resides in ERP and SCM systems to improve decision-making.

Here are some of the challenges:

  • Integration Complexity — Integrating Agentforce with ERP and SCM systems often requires IT expertise; trust in the integration process is critical to avoid operational disruptions.
  • Security — Ensuring the protection of sensitive ERP and SCM data when using AI agents deserves significant attention.
  • Trust — Users must trust that the data handled by AI agents in ERP and SCM systems is used properly. Errors could impact key functions in the enterprise systems.
  • Transparency — It’s important to understand how AI agents make decisions. Transparent AI processes can build trust by helping users understand how decision-making happens in areas such as supply chain optimization and demand forecasting.

More to come on all of this in my upcoming article digging into the details of Agentforce’s impacts on enterprise systems.

Cisco recently announced a second round of layoffs for the year, affecting 5,600 team members, or 7% of its overall workforce. It was a widely anticipated move, given the softness in Cisco’s networking business and an uncertain economy heading into a U.S. presidential election. I expect that the company will use the cost savings to reinvigorate demand for all its infrastructure. This applies especially to cybersecurity, as the integration of Splunk continues to strengthen Cisco’s offering to the market.

Globant is acquiring Blankfactor, a U.S.-based IT consulting firm specializing in payments, banking, and capital markets. This acquisition should strengthen Globant’s financial services offerings, particularly in card issuing, merchant acquiring, and securities finance. Blankfactor’s expertise in consulting-led product engineering, cloud technologies, and AI solutions should complement Globant’s capabilities and help it better serve clients in the rapidly evolving financial services industry.

Amazon has added PayPal as a Buy with Prime checkout option. This builds on last week’s news of PayPal’s expanded partnership with Shopify. This is noteworthy because it expands PayPal’s reach in the e-commerce space, given that it is currently not a payment option on Amazon’s main platform. This strategic move strengthens PayPal’s position in the market and offers more choices for online shoppers. Under the leadership of new CEO Alex Chriss, PayPal seems to be making some strides in creating products and services that compete with newer rivals such as Stripe for payments and Apple for mobile wallets.

HTC has announced a new VR headset, the VIVE Focus Vision. HTC appears to be building this headset for both wireless and wired streaming from a desktop PC—while also enabling it to function as a fully standalone headset. It features mixed reality passthrough for AR-like experiences thanks to two RGB front-facing cameras. It seems that HTC wants this headset to become the standard for PC VR applications; it’s equipped with a 120-degree FoV, 90 Hz LCD panels, and a Qualcomm Snapdragon XR2 chipset. It also has ample RAM (12GB) and storage (128GB, with up to 2TB of expandable storage via MicroSD), as well as swappable batteries. At $999, it will have a hard time competing with Meta’s Quest 3, but given its eye-tracking capabilities and arguably better ergonomics, there is a bit of premium capability. That said, I believe the market fit for this headset is fairly small at the price.

Snap, Inc., parent company of Snapchat, announced a new pair of AR glasses, the fifth generation of its Spectacles family. This is the second generation of Spectacles to have dual see-through waveguide displays. These new Spectacles are powered by a Qualcomm Snapdragon processor running a new Snap OS operating system designed to work with Snap’s developer tools and be compatible with the Snapchat app. While the horizontal field of view is only 46 degrees, the vertical FoV is much taller and seems to lend well to porting Snapchat AR lenses. While the glasses themselves appear quite bulky, I do believe that Snap is taking the right approach to AR by embracing AI and natural interfaces like hand tracking and voice. Other than their appearance and limited FoV, these glasses are still very much targeted towards developers at $99 a month for 12 months.

Matter at the tipping point — At CES 2023, I predicted that Matter, the smart home standard from the Connectivity Standards Alliance, would hit its tipping point in 2025. I figured the CSA and its member companies would iron out the first wave of Matter and Thread bugs during the first year—improving usability, adding more device types, and paving the way for the second wave of commercial products. As it played out, first-year deployments revealed new (but not unexpected) concerns about usability and deployment. This year, Matter and its members addressed the second-order problems, and the Thread Group released Thread 1.4 with essential Matter-related enhancements. I covered Thread 1.4 in these pages in my September 6 weekly update.

Meanwhile, Apple, Amazon, Google, and Samsung have turbocharged Matter’s market acceptance by incorporating Thread and Matter into high-volume consumer products. Google, Apple, and Amazon smart speakers and hubs have Thread and Matter support built-in, so millions of consumers already have the technologies in-house, even though they might not be aware of it. Also, if you have an iPhone 15 Pro or newer with iOS 18, your phone can directly connect with Thread-based Matter devices via its built-in Thread radio—no hub required.

Better usability, increased ecosystem support, and direct device connections combine to reduce initial adoption barriers and improve user experiences. So, as CES 2025 approaches, CE manufacturers are rolling out waves of new products, such as Eve’s recently released wall-mounted, Thread-connected light switch. Using that switch with a Thread-enabled iPhone is the closest thing to a one-click Matter installation I’ve seen. It does look like 2025 will be the tipping point where Matter’s market share accelerates on its way to becoming the leading smart home ecosystem for new products by the end of 2026.

Ikea and Samsung collaborate on Matter support — Last week, Ikea added Matter support to DIRIGERA smart home hubs via a software update. Ikea’s smart home product line, introduced in 2012, includes lighting, remote switches, air purifiers, motorized blinds, and Wi-Fi speakers. Ikea has always used the well-established Zigbee protocol for these products. In 2022, the company launched DIRIGERA for smartphone integration.

Ikea is on the CSA board of directors and a strong Matter supporter, so I wasn’t surprised that the new hub was “Matter-ready” from the start. In this case, Matter-ready meant that a future software update could add “bridging” support, which the Matter specification defines in detail. Bridges translate Zigbee protocol to and from Matter protocol, allowing Matter ecosystems such as Amazon, Apple, Google, and Samsung to control the Ikea non-Matter (Zigbee) devices. Ikea followed through, making good on the promise of a Matter upgrade.

Also, last week, Samsung announced native SmartThings support for DIRIGERA and its Matter bridge. Of course, customers may choose a different smart home system (Apple, Google, Amazon, or other), but I assume Samsung has thoroughly tested SmartThings with Ikea’s bridging. Hear that alarm bell? It’s Ikea and Samsung with a wake-up call for non-Matter smart home suppliers to offer Matter bridges as soon as possible. Proprietary hubs are rapidly becoming obsolete.

T-Mobile launched a new network slice called T-Priority, which is specifically designed to be prioritized above all other users on its network. This service depends on the company’s 5G Standalone network, which it will be upgrading to 5G Advanced by the end of this year. Additionally, this gives it a service to compete with AT&T’s FirstNet, which has been the standard for most first responders. I believe this service will be complementary in many ways and will potentially serve as a backup in some applications as well as a primary line for new 5G applications thanks to its prioritization and larger bandwidth resources. I believe that T-Priority could be very powerful when many emergency services are sharing the same limited FirstNet spectrum and could benefit from added capacity on demand.

The new iPhone 16 might not be selling as well as Apple had anticipated, but the reality is that plenty of consumers are aware that many Apple Intelligence features, including the much-hyped new Siri, won’t be available until next year. Based on comments from T-Mobile’s CEO, it makes sense that there might be a slower start for iPhone sales in Q4, but an eventual ramp-up once the AI features become available broadly. I also believe that this could explain why so many carriers have offered such sweetheart deals on the new iPhone 16 Pro series. This is partially because of the slow rollout of Apple Intelligence, but I believe it’s also because the base series iPhone 16 is the closest to the Pro in terms of specs that it has ever been.

8×8 announced the availability of its Video Elevation feature for 8×8 Contact Center. This capability enables contact center agents to initiate one-way video interactions with customers so agents can help quickly resolve issues that may otherwise require a service call or a lengthy discussion. I really like this “show me what you see” functionality, which I have used in different scenarios—most recently when diagnosing a router issue with AT&T. The solution should ensure that the caller and agent are discussing the same thing when trying to resolve an issue. As AI agents start proliferating in the contact center, it might be some time before they can diagnose issues using multimodal recognition.

Zoom has expanded its contact center offerings with three new tools to streamline agent workflows and boost efficiency. Zoom Virtual Agent uses conversational AI to handle routine customer inquiries, freeing human agents for more complex tasks. Agent Assist leverages generative AI to provide real-time support and guidance during customer interactions. The Quality Management tool offers automated transcription and scoring of interactions to facilitate performance evaluation and coaching. These features have become more or less table stakes in contact centers. The significance in this case is that they show Zoom’s strategic focus on leveraging AI to optimize contact center operations and enhance both agent and customer experiences. In addition, Zoom’s AI is quite good. This is suggested by the number of awards and accolades it has received, but also from my own experience. For example, Zoom’s AI noise cancelation is so good that I’ve had people in Zoom meetings apologize for their barking dogs or other background noise that I couldn’t even hear. And more than once when I’ve had to miss or join a meeting late, I’ve been able to accurately and quickly get up to speed thanks to Zoom’s AI meeting summary.

Qedma is one of IBM’s application partners in the initial release of the Qiskit Functions Catalog. Its QESEM (Quantum Error Suppression and Error Mitigation) product is designed to suppress noise created by decoherence and calibration errors in QPUs. That means users can accurately run quantum algorithms on noisy QPUs. According to Qedma, QESEM achieves better results than algorithms that are run without error mitigation.

The QESEM workflow begins by compiling quantum circuits into operations compatible with the QPU. It uses both native and additional operations calibrated by Qedma. Following that, Qedma characterizes errors in the newly compiled circuits. Based on error data, the circuits are reconfigured for optimal QPU execution and then run on the QPU. Lastly, classical postprocessing refines the results and provides estimations with error bars for measured observables.

QESEM provides unbiased output with errors that are primarily statistical and reducible by increasing QPU time. It offers scalability across different qubit numbers without a proportional increase in required QPU time. It also supports several state-of-the-art QPUs, including superconducting qubits and trapped ions. Even though those two cover the biggest part of the quantum market, I expect this will be expanded to other modalities over time.
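
For developers curious about what consuming a function like QESEM looks like, here is a minimal, hypothetical sketch built around IBM’s Qiskit Functions Catalog client. The function identifier, backend name, and run() arguments shown here are assumptions for illustration; Qedma’s documentation defines the real interface.

```python
# Hypothetical sketch: loading an error-mitigation function from the Qiskit
# Functions Catalog. The function name and run() arguments are illustrative assumptions.
from qiskit import QuantumCircuit
from qiskit_ibm_catalog import QiskitFunctionsCatalog

catalog = QiskitFunctionsCatalog(token="<IBM_QUANTUM_API_TOKEN>")
print(catalog.list())                    # discover available catalog functions

qesem = catalog.load("qedma/qesem")      # assumed identifier for Qedma's QESEM

# A small example circuit; in practice this would be the user's application circuit.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)

job = qesem.run(
    circuit=circuit,                     # argument names are assumptions
    observables=["ZZ"],
    backend_name="ibm_fez",              # placeholder backend
)
print(job.result())                      # error-mitigated estimates with error bars
```

The appeal of this model is that the compilation, error characterization, and postprocessing steps described above happen behind the function call rather than in the user’s own code.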

Ivanti is the latest cybersecurity company to expose a vulnerability that has been exploited in a cyberattack. The endpoint protection provider recently revealed a critical security flaw impacting its cloud service appliance that allows remote access to restricted functionality. Ivanti reports that a limited number of customers have been affected, but regardless of the blast radius it points to broader concerns about the company’s software development process. The timing is not ideal, given the scrutiny over CrowdStrike’s flubbed endpoint protection update. Consequently, Ivanti would be wise to provide additional details and deeper transparency about what it is doing to prevent future vulnerabilities.

I do not believe that most of the talk about Qualcomm acquiring Intel is credible. While I do believe that Qualcomm could potentially absorb or acquire Mobileye, even that would be questionable considering the current FTC climate. Realistically, there’s no way that Intel would sell its PC division, Wi-Fi business, or any of its other businesses—other than potentially its networking business—to Qualcomm. Intel’s PC business is keeping the company afloat right now, and selling it would be corporate suicide. I don’t know which divisions Qualcomm has expressed interest in, but this rumor has been bubbling up for weeks. Frankly, I believe that Qualcomm’s greatest interest in Intel is in supporting its foundry business to enable it to be a more competitive player to challenge TSMC.

The National Football League and Amazon Web Services have renewed their technology partnership, which began in 2017. A key development is the introduction of a new AI-powered Next Gen Stat that changes how tackles in football are analyzed. The Tackle Probability machine-learning model predicts the likelihood of a defender successfully making a tackle during a play, helping to identify the most reliable tacklers and the most elusive ball carriers. The Next Gen Stats platform, supported by AWS, collects over 500 million data points each season, providing detailed statistics and different viewing options for fans. This collaboration also includes tools like the Digital Athlete for injury prevention and the Big Data Bowl, which encourages the use of data insights to improve the experience for fans and players. Check out the details.

SAP provides an ERP-centered approach to carbon management that uses AI to maintain data quality and simplify reporting. Sustainability data helps enterprises track and manage carbon footprints across operations, share sustainability data with partners, and integrate carbon accounting into financial decisions. Though ERP systems can be complex, they are vital for meeting today’s environmental and regulatory demands. By making use of these features, businesses can ensure compliance, improve efficiency, reduce costs, and make informed decisions aligned with sustainability goals.

Look for an upcoming research piece exploring how SAP demonstrates the impact of ERP on sustainability.

AST SpaceMobile’s launch of five commercial low earth orbit satellites on September 12 was a watershed event in supporting direct-to-unmodified-smartphone satellite connectivity. AT&T has been working with the company behind the scenes for nearly four years, and the operator’s financial investment signals confidence in the viability of satellite communications to bridge terrestrial mobile network coverage gaps. Moor Insights & Strategy will be publishing a podcast soon highlighting a conversation with Chris Sambar, president of AT&T Network, and Abel Avellan, CEO of AST SpaceMobile, discussing the launch and its broader implications.

Research Papers Published

Research Notes Published

Podcasts Published

MI&S Game Time Tech (Melody Brue, Robert Kramer, and IBM’s Noah Syken)

The Futurum Group Enterprising Insights Podcast (Guest: Robert Kramer)

Don’t miss future MI&S Podcast episodes! Subscribe to our YouTube Channel here.

Citations

Apple / AirPods Pro 2 / Anshel Sag / Soft Impact
Apple’s AirPods Pro 2 could forever change how people access hearing aids

IBM / Layoffs / Jason Andersen / Computerworld
IBM has reportedly laid off thousands

Intel / AWS Partnership & Government contracts / Patrick Moorhead / New York Times
Intel, Aiming to Reverse Slump, Unveils New Contracts and Cost Cuts

Intel / AWS Partnership & Government contracts / Patrick Moorhead / TechTarget
Intel gets boost from AWS, government contracts

Intel / AWS Partnership & Government contracts / Patrick Moorhead / AOL – Business Insider
Intel, once a Silicon Valley star, has been floundering. Now it’s mounting a turnaround.

Intel / Reorganization of Foundry / Patrick Moorhead / MarketWatch
Why Intel’s latest move for its foundry business is so significant

Intel / Reorganization of Foundry / Patrick Moorhead / RCR Wireless News
Intel announces re-org focused on foundry business

Intel / Reorganization of Foundry / Patrick Moorhead / Silicon Angle
On theCUBE Pod: Analysts debate Intel Foundry spinout, AI tsunami and Oracle-AWS cloud moves

Nokia / APIs / Will Townsend / Fierce Network
Nokia: Ericsson’s new JV validates our approach to APIs

TV Interviews

Intel / AWS Partnership / Patrick Moorhead / Yahoo! Finance
Intel’s partnerships are boosting investor confidence: Analyst

Intel / Shares / Patrick Moorhead / CNBC
Intel shares climb after reports Qualcomm approached Intel about a takeover

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Pixel Watch 3 (Anshel Sag)
  • Pixel 9 Pro Fold (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Patrick Moorhead, Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue) 
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AMD Event, San Francisco, October 8-10 (Matt Kimball)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending September 20, 2024 appeared first on Moor Insights & Strategy.

]]>
RESEARCH NOTE: Is Lenovo’s AI Strategy Working? https://moorinsightsstrategy.com/research-notes/is-lenovos-ai-strategy-working/ Wed, 18 Sep 2024 18:20:28 +0000 https://moorinsightsstrategy.com/?post_type=research_notes&p=42568 Stop me if you’ve heard this one before: AI is top-of-mind for virtually every IT organization. And GenAI is the elixir that will cure all inefficiencies that slow down businesses. (It must be true; I read it on X.) Don’t believe me? Just look at any marketing literature from both old and new companies that […]

The post RESEARCH NOTE: Is Lenovo’s AI Strategy Working? appeared first on Moor Insights & Strategy.

]]>

Stop me if you’ve heard this one before: AI is top-of-mind for virtually every IT organization. And GenAI is the elixir that will cure all inefficiencies that slow down businesses. (It must be true; I read it on X.) Don’t believe me? Just look at any marketing literature from both old and new companies that have reoriented their positioning to win in this AI gold rush.

Lenovo is one of the many enterprise IT solutions companies chasing the AI pot of gold; fortunately for it and its customers, Lenovo actually has the products and the know-how to deliver practical results. Like its competitors, it is combining hardware, software, and services to deliver differentiated value.

As part of its AI strategy, the company has just announced a number of new offerings to help ease the cost, operational, and complexity challenges presented by AI. Do these strike a chord? Are they relevant? Let's start by setting the context for what's going on with enterprise AIOps, then dig into what Lenovo is doing about it and what it means for customers.

GenAI Introduces a New Set of Opportunities—and Challenges—for Enterprise IT

Before GenAI can solve all the world’s problems, enterprise IT first has to figure out how to deploy, power, manage, and pay for the hardware and software stacks that make GenAI’s magic happen. I didn’t read this on X—I’ve heard it from every IT executive I’ve spoken with on the topic.

As I touched on above, the challenges of GenAI span three buckets: financial, operational, and organizational. In other words, it’s costly, it’s complex, and it requires a lot of people. From planning to deploying to using and managing, there is not much about GenAI that adheres to traditional IT practices.

Because of this, organizations struggle to activate GenAI in the enterprise. Probably most of my readers here have seen the stats about GenAI projects, but they bear repeating. For example, a recent RAND National Security Research Division study calls out AI project abandonment rates as high as 80%. While I believe this number is on the very high side, the spirit of RAND's message still resonates. Organizations tend to treat AI projects like other IT projects, then quickly realize they are anything but ordinary because of their costs, complexity, power consumption, people needs, and other factors.

Naturally, IT solutions companies have focused on removing some of these barriers by introducing integrated stacks, partnerships, services, and the like. As evidence of this, NVIDIA CEO Jensen Huang seems to have been on stage for every major tech conference in 2024. Additionally, we’ve seen the introduction of cool-sounding names that promote server vendors’ solutions to the market. Yet after all the hype and cool names, the challenges still remain. GPUs are prohibitively expensive and consume all of the available power in the rack and the datacenter; solution stacks—once operational—are now hard to manage; a huge skills gap exists; and so on. This is how the market gets to 80% abandonment rates, quickly descending from inflated expectations to the depths of disillusionment.

Lenovo Delivers GPUaaS, AIOps, and Neptune

This is where Lenovo comes in. In its latest announcement, Lenovo attempts to address some of these challenges with a few subtly impactful product and service announcements. The first is for the company’s GPU-as-a-service (GPUaaS) offering, which allows customers to better leverage expensive GPUs across the enterprise.

Let’s say you are a state government IT executive with dozens of agencies that operate as separate shops—individual teams, individual budgets, etc. The state CIO, on a directive from the governor, makes implementing AI a top priority for every agency. GPUaaS allows all of these agencies to leverage the same farm of GPUs, with usage metering and billback built in, via Lenovo Intelligent Computing Orchestration (LiCO). Organization-wide costs come down, and each agency has the necessary horsepower to train and tune its AI models.
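
To make the metering-and-billback idea concrete, here is a minimal sketch of how per-agency chargeback might be computed from GPU-hour usage records. It is purely illustrative: the agency names, the flat internal rate, and the record format are hypothetical and are not drawn from Lenovo's LiCO documentation.

```python
from collections import defaultdict

# Hypothetical usage records exported from a GPU metering system:
# (agency, gpu_hours) pairs collected over one billing period.
usage_records = [
    ("Dept. of Transportation", 1_250.0),
    ("Dept. of Health", 3_400.0),
    ("Dept. of Revenue", 620.0),
    ("Dept. of Transportation", 480.0),
]

RATE_PER_GPU_HOUR = 2.10  # assumed internal chargeback rate, in dollars

def compute_chargeback(records, rate):
    """Aggregate GPU-hours per agency and price them at a flat internal rate."""
    totals = defaultdict(float)
    for agency, gpu_hours in records:
        totals[agency] += gpu_hours
    return {agency: round(hours * rate, 2) for agency, hours in totals.items()}

if __name__ == "__main__":
    for agency, bill in sorted(compute_chargeback(usage_records, RATE_PER_GPU_HOUR).items()):
        print(f"{agency}: ${bill:,.2f}")
```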

As somebody who has lived in this world—I have been that state government IT exec—I can immediately see the benefits of GPUaaS. While there are still challenges around how budgets and cross-agency utilization are prioritized and managed, this solution can deliver real value to organizations standing up AI in their datacenters. More than that, GPUaaS addresses all three of the big challenges facing IT mentioned earlier—cost, ops, organization.

Lenovo’s second announcement, about AIOps, goes right to the heart by directly addressing the operational and complexity challenges of enterprise IT. (Cost is more of an indirect benefit.) The substance of it is that Lenovo’s XClarity One hybrid cloud management platform will incorporate predictive analytics and GenAI to deliver greater levels of reliability and cyber resilience for Lenovo infrastructure.

AIOps is an IT trend that has been around for some time. While Lenovo’s move is somewhat of a catch-up play, it does allow the company to check the box for an element of enterprise IT readiness that is critical for achieving broad adoption in this segment. Further, while much of the competition’s capabilities in this area have come via acquisition, Lenovo’s XClarity One is the fruit of in-house design.

As a techie who grew up in the server/network management space (Want to talk about managing Novell NLMs and why IPX is better than IP? I’m your guy), I like what Lenovo has done with XClarity One. In fact, I wish the company would lean into this goodness more. For instance, the cloud-based nature of XClarity One makes it simple to deploy and consume. Further, this model enables IT organizations to manage their Lenovo infrastructure through the proverbial single pane of glass.

Finally, Lenovo has built on its big HPC and AI winner by announcing some slight enhancements to its Neptune liquid-cooling technology. Specifically, Lenovo reported that Neptune now has built-in real-time energy efficiency monitoring. This enables organizations to better understand how efficiently their infrastructure is operating, allowing for proactive tweaks and tuning to drive down the all-important power usage effectiveness (PUE) rating.
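
For readers who don't live in this metric every day, PUE is simply total facility energy divided by the energy consumed by the IT equipment itself, so a value closer to 1.0 means less energy lost to cooling and power delivery. A minimal sketch of the arithmetic, using invented monthly readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings, in kWh
total_facility = 1_450_000.0   # everything: IT, cooling, power conversion, lighting
it_equipment = 1_120_000.0     # servers, storage, networking only

print(f"PUE = {pue(total_facility, it_equipment):.2f}")  # ~1.29
```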

Does Lenovo Differentiate?

Frankly, Lenovo’s challenge is not whether it has a legitimate and differentiated play in enterprise IT in general and AI in particular. In both cases, the answer is a simple yes. The real challenge is telling Lenovo’s story to the enterprise.

The company has done an excellent job of building a business that dominates in the hyperscale and HPC markets. These are two highly competitive markets in terms of performance and resilience/reliability. For whatever reason, though, the company has seemed a little hesitant about aggressively pursuing the commercial enterprise market. Strangely, Lenovo’s business in this area is largely the same business that was run for decades by IBM—perhaps the all-time most trusted brand in enterprise IT.

Lenovo was ahead of the market in building and enabling the AI ecosystem (check out the company's AI innovator program). Further, its infrastructure is deployed for brands and retailers that most people use and visit on a daily basis. Yet despite all of this innovation, most IT professionals don't know just how rich a portfolio the company has.

Given Lenovo’s new leadership, I expect that will change. If the company leverages all of the pieces of its technical and business portfolio, it will be a force to reckon with.

The post RESEARCH NOTE: Is Lenovo’s AI Strategy Working? appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending September 13, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-september-13-2024/ Mon, 16 Sep 2024 21:51:19 +0000 https://moorinsightsstrategy.com/?p=42361 MI&S Weekly Analyst Insights — Week Ending September 13, 2024

The post MI&S Weekly Analyst Insights — Week Ending September 13, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a great weekend!

Last week, Patrick, Melody, Matt, and Robert were in Las Vegas for Oracle CloudWorld and Netsuite SuiteWorld. Will Townsend was in London for Connected Britain, where he also moderated a panel on tech innovations in mobile networks. Jason Andersen was in Austin for JFrog swampUP 24. Robert Kramer joined Infor’s weekly “What’s Up Water Cooler” podcast to discuss the latest innovations in modern ERP systems. Check it out on YouTube.

This week, Anshel is attending the Snap Partner Summit in Santa Monica; Patrick will be at Salesforce Dreamforce in San Francisco, and Jason, Melody, and Robert will attend virtually.

Will is hosting a webinar with Zayo, “What’s Next for Your Network’s Foundation?” on September 17 (tomorrow!). It’s free to register to hear Will’s insights on the future of networks.

Our MI&S team published 19 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, Automotive GPU IP, Border Gateway Protocol (BGP), Canva, Cyber Resilience, Google, Oracle, ZeroPoint, and Zoho Analytics.

MI&S Quick Insights

Adobe has previewed its Firefly Video Model, which is an AI-powered tool that can streamline workflows and add a lot to an editor’s creativity. The existing Firefly models are image-based. The Firefly Video Model has many useful features such as filling timeline gaps with generated B-roll footage using text prompts, camera controls, and reference images to fill in the missing sections. It can create variations of existing concepts or brainstorm on demand to generate new elements and provide new ideas. Firefly can also create atmospheric effects, 2-D and 3-D animations, and other visual enhancements. Firefly can remove unwanted objects, smooth transitions, and more, allowing editors to focus on creative storytelling and collaboration. Everything considered, AI provides Adobe editors a powerful video toolkit with many creative advantages. It’s a big step forward for video editing.

I’ve been speculating on OpenAI’s stealth project code-named Strawberry, which is believed to have superior reasoning power. Well, OpenAI may have just released Strawberry in the form of its latest model, o1, which appears to be a groundbreaking language model that demonstrates improved reasoning. o1 excels in several complex tasks ranging from math to code challenges. It even beats human experts with certain problem-solving skills. It also has an impressive ability to explain its thought process and how it arrives at its conclusions, as well as the ability to learn and improve over time.

That said, o1 also has some challenges. For example, training with large datasets may be a problem. However, the model is still under development, so we can wait to see if that improves. Despite any training problems, its reasoning ability already appears to be a plus that will set new model standards. I’m looking forward to seeing this model in its fully developed form.

JFrog held its annual swampUP event, where it made a number of interesting announcements. Long known for its Artifactory artifact repository, JFrog has been steadily adding new capabilities and acquiring technology. The focus on security has been an especially high priority, and this week JFrog announced a runtime protection service enabling end-to-end artifact security for both source and binary code. This was coupled with the formal announcement of its MLOps solution, which is based on its acquisition of Qwak this summer. My research on how JFrog has elevated itself from a DevOps point tool to a full-blown platform will be available soon.

In addition to its own innovations, JFrog also announced a partnership with GitHub that shows some long-term potential. One of the first big steps was the integration of Artifactory with GitHub Copilot. While there have been many announcements of this type over the past few months, this one stands out. Given that Artifactory provides a rigorous and secure registry for all development artifacts, this integration makes it easy for developers to have an AI assistant specifically configured around their companies’ standards. For instance, a developer will get assistance based only upon specifically curated artifacts, versions, and standards established by existing rules and policies. This out-of-the-box integration is something that other tools either cannot do yet or that would require a lengthy integration process.

Over the next few months, we will see a new arms race, with a range of vendors showing off their AI agents. This started last week at Oracle CloudWorld and NetSuite SuiteWorld. It makes sense for application platforms to begin introducing agents—and dev tools to build agents. Agents will be the next big thing now that LLMs and bots (which are actually the simplest agents) are becoming common. This will be a major focus of my research over the rest of 2024. Over time, agents will become as ubiquitous as apps on your phone, all built to manage multi-step activities by using AI. You can think of them as dev assistants that can help you write code. An agent will be able to write the code, develop a test plan, execute the tests, give feedback, and recommend where to deploy. All you will need to do is approve the tasks, and the code will be deployed.
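
To make “multi-step” a bit more concrete, here is a minimal conceptual sketch of an agent that runs a sequence of tasks and pauses for human approval before the deployment step. The task names, results, and approval flow are entirely hypothetical and not tied to any particular vendor’s agent framework.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[], str]          # returns a short result summary
    needs_approval: bool = False    # pause for a human before running

def write_code() -> str:  return "generated patch for feature X"
def write_tests() -> str: return "12 unit tests created"
def run_tests() -> str:   return "12/12 tests passed"
def deploy() -> str:      return "deployed to staging"

def run_agent(tasks: List[Task]) -> None:
    """Execute tasks in order, asking a human to approve gated steps."""
    for task in tasks:
        if task.needs_approval:
            answer = input(f"Approve step '{task.name}'? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Stopped before '{task.name}'.")
                return
        print(f"{task.name}: {task.run()}")

if __name__ == "__main__":
    run_agent([
        Task("write code", write_code),
        Task("write tests", write_tests),
        Task("run tests", run_tests),
        Task("deploy", deploy, needs_approval=True),
    ])
```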

Last week Amazon announced that the SageMaker platform will include added support for Amazon Elastic Kubernetes Service (EKS) with its HyperPod-managed MLOps solution. This allows IT and DevOps teams to use the familiar Kubernetes interface and more easily manage HyperPod clusters. The solution also ties in with Amazon CloudWatch to enable production monitoring of the clusters. This is a great step in enabling IT Ops and DevOps to “de-silo” AI and machine learning workloads.
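
Because HyperPod clusters orchestrated through EKS appear as ordinary Kubernetes nodes, teams can inspect them with the same tooling they already use elsewhere. The minimal sketch below uses the standard Kubernetes Python client and assumes your kubeconfig already points at the relevant EKS cluster; it is generic Kubernetes code, not a SageMaker-specific API.

```python
from kubernetes import client, config

def list_cluster_nodes() -> None:
    """List node names and instance types for the currently configured cluster."""
    config.load_kube_config()  # assumes ~/.kube/config targets the EKS cluster
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        instance_type = labels.get("node.kubernetes.io/instance-type", "unknown")
        print(f"{node.metadata.name}: {instance_type}")

if __name__ == "__main__":
    list_cluster_nodes()
```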

Salesforce’s new Agentforce platform aims to change how businesses operate by introducing autonomous AI agents that work alongside employees to handle tasks in various departments such as service, sales, marketing, and commerce. These AI agents can analyze data, make decisions, and take independent action, freeing human workers to focus on more strategic and complex tasks. This could lead to increased efficiency and the ability to scale operations on demand. Early adopters such as Wiley have reported significant increases in case resolution using Agentforce, highlighting the potential of this new technology to increase customer satisfaction as well.

While Agentforce promises efficiency and scalability, its implementation could lead to job displacement and over-reliance on AI, potentially impacting employee and customer relationships. Concerns around data privacy, bias, and unexpected errors also need addressing. Additionally, the cost and complexity of implementing such technology may pose challenges, especially for smaller businesses.

However, Salesforce is emphasizing the importance of its Data Cloud in powering the accuracy and capabilities of its new AI solutions. By unifying apps, data, and AI agents on a single platform, Salesforce aims to reinforce its position as a leader dedicated to customer-centric solutions in the evolving CRM landscape.

Oracle announced a strategic partnership with AWS through which Oracle Cloud Infrastructure (OCI) will reside in AWS datacenters, with Exadata infrastructure housing Oracle Autonomous Database and other services. This service, Oracle Database@AWS, is aimed at enabling customers to deploy their Oracle database environments natively in AWS for easier, more performant integration of enterprise data with services such as Bedrock, SageMaker, and other analytics tools. No ETL. No complicated data pipeline management.

How will this work? Customers will go through their AWS console to select and deploy Oracle Database@AWS, using either AWS or Oracle credits to activate. Database@AWS will spin up as a service for use. L1 service will be provided by AWS; anything beyond that will be a collaborative effort between the two companies.

AWS marks the last of the big cloud providers to embrace this native multicloud model that Oracle pioneered. And while this partnership may seem surprising on its surface, it actually makes perfect sense. Virtually every Oracle customer uses AWS, and virtually every large AWS customer uses Oracle. Many of these customers want to integrate their rich enterprise data with the AI and analytics tools that are available in the cloud—and AWS is that cloud of choice. Rather than make life complicated, or force customers into a choice that would be suboptimal regardless of which cloud they chose, OCI and AWS have found a way to address these needs.

I mark this as a big win for both companies. Oracle has effectively mainstreamed this concept of native multicloud – or cloud within a cloud. And AWS has sent a big signal to the market about its customer-first approach.

Lenovo has made a slew of announcements aimed at enterprise IT organizations struggling with enabling and supporting the AI environment. GPU-as-a-service (GPUaaS), AIOps, and deeper insights into liquid cooling are three launches that should drive efficiencies across the financial, operations, and sustainability vectors. Here are the three offerings in a nutshell:

  • With GPUaaS, the company has delivered a new solution in its TruScale lineup that allows organizations to deploy and meter uses of NVIDIA GPUs across AI and HPC workloads. This includes built-in consumption metering that can be used to charge back to internal customers.
  • XClarity One gets a significant upgrade as the company leans more heavily into AIOps, delivering greater levels of automation to IT operations.
  • Lenovo’s Neptune liquid cooling becomes easier to deploy and utilize with Lenovo advisory services designed to help customers better understand how to most efficiently use liquid cooling. This is especially important as AI becomes more present in the enterprise datacenter.

I like how Lenovo is driving differentiation across the areas where enterprise IT and datacenter operators struggle.

How does this GPUaaS work? It’s designed to make life easier for organizations with multiple business units. This is like when I was working in government IT in Florida, where we had 39 different agencies—and a number of entities within each agency—that were funded separately. GPUaaS would enable me to apportion GPU resources across the state—dedicated GPU resources along with dedicated bill-back. This is not simply differentiated; it’s differentiated and it delivers value to organizations that are trying to better utilize or leverage the very large investments they are making in performant computing platforms.

What’s in it for Lenovo? I see two things. First, this is a differentiated service that can deliver real value into the enterprise—a market segment where Lenovo has been trying to establish itself for some time. If Lenovo can gain traction with a service such as GPUaaS, it can perhaps find opportunities downstream, in the more general-purpose compute clusters and farms.

Second, I think this is maybe an opportunity for Lenovo to establish itself with a higher-margin AI add-on business in consulting services. Something we’ve seen across the quarterly earnings of the OEMs is that the AI infrastructure market has contributed big top-line gains, but not a lot of margin. Consulting add-ons could help Lenovo change that.

Canva is raising the price of its Teams subscription by up to 300%, citing the addition of AI-powered features as justification. This move has been met with mixed reactions from users, with some questioning the value proposition while others find the new features to be worth the increased cost. Canva seems to have overlooked a more flexible AI pricing strategy, forcing customers to pay for bundled features instead of choosing the AI tools they actually need.

Canva’s decision to bundle AI features into a significantly higher-priced plan risks alienating its core user base of smaller teams and individuals that may not require the full suite of AI capabilities. By potentially forcing customers to pay for features they don’t need, Canva could drive them to seek more affordable and customizable alternatives. In a rapidly evolving AI landscape where costs are decreasing, offering AI features as add-ons or adopting a usage-based pricing model could be a more sustainable and customer-centric strategy.

Zoho Analytics has unveiled over 100 new enhancements, including an upgraded AI assistant and a machine learning studio. The focus is on democratizing data analysis to empower users across all roles to extract actionable insights. This comprehensive upgrade positions Zoho Analytics as a powerful and user-friendly BI solution at a competitive price. You can read my full analysis in this Forbes contribution.

Zoho has released version 6.0 of its AI-powered Analytics platform, bringing new AI and machine learning features. The update offers more options for teams to collaborate, analyze data, predict trends, automate tasks, and connect data for better decision-making. AI-driven automation simplifies metrics, reporting, and dashboards, while AutoML allows users to create custom models. The platform is also more flexible and extensible, integrating smoothly with tools like Power BI and Tableau.

Oracle CloudWorld is one of the best tech events of the year, especially when you pair it with the company’s NetSuite SuiteWorld conference. Let’s recap these back-to-back events from last week.

Oracle came out strong with a slew of announcements, emphasizing how AI is now deeply embedded across its offerings—from Oracle Cloud Infrastructure (OCI) to Oracle Fusion Cloud Applications. Hot topics included partnerships with IBM, Amazon Web Services, Google, and Microsoft Azure to provide customers with more-unified experiences.

Larry Ellison remains as sharp and visionary as ever. (I hope I look that great when I’m 80.) He focused on passwords and Zero-trust Packet Routing (ZPR) to simplify the complexities of network security.

I had the opportunity to meet with the Oracle Cloud ERP and Oracle Cloud SCM teams (#mywheelhouse) to discuss some of their impressive updates. For example, an RFID-powered solution now ensures Oracle Cloud SCM healthcare customers get the right supplies to the right places at the right times—driving better patient care experiences.

ERP remains the backbone of enterprise operations, and by capitalizing on its modern data management and AI-driven solutions, Oracle can push its customers to fully leverage these innovations. I’ll dive deeper into this topic in an upcoming article, including how the new Oracle Intelligent Data Lake, powered by OCI, helps ERP customers integrate and analyze structured and unstructured data in an all-in-one solution.

NetSuite announced some notable AI-powered enhancements: a new procurement solution, a Salesforce connector, improved project management, upgrades to the user experience, fresh training resources, and an integrated benefits offering. To be fair, some of these features should have already been standard. I’ve covered this in more detail in my latest piece on NetSuite.

As always, real-world customer stories breathe life into Oracle’s narrative. Organizations including Clayton Homes, the CIA, BNP Paribas, MGM Resorts International, Cloudflare, DHL, Uber, WideLabs, and Guardian Life took the stage to explain how Oracle’s solutions are transforming industries.

In other news, IBM and Oracle announced that IBM Consulting will support Oracle customers in gaining more value from generative AI and its growing challenges. “Our clients are eager to extend generative AI initiatives but they’re also concerned about rising compute costs, lack of in-house AI skills, AI assistant sprawl, and management oversight,” said Corinne Koppel, Global Oracle Practice Leader, IBM Consulting.

Broadcom is another company to suffer from the AI bubble-burst phenomenon. Despite its strong earnings, the company’s stock took a dip based on investor concerns about future earnings. The fears may be unwarranted, since Broadcom is well diversified beyond GenAI and its silicon is used pervasively across many enterprise networking infrastructure providers.

SAP has completed its acquisition of WalkMe, a platform that enhances user experiences with features such as in-app walkthroughs and step-by-step guides. It simplifies complex software tasks, increases productivity, shortens training time, and improves software usability by offering real-time assistance and automation of routine processes. WalkMe supports employee onboarding and can be integrated with enterprise systems such as CRM, ERP, and HR. The acquisition is set to complement SAP’s Joule AI, RISE with SAP, and GROW with SAP programs by further enhancing user engagement and simplifying digital adoption.

Curious about how NetSuite transformed from an early SaaS innovator to a major player in the ERP landscape, especially following its acquisition by Oracle? Dive into my latest Research Note as I explore NetSuite’s remarkable journey, its differentiators, and the strategic advantages that have positioned NetSuite as a go-to ERP solution for small to medium-sized businesses across multiple industries.

IBM has announced its plan to acquire Accelalpha, a global Oracle services provider specializing in implementing, integrating, and managing Oracle Cloud applications. Accelalpha serves clients around the world, focusing on industries such as distribution, heavy industry, and financial services. The acquisition will enhance IBM’s consulting expertise, particularly in ERP, SCM, logistics, finance, EPM, and customer transformation services. The deal is expected to close in Q4 2024, pending regulatory approval. Financial terms remain undisclosed.

Acquisitions remain a key strategy for many companies in the ERP space, particularly to enhance resources and consulting expertise. A recent example is Capgemini’s announcement of its acquisition of Syniti to expand its data management capabilities and strengthen its expertise in SAP projects.

Mastercard is set to acquire Recorded Future, a cybersecurity company, for $2.65 billion to strengthen its fight against fraud and cyber threats. The acquisition, expected to close in early 2025, builds on an existing partnership between the two companies. Mastercard cites the growing threat of cybercrime, which is projected to cost trillions globally this year, as a driving force behind this strategic move.

PayPal is partnering with Shopify to handle a portion of Shopify Payments in the U.S. PayPal will become an additional processor for credit and debit card transactions. This will create a consolidated view for merchants by integrating PayPal wallet transactions with Shopify Payments. The deal expands a global strategic partnership between the two companies. It shows how PayPal is increasingly being selected as a preferred platform by major commerce brands, technology companies, and payment processors.

At Oracle CloudWorld in Las Vegas, Oracle announced a new open skills architecture within Oracle Dynamic Skills that helps organizations develop, curate, and execute an enterprise-wide skills-based talent strategy. Oracle Dynamic Skills is part of Oracle Fusion Cloud HCM. This new architecture should help HR leaders leverage AI to better understand and leverage the skills of their employees, identify skills gaps, expand access to talent and nurture it, and make smarter workforce decisions.

Oracle Dynamic Skills aims to simplify the process of managing skills data, regardless of an organization’s current capability in this area. The platform helps customers align employee skills with business goals to optimize the workforce and stay ahead in a rapidly changing job market. Using this product’s AI-powered capabilities, HR leaders can create a comprehensive skills inventory, enrich the data with external sources, analyze skills gaps and trends, leverage third-party skills data and labor market analytics, and effectively manage a skills library.

Skills have emerged as a critical metric for assessing an organization’s potential. Adopting a skills-based talent strategy can offer valuable workforce insights, facilitate access to a broader range of talent, and ultimately boost overall company performance. Oracle’s offering comes at a good time because talent needs are rapidly changing—yet many organizations struggle to initiate their skills journey.

The new iPhone 16 was announced last week, and pre-orders began on Friday. The biggest takeaway from the launch is that the iPhone 16 lineup is Apple’s most complete in ages, with the base model iPhone and the Pro series having very comparable chips and likely the same AI performance. The trade-in offers also seem very aggressive, which I believe is because Apple wants as many users to have Apple Intelligence-capable devices as possible to entice more developers to develop for it. The biggest problem that Apple Intelligence has—other than its low install base—is that many of its features aren’t available at launch. Apple is broadly saying “Fall” as the window for some of its features, but the biggest ones, for example the new version of Siri, won’t be available until 2025. My advice for anyone looking to buy a new iPhone is that this is probably the best version of the base-level iPhone in many years, including near-parity with the Pro on most features. Still, it might not be a significant upgrade for anyone who already has an iPhone 15 Pro, especially since those customers will also be getting Apple Intelligence.

During the event launching the new iPhone, Apple announced a series of new wearables, including a thinner, larger-screen Apple Watch and a bunch of updated and new AirPods. While the update to the AirPods Max brought nothing more than new colors and USB-C connectivity, Apple did announce new versions of its base-model AirPods—the AirPods 4—in two versions, as well as major updates to the AirPods Pro 2. The biggest updates to the latter, in my opinion, are the hearing test and the ability to use the AirPods Pro 2 as hearing aids. Apple just got FDA clearance for this last week, right after the event. At $249, these might be the cheapest hearing aids on the market. While I don’t believe they will work well for someone with severe hearing loss who always needs hearing aids, I do believe they might work well for people who have impaired hearing and might need temporary assistance, especially when talking to friends and family over the phone. At $249, these might also bring hearing aids within reach for people who could never afford them otherwise—as long as they have an iPhone, which might be the most limiting factor.

Google has started shipping the new Pixel Watch 3, including the new 45mm size. (It previously offered only 41mm.) I have been using it for a few days now, and the battery life is fantastic. Thanks to the included Fitbit software, it is truly the only other watch with fitness and health capabilities comparable to the Apple Watch’s. I genuinely appreciate the design and integration with the Pixel 9 Pro Fold. I am also reviewing the accessories and really appreciate the improvements to them, although I do wish the metal mesh wristband were compatible with the 45mm model. More broadly, I wish Google had better third-party support for its watches, but the reality is that the majority of the market belongs to the Apple Watch, while on the Android side most of the market is dominated by the Samsung Galaxy Watch.

Ericsson has formed a joint venture with 12 of the world’s leading network operators to create a company that manages and sells 5G network APIs. This joint venture will be 50% owned by Ericsson and 50% owned by the operators, which should create cohesion that simply has not existed before in the market. This announcement includes all three big carriers in the U.S. and will likely drive 5G Standalone applications and the monetization of 5G unlike anything that has been possible before. The really important thing here is that this new venture will be able to sell services to ISVs and other customers across multiple carriers at global scale. This move is truly unprecedented in scale and cohesiveness.

T-Mobile sent a test alert via satellite using its new partnership with SpaceX, covering more than 500,000 square miles. This approach should ensure that people receive emergency alerts even in the most secluded areas where cell service might not reach. For example, this could be especially important for people in National Parks and other remote areas during fire season who would otherwise not know that a wildfire is headed their way until it’s too late. This capability won’t be limited to T-Mobile users, either; it could potentially be used nationwide by all operators, and even enabled by the federal government for emergency preparedness and response. This could truly be a feature that saves lives.

AST SpaceMobile announced the successful launch of its first five satellites aboard a SpaceX rocket, which finally sets the company on a course to initiate service with its partners AT&T and Verizon. AST SpaceMobile’s BlueBird satellites are much larger than traditional low-earth orbit satellites, but they can serve larger areas and more users simultaneously and with higher speeds. The company’s approach will also serve a long list of other carriers around the world, but AT&T and Verizon will be among the most prominent in the U.S. I believe that AST SpaceMobile will compete with existing satellite operators, including SpaceX’s Starlink.

United Airlines announced that in 2025 its entire fleet of planes will offer free high-speed Wi-Fi connectivity thanks to a new partnership with SpaceX’s Starlink. While United has not given details on the speeds that users can expect from this new upgrade, Starlink connections can range anywhere from 40 Mbps to 220 Mbps for the entire plane. For passengers, United’s announcement is significant because it means upgrades to Wi-Fi coverage and speeds for more than 1,000 planes. That said, this is a major undertaking that will likely put new stresses on SpaceX’s Starlink satellite network. It will be interesting to see how SpaceX responds to this new stress and how users’ Wi-Fi speeds are affected.

Quantinuum has released a new roadmap that projects it will create a universal, fault-tolerant quantum computer by 2030. The roadmap forecasts that Quantinuum will achieve this major milestone using a fifth-generation quantum computer, Apollo, that will be able to execute millions of gates. If all goes as planned, Apollo will be able to achieve quantum advantage using many high-fidelity logical qubits by scaling its QCCD architecture.

Quantinuum has collaborated with Microsoft on several recent breakthroughs. It demonstrated 12 logical qubits on its System Model H2 quantum computer, plus a chemistry simulation using a combination of logical qubits, AI, and HPC. Microsoft’s Azure Quantum Elements has also integrated Quantinuum’s InQuanto software into the product offering. The Quantinuum roadmap anticipates that the company will continue building increasing numbers of reliable logical qubits and leveraging partnerships with industry leaders like Microsoft.

The U.S. federal government recently published guidance related to Border Gateway Protocol (BGP) internet routing security. BGP is instrumental in determining the routes that information takes across the internet’s public and private networks. The concern is that BGP predates the launch of the public internet—and cyberthreats have vastly increased in sophistication since then. More modern safeguards such as Resource Public Key Infrastructure (RPKI), which lets networks cryptographically verify that a route announcement comes from its authorized origin, could address BGP’s shortcomings and provide a more secure routing methodology.
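
The core idea behind RPKI route origin validation is checking each BGP announcement against signed route origin authorizations (ROAs). Here is a deliberately simplified sketch of that check; the ROA entries and AS numbers are invented (documentation prefixes and private-use ASNs), and production validators handle far more edge cases.

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Roa:
    prefix: str       # e.g. "192.0.2.0/24"
    max_length: int   # longest announcement length this ROA authorizes
    origin_asn: int   # AS authorized to originate the prefix

def validate(announced_prefix: str, origin_asn: int, roas: list[Roa]) -> str:
    """Simplified RPKI route origin validation: valid / invalid / not-found."""
    announced = ipaddress.ip_network(announced_prefix)
    covering = [r for r in roas
                if announced.subnet_of(ipaddress.ip_network(r.prefix))]
    if not covering:
        return "not-found"   # no ROA covers this prefix
    for r in covering:
        if r.origin_asn == origin_asn and announced.prefixlen <= r.max_length:
            return "valid"
    return "invalid"         # covered, but wrong origin AS or too-specific prefix

# Invented example data (documentation prefixes, private-use ASNs)
roas = [Roa("192.0.2.0/24", 24, 64500)]
print(validate("192.0.2.0/24", 64500, roas))      # valid
print(validate("192.0.2.0/24", 64501, roas))      # invalid (unauthorized origin)
print(validate("198.51.100.0/24", 64500, roas))   # not-found
```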

IBM and ESPN are continuing their collaboration for the eighth year with the ESPN Fantasy app for fantasy football. As with IBM’s work on the US Open, Wimbledon, and The Masters, the watsonx data and AI platform is supporting over 12 million fantasy football users with advanced AI-driven tools.

“Millions of people participate in fantasy football on the ESPN Fantasy platform each year, and they are constantly looking for the best information available to compete in and win their leagues,” said Noah Syken, VP of sports and entertainment partnerships for IBM. “This year’s enhancements in the ESPN Fantasy platform put watsonx-powered insights directly in their hands, giving them access to personalized, data-driven insights that help deliver on these expectations.”

New tools such as “Top Contributing Factors” in Waiver and Trade Grades offer personalized player grades and AI-generated insights based on complex data, along with detailed measures of player performance and expert articles.

Qualcomm’s front-of-the-jersey Snapdragon sponsorship with Manchester United goes far beyond a logo on a shirt. It brings technology into the mix, enhancing fan experiences with better connectivity, data-driven insights, and more interactive features at the stadium. As Qualcomm CMO Don McGuire put it, “Our Manchester United partnership is how we come together with one of the most revered sports franchises in the world and how we build to scale for the Snapdragon brand—from awareness all the way through to affinity and advocacy.”

Read my Research Note to discover how Qualcomm and Manchester United are elevating sponsorship to new heights, and be sure to catch regular insights from Melody Brue, Anshel Sag, and me on the Game Time Tech podcast (linked below) as we explore how technology is shaping the future of sports.

Amid many announcements at Oracle CloudWorld, the new Oracle Fusion Cloud Sustainability application probably hasn’t gotten the attention it deserves. This new tool aims to streamline sustainability data management and reporting, enabling organizations to make more informed decisions and accelerate progress on their environmental targets.

Oracle Fusion Cloud Sustainability integrates data from various Oracle Cloud applications to allow automated tracking of sustainability-related activities, contextualized data analysis within existing business processes, and simplified reporting through pre-built dashboards. The solution also provides rigorous audit trails, emission factor mapping, and third-party integrations, giving businesses a broad toolkit to measure and improve their environmental performance. Notably, Oracle offers this new capability to existing customers at no additional cost, which suggests its commitment to supporting sustainability efforts.
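
Under the hood, emission-factor mapping is straightforward arithmetic: activity data (kWh consumed, therms burned, gallons of fuel) multiplied by a published factor yields CO2-equivalent output. The sketch below shows that calculation with invented activity data and assumed factors; in real reporting, both would come from audited sources rather than hard-coded values.

```python
# Hypothetical activity data for one reporting period
activities = {
    "electricity_kwh": 120_000.0,
    "natural_gas_therms": 4_500.0,
    "fleet_diesel_gallons": 1_800.0,
}

# Assumed emission factors in kg CO2e per unit of activity (illustrative only;
# real factors come from grid, fuel, and supplier databases)
emission_factors = {
    "electricity_kwh": 0.39,
    "natural_gas_therms": 5.3,
    "fleet_diesel_gallons": 10.2,
}

def total_emissions_kg(activities: dict, factors: dict) -> float:
    """Sum activity * factor across every mapped activity type."""
    return sum(amount * factors[name] for name, amount in activities.items())

kg = total_emissions_kg(activities, emission_factors)
print(f"Total: {kg / 1000:.1f} metric tons CO2e")
```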

Ericsson is attempting to accelerate mobile network programmability with the recent announcement of a joint venture with several mobile network operators. It is a logical move given that the company has written off its entire acquisition of Vonage and needs to chart a new course. From my perspective, Ericsson was ahead of the telecommunications industry with its API strategy, but with Nokia entering the category with its Network as Code platform just one year ago and the GSMA now providing guidelines, mobile network programmability may finally find its rhythm.

Research Notes Published

Citations

Apple / Airpods Pro 2 / Anshel Sag / The Verge
Apple’s AirPods Pro 2 could forever change how people access hearing aids

Apple / Apple Watch News / Anshel Sag / Tech News World
Apple Weaves AI Into Latest Watch, AirPods, iPhone Models

Automotive GPU IP / Anshel Sag / Business Wire
Imagination Announces Highest Performance Automotive GPU IP with FuSa Advancement

Automotive GPU IP / Anshel Sag / New Electronics
Imagination announces automotive GPU IP with FuSa advancement

Border Gateway Protocol (BGP) / Will Townsend / Network Computing
White House Road Map Provides Guidance on BGP Internet Routing Security

Canva / Saas & AI / Melody Brue / Venture Journeys
A Simple Framework for Evaluating SaaS Resilience to AI

Cyber Resilience / Matt Kimball / StateTech
Backup and Recovery Systems Augment Government Cyber Resilience

Google / Smart Glasses / Anshel Sag / HT Haztech
Smart Glasses 2.0: How AI is Driving the Next Generation of Wearable Tech

Oracle / Earnings & AWS Partnership / Patrick Moorhead / Yahoo! Finance
Oracle will be ‘data broker’ for the gen. AI future: Analyst

ZeroPoint / AI Memory / Matt Kimball / Blocks & Files
ZeroPoint aims to tackle AI memory bottlenecks with compression IP

ZeroPoint / Matt Kimball & Patrick Moorhead / PR Newswire
ZeroPoint Technologies Releases New Hardware-Accelerated Memory Optimization Solutions and Receives Industry Recognition for Innovation  

Zoho / Zoho Analytics / Melody Brue / Business Wire
Zoho Launches AI-Rich, Highly Extensible Version of Zoho Analytics, Democratizing Self-Service BI to Any Persona or Business

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)
  • Insta360 Link2 4K AI Webcam (Anshel Sag)
  • Pixel Watch 3 (Anshel Sag)
  • Pixel 9 Pro Fold (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Snap Partner Summit, September 17, Santa Monica (Anshel Sag)
  • Zayo Network Transformation webinar moderation, September 17 (Will Townsend)
  • Salesforce Dreamforce, September 17-19, San Francisco (Patrick Moorhead) (virtual – Jason Andersen, Melody Brue, Robert Kramer)
  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue) 
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)
  • ServiceNow Global Industry Analyst Digital Summit, December 10 (Jason Andersen, Melody Brue, Robert Kramer – virtual)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending September 13, 2024 appeared first on Moor Insights & Strategy.

]]>
VMware Explore Brings Broadcom’s Private Cloud Strategy Into Focus https://moorinsightsstrategy.com/vmware-explore-brings-broadcoms-private-cloud-strategy-into-focus/ Fri, 13 Sep 2024 21:41:59 +0000 https://moorinsightsstrategy.com/?p=42241 A look at VMware Explore 2024 and Broadcom's first opportunity to speak directly with its customers and dig deeper into its strategy.

The post VMware Explore Brings Broadcom’s Private Cloud Strategy Into Focus appeared first on Moor Insights & Strategy.

]]>
VMware Explore Brings Broadcom’s Private Cloud Strategy Into Focus
Broadcom CEO Hock Tan presents at VMware Explore 2024. Image by Broadcom

Since Broadcom closed its acquisition of VMware in November 2023, there has been much noise from industry pundits and the press. Some of the noise focused on licensing and pricing changes. Some of it concentrated on changes to the legacy VMware channel program. Some of it was around portfolio consolidation strategies and the potential end-of-life of certain products. And some of it was general FUD instigated by competitors who saw an opportunity to capitalize on the situation.

Mind you, some of this noise was undoubtedly warranted. Or maybe “warranted” is too strong, and “understandable” is better. Yes, there was a lot of change—a lot of disruption. And in fairness, the company was not exactly crisp in its messaging around these changes.

Lost in all this, however, was a vision that Broadcom laid out regarding VMware Cloud Foundation and how the company wanted to transform it into a platform that would enable customers to achieve a true cloud operating model—meaning a single stack and single control plane that customers could use to achieve the cloud on-premises.

The VMware Explore 2024 event this week marked the first opportunity Broadcom had to speak directly with its customers and dig deeper into its strategy. Granted a few days of our collective attention, what did the company finally have to say? Did Broadcom lay out a compelling and differentiated story? Have questions been answered? Let’s dig in.

Private Cloud Is Cool Again—But Not Private Cloud As We Know It

Before getting into Broadcom’s announcements, let’s peel back the layers of the onion on this private cloud thing. When many people think of “private cloud,” they hearken back to circa 2009 and a term given to what amounts to a VM cluster that is walled off, with maybe some base self-service capability. It seems that this term quickly fell out of fashion, giving way to “hybrid” and, eventually, “hybrid multi-cloud.” However, as enterprise organizations keep using the public cloud for some functions, the needs that drove the concept of the private cloud in the first place persist. This means that the apps and data—and the environment that runs them—that need to be on-premises . . . really must be on-premises.

But there is a tension here, because the legitimate privacy and security needs of the business can conflict with what works best for IT. Developers, DevOps, data science folks and others want to quickly spin up the compute, storage and networking required for the tasks at hand. They also want to do this complete with curated services for security, load balancing, AI and so on so they can do their jobs faster and easier. All of these needs have traditionally been served by the public cloud, albeit at a huge cost. In private cloud’s original form, the most advanced implementations almost managed to satisfy both the business and the technical requirements—but not quite.

This dynamic has partially fueled the continued growth of cloud service providers including AWS, Microsoft Azure, Google, and Oracle. While managing a company’s public cloud estate is both very complex and very costly, IT organizations have traditionally endured those pains to enable business agility.

Against this backdrop, Broadcom introduced VCF 9—the full-stack, multi-tenant private cloud that can be run anywhere: on-premises, in a colocation facility or even on a public cloud. Yes, an IT organization can take its entire VCF stack and move it from on-prem to the public cloud and back.

What Is VCF 9?

We could think of VCF 9 as Private Cloud 3.0. Meaning, it is effectively the public cloud brought on-premises through the integration of technology that existed across the VMware portfolio. It is not simply a bunch of virtual machines or siloed environments managed by different teams. It is infrastructure provisioned for multi-tenancy and consumed through a cloud portal. It’s also a curated (and growing) list of services that address the enterprise’s most common set of needs.

VCF 9 delivers the public cloud on-prem
VCF 9 delivers the public cloud on-prem. Image by Broadcom

In many ways, VCF 9 is a radical departure from what the legacy VMware portfolio has delivered to the market. But in another sense, it’s not so different. Most of the pieces of this puzzle have been in the VMware portfolio; the creation of VCF 9 was more of an exercise of bringing it all together coherently. The great technology the company has been developing over the years is now integrated into a single stack with a single control plane to deliver the cloud as described above.

Moving to VCF 9 will not be easy for enterprise IT organizations. It is a full cloud migration—just not to a public provider like AWS or Azure. However, Broadcom has created a set of services to ease this migration and help IT organizations build and maintain the skills to support this new environment.

Broadcom programs to migrate to the private cloud
Broadcom programs to migrate to the private cloud. Image by Broadcom

I’m a fan of what Broadcom is doing with these services for several reasons. First and foremost, it creates an opportunity for its partners to add real value to the equation—not simply managing licenses or volume agreements, but playing an important role in what is arguably the largest IT transformation project many organizations will experience.

Second, this approach creates stickiness for Broadcom with its customers. Customers may get VCF 9 as part of their VMware license, but deploying and using it builds an entirely new dynamic between customers and Broadcom. Effectively, Broadcom is commercializing the cloud and becoming that provider to the enterprise.

Is VCF 9 What Customers Want?

I speak with IT practitioners and executives regularly. I also used to run a couple of IT shops before I became an analyst. Remove the term “private cloud” and the perceptions that folks have of it, and I believe that VCF 9 is precisely what customers want.

This is not to say that IT organizations are looking to abandon the public cloud (although Broadcom CEO Hock Tan flashed a slide during his keynote indicating that upwards of 83% of enterprise IT organizations are looking to repatriate some applications). Rather, it’s to say that while the public cloud has its place and utility, an enterprise organization’s on-prem datacenter—its data estate—needs to be consumed in the way that organizations have grown accustomed to with the public cloud. However, these enterprises also have to account for reducing the cost and complexity of the public cloud. Tan, at one point, referenced the “public cloud PTSD” suffered by many enterprise IT organizations.

So, do I believe customers want VCF 9? Yes. Do I believe customers realize they want VCF 9? Not yet, but I suspect Broadcom’s go-to-market team is going to resolve this.

What About The Competition?

When talking with IT folks regarding VMware and potential moves to other alternatives, Nutanix and Red Hat tend to be the two vendors most mentioned. Nutanix Cloud Platform and Red Hat OpenStack tend to be the products that are in the discussion.

There are similarities and differences between VCF and these competitors. We can generally lump all three into the cloud operating model that IT organizations hope to achieve. NCP is the solution I hear referenced more as companies discuss exploring alternatives. While Nutanix has done an excellent job leveraging partnerships with OEMs, I haven’t yet seen NCP land in large enterprise accounts. I am curious how the company’s partnership with Dell will make Nutanix AHV available on PowerFlex storage. This external storage support is critical to achieving market traction.

When looking at Red Hat, I have not seen the same level of interest as I have regarding Nutanix. This solution, a Red Hat commercialized version of open source projects, faces a similar challenge to the legacy VMware solutions that enterprise organizations face—it’s a cobbling-together of multiple solutions to get customers part of the way to achieving cloud on-prem. While Red Hat’s RHV, OpenStack and OpenShift solutions can be good for customers who want more customization, that flexibility has a cost: complexity.

Broadcom’s Vision For Private Cloud Is Clear

There has been much noise surrounding Broadcom and VMware since November of 2023. The Explore conference this week was a pivotal event for the company, because it was important for Hock Tan and the team to demonstrate to a skeptical market that the company is focused on delivering value to its customers.

Did Broadcom succeed? Yes. Whether one agrees or disagrees with Broadcom’s vision of private cloud is irrelevant. The company has built a compelling vision that helps enterprise organizations reduce complexity and cost by creating their own cloud that can run anywhere.

I suspect the company will see attrition among its smaller customers who cannot realize the full value of VCF or VMware vSphere Foundation. However, given the changes to the VMware portfolio, this is to be expected.

I’ll be following Broadcom’s progress with VCF 9 closely, looking for actual deployments and consumption as the true indicator of its market success. Stay tuned.

The post VMware Explore Brings Broadcom’s Private Cloud Strategy Into Focus appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending September 6, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-september-6-2024/ Tue, 10 Sep 2024 00:45:26 +0000 https://moorinsightsstrategy.com/?p=42053 MI&S Weekly Analyst Insights — Week Ending September 6, 2024

The post MI&S Weekly Analyst Insights — Week Ending September 6, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a great weekend!

Last week, Anshel Sag was at IFA Berlin, where his insights were featured during the Qualcomm press conference. This week, Patrick, Melody, Matt, and Robert will be in Las Vegas for Oracle CloudWorld and Netsuite SuiteWorld. Will Townsend will be in London for Connected Britain, where he’ll be moderating a panel. Jason Andersen will be in Austin for JFrog swampUP 24.

Will is hosting a webinar with Zayo, “What’s Next for Your Network’s Foundation?” on September 17. It’s free to register to hear Will’s insights on the future of networks.

Our MI&S team published 10 deliverables:

Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Intel, NVIDIA, Zoom, and charging for AI.

Patrick Moorhead appeared on Yahoo! Finance and CNBC to discuss NVIDIA stock and the DOJ subpoena over possible antitrust violations. Melody was on the Big UC News Show to discuss Zoom AI, Slack and Box integration, and more.

MI&S Quick Insights

YouTube recently announced new AI detection tools intended to protect creators from unauthorized use of their likenesses. There has been concern about the ease with which AI can misuse someone’s face, voice, or other attributes. The new YouTube tools can detect when AI-generated content has copied a creator’s appearance or voice without permission.

The new policy, backed up by YouTube’s tools and commitment to protecting IP and personal rights, is appropriate because fake images, fake porn, and other videos are easily created, and almost anyone can do it with readily available AI tools. I believe the new detection methods will allow creators to more easily police their own digital properties to protect their reputations and brands from damage.

The bulk of—and most impressive part of—our exposure to AI began only a couple of years ago with ChatGPT. In the short time between then and now, AI has evolved rapidly, even though our understanding of AI’s inner workings hasn’t matched its functional evolution. According to a new research paper by Google DeepMind researchers, machine psychology provides a fresh way to understand how AI models work.

Traditionally, AI’s core functionality and power are based on complex neural network designs. Machine psychology doesn’t examine the inner step-by-step path an input takes through those networks. Instead, it focuses on understanding the “behavior” of AI as it responds to commands and questions. Terms like “learning” or “reasoning” have roots in human psychology, and applying them to AI can be confusing and meaningless. It is like calling AI intelligent even though it doesn’t have human-like understanding or consciousness yet.

Machine psychology is important because it helps us recognize and understand AI’s sophisticated behaviors and abilities beyond simple data processing. It will require long-term research to understand AI behavior over time, predict future developments, and ensure that AI remains safe and aligned with human objectives. Machine psychology is a significant and necessary step toward better understanding AI. If you are interested in learning more, click here for the paper by Google DeepMind.

Last week I got the chance to tune into Dell’s AIOps strategy and products. For context, we are now about one year out from Dell’s acquisition of Moogsoft. I was impressed with how Dell is pragmatically tackling the challenges of increasingly complex IT ops structures. Instead of trying to be all things to all people, Dell focuses its efforts on its own infrastructure via its Infrastructure Observability platform, where it clearly can add the most value. To put it another way, instead of trying to do everything itself, Dell is adding a different sort of value to customers’ operations requirements via a clever partnering and integration approach. For application observability, Dell has chosen to tightly integrate with IBM’s Instana platform. For cross-platform integration and alerting, Dell Incident Management is the new name for Moogsoft. It may be just enough flexibility without sacrificing Dell’s own AI management solution that is optimized for its hardware.

Speaking of observability, this will be a hot topic for different companies over the next few months, with many new products and announcements in the pipeline. While I cannot speak to anything specific yet, enterprises need to understand a couple of things. Crucially, the scope of observability is changing in both breadth and depth. Observability tools are leveraging better analytics and AI tools to provide new—deeper and more connected—views of the environment. A good example of this is IBM Concert, IBM’s application- and service-centric viewpoint. On the breadth front, we are seeing a wider range of observability tools across the IT landscape. A good example of this is VMware Cloud Foundation 9, which was announced last week. This flood of new products and capabilities will require more in-depth reviews to ensure that enterprises are able to (a) not pay for things they do not need and (b) make sure the tooling will align with increasingly complex environments.

Anecdotally, usage of AI code assistants seems to be trending upward. As I cover this space and have great enthusiasm for AI as a developer aid, I’ve seen a big uptick in both LinkedIn posts and Reddit entries on this topic. Some but not all were positive. Given the newness of the technology and how people are migrating up the AI learning curve (including me—see this post from last week), mixed results are not surprising. But it is notable, and a trend I will continue to watch.

While tuning into VMware Explore a couple of weeks ago, I saw the initial signs of a transformation effort for the company. And while it has been hard to hear about colleagues and customers who have been affected by Broadcom’s acquisition, it does appear that VMware is working hard to regain its footing. I see parallels between the steps VMware is taking and those of other companies that have successfully transformed—or that, like SAP, are currently undergoing a transformation. It prompted me to sit down and consider what it takes for big tech companies to weather disruptive market events. You can read my thoughts in this new post on our site.

HPE released its quarterly earnings, and the numbers were impressive. Overall, revenue came in at $7.7 billion, up 10% year over year. GreenLake ARR grew at a 39% YoY clip, with over 3,000 new customers in the quarter (for nearly 37,000 GreenLake customers total). And server revenue came in at $4.3 billion—a 35% YoY increase. As we saw with other server vendors, HPE’s business is recognizing considerably more revenue due to the AI boom (the company’s AI business was roughly $1.3 billion—a 39% sequential increase). It is clear that the focus on driving adoption of HPE technologies and services through AI is paying off.

As with other OEMs, we are also seeing that the AI game is considerably lower-margin. While these AI servers are selling at a higher ASP, the margins appear to be going to the chipmakers who are providing AI acceleration. Strategically, it’s important for HPE (and all OEMs) to win as much business in this market as they can—despite the lower margins. Establishing itself as the AI solution of choice while this market is still nascent will lead to more, margin-rich business as inferencing begins to dominate the AI landscape. This will impact the entire HPE portfolio. (Keep an eye on the intelligent edge market over time.)

One area of concern is the company’s 7% shrink in its hybrid cloud business. This business includes server, storage, the recently announced private cloud, resiliency, and GreenLake Flex. While the company didn’t provide a breakout of contributions, I suspect storage is contributing to this decline. Despite HPE talking about numbers trending in the right direction, its storage business has been relatively flat to negative over the past few quarters (as has its largest competitor’s—Dell also reported soft storage numbers).

Here’s what I think is going on: the high end of the storage market is moving to AI and high-performance-specific storage vendors (VAST, Weka, DDN, etc.). Also, I believe companies including Pure Storage (up 10% YoY) and NetApp are taking their fair share of the commodity AI-storage market. Likewise, I believe these storage vendors are taking a share of mid-range enterprise storage through deployment-driven purchases. Lenovo has also done well in this “commodity storage” market.

In other earnings news, Broadcom reported mixed results. Its revenue for the quarter came in at $13.07 billion, with $7.25 billion attributed to semiconductors and $5.8 billion attributed to software. Fueling these numbers were AI-related silicon and the contribution of VMware to the software portfolio.

On the software front, VMware’s number is a little more impressive considering that, post-acquisition, Broadcom sold off two considerable contributors (the Horizon end-user computing division and the Carbon Black security unit). Countering these strong numbers was the rest of Broadcom’s software portfolio, which saw a considerably smaller 4% YoY growth. This is to be expected, as the other contributors include what was Symantec and mainframe software solutions (previously CA). While I see VMware’s contribution as a big win for Broadcom, I believe CEO Hock Tan has the right perspective on this. At the recent VMware Explore event, he said the real measure of success with VMware will not be in short-term licensing deals and revenue, but the consumption of its new VCF 9 private cloud platform. (I published a detailed analysis of VCF 9 on Forbes.) While licensing revenue is transactional, consumption of the full capabilities of VCF 9 is sticky—meaning very long-term.

On the silicon front, Broadcom suggested that AI acceleration was carrying the business, while non-AI-related silicon had “bottomed out.” Further, the company expects to see the non-AI-related business rebound and accelerate through Q4. What Broadcom is suggesting is what I’m seeing across the industry: AI is fueling the tech industry at the moment, while non-AI-related business is more sluggish.

What to make of the U.S. government going after NVIDIA for antitrust violations? This is a tough one to sort through. Does NVIDIA have a monopoly? Yup. Is this monopoly due to anti-competitive behaviors? This is where it gets murky. NVIDIA’s CUDA software platform makes it difficult for non-NVIDIA silicon providers to be competitive. CUDA is also almost 20 years old and only became popular because NVIDIA silicon was so much better than the competition that developers chose to use it. If AMD had been as successful in designing and building silicon after its ATI acquisition, CUDA would not be a lockout architecture today. In fact, we see what happens when competitors do create competitive silicon: AMD’s most recent quarter saw it far exceed expectations with the MI300—and the company raised its forecast.

I was with AMD when the company sued Intel for anti-competitive behavior. It was a legitimate gripe. OEMs were being compensated to limit Opteron (the AMD server CPU) in terms of portfolio, positioning, and go-to-market. Not only did Intel compensate AMD, it also paid heavy fines around the globe.

(Let me be clear that I don’t mean to suggest that AMD is tied in any way to the actions of the U.S. government. I mention the company simply because it is now NVIDIA’s closest competitor and because of my experiences during the Intel antitrust activities.)

Is NVIDIA deploying similar tactics? Or has it simply designed better GPUs over the years and is now benefiting from that success? I don’t know the answer to that question. But if NVIDIA hasn’t done anything wrong, the U.S. government is actively stifling innovation and the success that comes along with that innovative spirit. NVIDIA made a lot of bets far ahead of the market, and those bets have paid off.

The analyst community lost a wonderful soul with the recent passing of Brian Gong of Pure Storage (and formerly Cisco). Analyst relations folks are, by nature, social creatures. The best ones are funny, warm, and genuinely interested in us as analysts and people. Even by this measure, Brian was a cut above others. He will be sorely missed for his warmth, wit, and genuinely gentle spirit. To the good folks at Pure Storage—we wish you well during this difficult time.

Observe, Inc. provides an observability platform that unifies telemetry data from distributed applications, enabling faster, more cost-effective troubleshooting. Integrating with over 250 data sources and cloud services like AWS and Kubernetes, the platform is built on Snowflake and uses a usage-based pricing model focused on data storage and querying. Observe aims to modernize monitoring by replacing traditional log analytics and infrastructure tools.

I mention this because Observe has a new release for its Observe Agent, taking a further step in the observability game by adopting OpenTelemetry as the standard for data collection. Given how packed this market is with seasoned players, it’ll be interesting to see what sets Observe apart from the crowd. Patrick Moorhead and I recently connected with the team at Observe; watch for more details about the tech behind the platform and how Observe plans to carve out its niche.
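
To make the OpenTelemetry angle concrete, here is a minimal sketch of OTel-style tracing instrumentation in Python. It is not Observe's agent or its configuration; the service name, span name, and console exporter are illustrative stand-ins, and any OTLP-compatible backend could receive the same spans.

```python
# Minimal OpenTelemetry tracing sketch (Python SDK). Illustrates the
# vendor-neutral instrumentation model that OTel-based agents can ingest;
# this is NOT Observe's agent configuration. Names below are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Describe the service emitting telemetry, then register a tracer provider.
resource = Resource.create({"service.name": "checkout-service"})  # hypothetical service
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Emit one span with an attribute; a real deployment would swap the console
# exporter for an OTLP exporter pointed at its observability backend.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)
```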

Smartsheet is reportedly in talks to be acquired by private equity firms, including Vista Equity Partners and Blackstone. The reports of the acquisition talks resulted in a nearly 10% rise in Smartsheet’s shares. In its Q2 2025 earnings, Smartsheet reported that revenue increased by 17% YoY to $276.4 million. Smartsheet currently serves 85% of the Fortune 500 with its cloud-based enterprise solutions for project management and collaboration. Smartsheet management declined to comment on the buyout talks, but Reuters reported that the company hired an investment bank in June to explore interest from PE firms. Overall, M&A activity has slowed, creating pent-up demand in the private equity universe, according to a recent report from PwC.

Meta has joined the steering committee of the Coalition for Content Provenance and Authenticity. The C2PA, as a standards body, focuses on establishing ways to verify the origin and history of digital content, an increasingly vital task in the face of rising misinformation and the proliferation of AI-generated media.

There’s a certain irony in Meta promoting digital authenticity. Meta’s platforms, especially Instagram, have been criticized for fostering environments where users often present idealized (and heavily filtered) versions of their lives, contributing to feelings of inadequacy and inauthenticity, among other things. Meta’s business model relies heavily on collecting user data and targeted advertising, practices that can feel intrusive and manipulative, further eroding trust and authenticity. So, it can seem contradictory for Meta to now champion digital authenticity when its platforms have arguably played a role in creating the opposite.

Whether its efforts are perceived as genuine remains to be seen. In a digital world where distinguishing real from synthetic media is increasingly challenging, Meta’s active participation in the C2PA alongside other industry leaders represents a crucial step towards establishing a more transparent and trustworthy online environment. This move could influence how different platforms handle content verification, potentially shaping the future of how we consume and interact with information online. Ultimately, Meta’s deeper involvement with the C2PA is a promising indicator of a proactive approach to addressing the complexities and challenges brought about by the rise of AI and the spread of misinformation.

Data and AI were central themes at this year’s Amazon Web Services (AWS) summit in New York. Dr. Matt Wood, AWS VP for AI products, noted, “Customers are able to apply generative AI to understand and leverage existing data in new and exciting ways.” At the event, AWS introduced new features to its three-layer GAI stack, enhancing AI infrastructure, models, and applications. These expansions aim to make AI and data analytics more accessible for large enterprises, small businesses, and startups. The summit also highlighted how Nasdaq utilizes AI, as well as the broader impact of AWS’s generative AI offerings across different industries. My latest Forbes article provides insights on AWS’s recent summit.

HPE recently announced its Q3 earnings, and it was a tale of two product portfolios, despite double-digit top-line revenue growth. To no one’s surprise, server revenue was up 35% based on strong AI systems demand, while networking revenue was down 23%. I expect that the company’s quarterly performance is a result of customer prioritization of computing infrastructure, but networking could rebound in the future with the imminent close of the Juniper Networks acquisition.

Capgemini is acquiring Syniti to broaden its data management expertise and strengthen its SAP project capabilities. With a team of more than 1,200 specialists, Syniti brings a wealth of experience in data transformation and management across industries including life sciences, aerospace, manufacturing, retail, and automotive.

This acquisition positions Capgemini to better support RISE with SAP implementations, especially in data migration to SAP S/4HANA. Both companies recognize that successful digital transformation hinges on clean, reliable data. By integrating Syniti’s expertise, Capgemini could offer clients a smoother path for data migrations and governance, as well as more efficient use of their data during ERP transitions.

Next week marks an exciting time as Oracle rolls out two of its big annual events. I will be at Oracle’s CloudWorld and NetSuite’s SuiteWorld conferences starting September 8, diving into the latest innovations in AI, automation, and more. AI and machine learning have an increasing influence on ERP systems, especially those related to demand forecasting, supply chain management, quality control, shipping, preventive maintenance, data intelligence, and process automation. I’m looking forward to seeing what’s new from Oracle and NetSuite in these areas. If you have any questions or want to set up a meeting, feel free to reach out.

Zoho announced the launch of Zoho Payments, a unified payment solution that allows businesses to accept payments via various methods (cards, UPI, net banking) directly within their business applications. Zoho Payments offers businesses flexibility in receiving payments from customers. Businesses can tailor the options—such as invoice e-mails, payment links, dedicated payment pages, and a secure client portal—so customers can choose how they want to pay. Early access customers in the U.S. can receive payments in 135 currencies, and the solution ties to all of Zoho’s finance ecosystem, including Zoho Books, Inventory, Billing, Invoice, and Checkout. This solution, which is now also available for early access in India, promises to provide faster payouts and streamlined dispute management.

The long-awaited and much-anticipated Thread 1.4 update is now available. This new version is significant because Thread is the low-power device mesh network used by Matter, and it’s already present in most homes as a standard feature of smart speakers and hubs.

Users frequently encounter five big problems when adding new devices and border routers to existing Matter/Thread networks, and this new version addresses all of them.

  1. Different border router brands (smart speaker, hubs) can now share credentials and join existing Thread networks instead of creating new networks.
  2. Users can connect multiple border routers over Wi-Fi and Ethernet to cover large buildings and campuses.
  3. Users can install Thread devices without physical access to the device or its QR code.
  4. Thread now supports network diagnostics that simplify troubleshooting.
  5. Thread devices can directly communicate with cloud-based services.

Thread Group announced these enhancements at CES in January. Completing these complex new features in seven months is impressive progress for a standards body. For instance, multiple product ecosystems sharing the same Thread network required hyperscalers to collaborate on secure credential sharing. Those discussions must have been interesting. The good news is that 1.4 primarily affects border routers, while individual Matter/Thread devices are backward-compatible. Hence, the update should not delay the availability of new Matter devices. And many existing border router products are software upgradeable to 1.4, so I expect a slew of new Matter products at CES in January. Please refer to Thread’s 1.4 features white paper for technical details.

Quantum Brilliance (QB) and Oak Ridge National Laboratory (ORNL) announced a collaborative effort to integrate QB’s room-temperature diamond-based quantum computing technology with ORNL’s high-performance computing systems. Quantum Brilliance was founded in Australia in 2019. It specializes in room-temperature diamond quantum accelerators. With funding supplied by the Australian Capital Territory Government, Quantum Brilliance wants to make quantum technology more accessible so it can be integrated into everyday devices and advanced computing systems.

The collaborative objective is to explore the effectiveness of parallel and hybrid quantum computing. Parallel quantum computing uses multiple quantum processors working together, while hybrid computing combines quantum and classical processors. It is hoped that the combination of enhanced computational capabilities will solve problems beyond the reach of classical computing alone.

We are getting closer to a powerful supercomputer that will integrate AI, HPC, and quantum technologies.

Zscaler is the latest company to suffer a stock value decline despite posting solid financial results for its most recent quarter. Sales were up 30%, billings up 27%, and deferred revenue up 32%. However, pressure on profitability and a softer revenue outlook triggered a 17% stock value decline last week. It proves that Wall Street will be satisfied only when expectations for both current and future financial performance are met.

SportAI recently closed a $1.8 million seed-funding round, allowing it to continue developing its technology and expand its reach. SportAI uses artificial intelligence to improve sports performance. Its platform provides sports-technique coaching, commentary, and analysis using machine learning, computer vision, and biometric technology. It caters to coaches, training facilities, broadcasters, sports equipment brands, and retailers. SportAI’s platform works with various video sources, eliminating the need for specific hardware and manual tagging—and thereby making video analysis more scalable and technically accessible. This also makes it more commercially accessible to more people, because previous versions of this type of performance analysis were costly and, therefore, mainly limited to professional athletes or larger companies.

AI is transforming both the business world and sports, with UEFA’s use of AI in the Champions League draw as a prime example. As the competition moved to a more complex 36-team league format, the traditional manual draw system, where teams were pulled from bowls, became impractical. The complexity of factors, such as preventing teams from the same country from meeting too often, made it too challenging for manual handling. AI software now manages these details, improving accuracy and reducing the risk of human error.

Concerns arose after a technical mishap in the draw for the 2021–22 tournament forced a redo. With AI now in place, UEFA has improved its ability to manage the process smoothly, but fans who worry about transparency and the potential for manipulation remain skeptical.

This shift in sports mirrors the broader impact AI is having across industries. AI is already transforming areas like customer service and data analysis in business, making operations more efficient. In sports, AI is enhancing efficiency and changing how complex logistics are managed. While scrutiny around transparency and misuse is present in both fields, the benefits of AI in reducing errors are evident. However, humans must always stay involved in these processes to ensure appropriate oversight.

Honeywell and Cisco are collaborating on an AI-powered solution that adjusts building systems based on real-time occupancy data to reduce energy consumption. The joint effort uses Cisco Spaces to collect occupancy and environmental data and Honeywell Forge Sustainability+ for Buildings to improve energy efficiency. Room temperatures, lighting, and ventilation are adjusted based on occupancy, leading to automated building operations, optimized energy use, improved employee comfort, and reduced greenhouse gas emissions.
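
As a rough illustration of the occupancy-driven logic described above, here is a minimal sketch in Python. The thresholds, setpoints, and data shape are assumptions made for the example, not Honeywell Forge or Cisco Spaces parameters.

```python
# Illustrative sketch of occupancy-driven building adjustments of the kind the
# Honeywell/Cisco collaboration describes. All values here are assumptions for
# the example, not vendor parameters.
from dataclasses import dataclass

@dataclass
class ZoneState:
    occupancy: int        # people detected in the zone (e.g., from a sensing platform)
    temp_setpoint_c: float
    lights_on: bool
    ventilation_pct: int  # outside-air damper position, 0-100

def adjust_zone(state: ZoneState) -> ZoneState:
    """Relax comfort targets in empty zones to cut energy use."""
    if state.occupancy == 0:
        # Empty zone: widen the temperature band, dim lights, minimize ventilation.
        return ZoneState(0, temp_setpoint_c=26.0, lights_on=False, ventilation_pct=10)
    # Occupied zone: restore comfort settings and scale ventilation with headcount.
    return ZoneState(
        state.occupancy,
        temp_setpoint_c=22.5,
        lights_on=True,
        ventilation_pct=min(100, 30 + 5 * state.occupancy),
    )

print(adjust_zone(ZoneState(occupancy=0, temp_setpoint_c=22.5, lights_on=True, ventilation_pct=40)))
```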

This collaboration supports Honeywell’s aim to reduce buildings’ environmental footprint and aligns with its focus on automation and energy transition. Cisco also supports the initiative with its Country Digital Acceleration program, a worldwide effort involving governments and businesses to create equitable and safe societies using responsible and cutting-edge technology. Building owners are prioritizing energy management because of hybrid working policies and lower occupancy. These factors and other challenges are putting pressure on owners to operate buildings efficiently and minimize resource waste.

AT&T recently struck a new deal with Nokia; on the surface, many interpret it as an olive branch, given the operator’s alignment with Ericsson for open RAN infrastructure last year. Nokia has a long history of success in fiber optics, and the latest announcement will provide AT&T with platforms that will upgrade and expand its massive fiber network over a five-year period. Nokia’s Lightspan platform is extremely flexible and can provide symmetrical speeds at 10G, 25G, 50G, or 100G. It is potentially a lucrative opportunity for Nokia, one that the company desperately needs to keep its financial performance on stable footing.

Podcasts Published

MI&S DataCenter Podcast (Will Townsend, Paul Smith-Goodson, Matt Kimball)
Ep. 29 of the MI&S Datacenter Podcast – We’re Talking Zscaler, AI, Broadcom, HPE, xAI, Dell

Don’t miss future MI&S Podcast episodes! Subscribe to our YouTube Channel here.

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Connected Britain, September 11-12, London (Will Townsend)
  • Connected Britain panel moderation, September 11-12, London (Will Townsend)
  • Snowflake Industry Day 2024, September 12 (virtual) (Robert Kramer)
  • Snap Partner Summit, September 17, Santa Monica (Anshel Sag)
  • Zayo Network Transformation webinar moderation, September 17 (Will Townsend)
  • Salesforce Dreamforce, September 17-19, San Francisco (Patrick Moorhead) (virtual – Jason Andersen, Melody Brue, Robert Kramer)
  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue) 
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvel Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending September 6, 2024 appeared first on Moor Insights & Strategy.

]]>
Datacenter Podcast: Episode 29 – We’re Talking Zscaler, AI, Broadcom, HPE, xAI, Dell https://moorinsightsstrategy.com/data-center-podcast/datacenter-podcast-episode-29-were-talking-zscaler-ai-broadcom-hpe-xai-dell/ Mon, 09 Sep 2024 17:43:58 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=42558 The Datacenter team talks Zscaler, AI, Broadcom, HPE, xAI and Dell on episode 29 of the Datacenter Podcast

The post Datacenter Podcast: Episode 29 – We’re Talking Zscaler, AI, Broadcom, HPE, xAI, Dell appeared first on Moor Insights & Strategy.

]]>
Welcome to this week’s edition of the “MI&S Datacenter Podcast.” I’m Patrick Moorhead with Moor Insights & Strategy, and I am joined by co-hosts Matt, Will, and Paul. We analyze the week’s top datacenter and datacenter edge news. This week we cover Zscaler, AI, Broadcom, HPE, xAI, Dell, and more!

Watch the video here:

Listen to the audio here:

3:03 Zscaler’s Strong Earnings Don’t Land With Bubble Bears
8:55 AI Can Read Your Tongue
15:23 Broadcom Goes Back To The Future With VCF 9
26:46 HPE Q3FY24 Earnings Are A Tale Of Two Portfolios
32:09 World’s Most Powerful AI Training System
36:36 Server Vendors Had A Banner Quarter
44:02 Getting To Know The Team

Zscaler’s Strong Earnings Don’t Land With Bubble Bears

https://x.com/WillTownTech/status/1831782382854357010

AI Can Read Your Tongue

https://www.forbes.com/sites/moorinsights/2024/09/04/say-ahh-to-ai-how-the-tongue-can-reveal-hidden-health-issues/

HPE Q3FY24 Earnings Are A Tale Of Two Portfolios

https://x.com/WillTownTech/status/1831785559141835020

World’s Most Powerful AI Training System

https://x.com/elonmusk/status/1830650370336473253

Server Vendors Had A Banner Quarter

https://www.linkedin.com/feed/update/urn:li:activity:7235270767356100608/

Disclaimer: This show is for information and entertainment purposes only. While we discuss publicly traded companies on this show, its contents should not be taken as investment advice.

The post Datacenter Podcast: Episode 29 – We’re Talking Zscaler, AI, Broadcom, HPE, xAI, Dell appeared first on Moor Insights & Strategy.

]]>
RESEARCH NOTE: Looking at AI Benchmarking from MLCommons https://moorinsightsstrategy.com/research-notes/looking-at-ai-benchmarking-from-mlcommons/ Mon, 09 Sep 2024 17:02:39 +0000 https://moorinsightsstrategy.com/?post_type=research_notes&p=42040 Although several AI benchmarking organizations exist, MLCommons has quickly become the body that has gained the most mindshare. Its MLPerf benchmark suite covers AI training, various inference scenarios, storage, and HPC. The organization recently released MLPerf Inference v4.1, which examines inference performance for several AI accelerators targeting datacenter and edge computing. In this research note, […]

The post RESEARCH NOTE: Looking at AI Benchmarking from MLCommons appeared first on Moor Insights & Strategy.

]]>

Although several AI benchmarking organizations exist, MLCommons has quickly become the body that has gained the most mindshare. Its MLPerf benchmark suite covers AI training, various inference scenarios, storage, and HPC.

The organization recently released MLPerf Inference v4.1, which examines inference performance for several AI accelerators targeting datacenter and edge computing. In this research note, I attempt to give more context to the results and discuss what I consider some interesting findings.

Why Is AI Benchmarking Necessary?

Generative AI is a magical and mystical workload for many IT organizations that instinctively know there’s value in it, but aren’t entirely clear what that value is or where it applies across an organization. Yes, more traditional discriminative AI uses, such as computer vision, can deliver direct benefits in specific deployments. However, GenAI can have far broader applicability across an organization, though those use cases and deployment models are sometimes not as obvious.

Just as AI is known yet unfamiliar to many organizations, learning what comprises the right AI computing environment is even more confusing for many of them. If I train, tune, and use, let’s say, Llama 3.1 across my organization for multiple purposes, how do I know what that operating environment looks like? What is the best accelerator for training? What about when I integrate this trained model into my workflows and business applications? Are all inference accelerators pretty much the same? If I train on, say, NVIDIA GPUs, do I also need to deploy NVIDIA chips for inference?

Enterprise IT and business units grapple with these and about 82 other questions as they start to plan their AI projects. The answer to each question is highly dependent on a number of factors, including (but not limited to) performance requirements, deployment scenarios, cost, and power.

If you listen to the players in the market, you will quickly realize that each vendor—AMD, Cerebras, Intel, NVIDIA, and others—is the absolute best platform for training and inference. Regardless of your requirements, each of these vendors claims supremacy. Further, each vendor will happily supply its own performance numbers to show just how apparent its supremacy is.

And this is why benchmarking exists. MLCommons and others make an honest attempt to provide an unbiased view of AI across the lifecycle. And they do so across different deployment types and performance metrics.

What Is in MLPerf Inference v4.1?

MLPerf Inference v4.1 takes a unique approach to inference benchmarking in an attempt to be more representative of the diverse use of AI across the enterprise. AI has many uses, from developers writing code to business analysts tasked with forecasting to sales and support organizations providing customer service. Because of this, many organizations employ mixture-of-experts (MoE) models. An MoE essentially consists of multiple smaller, gated expert models that are invoked as necessary. So, if natural language processing is required, the gate activates the NLP expert. Likewise for anomaly detection, computer vision, etc.
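
For readers unfamiliar with the gating idea, here is a toy sketch of how a router can score experts and invoke only the most relevant one. It is a conceptual illustration, not Mixtral's architecture; production MoE models route per token inside transformer layers with learned gates, and every name in this snippet is invented for the example.

```python
# Toy sketch of mixture-of-experts gating: a router scores each expert for a
# given input and only the top-scoring expert(s) run. Real MoE LLMs route per
# token with learned gates; this just illustrates the mechanism.
import numpy as np

rng = np.random.default_rng(0)
EXPERTS = ["nlp", "math_reasoning", "code"]

def expert_nlp(x):  return f"NLP expert handled: {x}"
def expert_math(x): return f"Math expert handled: {x}"
def expert_code(x): return f"Code expert handled: {x}"

EXPERT_FNS = [expert_nlp, expert_math, expert_code]
# Stand-in for a learned gating network: a random projection of input features.
GATE_WEIGHTS = rng.normal(size=(8, len(EXPERTS)))

def route(features: np.ndarray, top_k: int = 1):
    logits = features @ GATE_WEIGHTS                 # gate score per expert
    weights = np.exp(logits) / np.exp(logits).sum()  # softmax over experts
    chosen = np.argsort(weights)[-top_k:]            # only the top-k experts run
    return [(EXPERTS[i], EXPERT_FNS[i]("<input>")) for i in chosen]

print(route(rng.normal(size=8)))
```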

In addition to its traditional testing of different inference scenarios, the MLPerf team selected Mistral’s Mixtral 8x7B as its MoE model for use in v4.1. This enables testing that demonstrates the broader applicability of inference across the enterprise. In Mixtral, the MLPerf team chose to test against three tasks in particular: Q&A (powered by the Open Orca dataset), math reasoning (powered by the GSM8K dataset), and coding (powered by the MBXP dataset).

As seen in the table below, MLPerf Inference v4.1 looks at inferencing scenarios that span uses across the enterprise, with tests that show variances for latency and accuracy.

MLPerf Inference v4.1 tests — Source: MLCommons

Performing a Benchmark and Checking It Twice

There are a couple of other things worth mentioning related to MLPerf that I believe show why it’s a credible benchmark. First, all results are reviewed by a committee, which includes other submitters. For example, when AMD submits testing results for its MI300, NVIDIA can review and raise objections (if applicable). Likewise, when NVIDIA submits its results, other contributing companies can review and object as they see fit.

Additionally, chip vendors can only submit silicon that is either released or will be generally available within six months of submission. This leads to results that are more grounded in reality—either what’s already on the truck or what will be on the truck shortly.

For this benchmark, chips from AMD, Google, Intel, NVIDIA, and UntetherAI were evaluated, with submissions coming from 22 contributors spanning server vendors, cloud providers, and other platform companies. Chips from Qualcomm, Cerebras, Groq, and AWS were surprisingly absent from the sample. It is also important to note that while Intel submitted its “Granite Rapids” Xeon 6 chip for testing, its Gaudi accelerator was not submitted.

There are many reasons why an organization might not submit. It could be resource constraints, cost, or a number of other reasons. The point is, we shouldn’t read too much into a company’s choice to not submit—other than that there’s no comparative performance measurement for the chips that weren’t submitted.

One final consideration when reviewing, if you choose to review the results on your own: not every test was run on every system. For instance, when looking at the inference datacenter results, NeuralMagic submitted results for the NVIDIA L40S running in the Crusoe Cloud for Llama 2-70B (Q&A). This was the only test (out of 14) run. So, use the table above to decide what kind of testing you would like to review (image recognition, language processing, medical imaging, etc.) and the configuration you’d like (number of accelerators, processor type, etc.) to be sure you are looking at relevant results. Otherwise, the numbers will have no meaning.

What Can We Take Away from the Results?

If appropriately used, MLPerf Inference v4.1 can be quite telling. However, it would likely be unfair for me to summarize the results based on what I’ve reviewed. Why? Precisely because there are so many different scenarios by which we can measure which chip is “best” in terms of performance. Raw performance versus cost versus power consumption are just a few of the factors.

I strongly recommend visiting the MLCommons site and reviewing your inference benchmark of choice (datacenter versus edge). Further, take advantage of the Tableau option at the bottom of each results table to create a filter that displays what is relevant to you. Otherwise, the data becomes overwhelming.
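
As a starting point for that kind of filtering, here is a hypothetical sketch using pandas. The file name and column names are assumptions for illustration; adjust them to match whatever export you pull from the MLCommons results pages.

```python
# Hypothetical sketch of narrowing MLPerf Inference results to comparable rows.
# The CSV file name and column names below are assumptions, not the actual
# MLCommons schema; substitute the fields from your own export.
import pandas as pd

df = pd.read_csv("mlperf_inference_datacenter_v4_1.csv")  # hypothetical export

comparable = df[
    (df["benchmark"] == "llama2-70b")      # same model/task
    & (df["scenario"] == "Offline")        # same serving scenario
    & (df["accelerator_count"] == 8)       # same system scale
]
# Rank the remaining submissions by reported throughput.
print(comparable.sort_values("tokens_per_second", ascending=False)
                .loc[:, ["submitter", "accelerator", "tokens_per_second"]])
```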

While it is impossible to provide a detailed analysis of all 14 tests in datacenter inference and all six tests in edge inference, I can give some quick thoughts on both. On the datacenter front, NVIDIA appears to dominate. When looking at the eight H200 accelerators versus eight AMD MI300X accelerators in an offline scenario, the tokens/second for Llama 2-70B (the only test submitted for the MI300X) showed a sizable advantage for NVIDIA (34,864 tokens/second versus 24,109 tokens/second). Bear in mind that this comparison does not account for performance per dollar or performance per watt—this is simply a raw performance comparison.

When looking at NVIDIA’s B200 (in preview), the performance delta is even more significant, with offline performance coming in at 11,264 tokens/second versus 3,062 tokens/second for the MI300X. Interestingly, this performance advantage is realized despite the B200 shipping with less high bandwidth memory (HBM).

When looking at inference on the edge, UntetherAI’s speedAI240 is worth considering. The company submitted test results for ResNet (vision/image recognition), and its numbers relative to the NVIDIA L40S are stunning in terms of latency, with the speedAI240 coming in at 0.12 ms and the L40S coming in at 0.33 ms for a single stream. It’s worth noting that the speedAI240 has a TDP of 75 watts, and the L40S has a TDP of 350 watts.
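
A quick back-of-envelope calculation from the figures above shows why that power envelope matters. Note that single-stream latency is not the same as sustained throughput, so treat this as a rough sanity check rather than a substitute for the published performance-per-watt results.

```python
# Back-of-envelope efficiency comparison using the figures quoted above.
# Caveat: 1/latency overstates sustained throughput, so these are rough
# sanity-check numbers, not real performance-per-watt results.
systems = {
    "UntetherAI speedAI240": {"latency_ms": 0.12, "tdp_w": 75},
    "NVIDIA L40S":           {"latency_ms": 0.33, "tdp_w": 350},
}

for name, s in systems.items():
    approx_ips = 1000.0 / s["latency_ms"]   # naive inferences per second
    per_watt = approx_ips / s["tdp_w"]      # naive inferences per second per watt
    print(f"{name}: ~{approx_ips:,.0f} inf/s, ~{per_watt:,.1f} inf/s per watt")
```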

The work of the MLCommons team yields many more interesting results, which are certainly worth investigating if you are scoping an AI project. One thing I would recommend is using the published results, along with published power and pricing estimates (neither NVIDIA nor AMD publish pricing), to determine the best fit for your organization.

Navigating the Unknowns of AI

I’ve been in the IT industry longer than I care to admit. AI is undoubtedly the most complex IT initiative I’ve seen, as it is a combination of so many unknowns. One of the toughest challenges is choosing the right hardware platforms to deploy. This is especially true today, when power and budget constraints place hard limits on what can and can’t be done.

MLCommons and the MLPerf benchmarks provide a good starting point for IT organizations to determine which building blocks are best for their specific needs because they allow comparison of performance in different deployment scenarios across several workloads.

MLPerf Inference v4.1 is eye-opening because it shows what the post-training world requires, along with some of the more compelling solutions in the market to meet those requirements. While I expected NVIDIA to do quite well (which it did), AMD had a strong showing in the datacenter, and UntetherAI absolutely crushed on the edge.

Keep an eye out for the next training and inference testing round in the next six months or so. I’ll be sure to add my two cents.

The post RESEARCH NOTE: Looking at AI Benchmarking from MLCommons appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending August 30, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-august-30-2024/ Wed, 04 Sep 2024 02:16:16 +0000 https://moorinsightsstrategy.com/?p=41887 MI&S Weekly Analyst Insights — Week Ending August 30, 2024

The post MI&S Weekly Analyst Insights — Week Ending August 30, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a nice Labor Day weekend! 🇺🇲

Last week, Patrick Moorhead, Will Townsend, and Matt Kimball attended VMware Explore 2024.  Matt also attended the GlobalFoundries Analyst event. Robert Kramer was in New York for the IBM SAP Analyst and Advisory Services Day, and Robert and Melody Brue were at the US Open with IBM. 

This week, Anshel Sag is at IFA Berlin. Next week, Patrick, Melody, Matt, and Robert will be in Las Vegas for Oracle Cloud World, and Will Townsend will be in London for Connected Britain, where he’s also moderating a panel. Jason Andersen will be in Austin for JFrog swampUP 24.

Our MI&S team published 16 deliverables:

Over the last week, our analysts have been quoted numerous times in international publications with our thoughts on NVIDIA earnings, IBM, Crowdstrike, Amazon, and AI-powered smart glasses.

Patrick Moorhead appeared on CNBC Closing Bell Overtime to discuss CrowdStrike’s stock spike after its first quarterly report since the global outage and again to discuss NVIDIA’s Q2 2025 earnings. Patrick was also on Yahoo! Finance to discuss NVIDIA earnings. You can also check out this supercut of Patrick discussing NVIDIA’s competition on Yahoo! Finance.

MI&S Quick Insights

I’ve speculated about OpenAI’s Strawberry release several times over the past few weeks, here and elsewhere. Most of my coverage is based on research papers and factual material, so it’s fun to occasionally go off the factual rails and speculate. Most recently another story appeared in The Information reporting that Strawberry was demonstrated to the U.S. government. The old information is that Strawberry will have much greater reasoning power than what’s available today. The newer information is that OpenAI is working on a new LLM called Orion, and Strawberry will be used to train and enhance Orion. Whenever it is released, I believe it will move us into a new era of AI, one of super-reasoning. I’m looking forward to something that will be unique.

iAsk.AI is a relatively new entity to me, but I think I’ve found a new go-to AI model. iAsk.AI is a cutting-edge AI search engine developed by a new company called AI Search Inc., which was established last year. Its founders previously created CamFind, a visual search engine, and JRank, a search tool for searching a single complex website. iAsk.AI uses that technology to deliver instant responses for user queries.

iAsk Pro was the first model to achieve “Expert AGI” performance, scoring 93.89% on the MMLU benchmark and 85.85% on the new MMLU Pro test. The MMLU (Massive Multitask Language Understanding) benchmark evaluates the performance of AI models across a wide range of subjects, including science, mathematics, history, and more. It comprises over 12,000 questions from academic exams and textbooks, testing an AI’s understanding and reasoning abilities in diverse domains. The new MMLU Pro test is an updated and more difficult version of this benchmark. Achieving an 85.85% score on MMLU Pro indicates that the AI model performs exceptionally well and surpasses the accuracy of many human experts in these subjects. It outperformed the previous best model, GPT-4, by a large margin of 12 percentage points.
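
For context on what such a score means mechanically, here is a toy sketch of how an MMLU-style multiple-choice benchmark is scored; the questions and the stand-in "model" are invented for illustration.

```python
# Illustrative scoring of an MMLU-style multiple-choice benchmark: accuracy is
# simply the share of questions where the chosen letter matches the answer key.
# The questions and the toy "model" below are invented for this example.
questions = [
    {"q": "2 + 2 * 3 = ?", "choices": {"A": "8", "B": "10", "C": "12", "D": "6"}, "answer": "A"},
    {"q": "Water's chemical formula?", "choices": {"A": "CO2", "B": "H2O", "C": "O2", "D": "NaCl"}, "answer": "B"},
]

def toy_model(question: dict) -> str:
    return "B"  # a stand-in that always guesses "B"

correct = sum(toy_model(q) == q["answer"] for q in questions)
print(f"accuracy: {correct / len(questions):.2%}")  # 50.00% for this toy run
```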

VMware Explore was a big event for the IT automation crowd this week, and the big news was the effort to simplify what had previously been a complex lattice of products. VMware Cloud Foundation 9 is now a solid and more unified starting point for customers to build their own clouds. Much of the coverage has been on the unification of compute, storage, and networking features, but the new DevOps services are also notable. The consolidation of multiple ops services and stakeholders is a trend I discussed in this article published just a few days ago. A few years ago, we all would have been surprised to see VMware include a native Kubernetes stack within its foundation offering, but it’s there in VCF 9. Between these base DevOps services and a rich set of add-on capabilities, it will be very interesting to see how Broadcom continues to build bridges to developer ecosystems.

This week I also had the pleasure to speak with two startups that are specifically working on how AI can improve the productivity of development teams. This is a big step forward from the personal productivity benefits associated with AI assistants embedded in an IDE. Network Perspective is a Polish firm that collects and aggregates team data from collaboration and productivity apps to figure out how development teams can work better together, with the goal of making more time for the deep work that developers need to get into the flow. Network Perspective is already helping customers and is showing a new use for AI. The second startup is still in stealth mode, but the idea is to create a new type of IDE that allows developers to work on the same sets of code at the same time, with an AI assistant facilitating the process. Think of it like a virtual hackathon. These two new approaches to innovating are starting to show us all that there is more to AI than chatbots. I’ll be keeping an eye on these startups as they navigate the market.

Investors are worried about how Salesforce’s use of AI agents could affect productivity and the need for customer license seats. During a Q&A session for its Q2 2025 earnings call, CEO Marc Benioff said that there is significant interest in AI agents, with approximately 200 million currently in trials. Salesforce is considering a new consumption-based pricing model, which could involve charging $2 per AI agent conversation. The company is confident in its AI strategy, especially with the upcoming launch of the Einstein 1 Agentforce Platform. Salesforce’s goal is to have one billion AI agents in use by the end of fiscal year 2026.

VMware Explore 2024 came and went, and the big news out of this event was VCF 9. This launch is the beginning of the company’s strategy coming into focus as it looks to effectively deliver the public cloud on-prem. Billed as a private cloud solution, VCF 9 is, to me, the realization of what enterprise IT craves—a wholly crafted cloud stack and operating model that also allows an IT executive to deliver the environment and agility that their developers and data scientists require while simplifying the way their IT staff deploys, provisions, and manages infrastructure.

As I said, I think VCF 9 is what IT craves, but I’m not sure that IT realizes they crave this. This is due largely to perception—especially the perception issue around the term “private cloud.” This is a phrase that is tied to older technologies that never quite met the expectations of enterprise IT organizations and eventually gave way to hybrid cloud technologies. I would have greatly preferred the company find a different way to position VCF other than “private cloud.” Or at least to refer to it as a later version—an evolution of the older private cloud.

One thing I didn’t hear addressed at the event is how VCF supports hybrid cloud. To tell enterprise IT organizations that they simply need to repatriate all of their apps and data from the public cloud back to on-prem is not realistic. And the company has not really demonstrated how it will resolve this.

That said, I do like how the company has laid out a vision for VCF, along with a set of tools and services to enable this transformation. I also like how the company has laid down a strong opinion on what the future datacenter looks like. Now it just needs to execute against this vision.

How about that Nutanix quarter?! The company showed a strong beat on expectations and its guidance was even stronger. Nutanix has executed a strong strategy—a masterclass in leveraging market disruption (in this case caused by VMware turbulence). How did it do so? By activating OEM and channel partnerships, from both a technical and go-to-market perspective.

Pure Storage delivered a strong second quarter, outpacing the market in terms of growth. Despite this, the company saw its stock take a significant drop as its guidance for the rest of the year fell short of expectations. While I understand that the Street is forward-looking, it is disappointing to see a company punished despite delivering stellar results and forecasting growth for the future. Regardless, it is good to see Pure establishing a stronger presence in both AI and hyperscalers, the two largest growth vectors in storage.

Are these Dell Technologies quarterly numbers real??? The company’s Infrastructure Solutions Group (ISG) saw a 38% year-over-year increase in revenue (to $11.6 billion) and a 22% YoY increase in operating income (to $1.2 billion). Both traditional and AI server sales saw strong growth and significant pipelines. Interestingly, the company’s storage business struggled, shrinking 5% YoY in revenue. This continues a longer-running decline in Dell’s storage business, despite what the company says is increased demand for core storage.

What is going on? I believe that AI and the performance requirements associated with it have put a renewed focus on high-performance storage. And while Dell’s storage portfolio is more than adequate, many storage companies (such as Pure, VAST, and Weka) are positioning themselves as critical to feeding the AI data pipeline.

No doubt Dell will find its footing on the storage front. And its >$7 billion revenue in servers and networking looks like it will be eclipsed next quarter.

MLPerf Inference 4.1 benchmark results were published this week, and there were some interesting numbers in the release. While NVIDIA ruled (as one would expect), AMD showed some compelling results with its first submission to the benchmark. Meanwhile, Untether AI demonstrated performance-per-watt leadership with its SpeedAI 240 accelerator.

While the AI training market is a battle among a few companies—and dominated by one—AI inference is an entirely different game. Traditional GPUs and big silicon will be challenged by companies like Untether AI that have designed and developed highly performant silicon that fits into very small power envelopes to support the diversity of use cases that span the enterprise.

Keep an eye on Untether AI and other companies like it (such as Tenstorrent)—this inference game is just heating up.

IBM and the US Open made last week memorable for me. It started with meeting Tracy Austin, who offered advice on my backhand for both tennis and pickleball. When I was growing up I watched her win the US Open as a teenager (twice!), so it was fantastic to hear her thoughts on IBM’s technology. FYI, she said she uses a two-handed backhand in pickleball (interesting!).

IBM is clearly transforming the way we experience sports and entertainment. This can be appreciated when we realize that IBM has been collecting data for the past 30 years in partnership with the United States Tennis Association for the US Open—in parallel with its efforts at Wimbledon and The Masters. I had the chance to use the US Open mobile app firsthand when I attended a few matches. The app provided detailed stories, scores, stats, AI-driven predictions, news, schedules, and much more that added a new layer of depth to the tournament experience. What really impressed me was its integration with Ticketmaster, which lets you access your tickets for the matches right within the app.

IBM’s technology from the US Open is making its way into various industries beyond sports and entertainment. In retail, it can be used to create personalized shopping experiences. In healthcare, it can enhance patient care. The cybersecurity measures deployed at the US Open can help financial institutions protect sensitive data. AI and data analytics can be used to optimize production processes in manufacturing, and these innovations are extending to many other sectors as well.

Much of the buzz around GenAI is centered on the compute side of the infrastructure stack. However, networking is a crucial component, serving as the conduit to move data, connect large language models, and eventually extend workloads to the network edge. Although Dell Technologies posted healthy growth in AI server sales in its recent quarterly earnings, its networking strategy heavily relies on Broadcom. That might not be a bad thing, given Broadcom’s investment in extending Ethernet’s interconnect capabilities, but companies such as HPE that are doubling down on networking infrastructure beyond using merchant silicon could gain an edge in delivering a more complete GenAI solution.

Last week the SAP practice within IBM Consulting hosted me for its analyst strategic session while I was in New York for the US Open. It was a valuable experience to join the IBM SAP team and discuss the critical nature of ERP systems for global enterprises. During the session, we discussed the key elements that contribute to successful ERP transformations and why the IBM SAP team has been effective. It starts with a global network of 18,000 certified SAP professionals. Solid execution processes, including change management, are also critical. Strong data management is at the core of this success, particularly given that AI is used to support project delivery and application management across industries including manufacturing, consumer goods, retail, defense, automotive, and utilities. In the coming weeks I’ll be writing up my research detailing more specifics, including case studies, on IBM SAP.

Transportation management systems (TMS) are improving supply chains by making operations more efficient for manufacturers, distributors, e-commerce businesses, retailers, and third-party logistics providers. These systems help streamline shipping, lower costs, improve profitability, and offer better visibility into the supply chain—in short, automating complex processes to secure transportation services at the best possible price without sacrificing quality. The value of TMS comes from understanding how to use the technology effectively and managing its implementation carefully to achieve tangible business results. TMS solutions can be part of larger SCM and ERP systems or used on their own. As TMS usage increases, I’ll explore their challenges, benefits, and impacts on businesses. More details to come.

According to Paycor’s “HR in 2025” study, which surveyed more than 7,000 HR, finance, and IT professionals, newer employees are particularly prone to turnover, and remote workers often express less favorable views of their leaders and experience role ambiguity. Among the other key takeaways: the ongoing talent shortage is due to several factors, including low birth rates, retirements, skills gaps, and caregiving obligations.

In addition, the employee experience needs to start at the application step. According to CareerPlug, 52% of candidates have declined job offers due to a poor experience during the hiring process. To navigate this, companies are increasingly turning to AI to optimize recruitment and improve the candidate experience. This spans a wide gamut of processes, from broadening the candidate pool beyond active job-seekers to automating touchpoints that keep applicants updated and feeling valued, even when they don’t end up with an offer.

In a recent analyst briefing, Ericsson provided details about “site energy orchestration,” an initiative to reduce cellular infrastructure energy costs. The company says global mobile networks account for 1% of global energy consumption (source: GSMA report 2024), so the savings could be significant. Electric utilities are rapidly moving to dynamic pricing models that reflect real-time supply and demand. Ericsson’s system shaves loads to avoid peak prices, charges local batteries when rates are low, switches to battery power when rates are high, and sells excess power from on-site renewables and batteries back to the grid. Ericsson’s field tests produced savings of 36% when combining these features. Any home or business, not just cell sites, can reap these benefits by orchestrating electricity flow, and that’s where IoT comes in. Matter, the smart home connectivity standard from Amazon, Apple, Google, Samsung, and other companies, enables the whole-home device communication required to manage electricity usage, storage, and generation. This paper explains how autonomous energy orchestration software using Matter-connected equipment can deliver measurable financial benefits. As more homes and businesses orchestrate power usage, the effects become grid-scale, with public policy implications. Bravo Ericsson!
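
To make the orchestration logic concrete, here is a minimal sketch of the kind of decision loop such a system might run each pricing interval. The thresholds, charge rates, and battery model are my own illustrative assumptions, not Ericsson’s implementation.

```python
# Illustrative decision loop for site energy orchestration, in the spirit of what Ericsson
# describes. The thresholds, charge rates, and battery model are assumptions made for this
# sketch; they are not Ericsson's implementation.

from dataclasses import dataclass

@dataclass
class SiteState:
    battery_soc: float      # battery state of charge, 0.0-1.0
    solar_kw: float         # on-site renewable generation right now
    load_kw: float          # current site demand

def orchestrate(price_per_kwh: float, s: SiteState,
                low_price: float = 0.08, high_price: float = 0.25) -> dict:
    """Decide grid draw, battery charge/discharge (+/-), and grid export for one interval."""
    actions = {"grid_draw_kw": s.load_kw, "battery_kw": 0.0, "export_kw": 0.0}
    surplus = s.solar_kw - s.load_kw

    if surplus > 0:                                   # renewables cover the load
        actions["grid_draw_kw"] = 0.0
        if s.battery_soc < 0.9:
            actions["battery_kw"] = surplus           # store the excess
        else:
            actions["export_kw"] = surplus            # sell it back to the grid
    elif price_per_kwh >= high_price and s.battery_soc > 0.2:
        actions["grid_draw_kw"] = 0.0                 # peak pricing: run on battery
        actions["battery_kw"] = -s.load_kw
    elif price_per_kwh <= low_price and s.battery_soc < 0.9:
        charge_kw = min(10.0, (0.9 - s.battery_soc) * 50)
        actions["battery_kw"] = charge_kw             # cheap power: top up the battery
        actions["grid_draw_kw"] = s.load_kw + charge_kw

    return actions

# Example: peak pricing with a healthy battery -> shed the grid entirely.
print(orchestrate(0.31, SiteState(battery_soc=0.75, solar_kw=2.0, load_kw=6.0)))
```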

I’m increasing my coverage of the Linux Foundation’s LF Edge project to include two new at-large projects that are consistent with my views on IoT middleware and device software evolution. The first one is EdgeLake, sponsored by AnyLog—a distributed, virtual relational database that combines structured data from multiple sites. A standard SQL query selects results from all databases, regardless of location. There are two big advantages to this approach: (1) Only actionable data travels over the network. All data stays local, at the edge, until a specific query calls for it. (2) EdgeLake is pure middleware with standard SQL interfaces and no system dependencies. So, it can plug and play with any device and any back-end application. The second project is Ocre, sponsored by Atym, which uses WebAssembly (Wasm) and Zephyr to provide ultra-lightweight containers for microcontroller-based edge devices, enabling developers to focus on applications without building custom OSes, system images, and OTA update services. It’s like Docker for small devices. Both projects are worth watching.
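
To illustrate the federated-query idea behind EdgeLake, here is a toy sketch that fans a single standard SQL statement out to several local databases and merges only the matching rows. It uses in-memory SQLite purely as a stand-in for edge sites; it is not EdgeLake’s actual API.

```python
# Toy illustration of the federated-query idea: one SQL statement runs against several
# independent databases (standing in for edge sites), and only matching rows are merged
# centrally. In-memory SQLite is a stand-in here, not EdgeLake's actual API.

import sqlite3

def make_edge_site(rows):
    """Create an in-memory database representing one edge site's local data."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sensor (site TEXT, ts INTEGER, temp_c REAL)")
    db.executemany("INSERT INTO sensor VALUES (?, ?, ?)", rows)
    return db

sites = [
    make_edge_site([("plant-a", 1, 21.5), ("plant-a", 2, 35.2)]),
    make_edge_site([("plant-b", 1, 19.8), ("plant-b", 2, 41.0)]),
]

# Written once, run everywhere; only the "actionable" rows leave each site.
query = "SELECT site, ts, temp_c FROM sensor WHERE temp_c > 30"
results = [row for db in sites for row in db.execute(query)]
print(results)  # [('plant-a', 2, 35.2), ('plant-b', 2, 41.0)]
```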

In addition to the two LF Edge projects described above, the Linux Foundation’s Joint Development Foundation is sponsoring Margo, a mechanism for orchestrating applications and workloads on edge devices. Margo’s goals are ambitious (perhaps too ambitious), but big companies are involved (Microsoft, Intel, ABB, Capgemini, Rockwell, Schneider, Siemens, and more), so the initiative has plenty of resources. Margo is also worth watching.

Apple’s upcoming iPhone launch is happening on September 9 at 10 a.m. Pacific, and I believe it will be very iPhone- and AI-heavy. We might get more updates on wearables, though I don’t think we’ll get Macs running Apple’s M4 chips at the same event.

AnandTech’s abrupt shutdown marks the end of a 27-year run for the hardware review publication. It will be missed by many, and its influence on the industry will be felt for years to come.

A panel of justices from Brazil’s Supreme Court has upheld an order that X (formerly Twitter) be banned in that country. This ruling—the latest twist in months of conflict between the Brazilian judiciary and X over allegations of disinformation—shows the challenges of running a social media platform at global scale without global uniformity of law.

Spotify is (understandably) upset that Apple has stopped the function that allows the volume buttons on iPhones to work for Spotify Connect, forcing Spotify users to resort to a workaround to control this basic feature on connected devices such as wireless speakers or smart TVs. Spotify holds that this violates the Digital Markets Act in the EU, and I believe Apple’s actions will just set off another round of lawsuits in Europe—where Apple has already run into much resistance.

Post-quantum cryptography (PQC) is creating churn in the quantum ecosystem. Juniper Networks has announced an investment in Quantum Bridge Technologies, a pioneer in the Distributed Symmetric Key Exchange (DSKE) protocol for PQC networks. Juniper plans to advance quantum-safe communications by using Quantum Bridge to expand its DSKE technology, which integrates into its infrastructure without relying on asymmetric cryptography.

Quantum Bridge’s DSKE technology is the first to offer symmetric key distribution at scale and provides security against future quantum encryption-busting attacks. Combining this technology with Juniper’s quantum-safe VPNs and crypto-agility solutions increases the security of Juniper’s networking platform. It should protect encrypted data from “harvest now, decrypt later” threats, where actors steal encrypted assets now and decrypt them when the quantum capability becomes available. This deal gives both Juniper and Quantum Bridge a strong position in quantum-safe networking.

Iranian hackers hoping to disrupt U.S. political campaigns are using DNS techniques to register and weaponize lookalike domains with the intent of stealing data through sophisticated phishing attacks. Compromised data could be used for direct cyberattacks against specific candidates, or even to steal voter data to enable casting fraudulent ballots in the future. However, DNS-specific cybersecurity tools can be used to counter these attacks. A prime candidate (ahem) for this is Infoblox, which has made DNS the cornerstone of its platform development efforts for two decades. Recent announcements about its DNS threat intelligence capabilities point to its ability to identify threats much sooner than other vendors. Whatever happens, the vigilance of both big tech companies such as Google, Microsoft, and Meta and specialized security vendors like Infoblox will be required to back up the efforts of government agencies and the campaigns themselves to keep U.S. elections free from interference.
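
As a rough illustration of how lookalike domains can be caught, the sketch below flags newly registered names whose string similarity to a protected domain is suspiciously high. This is a generic version of the technique with made-up domain names; it is not how Infoblox or any other vendor actually implements its detection.

```python
# Minimal sketch of one common lookalike-domain check: flag new registrations whose string
# similarity to a protected domain is suspiciously high. Domains and threshold are
# hypothetical; this is a generic illustration, not any vendor's implementation.

from difflib import SequenceMatcher

PROTECTED = ["examplecampaign.org", "example-campaign.com"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(candidates, threshold=0.85):
    hits = []
    for domain in candidates:
        for target in PROTECTED:
            score = similarity(domain.lower(), target)
            if score >= threshold and domain.lower() != target:
                hits.append((domain, target, round(score, 2)))
    return hits

new_registrations = ["examplecampalgn.org", "shoes-outlet.biz", "example-campaiign.com"]
print(flag_lookalikes(new_registrations))
```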

Verizon has partnered with the Atlanta Hawks and State Farm Arena to be the Atlanta venue’s official 5G wireless partner. This partnership aims to improve connectivity for fans at games and other events. Verizon will also be making technology upgrades throughout the arena to improve experiences for everything from event entry to concessions. With this partnership, Verizon also plans to create exclusive experiences for Verizon customers and connect more deeply with the Atlanta community.

Verizon is actively taking steps to reduce its environmental footprint. The company is transitioning to renewable energy, aiming to source 50% of its annual electricity usage from renewable sources by 2025, and 100% by 2030. The company also actively helps customers reduce their carbon emissions, with its solutions enabling the avoidance of over 90 million metric tons of CO2 equivalent since 2018. Water conservation is also a priority, as the company has reduced water usage by 16% between 2019 and 2022. Verizon is also working to electrify its fleet, plus it has set a goal to collect and recycle 10 million pounds of e-waste by 2026.

Deutsche Telekom recently announced its plans for deploying 5G Standalone—with an interesting twist. The operator plans to offer it as a bespoke service married with network slicing rather than deploy it broadly to subscribers. It is an interesting strategy, likely designed not to confuse the German market given the roller coaster of high and low expectations for 5G globally. One thing is for certain: the 3GPP standards body will not repeat the mistake of allowing core infrastructure deployment to lag the radio access network for 6G and beyond.

Research Notes Published

Citations

AI Smart Glasses / Anshel Sag / Tech News World
AI-Enhanced Next-Gen Smart Glasses Could Revolutionize Wearables

Amazon / Jason Andersen / Fantastical Futurist
Amazon saved 4,500 years of work and $260 Million using Gen AI Robo-Coders

Intel / Patrick Moorhead / Fierce Electronics
Report of possible Intel foundry split sends stock skyward

Crowdstrike Earnings / Patrick Moorhead / CNBC
CrowdStrike shares spike after first quarterly report since global outage

IBM / Patrick Moorhead / Network World
IBM Z mainframes get AI boost with new Telum II processor, Spyre accelerator

NVIDIA / Patrick Moorhead / Beebom
https://beebom.com/nvidia-competitors-ai-chipmakers/

NVIDIA Earnings / Patrick Moorhead / Yahoo! Finance 
Who are Nvidia’s biggest competitors?

NVIDIA Earnings / Patrick Moorhead / Fortune 
Nvidia, the ‘most important stock in the world,’ reports Q2 earnings today: Here’s what to watch for

NVIDIA Earnings / Patrick Moorhead / Fierce Electronics
Nvidia’s Blackwell fix to bust out billions in Q4

VMware Explore 2024 / Patrick Moorhead, Matt Kimball / Fierce Network 
Analysts respond to VMware Explore 

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • IFA Berlin, September 6-11, Berlin, Germany (Anshel Sag) 
  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Connected Britain, September 11-12, London (Will Townsend)
  • Connected Britain panel moderation, September 11-12, London (Will Townsend)
  • Snowflake Industry Day 2024, September 12 (virtual) (Robert Kramer)
  • Snap Partner Summit, September 17, Santa Monica (Anshel Sag)
  • Zayo Network Transformation webinar moderation, September 17 (Will Townsend)
  • Salesforce Dreamforce, September 17-19, San Francisco (Robert Kramer)
  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue) 
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending August 30, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending August 23, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-august-23-2024/ Tue, 27 Aug 2024 03:56:03 +0000 https://moorinsightsstrategy.com/?p=41717 MI&S Weekly Analyst Insights — Week Ending August 23, 2024

The post MI&S Weekly Analyst Insights — Week Ending August 23, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a nice weekend!

Last week, Robert Kramer attended the Modern Data Quality Summit 2024 virtually and Will Townsend hosted a live webinar with Nile: From Complexity to Cloud-Native: Top-10 Reasons to Start Building Your Next-Gen Enterprise Network. If you missed it, it’s now available on demand.

This week, Will Townsend is attending VMware Explore while Matt Kimball attends virtually. Matt is also attending the GlobalFoundries Analyst event. Robert Kramer is attending the IBM SAP Analyst and Advisory Services Day in New York. Robert and Melody Brue will attend the US Open with IBM in New York.

Last week, our MI&S team published 16 deliverables:

7 Forbes Insight Columns

1 MI&S Research Paper

1 MI&S Research Note

2 MI&S Blog Posts

5 Podcasts

Over the last week, our analysts have been quoted multiple times in international publications with our thoughts on Amazon, AMD, HPE, Intel, and T-Mobile. Patrick Moorhead appeared on CNBC Closing Bell Overtime to discuss the expectations ahead of this week’s NVIDIA earnings.

MI&S Quick Insights

An AI “oops” — OpenAI does a lot of development with advanced AI models that may act autonomously. It has developed a Preparedness Framework designed to assess and mitigate the potential risks associated with those models, so that anything related to autonomy is identified and managed. To evaluate generated code against realistic development scenarios, OpenAI uses SWE-bench, a benchmark that measures large language models’ ability to solve real-world software issues sourced from GitHub. During its testing, OpenAI found that some tasks in SWE-bench could be too hard or even impossible to solve, which could cause a model’s capabilities to be underestimated. OpenAI is currently working with SWE-bench to fix these issues.

The last thing we need is an AI safety incident caused by underestimating a model’s autonomous capabilities.
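
For readers unfamiliar with how SWE-bench-style evaluation works, the sketch below shows the basic loop: apply a model-generated patch to a repository checkout and see whether the previously failing tests now pass. The task structure and helper functions are simplified assumptions for illustration, not OpenAI’s or SWE-bench’s actual harness, but they show why an impossible test makes a capable model look weaker than it is.

```python
# Simplified sketch of an SWE-bench-style evaluation loop. The task fields and helper
# functions are illustrative assumptions, not the real harness.

import subprocess

def apply_patch(repo_dir: str, patch_text: str) -> bool:
    """Try to apply the model-generated diff to the repository checkout."""
    result = subprocess.run(["git", "apply", "-"], input=patch_text.encode(),
                            cwd=repo_dir, capture_output=True)
    return result.returncode == 0

def run_tests(repo_dir: str, test_ids: list) -> bool:
    """Run only the tests that the original GitHub issue fix is expected to repair."""
    result = subprocess.run(["python", "-m", "pytest", *test_ids],
                            cwd=repo_dir, capture_output=True)
    return result.returncode == 0

def evaluate(task: dict, model_patch: str) -> bool:
    """A task bundles a repo checkout, a failing test set, and an issue description."""
    if not apply_patch(task["repo_dir"], model_patch):
        return False  # the patch does not even apply
    return run_tests(task["repo_dir"], task["fail_to_pass_tests"])

# If a task's tests can never pass in the provided environment, every model scores zero on
# it, which is exactly the underestimation risk described above.
```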

This week I published a new piece on Forbes about whether developers should be worried about AI replacing their jobs. The research I performed led me to the conclusion that there is no imminent danger for devs. Yet a gap remains between what the general public thinks and what developers understand about the nature of the work: what developers actually do, how they do it, and how the role has evolved. To that end, I wrote this new article to help go beyond all of the soundbites and opinions out there.

This week has also been dominated by follow-up conversations and interactions prompted by that article. The topic has moved from developers to the broader idea of AI augmenting humans and processes rather than replacing them. I have a new piece coming up in which I don my developer hat and fail spectacularly, but in the process learn a lot about how changing your mindset gets better results from AI, instead of just the same results faster and/or cheaper.

LiquidStack, the leader in immersion cooling technology, has done a good job of expanding its portfolio to be competitive in the direct-to-chip cooling space. This company, which cut its teeth in the crypto-mining space with a two-phase immersion solution, has clearly seen the trends and transitioned quite well into a company with a much broader portfolio.

Liquid cooling is the future—and not the distant future. The amount of investment dollars pouring into this space is astounding and companies like LiquidStack and JetCool are very well positioned (and funded) to play a significant role in both shaping and capturing the market. There is also a lot of promotion from server vendors around their own proprietary cooling solutions. However, as I speak with datacenter operators, it is clear they are looking for solutions that can span all systems across all racks, especially as AI and other workloads drive heterogeneity across the enterprise.

What to make of AMD’s acquisition of ZT Systems? Are you a fan? $4.9 billion is a lot of money to pay for a company when the intent is to spin off half of its operations. When looking at what AMD is actually acquiring, it’s about having a dedicated team to design AI systems. Given that about 1,000 engineers are coming over from ZT, some have framed the acquisition at a cost of about $4.9 million per systems engineer.

I’m a fan of the move—a big fan. By many estimates, the AI market is expected to grow to more than $400 billion dollars annually in the next few years. This AI market is going to be powered by servers that are unlike what is being deployed today. These will be highly bespoke systems that tightly integrate CPUs, GPUs, I/O, networking, and storage to best move, process, train, and operationalize data. In this context, AMD is the only company that can (at this moment) realistically challenge the dominance of NVIDIA with its IP portfolio.

Putting the pieces together in a bespoke platform is really difficult and time-consuming. And time-to-market is absolutely critical if AMD wants to compete beyond just spec sheets and capturing overflow business. To compete in a significant way, AMD needs the resources to design these systems faster and more completely. Further, the system design work has to integrate with and inform silicon design. THAT is what AMD has bought with ZT Systems, and it is going to pay dividends down the line.

Juniper Networks recently announced its Blueprint for AI-Native Acceleration. The company is offering training, trial offers that include software and hardware, and flexible licensing to reduce the friction for customers that are hesitant about embracing AI-infused networking. It is a novel approach, one that provides Juniper channel partners with a new set of tools that could lead to closing more network infrastructure sales opportunities.

During its August 2024 Security Patch Day, SAP released fixes for 17 vulnerabilities, six of which were particularly severe, scoring between 7 and 10 on the Common Vulnerability Scoring System (CVSS) scale. SAP urged customers to apply these patches immediately and provided workarounds for situations where immediate patching isn’t feasible.

These are the two most critical vulnerabilities:

  • CVE-2024-41730 — An authentication bypass flaw in SAP’s BusinessObjects intelligence platform, with a CVSS score of 9.8. This vulnerability allows unauthorized users to obtain a logon token via a REST endpoint if single sign-on is enabled, potentially compromising the system’s confidentiality, integrity, and availability.
  • CVE-2024-29415 — A server-side request forgery (SSRF) vulnerability in applications built with SAP Build Apps. This issue arises from improper categorization of IP addresses that was not fully addressed in a previous fix.

Hackers frequently target ERP systems because they present such big—and potentially disruptive—targets, most of all with major vendors such as SAP. There has also been a significant increase in ransomware attacks specifically targeting ERP systems since 2021. I suggest always staying current on your ERP system and taking immediate advantage of updates/patches to reduce the threat of attacks.

Contact-center software company Genesys released its 2024 Sustainability Report, which showed that it is making significant progress towards its 2030 sustainability goals. These include reducing emissions, improving its CDP and EcoVadis assessments, and opening a new LEED Gold-certified R&D center. Socially, Genesys has expanded its charitable offerings and continued to focus on diversity and inclusion in the organization. Genesys’ public commitment to sustainability is commendable and, I believe, a standout in the industry.

Gamescom this year has really embraced its role as the replacement for E3, with tons of game announcements and many hardware manufacturers taking their teasers from Computex and relaunching them at Gamescom. One company that didn’t do that was HP, which announced a new highly customizable Omen 35L gaming PC along with some new keyboards and microphones.

There are rumors that Meta is canceling its La Jolla mixed-reality headset, a potential successor to the Quest Pro—and a device that I honestly didn’t think was necessary to begin with. Especially given that potential issues with LG’s manufacturing could already be pushing back production, I don’t think Meta has much wiggle room on timelines. I’m also not sure that we need more headsets over $1,000.

It has been really interesting to see how many titles are coming out for Xbox and PS5 at Gamescom thanks to both Sony and Microsoft mostly abandoning console exclusives in favor of simply selling as many copies of a game as possible. This approach showed how successful it could be with Helldivers, although Sony got a bit too greedy and tried to make people sign up for a mandatory Sony Online account after having already bought the game.

“QuitToking” is a growing trend in which employees publicly resign on TikTok or other social media platforms. This clearly reflects a shift in employees’ priorities towards work-life balance and a willingness to voice their discontent. This trend presents a double whammy for companies in that they are losing valuable talent, and public resignations can damage their brand. While it can be risky for employees to air grievances publicly, by the time they’re on TikTok recording themselves walking out of the office for the last time, the damage is done for that employer. Savvy organizations will take heed of this trend—and what it says about today’s workplace atmosphere—and put themselves ahead of the curve. One way is to take advantage of solutions provided by companies such as Workvivo, Zoho, and Slack. These platforms help foster open communication and track employee engagement (among other things) with the goal of creating a more positive work environment—one that helps companies protect their employer brand and retain top talent.

NXP released a fully supported Debian 12 Linux distribution for “select” i.MX and Layerscape evaluation kits. Debian is among the most popular distributions for embedded applications because of its stability, extensive package support, and long-term updates. NXP provides complete board support packages (BSPs) and a Yocto configuration toolchain, enabling developers to start building production-ready IoT applications with little or no system development. This announcement further proves that IoT device development is rapidly transitioning from DIY custom mashups to software-defined application platforms. Product companies using a platform approach rather than full-stack embedded development have much faster product cycles, lower costs, more advanced features, better security, and higher quality.

Smart metering growth has remained steady at a CAGR of about 7% for years. In a recent report, Transforma projects that the smart metering market will increase to $40 billion by 2033 as connectivity technologies consolidate to LPWA—cellular mMTC, LoRaWAN, and Sigfox. Today, metering accounts for ten percent of all IoT connections. Although the company reckons metering’s share will drop to 9% over the next decade as other IoT use cases accelerate, I predict a much sharper decline. As AI applications create insatiable demands for instrumenting enterprise and consumer physical assets, I believe the metering share will drop to less than 5% over that period. I describe how AI is transforming the industrial IoT landscape in this new Forbes article.
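
A quick back-of-the-envelope calculation shows why a steady 7% metering CAGR can still mean a shrinking slice of the pie when the rest of IoT grows faster. Only the 7% figure and the roughly 10% starting share come from the discussion above; the growth rate assumed for the rest of IoT is my own illustrative number.

```python
# Back-of-the-envelope sketch of why metering's share of IoT connections can shrink even as
# metering itself grows. The 7% metering CAGR and ~10% starting share come from the text;
# the growth rate assumed for the rest of IoT is illustrative only.

metering = 0.10          # metering's share of connections today
other = 0.90             # everything else
metering_cagr = 0.07
other_iot_cagr = 0.16    # assumption: the rest of IoT grows much faster

for year in range(1, 11):
    metering *= 1 + metering_cagr
    other *= 1 + other_iot_cagr
    if year in (5, 10):
        print(f"Year {year}: metering = {metering / (metering + other):.1%} of connections")

# Under these assumptions the share falls to roughly 7% by year 5 and under 5% by year 10.
```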

The Google Pixel 9 just hit the shelves, so people can now experience the new Gemini Live voice assistant, along with the new Panorama mode (exclusive to the Pixel 9) and satellite messaging capability. I’ll have much more on all aspects of the Google Pixel launch in a writeup later this week on Forbes.

Many wearables, especially earbuds and rings, have a major repairability problem: once they break, they’re mostly not repairable and turn into throwaway tech. This was brought home to me by a recent teardown of a Samsung Galaxy Ring from iFixit and another video about the same issue with Apple’s AirPods Pro. The scope of the problem suggests there should be a lurking entrepreneurial opportunity, so if any of our readers are familiar with companies attacking this problem, I’d love to hear about them.

Google’s Android 15 Quarterly Platform Release 1 is live for Pixel devices, and one of the interesting tidbits to emerge from it is that the Pixel 6 might be getting longer support than Google had promised. The Pixel 6 was only supposed to receive system updates until October of 2024, but it seems that it is now being included in the QPR, which means it might be getting Android 15—and stay fresher than most people expected.

Apple has split its App Store team in two and reorganized the group amid global regulatory scrutiny. The company has also begun allowing users to delete the App Store from their iPhones altogether. This comes in the wake of rulings earlier this year by the U.S. Supreme Court and the European Commission that have effectively forced Apple to modify its practices for the App Store. So while further changes are expected, nobody fully knows yet how it will all pan out.

RingCentral recently announced updates to its contact center solution, RingCX. These updates include native, real-time AI assistance for agents and supervisors and AI-based coaching and feedback tools. The company also reported substantial growth in its RingCX customer base in its first year. RingCentral hosted a virtual event to showcase the new AI capabilities within RingCX and their practical applications. The company had CX author Blake Morgan discuss the importance of customer-focused leadership, so the event strategically combined product announcements with thought leadership. It was a smart move to showcase product updates alongside practical advice on how to leverage them for improved customer experiences.

Zoom’s stock surged following strong Q2 2025 results, marking its best day in almost two years. The company beat revenue and earnings expectations and raised its full-year guidance. This success comes as Zoom expands its product portfolio and invests heavily in AI to reignite growth, which stalled after its pandemic boom. The company reported that lower customer churn and its growing contact-center business helped with the results. This should indicate a promising future for Zoom beyond its videoconferencing roots. The company also announced the departure of its long-time CFO, Kelly Steckelberg, a significant loss despite the positive financial news.

Webex AI Codec is now generally available in the Webex app; it aims to enhance audio quality during online meetings and calls, particularly in challenging network conditions. Audio continues to be one of the most important aspects of collaboration and is a significant factor in providing the best experience for employees and customers. The company says the technology can deliver clear audio while using minimal bandwidth—which addresses a common frustration in remote and hybrid work environments. I saw and heard a demo of the codec last year, and it works as advertised. I think this is a significant development in the industry as it reflects companies’ focus on ensuring users’ ability to communicate regardless of their location or Internet connectivity.

NVIDIA has announced another partnership with MediaTek, this time for monitor scalers to enable G-Sync Pulsar, the latest generation of NVIDIA’s display refresh technology, on more affordable monitors. Previously, G-Sync required a dedicated G-Sync module to run all the technologies NVIDIA has created for it, but now those capabilities will be integrated into the monitor’s scaler, saving on cost and complexity. This should broaden the appeal of G-Sync and bring it to an even bigger audience.

PsiQuantum has made significant progress towards its goal of developing a fault-tolerant quantum computer with a million qubits. The company recently announced a partnership to establish a quantum computing hub in Chicago, backed by $500 million in public funds. PsiQuantum is also building prototypes in the U.K., at Stanford University, and in Brisbane, Australia, where it plans to develop its first fault-tolerant photonic quantum computer by 2027.

Photons have long coherence times with minimal environmental interaction, making them ideal for quantum computing. PsiQuantum uses single-photon sources, integrated superconducting detectors and photonic chips, all produced through a high-volume manufacturing partnership with GlobalFoundries. PsiQuantum’s photonic quantum architecture employs advanced filtering and interference techniques to ensure high-quality photon qubits and help it achieve a million-qubit system capable of solving complex, previously intractable problems. You can read more about PsiQuantum in my latest Forbes article.

Research by Quantinuum has simplified quantum error correction. It is easy for quantum computer qubits to make errors, but determining where these errors occur is a complicated process that requires a lot of time-consuming checks called syndrome extractions. Now scientists from Quantinuum and the University of Maryland have found a way to correct errors by using a method called single-shot quantum error correction. Quantinuum’s latest H2 quantum computer was used to test this method using a 4-D surface code that makes it easier to find and fix errors. Compared to the older 2-D surface code, the 4-D code did as well or better by using fewer resources and less time. This shows that single-shot error correction can speed up quantum machines for complex calculations. You can dig into the details in the preprint article conveying this research.

Halliburton is the latest big company to fall victim to a cyberattack. Critical infrastructure, including oil and gas services, will continue to be a target for bad actors that use denial-of-service techniques. What is especially troubling about this incident is the impact to the energy industry and the negative consequences for consumers and enterprises that rely on refinery and natural gas production. Consequently, it is imperative that a multi-layered approach to security is employed to safeguard an industry that is tied to the United States’ national security.

xMEMS has come out with a semiconductor cooling solution that has no fans and can move air without needing heat pipes or copper to transfer heat. This will hopefully give Frore Systems a run for its money and could give the entire solid-state cooling market a credibility boost.

I am looking forward to attending the US Open to see firsthand how IBM is leveraging AI to reshape tennis fans’ experience. The partnership between IBM and the USTA takes a strategic approach for brand and technology integration with AI-powered match commentary and personalized insights, which should enhance fan engagement on a granular level. What’s really impressive is IBM’s AI learning initiative through SkillsBuild using tennis as a familiar framework. This shows the company’s dedication to fostering broader technological literacy and the potential of AI to not only revolutionize industries such as sports but also create educational opportunities for individuals across diverse backgrounds.

The Moor Insights & Strategy sports technology team (Melody Brue and Robert Kramer) will be in action at Flushing Meadows for this year’s US Open tennis tournament. IBM, which has been working with the United States Tennis Association for 30 years, combines data, AI, and the IBM watsonx platform with tennis to bring fans the future experience of sports technology.

Tune in and take advantage of the team’s live podcasts. We will be taking a closer look at the technology and talking with IBM experts, fans, players, and other vendors. For now, you can find out more from this IBM post about AI at the US Open.

Nokia and Axiom Space are partnering to embed 4G LTE communications in spacesuits for the Artemis III mission to the Moon’s south pole in 2026. This effort complements Nokia’s already planned cellular network deployment on the Moon as part of NASA’s Tipping Point program. The applications are exciting, including helmet camera HD video streaming, telemetry data transmission, and voice communications over long distances. Pairing Nokia’s cellular infrastructure with spacesuit communications is truly out of this world, and it could help unlock new findings on an uncharted part of the lunar surface.

Research Papers Published

Research Notes Published

Citations

Amazon / GenAI / Jason Andersen / WorkLife
How Amazon’s GenAI tool for developers is saving 4,500 years of work, $260 million annually

AMD / Acquisition of ZT Systems / Patrick Moorhead / Fierce Network
Here’s what analysts think of AMD’s $4.9B bid to challenge Nvidia

AMD / Acquisition of ZT Systems / Patrick Moorhead / Fierce Electronics
What ZT brings to AMD: Design insight, systems expertise, maybe even a sales edge

AMD / Acquisition of ZT Systems / Patrick Moorhead / Inside HPC
Upping the AI Ante: AMD to Acquire Data Center Server Company ZT Systems for $5B

AMD / Acquisition of ZT Systems / Patrick Moorhead / R&D World
AMD challenges NVIDIA with $4.9B ZT Systems buyout

HPE / Acquisition of Morpheus Data / Matt Kimball / TechTarget
HPE acquires Morpheus Data, bolstering hybrid cloud offering

Intel / Company Status, ASICs, and GPUs / Patrick Moorhead / Fierce Network
Is Intel too big to fail? Here’s what analysts say.

SaaS Spending Slowdown / Patrick Moorhead / Channelholic
What’s Behind the SaaS Spending Slowdown?

T-Mobile / 5G / Patrick Moorhead / SDxCentral
Are telecom operators directionally challenged when it comes to enterprise?

New Gear or Software We Are Using and Testing

  • Cisco Desk Pro (Melody Brue)
  • OnePlus Buds Pro 3 (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • VMware Explore, August 26-29, Las Vegas (Matt Kimball – virtual, Will Townsend – virtual)
  • GlobalFoundries Analyst Event, August 26-28 (Matt Kimball)
  • IBM SAP Analyst and Advisory Services Day & US Open, August 29, New York (Robert Kramer)
  • US Open with IBM, August 28, New York (Robert Kramer, Melody Brue)
  • IFA Berlin, September 6-11, Berlin, Germany (Anshel Sag) 
  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Connected Britain, September 11-12, London (Will Townsend)
  • Connected Britain panel moderation, September 11-12, London (Will Townsend)
  • Snowflake Industry Day 2024, September 12 (virtual) (Robert Kramer)
  • Snap Partner Summit, September 17, Santa Monica (Anshel Sag)
  • Zayo Network Transformation webinar moderation, September 17 (Will Townsend)
  • Salesforce Dreamforce, September 17-19, San Francisco (Robert Kramer)
  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Fem.AI Summit, Menlo Park, October 1 (Melody Brue) 
  • Microsoft Industry Analyst Event, Burlington, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • Embedded World NA, Austin, October 8-10 (Bill Curtis)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • RISC-V Summit, October 22-23 — virtual (Matt Kimball)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag, Paul Smith-Goodson)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend – virtual)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending August 23, 2024 appeared first on Moor Insights & Strategy.

]]>
Ep. 28: MI&S Datacenter Podcast: Talking Cisco, IBM, Dell & Nutanix, Black Hat USA 2024, AI, HPE https://moorinsightsstrategy.com/data-center-podcast/ep-28-mis-datacenter-podcast-talking-cisco-ibm-dell-nutanix-black-hat-usa-2024-ai-hpe/ Wed, 21 Aug 2024 20:03:09 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=41709 The Datacenter team talks Cisco, IBM, Dell & Nutanix, Black Hat USA 2024, AI and HPE

The post Ep. 28: MI&S Datacenter Podcast: Talking Cisco, IBM, Dell & Nutanix, Black Hat USA 2024, AI, HPE appeared first on Moor Insights & Strategy.

]]>
Welcome to this week’s edition of the “MI&S Datacenter Podcast.” I’m Patrick Moorhead with Moor Insights & Strategy, and I am joined by co-hosts Matt, Will, and Paul. We analyze the week’s top datacenter and datacenter edge news. We talk compute, cloud, security, storage, networking, operations, data management, AI, and more!

Watch the video here:

Listen to the audio here:

2:42 Cisco 4Q Earnings & Leadership Shake-up
11:23 What Is An ML-KEM?
18:24 Dell & Nutanix Get Serious-er
25:03 Black Hat USA 2024 Insights
30:46 Professor Bot
36:44 Morpheus – The God Of Dreams – & HPE’s Latest Acquisition

Cisco 4Q Earnings & Leadership Shake-up

https://x.com/WillTownTech/status/1824156801711083707

What Is An ML-KEM?

https://research.ibm.com/blog/nist-pqc-standards

Dell & Nutanix Get Serious-er

https://www.linkedin.com/feed/update/urn:li:activity:7229484227182911488/

Black Hat USA 2024 Insights

https://x.com/WillTownTech/status/1824121727456026696

Professor Bot

https://sakana.ai/ai-scientist/

https://arxiv.org/pdf/2408.06292

Morpheus – The God Of Dreams – & HPE’s Latest Acquisition

https://www.linkedin.com/feed/update/urn:li:activity:7229842227567411200/

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Ep. 28: MI&S Datacenter Podcast: Talking Cisco, IBM, Dell & Nutanix, Black Hat USA 2024, AI, HPE appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending August 16, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-august-16-2024/ Mon, 19 Aug 2024 14:00:40 +0000 https://moorinsightsstrategy.com/?p=41573 MI&S Weekly Analyst Insights — Week Ending August 16, 2024

The post MI&S Weekly Analyst Insights — Week Ending August 16, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a nice weekend!

Last week, Anshel attended Google’s Made by Google event in Mountain View, California, and Jason attended (virtually) AI Innovation through AWS Workplace.

On Wednesday, August 21st (9 am PT / 12 pm ET), Will Townsend is hosting a free live webinar with Nile: From Complexity to Cloud-Native: Top 10 Reasons to Start Building Your Next-Gen Enterprise Network.

Later this month, Matt will attend VMware Explore virtually while Will attends in person and Patrick Moorhead and The Six Five crew will broadcast live. Matt will then attend a GlobalFoundries analyst event in Santa Clara, and Robert will attend the IBM SAP Analyst and Advisory Services Day in New York. Melody and Robert will attend the US Open with IBM in New York.

Last week, our MI&S team published 14 deliverables:

6 Forbes Insight Columns

1 MI&S Research Note

3 MI&S Blog Posts

4 Podcasts

Over the last week, our analysts have been quoted three times in international publications with our thoughts on Crowdstrike, Google, and T-Mobile.

MI&S Quick Insights

X.ai has released its frontier model, Grok-2 mini, on the X platform. I ran it through its paces; its image performance is impressive and blazing fast. It has been evaluated across academic benchmarks, including reasoning, reading comprehension, math, science, and coding. The model is available to X Premium and Premium+ users. X is also collaborating with Black Forest Labs to test its FLUX.1 model. Over the weekend, I’ll test X’s enhanced search capabilities, insights on X posts, and improved reply functions.

Google DeepMind’s Imagen 3 is a text-to-image model for generating high-quality images from text prompts. This model outputs exceptionally photorealistic images at a default resolution of 1,024 x 1,024 pixels, with options for higher resolution, and it outperformed other leading text-to-image models in human evaluations. Training used a large dataset of images paired with original and synthetic text descriptions. To ensure the model’s quality and safety, a rigorous multi-stage filtering process was implemented to remove unsafe, violent, or low-quality images and eliminate AI-generated images to prevent learning biases. Additionally, synthetic captions were generated with multiple Gemini models to provide diverse and high-quality linguistic input. Captions were filtered for unsafe or personally identifiable information. In short, Imagen 3 offers superior image quality while prioritizing ethical considerations and safety in its deployment.

At the end of July, I published a piece on Forbes.com suggesting that CRMs are an ideal place to begin the AI journey. While the overall feedback on the article has been good, one very specific item of feedback is worth mentioning here. In my article I mentioned that a great use case for AI in CRM would be to find a way to reduce the effort of entering data into CRMs for front-line salespeople. I even posited using meeting summarization technology as a way to do this. Turns out there is a company that is attempting to use AI to create an omnichannel workflow for customer sales and service interactions and manage those interactions in its own CRM. The company is called Flowcall, and for businesses that heavily use online sales and service, it looks like an interesting way to automate.

I have been a part of multiple inquiries this week in the lead-up to technology conferences this fall. While I cannot elaborate with any details, there are a few themes emerging in the dev space that will be coming your way soon: (1) What are the second-generation AI features now that AI assistants and agents are everywhere? (2) Improving and simplifying developer responsibility and security. (3) DevOps and convergence with other IT automation technologies. Topic number three is so interesting, and applies much more broadly than just to developers, that Matt Kimball and I will be launching a new podcast for anyone with “*/ops” in their job description. The Moor Insights and Analysis Ops Podcast is launching very soon. Stay tuned!

Expansion of Nutanix-Dell partnership: Given the dynamic nature of the virtualization and hybrid cloud market, it seems as if new alliances are being formed every day. One of the biggest movers in the game has been Nutanix, which has built on a strong market position with its Nutanix Cloud Platform (NCP) and hardware partnerships. One of those partnerships is with Dell, with the two companies pursuing joint go-to-market efforts around integrated solutions.

This week, the companies upped the ante with the availability of the Dell XC Plus, a hybrid cloud platform with a centralized control plane, advanced management, and AI-driven performance tuning. Additionally, Dell announced that its PowerFlex will be the first external storage platform supporting the Nutanix Cloud Platform and the Nutanix AHV hypervisor. This software-defined storage solution enables enterprise IT organizations to scale compute and storage independently.

So, what is the significance of this announcement? A few things. First and foremost, this partnership shows, as previously mentioned, that Nutanix is capitalizing on the uncertainty in the virtualization and hybrid cloud market through the partnerships it has been able to establish. These partnerships go beyond superficial integrations and a datasheet. The company is building deep integration with hardware partners like Dell and following that with well-funded joint go-to-market efforts that should land well in the enterprise.

This partnership also demonstrates why Dell is the number one server vendor in the market. The company has the uncanny ability to respond to the needs of its customers with partnerships like these at the right time.

HPE’s acquisition of Morpheus Data, an orchestration and management platform, filled a gap in the company’s hybrid cloud solution. It’s the latest move in the company’s hybrid cloud strategy, following a series of announcements at the annual HPE Discover conference, where the GreenLake virtualization stack was unveiled. For the GreenLake platform, Morpheus fills critical holes for infrastructure and cloud management, complementing what the company offers in OpsRamp.

What does this mean for enterprise IT and GreenLake customers? There are a couple of ways to look at this. There’s the obvious value-add to GreenLake, as Morpheus complements infrastructure management (also provided by OpsRamp) and delivers a rich cloud management platform for the hybrid cloud environment. And there’s a lot packed into that “rich cloud management” statement: Morpheus offers a single console for self-service, consumption and management, managing cloud spend, and so on. Morpheus is the orchestration layer that brings all of the enterprise tools together to enable a real point-and-click experience for both users and IT administrators. I can tell you as somebody who spent a few years in IT management: complexity is the enemy of IT. If I can find a platform that allows my own team and those I support (business units, embedded DevOps teams) to get more done faster and with less effort, I can focus on the projects that have higher visibility and priority across the organization.

The other way to look at this is to think about how enterprise IT is moving toward deploying bespoke, complete solution stacks—from silicon to server to software—that serve the needs of the business. So, GreenLake customers may also have Dell or Cisco deployed for other purposes. In this case, Morpheus enables enterprise IT organizations to manage a larger part of their data estate (maybe all of it) from a single platform. That’s a big win.

I’m really curious to see how HPE handles the integration of Morpheus into its portfolio. Integration of technology portfolios is equal parts science and art. Further, Morpheus’s support for the broader server market must be taken into consideration.

Lenovo’s Infrastructure Solutions Group (ISG) had an incredibly strong Q1 FY25, showing 65% year-over-year growth driven by cloud and HPC. In addition, the company narrowed its losses, though by how much was not articulated. While this is encouraging to see, perhaps more encouraging is the acknowledgement from the unit’s new co-presidents that the commercial market (SMB and enterprise) is critical to long-term success. Hopefully we will see go-to-market efforts aligned with those words.

HYCU has a new report, “The State of SaaS Resilience in 2024,” that reviews vulnerabilities in SaaS data protection. It’s based on a survey of 417 global IT practitioners. The report details significant gaps—one of which is no shocker: businesses are underestimating the number of SaaS applications in use. They are also over-relying on SaaS vendors for data protection and lacking skilled staff to manage SaaS data security. With 61% of ransomware attacks targeting SaaS applications, many businesses struggle to recover encrypted data quickly, posing significant operational risks. The report also offers best practices to enhance SaaS data resilience.

This report reflects what I’ve been saying. Businesses are constantly adding to their technology stacks, which is leading to problems with data protection, ERP integration, data management, and security.

Monday.com posted impressive Q2 2024 results, with 34% year-over-year revenue growth in a highly competitive market. That growth is also reflected in monday.com’s headcount increase amidst widespread layoffs across the tech industry. The company has demonstrated its ability to cater to the needs of large enterprises, as shown by a 49% YoY increase in customers with an ARR of $100,000 or more and by a recently announced 80,000-seat sale to a multinational healthcare organization—the largest in monday.com’s history. Overall, the company’s performance indicates it is well-positioned to compete in the work-management space.

Cisco recently reported its Q4 earnings. Security and observability were bright spots, up 81% and 41% YoY respectively, but its core networking business was down 28% year over year. The latter is likely the rationale for an executive management realignment to create a singular focus on products and services, which has the potential to accelerate the company’s ongoing simplification strategy. It is a logical plan to reverse the slide in Cisco’s networking infrastructure sales.

ERP vendors are strategically acquiring smaller companies to modernize their platforms. Manufacturing ERP vendors such as Epicor, IFS, Infor, and Aptean have been focusing on acquisitions to enhance their capabilities within specific industries. Epicor targets companies that improve its manufacturing and distribution functions, particularly through AI-driven inventory planning and optimization. IFS and Infor are acquiring firms that expand their reach in sectors such as aerospace and revenue growth management, integrating technologies such as AI and IoT and solutions for maintenance management, data migration, and product management to better meet industry-specific demands. Aptean has acquired technology to improve its warehouse management capabilities.

Robinhood’s Q2 2024 earnings missed Wall Street expectations for user growth but surpassed revenue estimates, signaling a positive turn for the company. It also announced an upcoming desktop version of its mobile app, which should cater to more serious traders.

Transaction-based revenue surged 69% to $327 million, partly driven by a solid first half in the crypto markets following SEC approvals for bitcoin and ether ETFs. On the earnings call, the company also said it gained retail trading market share from competitors.

It’s essential to consider Robinhood’s recent success against the backdrop of its somewhat turbulent history. The company has faced regulatory challenges, experienced outages during volatile markets, and weathered a PR crisis related to the GameStop short squeeze. The ongoing debate over crypto regulation further adds to the uncertainty. However, CEO Vlad Tenev has expressed confidence in Robinhood’s ability to navigate the evolving landscape regardless of the result of the upcoming U.S. elections.

While its recent earnings beat and product announcements suggest a positive trajectory for Robinhood, the company still faces challenges. Sustaining this growth and successfully addressing regulatory concerns will be crucial factors in determining its long-term success.

Meta Quest now supports HDMI as an input so you can connect your handheld gaming device or console to the headset and experience it on a much larger virtual screen than your TV at home.

I believe that Valve’s support of SteamOS on non-Valve hardware is going to change the handheld landscape; the Asus ROG Ally is the first system to support it. I believe that Valve has better software and support for handheld gaming than any other vendor; conversely, Microsoft has really dropped the ball in addressing the needs of the market. Valve’s move could potentially shift the entire handheld market away from Windows, affecting Microsoft’s relevance in the gaming market.

In other news, Valve is testing a new alpha version of its Deadlock game, which could be one of the most successful shooter games in a long time. Truthfully, the PC gaming market has desperately needed a successful shooter game for years to boost demand for gaming systems. Right now, the most popular games are all years-old titles because they are still fun and run smoothly.

The transition to hybrid work has brought about subtle yet significant changes in how we conduct meetings, impacting employee engagement. My recent Forbes article delves into how the rise of virtual meetings and the blending of remote and in-person attendees have altered meeting dynamics, communication patterns, and overall employee experience. It explores the challenges and opportunities in this new landscape, offering insights into how organizations can adapt to foster better collaboration and engagement in a hybrid work environment.

According to a recent study by Vyopta, which makes software for monitoring and optimizing meetings and digital collaboration, the volume of virtual meetings has remained steady even as in-person meetings have more than doubled since the pandemic. Vyopta’s research findings point to a connection between virtual meeting engagement and employee retention. It suggests that verbal and visual active participation creates a sense of connection and belonging, which could influence an employee’s decision to stay with a company. Organizations need to rethink their virtual meeting strategies and address the underlying causes of disengagement—which I believe can be addressed with the right technologies—to improve employee retention and productivity.

A different recent industry report, “AI and the Workforce: Industry Report Calls for Reskilling and Upskilling as 92 Percent of Technology Roles Evolve,” highlights AI’s impact on the job market, specifically within the technology sector. The report, led by Cisco and analyzed by Accenture, reveals that 92% of the 47 information and communication technology (ICT) roles studied are expected to undergo significant changes due to AI advancements. In response, the report emphasizes the urgent need for reskilling and upskilling the workforce. It identifies critical training areas like AI literacy, data analytics, and prompt engineering as crucial for workers to adapt and succeed in the evolving landscape. The study serves as a call to action for individuals, companies, and educational institutions to proactively prepare for the AI revolution and ensure a smooth transition for the workforce.

A select group of analysts, including my MI&S colleague Will Townsend and me, had the opportunity to meet with Cisco’s EVP and chief people, policy, and purpose officer Francine Katsoudas to discuss the outcomes and some of Cisco’s plans that resulted from the study. On the call, we discussed the impact of AI on the future of workforce development, focusing on upskilling and reskilling—particularly at the entry level—to adapt to a rapidly changing job market. Cisco has emphasized the need for companies to take a proactive role in shaping the future of work and for governments to establish effective policies and frameworks to address nascent talent gaps. The company is also focused on counteracting cybersecurity talent gaps in emerging economies, particularly in Africa. I came away from the discussion impressed with Cisco’s commitment to developing programs and solutions that solve business problems while tackling real-world issues.

The big news in quantum this week is that NIST finally announced its first quantum-resistant cryptography standards:

  • For general encryption, such as securing websites, ML-KEM (originally named CRYSTALS-Kyber) was chosen as the key encapsulation mechanism.
  • For digital signatures, NIST chose two algorithms: a lattice-based algorithm called ML-DSA (originally named CRYSTALS-Dilithium) and a stateless hash-based digital signature scheme called SLH-DSA (originally called SPHINCS+).
  • Another lattice-based algorithm for digital signatures called FN-DSA (originally called FALCON) has been selected for future standardization.
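
For readers who want a concrete picture of what a key encapsulation mechanism actually does, here is a minimal sketch of the ML-KEM flow in Python. It assumes the open-source liboqs-python bindings and a liboqs build that exposes the mechanism under the name "ML-KEM-768" (older builds list it under the pre-standard name "Kyber768"); treat it as an illustration of the keygen/encapsulate/decapsulate pattern, not production code.

```python
# Minimal sketch of the ML-KEM (FIPS 203) key encapsulation flow.
# Assumptions: the liboqs-python bindings are installed and the underlying
# liboqs build enables "ML-KEM-768" (older builds may call it "Kyber768").
import oqs

ALG = "ML-KEM-768"  # assumption: mechanism name as enabled in your build

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    # Receiver generates a keypair and publishes the public key.
    public_key = receiver.generate_keypair()

    # Sender encapsulates against that public key, producing a ciphertext
    # to transmit plus a locally held shared secret.
    ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same secret,
    # which both sides can then use as a symmetric key (e.g., for AES).
    receiver_secret = receiver.decap_secret(ciphertext)

    assert sender_secret == receiver_secret
```

The signature algorithms (ML-DSA, SLH-DSA, and eventually FN-DSA) follow the familiar sign/verify pattern instead; the point of a KEM like ML-KEM is simply to establish a shared symmetric key over an untrusted channel.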

It looks like IBM has cornered the market on post-quantum cryptography (PQC), because ML-KEM, ML-DSA, and FN-DSA were all developed by IBM in collaboration with several industry and academic partners. SLH-DSA was co-developed by IBM and a researcher who has since joined the IBM staff.

It has taken sixteen years to produce a few useful PQC algorithms to defend against a fault-tolerant quantum computer expected to emerge ten years in the future. How long will it take us to defend our digital assets against the next threat wave, which is probably at least two decades in the future?

I went into more detail on PQC and the NIST standards in a recent Forbes piece, and I’ll publish more about this in my upcoming research notes on the Moor Insights & Strategy website.

Keysight Technologies is leaning into its deep test and measurement capabilities to help the U.S. government continuously validate its security infrastructure. The company recently joined the Joint Cyber Defense Collaborative to deploy its threat simulator to test firewalls, endpoint protection, and SIEM tools with the latest malware and ransomware. It is an innovative use of Keysight’s capabilities, and one that could play a role in securing critical infrastructure and safeguarding national security.

Google has done a great job with the new Pixel 9 series, featuring the new Tensor G4 chip across four devices: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. While I don’t love the naming for the Fold, I do think that Google is genuinely working on cohesion with the latest Made by Google products—something you really feel with the Pixel Watch 3 and Pixel Buds Pro 2.

Google is finally creating a cohesive ecosystem that builds on its progress in AI with Gemini Advanced. The new Pixel Watch 3 is everything I had hoped it would be: better battery life, faster performance, better display, improved bezel, and larger size. This will be the Wear OS watch to beat. And Google’s Gemini Live is a nice user interface upgrade to Gemini Advanced; I believe it is designed to leverage the Pixel Buds Pro 2’s Tensor A1 chip to deliver a real-time enhanced AI experience.

The rise of AI in sports is undeniable. According to Globant, the global market for AI in sports is projected to reach $19.2 billion by 2030, and its influence on everything from athlete management to game strategy is growing exponentially. AI can now predict player injuries, optimize training regimens based on real-time data, and even identify the next generation of superstar athletes through computer vision. It’s revolutionizing how we scout, train, and compete.

However, it can also get things wrong. While AI undoubtedly brings advancements, there’s a growing concern that we might lose something intangible in the process. Could data-driven decisions overshadow raw talent and the instinctive, human element of sports? Are we prioritizing algorithms over the honed intuition of experienced coaches and scouts? Will reliance on AI-generated game simulations lead to a homogenization of strategies, potentially stifling creativity and spontaneity on the field?

I believe that striking a balance between technological advancements and preserving the human spirit of competition will be vital to ensuring that AI truly enhances sport rather than over-engineering it.

As the dust settles, it’s clear that this year’s Summer Olympics in Paris showed how technology is shaping the future of sports. Let me give a few examples.

  1. Omega Technology — Omega’s sensors and cameras, capable of capturing 40,000 frames per second, were vital in determining outcomes in close races. For example, Noah Lyles of the U.S. won the men’s 100m dash by just five thousandths of a second over Kishane Thompson of Jamaica. (See the quick arithmetic after this list for what that margin means at 40,000 frames per second.)
  2. Olympic AI Agenda — Embracing AI technology in the Olympics demonstrates not only the recognition of its importance but also strategic approaches to how it can benefit the games and support the athletes. This involves integrating AI into various aspects such as athlete performance, fan engagement, data analysis, judging, scheduling, augmented reality, governance, and more, all aimed at driving positive change across global sports.
  3. Security with AI — AI was used to analyze behaviors across all of the Olympic venues to identify patterns and potential threats, supported by facial recognition technology. It also monitored network traffic to detect anomalies and possible attacks. AI systems encrypted data to ensure protection against unauthorized access.
  4. AthleteGPT — With 10,000 athletes converging from 200 countries, questions were bound to arise. AthleteGPT, a chatbot in the Athlete365 mobile app for Olympic competitors, was created to provide round-the-clock information and support to the athletes.
  5. Intel Corporation’s 3-D Tracking — Intel’s 3-D tracking technology was used to monitor 21 key points on each athlete, providing coaches with real-time biomechanical data to help guide future strategies. AI analyzed this data in real time to identify patterns and provide suggested changes to techniques that could enhance performance.
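
To put that winning margin in perspective, here is the quick arithmetic behind it (just an illustration of the numbers involved, not Omega's actual methodology):

```python
# How many 40,000-fps frames fit inside a five-thousandths-of-a-second margin?
frames_per_second = 40_000
margin_seconds = 0.005                          # five thousandths of a second

frame_interval = 1 / frames_per_second          # 0.000025 s, i.e., 25 microseconds
frames_in_margin = margin_seconds / frame_interval

print(f"{frame_interval * 1e6:.0f} microseconds per frame")   # -> 25
print(f"{frames_in_margin:.0f} frames inside the margin")     # -> 200
```

In other words, a camera running at 40,000 frames per second captures roughly 200 frames within a 0.005-second gap, which is why that level of temporal resolution matters for photo finishes.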

Watching this year’s Olympic Games was a great experience. I think France did a wonderful job, starting with a memorable opening ceremony. It was tremendous to see athletes from around the world competing in 32 sports over two weeks.

IBM is bringing its digital platforms to this year’s US Open. Moor Insights & Strategy’s sports technology team will be on site in Queens, New York, to provide details on how data and AI are shaping the game and enhancing the fan experience. For this year’s Open, IBM and the United States Tennis Association have introduced new generative AI features. These updates include AI-generated Match Report summaries for all 254 singles matches, enhanced AI commentary for highlight videos, and a redesigned IBM SlamTracker that offers real-time match insights. IBM will power these solutions using its technologies, including the IBM watsonx platform and Granite LLMs. In the coming weeks, follow my colleague Melody Brue and me as we discuss these developments and more on our Game Time Tech podcast and in our articles.

SK Telecom has big plans to transform itself into an AI powerhouse. The company has appointed an AI Chief who has architected what is dubbed an “AI Pyramid Strategy” focused on three pillars—AI infrastructure, transformation, and service. The strategy is in stark contrast to other mobile network operators, which are focusing on more discrete use cases and applications that leverage curated AI platforms. It is a risky gamble, given the company’s investment of tens of millions of dollars to develop its own telco LLM and personal AI assistant. However, SK Telecom has emerged as a global leader in providing innovative 5G services and it could replicate that success with its AI ambitions.

New Gear or Software We Are Using and Testing

  • AWS Q Developer (Jason Andersen)
  • Google Notepad (for some Python) (Jason Andersen)
  • Google Pixel 9 (Anshel Sag)
  • Dell XPS 13 – Snapdragon X Elite (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Modern Data Quality Summit 2024, August 20 (virtual) (Robert Kramer)
  • Nile Webinar Host, August 21 (Will Townsend)
  • VMware Explore, August 26-29, Las Vegas (Matt Kimball – virtual, Will Townsend)
  • GlobalFoundries Analyst Event, August 26-28 (Matt Kimball)
  • IBM SAP Analyst and Advisory Services Day & US Open, August 29, New York (Robert Kramer)
  • US Open with IBM, August 28, New York (Robert Kramer, Melody Brue)
  • IFA Berlin, September 6-11, Berlin, Germany (Anshel Sag) 
  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Connected Britain, September 11-12, London (Will Townsend)
  • Connected Britain panel moderation, September 11-12, London (Will Townsend)
  • Snowflake Industry Day 2024, September 12 (virtual) (Robert Kramer)
  • Snap Partner Summit, September 17, Santa Monica (Anshel Sag)
  • Zayo Network Transformation webinar moderation, September 17 (Will Townsend)
  • Salesforce Dreamforce, September 17-19, San Francisco (Robert Kramer)
  • Intel Innovation, September 23-26 — EVENT CANCELED
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • Microsoft Industry Analyst Event, Burlingame, Mass, October 2 (Melody Brue)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • MWC Americas and T-Mobile for Business Unconventional Awards event judge, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • Cisco Partner Summit, Los Angeles, October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • Red Hat Analyst Day, October 29 (Jason Andersen — virtual)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, November 6-8, Austin (Matt Kimball, Anshel Sag)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • AWS re:Invent, December 2-6, Las Vegas (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending August 16, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Weekly Analyst Insights — Week Ending August 9, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-august-9-2024/ Mon, 12 Aug 2024 23:14:32 +0000 https://moorinsightsstrategy.com/?p=41424 MI&S Weekly Analyst Insights — Week Ending August 9, 2024

The post MI&S Weekly Analyst Insights — Week Ending August 9, 2024 appeared first on Moor Insights & Strategy.

]]>
MI&S Logo_color

The Moor Insights & Strategy team hopes you had a nice weekend!

Last week, Will attended Black Hat in Las Vegas. This week, Jason is attending (virtually) AI Innovation through AWS Workplace, and Anshel is heading to Mountain View, California to attend Google’s Made By Google event. Later this month, Matt and Will are traveling to Las Vegas for VMware Explore, Matt will be attending GlobalFoundries Analyst Event, and Robert will be at the IBM SAP Analyst and Advisory Services Day and US Open in New York.

Last week, our MI&S team published 14 deliverables:

2 Forbes Insight Columns

2 MI&S Research Notes

5 MI&S Blog Posts

5 Podcasts

Over the last week, our analysts have been quoted in multiple top-tier publications, including New York Times, The Globe and Mail, Barrons, and more. Patrick Moorhead appeared on Yahoo! Finance Morning Brief to discuss AI and the stock market sell-off. In total, the press quoted MI&S analysts 12 times with our thoughts on Intel, Nvidia, T-Mobile, AI bots, and the stock market.

MI&S Quick Insights

Last week I mentioned a potential stealth project by OpenAI called “Strawberry.” It was pure speculation based on a Reuters rumor that OpenAI had created a very powerful model and a possible precursor to artificial general intelligence (AGI).

First, let me explain that LMSYS Chatbot Arena is a platform developed by the Large Model Systems Organization (LMSYS) that allows users to interact with, evaluate, and compare various chatbot models. It has a leaderboard of the models being evaluated. It’s also a research tool that gives developers data on model performance and preferences. OpenAI has previously used the platform to test models before releasing them.

Adding fuel to the rumor flame, it was reported that an anonymous model scored higher on the leaderboard in reasoning than GPT-4o. Strawberry? When I checked the leaderboard just before filing this piece, it was gone. Everyone keeps hoping that one of these new models will deliver a real breakout in reasoning; when one does, it will send up a flare that a powerful new form of AI has arrived.

Since we are on the subject of LMSYS, I may as well cover a new model, Mistral Large 2, which has 123 billion parameters and has done quite well on the leaderboard. Compared to the previous Mistral model, it is strong in coding (covering more than 80 programming languages), math, and reasoning. Its context window, at 128,000 tokens, isn’t huge. That said, its instruction-following beats the Llama 3.1 405B model, and it is currently leading the Arena-Hard leaderboard. I believe there is much more to come from the Mistral AI team.

A simple oversight and a possible Python catastrophe averted — Security issues often start in the development process, and this can manifest in many ways. Sometimes, a developer will create a security loophole to save time with the intention of removing that loophole before they build and deploy the software to production. But intentions don’t always equal reality, and that’s when bad things can happen. Companies like JFrog have tools that customers use to mitigate security risks throughout the development lifecycle. JFrog also takes a novel approach to testing its own products and services by running them against public code repositories. Fortunately, that testing process saved a lot of developers from an unintentional but potentially catastrophic security breach affecting multiple popular open-source projects, including Python itself. I had a chance to meet with the JFrog team this week to discuss its blog post that told the story. Stay tuned for a deeper dive from me on this topic soon.

Cloud revenues are way up. Is it AI? — The big three cloud providers all held their quarterly earnings calls over the past week, and all three reported very strong cloud performance. With all of the AI hype, it’s an easy connection to make that AI is a big driver of this success. But reality may be different. Microsoft, which grew its cloud revenue at 29% year over year, was the only company that spoke to AI’s impact on that growth. CEO Satya Nadella attributed less than a third of the growth to AI. And while AWS did not give any insight on how much of its cloud growth (19% year over year) could be attributed to AI, it did point out that a large share came from migrations of older on-premise apps. I think that the steady march of containerization and microservice-driven app development, combined with a dwindling number of IT resources, has hit a crucial tipping point. And, with or without AI, cloud businesses will continue to post solid growth.

How do enterprise IT organizations, global system integrators, IT consultants, and others map a comprehensive cloud deployment/migration strategy that accounts for data sovereignty, resilience, availability, and lowest-cost multi-cloud environments? This is a challenge I hear repeatedly as I talk with IT executives and others. Some company could create a nice market by developing such a tool that could be used on an ad hoc basis by enterprise IT organizations. I have heard of many companies claiming to have such a tool, but have yet to see a comprehensive solution commercially available.

Given the dominance of NVIDIA in the AI acceleration space, how do OEMs differentiate in addressing the enterprise datacenter market? More precisely, how does Dell truly differentiate from HPE and Lenovo? Likewise, what becomes HPE’s calling card that it can apply to the server market when all of the attention seems to be focused on silicon? As much as OEMs already differentiate in this market, the likes of Supermicro continue to grow in share (and recognition), and it is important for server vendors to continue to drive differentiation through platform innovation. A good example of this is the security HPE built into its Gen10 platform—a set of capabilities that caused the company to gain a strong market position. As Intel and AMD roll out new generations of silicon that will lead to platform refreshes, it’s important for this platform differentiation to continue.

Latest numbers indicate that AMD continues to close the gap with Intel in the datacenter space as EPYC processors reach just a touch above 24% market share. This approaches, if not matches, AMD’s all-time high for server market share, and is certainly the highest it has reached in almost two decades. While AMD has experienced a strong run in the cloud, its somewhat recent growth (and acceleration of growth) in the enterprise is perhaps more impressive, as it demonstrates a shift in the transactional server market landscape. While Intel has a very solid product in Sierra Forest/Granite Rapids, and an equally solid roadmap, it will be challenging for it to reestablish footing in this commercial server business.

There’s been a good amount of ridicule directed at NVIDIA (and to a lesser extent, TSMC) around the recent delays in delivering the Blackwell GPU to the market. But at one time or another, every (and I mean every) silicon provider has run into challenges that have delayed shipping, and for good reason: chip design and manufacturing is really hard. Is it rocket science? No. But do you know what the folks at NASA say? “It isn’t silicon design!”

Good on NVIDIA for identifying the Blackwell challenges and pressing pause to ensure that only high-quality silicon ships to the marketplace. And let’s hope it doesn’t happen again.

Dynatrace announced its financial results for the first quarter of fiscal 2025, with a 19% increase in ARR to $1.541 billion and a 20% rise in total quarterly revenue to $399 million. Key developments included new platform extensions (the Site Reliability Guardian App, Davis Anomaly Detection App, and Vulnerabilities App), expanded security features with Kubernetes Security Posture Management, and partnerships such as becoming the first AWS partner to integrate with its Application Migration Service. This aligns with industry trends toward higher demand for observability and security solutions. Dynatrace’s growth is a sign of enterprises maintaining their current strategies for data management, performance, and security.

IBM released its 2024 Cost of a Data Breach report. The global average cost of a data breach rose 10% to $4.88 million. Shadow data, or data stored outside the main secured system, contributes to these costs. IBM’s report shows that security AI and automation can save an average of $2.22 million. It stresses the importance of AI-driven security, breach response training, and securing generative AI to reduce risks and expenses.

My thoughts: This isn’t surprising. Enterprises are constantly adding SaaS applications to their technology stacks, which increases the volume of data. This expansion creates more opportunities for data breaches and increases security risks.

Box announced its acquisition of Alphamoon, an AI startup specializing in intelligent document processing. This is a strategic move to strengthen the Box AI content management platform. By automating metadata extraction from complex documents using OCR, large language models, and a no-code interface, Alphamoon’s technology empowers users to streamline workflows and make data-driven decisions. This acquisition complements Box’s earlier purchase of Crooze, a no-code workflow automation platform, and solidifies its vision for an Intelligent Content Cloud. Combining these technologies enables Box to move beyond storing and managing content to intelligently process and automate workflows around it.

Network detection and response (NDR) can be an effective tool for monitoring and detecting suspicious activity. HPE Aruba Networking recently announced its offering in this area that applies AI to behavioral analytics. For this to be effective, infrastructure providers must have significant data lakes to train and refine models. HPE is among a handful of providers that can effectively lean into its own data lakes to provide customers with optimal NDR outcomes.

Körber Supply Chain Software has acquired transportation management system (TMS) provider MercuryGate. This expands Körber’s capabilities in transportation management, giving it a more integrated solution across the supply chain. The acquisition is part of Körber’s broader strategy to provide advanced features for logistical visibility that help companies make good decisions, especially when there are transportation disruptions. This acquisition also places Körber in a stronger competitive position within the supply chain software market against SCM competitors Blue Yonder and Manhattan Associates.

PayPal has launched Fastlane, a streamlined guest checkout solution, for all U.S. merchants. By securely storing customer data, Fastlane allows returning shoppers to complete transactions with a single click, while new users can opt in to save their details for future use. During its initial testing phase, Fastlane demonstrated significant success, leading to a 34% increase in conversions compared to traditional guest checkout processes. PayPal anticipates that Fastlane’s widespread availability will help merchants reduce cart abandonment rates, particularly during the upcoming holiday shopping season.

Streamlined checkouts enhance the user experience by eliminating repetitive data entry and simplifying the payment process, particularly on mobile devices. Customers feel more comfortable sharing their information with providers they are familiar with and that have a reputation for secure data handling.

Fastlane is now generally available on PayPal Complete Payments and PayPal Braintree. Merchants can also access Fastlane through platforms such as Adobe Commerce, BigCommerce, Salesforce Commerce Cloud, and others for U.S. merchants—meaning it could have a substantial footprint.

Dayforce reported stronger-than-expected Q2 2024 results, showing significant growth in customer base and adoption of its AI-powered solutions. This positive performance further establishes the company as a formidable competitor in the expanding HCM market. Dayforce’s appeal lies in its cloud-based platform, which offers a modern and agile alternative to traditional on-premises systems, making it particularly attractive for enterprises. The company’s strategic emphasis on human-focused AI integration appears to be resonating with customers. On the earnings call, the company reported that revenue for its on-demand pay solution, Dayforce Wallet, is expected to more than double this year, and it is the fastest-growing product at the company.

When I first covered Dayforce Wallet in my research, I saw much potential in the offering. Dayforce was already emerging as a leader in the space with earned wage access, and the roadmap of features indicated that the company was very focused on providing additional financial health tools for workers who get paid through the Dayforce platform. Dayforce has processed $4 billion in on-demand, early direct deposit, and paycard payments to employees using Dayforce Wallet.

Dayforce Wallet is also integrated with Dayforce’s existing platform so the technology can produce accurate, real-time calculations and remain in compliance with relevant regulations. Dayforce has the opportunity to unlock significant revenue streams by expanding its offerings to include services such as loans, insurance, mortgages, and investments through the app and marketplace.

Mbed, Arm’s software development platform for Cortex-M embedded devices, has reached its end of life. The announcement did not surprise Mbed developers because Arm decreased support years ago and no longer maintains the open-source Mbed OS project. The online tools will remain accessible until July 2026.

The Mbed story began 20 years ago when Arm introduced the Cortex-M3, the first of its Cortex-M 32-bit microcontroller cores. These tiny computer systems required specialized development toolchains and techniques, and the learning curve was steep, so a couple of smart engineers built a simple online integrated development environment (IDE) and devised a clever way to inject a system image into a prototype device using a simple USB connection. Partner chip companies were eager to provide compatible boards, so over the next few years, Mbed introduced a new generation of programmers to the world of embedded devices.

Since then, despite many upgrades to the OS, libraries, and toolchain, Mbed has been used mainly for experimentation, education (it inspired the BBC micro:bit kits), and prototyping. Although Arduino adopted Mbed libraries and tooling inside many of its platforms, Mbed never captured much market share in commercial applications because professional developers use mainstream IDEs, and Mbed OS was not competitive. Mbed’s demise is the latest evidence of IoT’s evolution from custom, one-off, DIY projects to platform-based, scalable product development. Arduino and most other Mbed customers are switching from Mbed OS to Zephyr. Zephyr is emerging as the clear winner in the microcontroller OS wars, and some enthusiastic fans refer to it as “the Linux of microcontrollers.”

As Arm’s Mbed platform closes up shop, Raspberry Pi is open for business with Pico 2, a new version of the company’s small (21mm x 51mm), inexpensive, microcontroller-powered single-board computer. Pico 2 is a significant upgrade from the original Cortex-M0+-based Pico, featuring two processor options—a dual-core Cortex-M33 or a dual-core RISC-V Hazard3. I think this is the first module of its type to have a checkbox option for Arm vs. RISC-V processors.

The new board has twice the memory, twice the flash, and several security enhancements, including secure boot. Developers write software for all Picos in MicroPython, CircuitPython, C, or C++. The Pico W variant of the original Pico has built-in Wi-Fi and Bluetooth (Infineon 43439), and I expect a Pico 2 W by the end of this year. Prices are astonishingly low—about the cost of a fast food value meal. The original Pico is $4, the Pico W is $6, and the Pico 2 is $5. I suppose the Pico 2 W will be $7.
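
To give a sense of how approachable these boards are, here is a minimal MicroPython blink sketch of the kind most people run first on a Pico-class board. It assumes a recent official MicroPython build that maps the onboard LED to the "LED" pin name (on an original Pico with older firmware you may need Pin(25) instead); it is an illustrative starting point, not a definitive example.

```python
# Minimal MicroPython blink for a Pico-class board.
# Assumption: the firmware exposes the onboard LED under the "LED" pin name;
# on an original Pico with older firmware, use Pin(25, Pin.OUT) instead.
from machine import Pin
import time

led = Pin("LED", Pin.OUT)

while True:
    led.toggle()      # flip the LED state
    time.sleep(0.5)   # half a second on, half a second off
```

The same few lines should run essentially unchanged across the Pico family, which is a big part of what makes these boards so attractive for education and quick prototyping.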

Raspberry Pi Linux boards are great for learning and experimentation, and many industrial applications also use the platform, but the original Pico has no security and a very small processor. The Pico 2, with its additional processing power, security, and memory, might find its way into some low-volume commercial products.

Google announced the Google TV Streamer, a TV-attached box (not a dongle) similar to Roku, Apple TV, and Amazon FireTV. Customers can pre-order for $99; that’s competitive with higher-end units like Apple TV but much more costly than Chromecast dongles and other cost-optimized streamers. The Google TV Streamer defines a new product category because it runs Android, uses Gemini to curate TV experiences, and functions as a Matter controller. The company hasn’t yet made press units available, so expect my review in late September.

Will the Matter controller support third-party extensions? That’s important because that feature would fulfill Matter’s promise of controller unification. The alternative is ugly; each third-party interactive app would need its own controller and ecosystem, which would be chaotic. Stay tuned!

During the Black Hat conference, a group of researchers discovered and disclosed a 5G baseband vulnerability found in most 5G modems. They also created a tool to let people find and fix their own vulnerabilities, but the work demonstrated how supporting older standards makes 5G more vulnerable, because many attacks on cellular networks force a device down to an older, less secure generation of the technology.

Vodafone has found that 39% of businesses surveyed in the UK are ready for 5G standalone services and has begun rolling out a separate 5G SA service for enterprise and SMB customers. 5G SA will likely benefit businesses the most, while also offering operators additional revenue to help pay off the added cost of deploying a new 5G core and even the expanded 5G networks.

The Pentagon’s FutureG office continues to explore and deploy new 5G use cases for the U.S. Department of Defense worldwide. It recently disclosed a project in Africa where it partnered with Anduril to build surveillance towers for security purposes in AFRICOM to help protect American bases.

Zoom has expanded its offerings beyond video meetings with the launch of Zoom Docs, an AI-powered collaborative document creation and editing solution integrated with its AI assistant, Zoom AI Companion. Zoom Docs aims to improve teamwork and streamline workflows within the Zoom Workplace platform, enabling users to collaborate more effectively and access critical information within a unified workspace.

With this new offering, Zoom is directly challenging established players such as Google and Microsoft in the collaborative document space. By integrating AI capabilities and leveraging its existing video conferencing strengths, Zoom is positioning itself as a comprehensive productivity platform for the hybrid workplace. This move highlights Zoom’s ambition to compete beyond its core video conferencing market and become a more holistic solution.

A U.S. federal judge has ruled that Google has a search monopoly, but Apple says there is no price Microsoft could offer that would persuade it to make Bing the default search engine. I believe that competition for search is good for consumers, and while I enjoy using Google, it has become bloated and less useful over the years.

Google is killing the Chromecast in favor of a MediaTek-powered TV set-top box which has many of the same capabilities but adds Thread connectivity and Matter support for enhanced smart home controls. (See more from Bill Curtis under the “IoT and Edge” heading on this page.) This indicates to me that Google is looking to expand the power of its ecosystem into the smart home market—and to tighten the integration across Google’s devices.

Sony has made the PSVR2 PC adapter kit available for $60. People are discovering that it’s a virtual link box that takes advantage of an already existing standard to connect the PSVR2 to a PC.

QuEra Computing has launched the QuEra Quantum Alliance Partner Program. The objective is to accelerate the development of neutral-atom quantum computers. Initial members of the alliance have expertise in various fields. The membership includes BIP, BlueQubit, Classiq, E4 Computer Engineering, ITQAN Al Khaleej, Kipu Quantum, Links Foundation, Pawsey Supercomputing Centre, Phasecraft, QAI Ventures, qBraid, QCWare, QMWare, QPerfect, Quantum Machines, QunaSys, Strangeworks, Venturus, and Wolfram Research.

Yuval Boger, chief commercial officer at QuEra Computing, said that the alliance represents QuEra’s commitment to fostering collaboration and driving innovation in the quantum computing landscape. Members of the group will be able to pool resources and share expertise for accelerated development, the promotion of new technologies, improved market penetration, and a more competitive and innovative tech industry.

Last week I attended Black Hat 2024, meeting with Fastly, HP, HPE, IBM, Infoblox, NTT, Wiz, and others. I was particularly impressed with what Wiz is accomplishing in cloud security with a complete portfolio and an easy-to-navigate user interface that consolidates multiple functionalities. The company is one to watch—especially now, given its confidence in walking away from a huge payday from a Google acquisition in favor of a future IPO.

Dell and Alienware are honoring Intel’s extended warranty on Raptor Lake CPUs, offering a five-year warranty on chips. This is a great move because the failure rates for these chips are actually still quite low and will represent only a small uptick in service costs, but making this move will build a lot of brand loyalty and trust for Dell and Alienware. Lenovo and Acer have yet to disclose their policies on these Intel chips.

Humane, the company that makes the AI Pin, has been shown to have significant product returns on its already meager sales. The AI Pin continues to struggle because of its high cost, low battery life, limited functionality, and outdated chipset. Ultimately, Humane would have been better off building a wearable along the lines of what Meta has done with Ray-Ban.

Snapdragon, Manchester United’s current shirt sponsor, is reportedly exploring acquiring the naming rights to the football club’s iconic stadium, Old Trafford. Beyond brand visibility, the naming rights could enable Snapdragon to transform Old Trafford into a showcase for its cutting-edge technology. Qualcomm imagines a stadium with state-of-the-art connectivity, immersive fan experiences powered by Snapdragon processors, and innovative digital solutions that redefine how fans interact with the game. If the move succeeds, it would create a unique and memorable experience for fans and serve as a powerful demonstration of Snapdragon’s capabilities to a global audience.

In another significant move associated with Snapdragon’s Manchester United sponsorship, Qualcomm and Microsoft announced a partnership to add the Copilot+ PC logo to the back of the club’s jerseys. Qualcomm’s Snapdragon X Series processors exclusively power the next-gen Copilot+ PCs. The team displayed the new kit this past weekend at the Community Shield game that pitted Manchester United against crosstown rivals (and defending Premier League champions) Manchester City. The collaboration with Microsoft to highlight the Copilot+ PC logo on the jersey is a testament to the strong partnership between Qualcomm and Microsoft in PCs. Another interesting point of this deal is that the Copilot+ PC logo will appear only on the players’ jerseys, not the replica jerseys available for sale. This could indicate that Qualcomm intends to rotate partners or brands on this sports jersey “real estate.”

As we watch this year’s Olympics, have you noticed the advancements in sporting gear? The 2024 Paris Olympics has been a showcase for the intersection of sports and technology. Here are some key examples:

  • Moisture-wicking tops reduce muscle vibration and fatigue for track and field athletes.
  • Swimwear made from polyurethane and hydrophobic textiles repels water and minimizes drag.
  • Lightweight Nike shoes with carbon-fiber plates provide durability and grip on track surfaces. Patents cover the shoe’s sole structure, cushioning arrangement, and cleat patterns.
  • Biometric sensors, motion-capture instruments, and data analytics platforms enable athletes to fine-tune their techniques and analyze physical performance.
  • Virtual reality (VR) goggles help the Australian swim team visualize and optimize the speed of relay changeovers.
  • AI monitors social media for cyber abuse and helps law enforcement track unusual patterns to improve event security.

It’s great to see the evolution of technology in sports. I expect many of these innovations to eventually be commercialized for consumer markets.

T-Mobile recently announced an initiative in partnership with Cradlepoint to lower the barriers to the adoption of 5G-connected PCs, as well as LTE and 5G fixed wireless access services. The carrier will work with IT distributors Ingram Micro and TD SYNNEX to offer a channel subsidy program designed to lower the cost of mobile-broadband-enabled PCs and FWA customer premise equipment. It is a great start, but less than stellar provisioning of these devices in the past must be addressed for the program to be ultimately successful.

New Gear or Software We Are Using and Testing

  • Motorola Razr+ (2024) — In T-Mobile Exclusive Hot Pink (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • AI Innovation through AWS Workplace, August 12 — virtual (Jason Andersen)
  • Google’s Made By Google Event, August 13, Mountain View, CA (Anshel Sag)
  • VMware Explore, August 26-29, Las Vegas (Matt Kimball, Will Townsend)
  • GlobalFoundries Analyst Event, August 26-28 (Matt Kimball)
  • IBM SAP Analyst and Advisory Services Day & US Open, August 29, New York (Robert Kramer)
  • IFA Berlin, September 6-11, Berlin, Germany (Anshel Sag) 
  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • Connected Britain, September 11-12, London (Will Townsend)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Snap Partner Summit, September 16 — virtual (Anshel Sag)
  • Salesforce Dreamforce, September 17-19, San Francisco (Robert Kramer)
  • Intel Innovation, September 23-26 (Matt Kimball)
  • HP Imagine, September 24, Palo Alto (Anshel Sag)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • MWC Americas, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • Cisco Partner Summit, Los Angeles, October 28-30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, early November, Austin (Matt Kimball, Anshel Sag)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • AWS re:Invent, December 2-6, Las Vegas, (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • T-Mobile Analyst Summit, December 9-10 (Anshel Sag)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending August 9, 2024 appeared first on Moor Insights & Strategy.

]]>
RESEARCH PAPER: Digital Transformation Starts with a Digital Experience Platform https://moorinsightsstrategy.com/research-papers/research-paper-digital-transformation-starts-with-a-digital-experience-platform/ Tue, 06 Aug 2024 12:30:13 +0000 https://moorinsightsstrategy.com/?post_type=research_papers&p=41264 This research brief explores the tensions organizations face while driving toward an AI-enabled, digitally transformed state, and introduces the Iron Mountain InSight Digital Experience Platform (DXP).

The post RESEARCH PAPER: Digital Transformation Starts with a Digital Experience Platform appeared first on Moor Insights & Strategy.

]]>
How Iron Mountain Enables the Digital Transformation and AI Journey

Digital transformation is a term that has existed for some time. It is also a practice (and trend) that is evergreen. In fact, technology has been used to drive better business outcomes for decades. What’s new is the focus on data feeding artificial intelligence (AI) models and analytics engines as key enablers of automated business processes.

The most recent wave of digital transformation has seen a second trend that has caused many organizations to reconsider their efforts — generative AI (GAI). The use of foundation models and large language models (LLMs) to drive all facets of business operations has become essential. As a result, many organizations have rescoped transformation efforts to optimize deployments.

With such a focus on data-driven outcomes, the expectations across an organization are understandably high. Faster, better, and higher quality are not just platitudes; they are key metrics that determine success, regardless of whether an organization delivers a new product to the market or provides public services.

Indeed, digital transformation is the monetization of data.

The challenges many enterprises face when undergoing digital transformation can be mapped across four vectors — culture (people), operational (processes, procedures), technology, and data. Each vector is a critical element to the success of any transformational effort.

This research brief explores the tensions organizations face across these success factors while driving toward an AI-enabled, digitally transformed state. Further, this paper introduces the Iron Mountain InSight Digital Experience Platform (DXP) and explains how this SaaS-based platform is critical to the digital transformation process.

You can download the paper by clicking on the logo below:

Digital Transformation Starts With A Digital Experience Platform

 

Table of Contents

  • Situation Analysis
  • The Potential of the AI Wave is Significant
    • Successful Outcomes Require Successful Planning
  • Digital Transformation is Built on a Digital Platform
    • Defining the Ideal Digital Platform
  • InSight Digital Experience Platform — The Cornerstone of Digital Transformation
    • DXP — Looking Under the Hood
  • Iron Mountain is the Original Data Company
  • Summary

Companies Cited:

  • Iron Mountain
  • Boston Consulting Group

The post RESEARCH PAPER: Digital Transformation Starts with a Digital Experience Platform appeared first on Moor Insights & Strategy.

MI&S Weekly Analyst Insights — Week Ending August 2, 2024 https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-august-2-2024/ Tue, 06 Aug 2024 01:32:30 +0000 https://moorinsightsstrategy.com/?p=41142 MI&S Weekly Analyst Insights — Week Ending August 2, 2024

The post MI&S Weekly Analyst Insights — Week Ending August 2, 2024 appeared first on Moor Insights & Strategy.

MI&S Logo_color

The Moor Insights & Strategy team hopes you had a nice weekend!

Last week, Matt attended SIGGRAPH, and Robert, Mel, and Anshel got an up-close look at Snapdragon’s Manchester United sponsorship while attending the Manchester United game with Qualcomm at Snapdragon Stadium. This week, Will is attending Black Hat in Las Vegas. Next week, Anshel will be heading to Mountain View, California, to attend Google’s Made by Google event, and Jason will be attending (virtually) AI Innovation through AWS Workplace.

Last week, our MI&S team published 16 deliverables:

2 Forbes Insight Columns

3 MI&S Research Notes

5 MI&S Blog Posts

6 Podcasts

Over the last week, our analysts have been quoted in multiple top-tier publications, including Yahoo! Finance, Barron's, Morningstar, Tom's Guide, Wired, The Washington Post, and more. Patrick Moorhead appeared on CNBC's 'Closing Bell Overtime' to discuss Intel earnings. In total, the press quoted MI&S analysts no fewer than a dozen times with our thoughts on Intel, CrowdStrike, Microsoft, Qualcomm, and tech earnings.

MI&S Quick Insights

Apple Intelligence’s launch in Beta has been met with mixed reactions as people realize how limited the release truly is. The reactions here are similar to many of the earlier instances where AI hype quickly met reality, especially given that a huge component of Apple’s user experience, Siri, is not coming with Apple Intelligence until next year.

Investor discontent over CapEx — Many tech firms announced earnings this week, and overall performance was good to excellent, depending on the company and the metrics being watched. Interestingly, one area that came up again and again was the amount of future capital investment earmarked for AI workloads. While Alphabet elected to hold steady, other firms including Microsoft and AWS are committing to increase their rate of investment.

It’s a tricky proposition from both a strategic and investor point of view. For investors, most tech firms are not sharing a lot of data about AI-specific revenues and expenses. Microsoft does share a bit more financial data than other firms and was able to point to a healthy growth trajectory. However, given that Microsoft is talking about spending potentially more than $50 billion on CapEx next year, the outlay still dwarfs whatever AI revenue growth they have seen to date. AWS took a more holistic view in noting that there was growth across many types of workloads including AI to justify the continued investment. It also did not hurt that AWS said that it now has a $150 billion backlog. So demand for cloud services—including AI—is clearly there.

In spite of that, investors have a right to question when the ROI will happen, given the limited data being reported. On the vendor side, we are seeing what is effectively an arms race to be the biggest player in the next wave of technology. But given the infancy of the AI market, the cost of infrastructure, and the inefficiency of AI models, that race is not only expensive but also open to future cost disruption. This should not be a big surprise, and it’s going to dictate a tense balancing act for a while.

And just like that, foundation models are commodities —  A year ago, everyone was talking about which LLM was superior under what circumstances. And while we still are seeing new features and model sizes on what feels like a weekly basis, we have also seen a breakneck pace toward commoditization. This is being driven by three key factors.

First, there is broader choice and distribution. All major cloud providers acknowledge that there is no one perfect model, so they now host multiple models on their respective clouds. Second is the open-sourcing of some models, which reduces both costs and training hurdles. Like most things open source, the presence of a viable open alternative has a reverberating effect on all vendors, open or closed.

Third is the availability of end-to-end AI stacks that are optimized for certain types of performance. The inclusion of specialized instances on custom-designed chips in clouds is generating new cost-per-performance metrics. So while you may be able to choose any model you want from a cloud provider, those providers are also doing what it takes to optimize the infrastructure to fit the model they would most like you to use, sometimes leading to dramatic pricing differences.

Atlassian State of Developer Experience 2024 research shows some positive signals from devs — Atlassian recently conducted a broad-based developer survey to help better understand the requirements in providing a good developer experience. The report is free and available from Atlassian. It is worth reading, but here are a couple of quick highlights.

First, there is a big disconnect between developers and leadership on how developers could be more productive. While leaders see the hurdles to developer productivity as being about capacity and shifting roles (sins of the present), devs see barriers such as technical debt and poor documentation (sins of the past). Second, despite the disconnect with leadership, there are positive feelings about the future of developer experience. For instance, while many devs are seeing fairly low benefit from AI tools today, they also predict improvements leading to a much bigger impact in the next two years.

Finally, devs are also saying that the developer experience is getting more attention because it is increasingly a factor in developer retention—and attrition. So while things aren’t necessarily great, they are improving. Not surprisingly, all of these findings tie into Atlassian’s value proposition and vision of the future; however, it would be unfair to write off these findings as mere marketing. As I pointed out in my recent piece on agile development, another recent survey pointed to psychological safety as a major factor in developer morale. So, if Atlassian has a solution that can better bridge the gap between devs and leadership, organizations should at least look into it.

AMD’s datacenter quarter impressed. $2.8 billion in revenue represented 115% growth year over year, bolstered by sales of MI300 chips that seemingly far outperformed expectations. The company announced extensive use of the GPU by Microsoft to help power its GPT Turbo and many Copilot services. And as the company announced that the MI300 crossed the $1 billion per quarter mark for the first time, it upped its revenue forecast for the product line from $4 billion to $4.5 billion for 2024.

GPU wasn’t the only big winner in the datacenter segment. EPYC also saw strong double-digit growth in both the cloud and enterprise, with notable wins such as Netflix, Uber, Boeing, Siemens, and Adobe. Further, AMD announced that more than 900 public cloud instances are now available on this CPU. As a nice bonus, more than one-third of the company’s quarterly enterprise bookings for EPYC were new customers.

What is driving this? It’s a symbiotic dynamic between the MI300 and EPYC. While EPYC stands strong on its own, it’s also being deployed along with every GPU being sold into the cloud and into the enterprise—perhaps into enterprise datacenters that were previously Intel customers. The adoption of MI300 for AI and HPC-like workloads, accompanied by EPYC CPUs, will lead to broader adoption of EPYC for general-purpose compute needs, because enterprise IT prefers a single CPU supplier for the sake of easier provisioning and management.

In contrast to AMD’s stellar datacenter quarter, Intel’s struggles are real. While the company has been executing against its “five nodes in four years” strategy, it continues to lose datacenter share across all market segments—hyperscale, enterprise, mid-market, and below. Whereas its competition saw a 115% gain in datacenter revenue, Intel’s DCAI (Xeon CPUs and Gaudi AI accelerators) saw decreases in revenue, operating income, and operating margin year over year.

Some of this is to be expected as Intel continues to claw its way back to performance relevance. However, the pricing tactics Intel has employed to stave off the competition in the enterprise are beginning to stall. This indicates that Intel has lost its lock on enterprise IT—a market resistant to change, and one that AMD had been unable to penetrate until just recently. Once this market’s momentum shifts, it is very difficult to reverse. The next quarter is going to be critical for Intel’s DCAI business segment: Can the company somehow stem the flow with the promise of its next-generation Granite Rapids server processors?

Commvault reported positive fiscal Q1 2025 results, with total revenues of $225 million, up 13% year over year, and ARR growing by 17% to $803 million. Subscription revenue increased 28%, reaching $124 million. The company also repurchased $51.4 million worth of shares. The data management and data protection industry will undoubtedly continue to grow. As global enterprises adopt more SaaS applications and generate more data, the risk of cyberattacks increases, requiring advanced data security technologies.

Dynatrace, an observability and security platform, just celebrated the fifth anniversary of its IPO. Since its market debut in 2019, its stock has increased from $16 to $44. The company has exceeded $1 billion in annual revenue and now has more than 4,000 organizations using its observability capabilities. How much of this can be attributed to the proliferation of SaaS applications, and how much to the features and functionality of the platform? I will analyze the data protection industry in an upcoming article.

Adobe released data on its partnership with the California State University (CSU) system and Adobe Creative campuses. Recognizing the importance of digital literacy and creative skills in today’s workforce, CSU partnered with Adobe to provide students and faculty access to Adobe’s Creative Cloud suite of applications. Through the program, students get the hands-on experience with creative and generative AI tools that employers are increasingly looking for. According to Adobe’s case study, at Fresno State 100% of the students completing the capstone broadcast journalism class in Spring 2023 found jobs immediately after graduation, many working in newsrooms as reporters, producers, or designers. However, the program is not only for creative positions or marketing content. Finance students, for example, use Adobe Audition to produce podcasts that showcase their understanding of financial concepts. This demonstrates—and probably reinforces—their expertise and very likely enhances their communication skills.

Canva has announced its intent to acquire Leonardo.AI, an Australian AI content production platform, aiming to strengthen its in-house AI capabilities and accelerate the development of its AI-powered design tools.

Canva has also expanded its partnership with Getty Images, focusing on enriching Canva’s content library with Getty Images’ high-quality stock photos and ensuring fair compensation for creators and IP rights holders whose work is used in training Canva’s generative AI tools. This collaboration is part of Canva’s $200 million Content Fund.

I’m eager to see how these strategic decisions will impact Canva’s future growth and competitive positioning in the market. The question remains: are these moves sufficient for Canva to compete with Adobe Firefly and Adobe Stock in the enterprise arena?

Integration is a key data management element, and Informatica’s 2nd quarter 2024 results reflect industry trends with its iPaaS solution and Intelligent Data Management. To quote the company’s earnings release:

  • Cloud Subscription ARR increased 37% year-over-year to $703 million
  • Total ARR increased 7.8% year-over-year to $1.67 billion
  • Results within or above all second-quarter 2024 guidance metric ranges

As technology stacks expand with SaaS applications, managing and understanding your data is essential. Solution providers like Informatica offer tools to analyze and maximize the value of your data, ensuring it supports business goals effectively.

HPE moved one step closer to completing its $14 billion acquisition of Juniper Networks when it received unconditional approval from the European Commission, clearing the deal to go ahead in the EU. This development points to the likelihood of the same happening in the United States and the rest of the world. At least in prospect, the combined companies could become a powerful force, covering cloud, data center, and edge connectivity infused with technology from Juniper’s AI-oriented Mist Systems acquisition four years ago.

Blue Yonder closed its acquisition of One Network Enterprises for $839 million, its third acquisition since Q4 2023. This enhances Blue Yonder’s supply chain platform by enabling real-time data sharing across the supply chain. The integration provides AI-powered supply chain assistants and a unified view of inventory and capacity. This acquisition helps businesses respond faster to market changes, reduce costs, and improve service levels, aligning with Blue Yonder’s strategy to create interconnected supply chain ecosystems.

Popular PC shooter Valorant has finally landed on consoles after being a PC exclusive for years; you can now play Valorant on both Xbox and PlayStation 5. This is great for console gamers, who should have the opportunity to play the game, and it’s a great thing for Valorant’s maker, Riot Games, because the PC market has long been saturated.

This week’s update covers two open-source, industry-standard enabling technologies that will influence IoT and edge product development over the next few years: Thread for device networks and WebAssembly for application containers.

Thread is now ten years old, and it’s time for consumer product companies to take a fresh look at this home automation network standard. Thread is an Internet Protocol-based replacement for Zigbee, Z-Wave, and other specialized low-power, low-bandwidth connectivity schemes. Thread has broad industry support. In fact, it’s probably already in your home, built into products from Amazon, Apple, Google, and many other manufacturers.

These big players adopted Thread ahead of mainstream consumer demand for four reasons. First, the Matter interoperability standard, founded by many of the same big companies sponsoring Thread, supports only Wi-Fi and Thread wireless networks. Second, Thread is an IP network, so the same device commands that work over Wi-Fi also work over Thread without translation. Third, Thread hardware and software are available off-the-shelf from top-tier silicon providers, so the technology is mature and shipping in volume. Fourth, Thread products are already broadly deployed, and the product ecosystem is snowballing. The device connectivity wars are over. Matter and Thread won. From now on, consumer product developers should use Wi-Fi and Thread for wireless LAN device connectivity—Wi-Fi for high bandwidth and Thread for low power.

WebAssembly (Wasm) is a lightweight application container technology that can potentially change how developers build IoT applications. Wasm programmers compile code written in C++, Rust, or other languages into binary instruction code that runs on PC and smartphone browsers at nearly native speed. It’s a faster alternative to JavaScript, and can run alongside it.

But here’s the good news for IoT. The client-side runtime can be standalone rather than browser-hosted. In this case, WebAssembly is a lightweight way to deliver applications to IoT devices, even battery-powered products. This approach offers significant performance advantages over Docker or Kubernetes containers, potentially allowing IoT platforms to support a wide variety of applications. Instead of building custom hardware and software stacks for each IoT product, developers can use off-the-shelf platforms common to many products and customize only the container-based app software. The process would be comparable to building and deploying phone apps.
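
To make the standalone-runtime idea concrete, here is a minimal sketch of embedding a Wasm module outside any browser, assuming the wasmtime runtime's Python bindings (the toy module, the exported "add" function, and the values are purely illustrative; a constrained IoT device would more likely embed a C or Rust runtime, but the load, instantiate, and call flow is the same):

    # pip install wasmtime  (assumes the wasmtime Python bindings; API details may vary by version)
    from wasmtime import Store, Module, Instance

    # A toy module in WebAssembly text format. Real IoT apps would ship a
    # compiled .wasm binary built from C++ or Rust instead of inline text.
    WAT = """
    (module
      (func (export "add") (param i32 i32) (result i32)
        local.get 0
        local.get 1
        i32.add))
    """

    store = Store()                         # isolated execution state, the sandbox "container"
    module = Module(store.engine, WAT)      # validate and compile the portable bytecode
    instance = Instance(store, module, [])  # instantiate with no imports
    add = instance.exports(store)["add"]    # look up the exported function
    print(add(store, 2, 3))                 # runs sandboxed at near-native speed; prints 5

Because the .wasm artifact is architecture-independent bytecode, the same module could, in principle, move between an x86 gateway and an Arm-based sensor hub without recompilation, which is exactly the property that makes Wasm attractive as a lightweight application container for IoT.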

Containerized apps accelerate IoT product development, improve security, simplify long-term support, and reduce costs. I’m watching Atym, a startup that’s sponsoring Ocre, an LF Edge open-source project that aims to realize this vision. Commercialization is still a long way off, but industrial IoT product architects should monitor the project—and maybe contribute.

Qualcomm’s new Snapdragon 4s Gen 2 is the chip that the company claims will enable 5G in devices under $100 and could help expand 5G access to 2.8 billion people. I believe that this is good for both consumers and operators because it will push the industry further towards 5G standalone (5G SA), which is the version of 5G that will deliver the use cases people have been expecting from 5G.

Ookla’s new 5G SA report shows the significant performance improvements that 5G SA can deliver, as well as which regions and carriers are doing well versus which ones are lagging behind. India and China are leading the charge with the U.S. close behind thanks to T-Mobile, but Europe is lagging, as are places that don’t really have 5G yet, including much of Africa and Latin America.

RingCentral reported strong second-quarter 2024 results, exceeding expectations with a 10% year-over-year increase in subscription revenue and total revenue. This growth reflects the company’s ongoing focus on cloud communications and AI innovation. The company remains optimistic about its future prospects, highlighting a solid pipeline and continued investments in strategic areas. A couple of highlights from the report: RingCentral’s total contact center business stands at $390 million of ARR, up 90% YoY, and Cox Communications has chosen RingCentral as its partner for upcoming UCaaS and CCaaS solutions, expected to be rolled out later this year. RingCentral continues to perform in a very competitive environment, and its investments in AI look to be paying off.

Microsoft’s Q4 2024 earnings show solid early adoption of 365 Copilot, including a surge in seats and large deployments. This is particularly evident in CX and contact center, where Microsoft sees substantial cost savings due to AI.

On the earnings call, Microsoft CEO Satya Nadella said he believes that Copilot for Microsoft 365 is transforming how knowledge and frontline workers approach their tasks, similar to the impact of GitHub Copilot on software engineering. He sees Copilot enabling a new design system for work, where tasks can be broken down into steps such as issue identification, planning, specification, and execution. Businesses have adopted this Copilot workflow approach across various functions, including marketing, finance, sales, and customer service.

Also notable in this quarter’s earnings was the skyrocketing adoption of Microsoft Teams Premium, which now has over 3 million seats in use, a YoY increase of roughly 400%. Major organizations such as Eli Lilly and Ford are opting for the advanced features that Teams Premium offers, including end-to-end encryption and real-time translation.

There have been lots of discussions and memes around Logitech’s desire to turn the mouse into a subscription product. This has created a lot of pushback from the industry, which has traditionally monetized on hardware but not on software. That said, I do believe that companies such as SteelSeries, which makes gaming peripherals, are doing a better job of monetizing on both fronts and creating added value rather than taking away value just to resell it.

It is becoming apparent that the flubbed CrowdStrike EDR update that triggered the feared Microsoft Windows blue screen of death (which I analyzed here) could equate to billions of dollars in lost revenue for many of its customers. Delta Air Lines’ CEO recently disclosed that his company suffered $500 million in operational costs because of the CrowdStrike fiasco. From my perspective, the losses for Delta and other businesses could be far greater than that in terms of reputation damage, lost customers, and future revenue. I believe that CrowdStrike can recover, but it will be incumbent on the embattled cybersecurity solution provider to be transparent in its efforts to shore up deficiencies within its developer operations.

It was a busy week for earnings, with Arm, Qualcomm, AMD, Intel, Apple, and T-Mobile all issuing reports. Arm’s earnings beat expectations while its guidance aligned with investors’ expectations, yet the stock still slipped. Qualcomm’s earnings showed some of the same strength in the mobile market that Arm’s did, but with additional strength in automotive as well as improving conditions in IoT, which created a beat-beat-raise scenario that jolted Qualcomm’s stock upward. AMD also had strong earnings, especially with the consumer business showing 50% higher revenue and datacenters continuing to be a bright spot with Instinct GPUs for AI.

Intel’s earnings were disappointing, even though its consumer business keeps the company afloat. Intel has a strong future ahead of it, but laying off 15% of its workforce—as it just announced—makes this a tough stretch in that journey. Meanwhile, Intel is giving its 13th and 14th Gen desktop processor customers an additional two years of warranty as a result of the increased failure rates of those CPUs due to issues that have been identified recently by PC OEMs and reviewers.

T-Mobile had very strong earnings, adding 777,000 net postpaid customers and generating considerable profits thanks to its leading 5G network.

NVIDIA is facing two antitrust probes from the U.S. Department of Justice connected to its market dominance, which should surprise nobody considering the position that it holds. That said, it will be interesting to find out whether there have been any specific actions or tactics the company has taken to maintain its leadership position that might get it in hot water.

T-Mobile has launched its “Friday Night 5G Lights” program, which aims to bring enhanced 5G network infrastructure to high school football fields across rural America. This initiative complements T-Mobile’s efforts to improve connectivity in large venues such as MLB stadiums (something I just wrote about in connection with the MLB All-Star Game). I appreciate the company’s efforts to ensure high-quality mobile experiences for customers in both urban and rural areas. This initiative will also help establish brand affinity with a high-school demographic likely to be in the market for a mobile carrier soon—whether as first-time buyers or as they come off their parents’ plans. I am also a big proponent of reliable connectivity in places where increased capacity would be needed in the case of an emergency with a large crowd.

Qualcomm showed off its front-of-jersey sponsorship with Manchester United last week as the Snapdragon Stadium hosted the Snapdragon Cup featuring Manchester United vs. Real Betis. I had insightful conversations with Qualcomm CEO Cristiano R. Amon and CMO Don McGuire about the vast potential of this partnership, including how it could revolutionize the fan experience through enhanced connectivity and innovative digital offerings beyond the game itself.

While it’s challenging to quantify the precise ROI of such a high-profile sponsorship, the strategic alignment between Qualcomm and Manchester United is undeniable. Man U.’s global fanbase aligns exceptionally well with key Snapdragon markets, presenting a unique opportunity to showcase Qualcomm’s tech worldwide. I’m eager to see how Qualcomm continues to leverage this partnership in ways that are tangibly tied to the bottom line.

Last week, Game Time Tech (Melody Brue, Robert Kramer, and Anshel Sag) hit the road to take a closer look at how Qualcomm is influencing the world of sports technology.

Visit to Petco Park: Qualcomm has enhanced Petco Park’s operations and fan experience. Snapdragon-powered sensors collect real-time data to improve efficiency. Fans enjoy an AR experience in the Padres Hall of Fame and benefit from high-speed Wi-Fi throughout the park, enabling them to stay connected, share updates, and access real-time game statistics, creating a more connected and engaging experience.

Visit to Snapdragon Stadium: Qualcomm hosted a great event to kick off its historic Snapdragon sponsorship of the front of Manchester United’s jersey. Effective sponsorships go far beyond logo placement, integrating technology, data, and experiences to create meaningful connections. Data is leveraged to understand fans better and provide valuable information that enhances the overall fan experience. Qualcomm’s use of its Snapdragon technology demonstrates how sponsors can integrate their products directly into the event experience, transforming venues into smart stadiums.

Coming soon: Our podcast with Qualcomm CMO Don McGuire to discuss Snapdragon sponsorship of the Man United jersey.

I believe that the opportunity for mobile network programmability may finally come to fruition. Ericsson may have been ahead of the market, given its near-complete writedown of its $6 billion acquisition of Vonage, but rumors of a consortium of communications service providers backing the Vonage platform could breathe new life into the effort. Nokia also offers its network-as-code platform and recently announced an initiative with a new ecosystem partner to focus on application development that leverages the power of 5G for healthcare and utilities. It could ultimately turn into a two-horse race, but competition in the technology industry typically fosters innovation.

Citations

Chips & AI / Patrick Moorhead / Yahoo Finance
Patrick Moorhead shares his insight on developments in the tech sector, AI, and what it all means for Big Tech during this earnings season.

CrowdStrike / Anshel Sag / MarketWatch
Anshel Sag comments on CrowdStrike’s potential reputational damage despite its limited financial liability after the huge outage it caused.

CrowdStrike / Anshel Sag / Morningstar
Anshel Sag talks about how the CrowdStrike outage tarnished its reputation. 

Microsoft / Patrick Moorhead / Fierce Network
Patrick Moorhead believes there will be an “industry-wide computing challenge for AI in the next six months.”

Qualcomm / Anshel Sag / TechNewsWorld
Anshel Sag believes that having more affordable devices in the market will motivate operators to deploy standalone 5G.

Intel / Patrick Moorhead / The Motley Fool
Patrick Moorhead says Intel is prioritizing batches of Meteor Lake processors at the cost of slowing down everything else and making the overall fab less efficient.

Intel / Patrick Moorhead / Fierce Network
Patrick Moorhead says Intel was forced to make cuts because of lower demand for the second half of 2024 and into 2025.

Intel / Patrick Moorhead / Barron’s
Patrick Moorhead said Intel appears to have a yield issue, meaning it is producing more defective chips than expected.

Intel / Patrick Moorhead / The Washington Post
Patrick Moorhead says Intel’s layoffs are bigger than he expected and will be targeted rather than spread evenly throughout the company.

Intel / Patrick Moorhead / CNBC
Patrick Moorhead joined ‘Closing Bell Overtime’ on August 1 to discuss Intel earnings; he said that Intel’s forecast is ‘most concerning’ to him.

Intel / Patrick Moorhead / Tom’s Guide
Patrick Moorhead says that yield issues for Meteor Lake processors apparently dragged down Intel’s gross margins.

Intel / Patrick Moorhead / Wired
Patrick Moorhead believes it is a positive sign that Intel’s proposed layoffs appear to be targeted and not across the board.

New Gear or Software We Are Using and Testing

  • Motorola Razr+ (2024) — In T-Mobile Exclusive Hot Pink (Anshel Sag)

Events MI&S Plans on Attending In-Person or Virtually (New)

Unless otherwise noted, our analysts will be attending the following events in person.

  • Black Hat, August 3-8, Las Vegas (Will Townsend)
  • AI Innovation through AWS Workplace, August 12 — virtual (Jason Andersen)
  • Google’s Made By Google Event, August 13, Mountain View, CA (Anshel Sag)
  • VMware Explore, August 26-29, Las Vegas (Matt Kimball, Will Townsend)
  • GlobalFoundries Analyst Event, August 26-28 (Matt Kimball)
  • IBM SAP Analyst and Advisory Services Day & US Open, August 29, New York (Robert Kramer)
  • IFA Berlin, September 6-11, Berlin, Germany (Anshel Sag) 
  • Oracle Cloud World, September 9-12, Las Vegas (Melody Brue, Robert Kramer)
  • Connected Britain, September 11-12, London (Will Townsend)
  • JFrog swampUP 24, September 9-11, Austin (Jason Andersen)
  • Salesforce Dreamforce, September 17-19, San Francisco (Robert Kramer)
  • Intel Innovation, September 23-26 (Matt Kimball)
  • Meta Connect, September 25, San Jose (Anshel Sag)
  • Verint Engage, September 23-25, Orlando (Melody Brue)
  • Infor Annual Summit, September 30-October 2, Las Vegas (Robert Kramer)
  • LogicMonitor, Austin, October 2-4 (Robert Kramer)
  • Teradata, October 7-10, Los Angeles (Robert Kramer)
  • Zoomtopia, San Jose, October 8-9 (Melody Brue)
  • MWC Americas, October 8-10, Las Vegas (Will Townsend)
  • AWS GenAI Summit, October 9-10, Seattle (Jason Andersen, Robert Kramer)
  • AdobeMAX, October 14-16, Miami (Melody Brue)
  • Lenovo Global Analyst Summit & Tech World, October 14-17, Bellevue, WA (Matt Kimball, Paul Smith-Goodson, Anshel Sag)
  • IBM Analyst Summit, October 16-18, New York City (Matt Kimball, Robert Kramer)
  • Snapdragon Summit, Maui, October 20-24 (Will Townsend)
  • WebexOne, October 21-24, Miami (Melody Brue)
  • Cisco Partner Summit LA October 28–30, 2024 (Robert Kramer)
  • SAP SuccessConnect, October 28-30 – virtual (Melody Brue)
  • GitHub Universe, October 29-30, San Francisco (Jason Andersen)
  • 5G Techritory, October 30-31, Riga (Will Townsend)
  • Dell Tech Analyst Summit, early November, Austin (Matt Kimball)
  • Apptio TBM Conference, November 4-5, San Diego (Jason Andersen)
  • IBM, November 6-8, New York City (Paul Smith-Goodson)
  • Fyuz, November 11-13, Dublin (Will Townsend)
  • Veeam Analyst Summit, November 11-13, Scottsdale, AZ (Robert Kramer)
  • Box Analyst Summit, November 12-13, San Francisco (Melody Brue)
  • Microsoft Ignite, November 18-22, Chicago (Robert Kramer – virtual, Will Townsend)
  • Super Computing, November 18-22, Atlanta (Matt Kimball)
  • AWS re:Invent, December 2-6, Las Vegas, (Robert Kramer, Will Townsend, Jason Andersen, Paul Smith-Goodson)
  • Marvell Industry Analyst Day, December 10, Santa Clara (Matt Kimball)

Subscribe

Want to talk to the team? Get in touch here!

The post MI&S Weekly Analyst Insights — Week Ending August 2, 2024 appeared first on Moor Insights & Strategy.

Ep.27 of the MI&S Datacenter Podcast: Talking CrowdStrike, AI, AMD, HPE & Juniper, Quantinuum, Arm https://moorinsightsstrategy.com/data-center-podcast/ep-27-of-the-mis-datacenter-podcast-talking-crowdstrike-ai-amd-hpe-juniper-quantinuum-arm/ Mon, 05 Aug 2024 19:35:15 +0000 https://moorinsightsstrategy.com/?post_type=data_center&p=41366 The Datacenter team talks CrowdStrike, AI, AMD, HPE & Juniper, Quantinuum, Arm

The post Ep.27 of the MI&S Datacenter Podcast: Talking CrowdStrike, AI, AMD, HPE & Juniper, Quantinuum, Arm appeared first on Moor Insights & Strategy.

Welcome to this week’s edition of the “MI&S Datacenter Podcast.” I’m Patrick Moorhead with Moor Insights & Strategy, and I am joined by co-hosts Matt, Will, and Paul. We analyze the week’s top datacenter and datacenter edge news. We talk CrowdStrike, AI, AMD, HPE & Juniper, Quantinuum, and more.

Watch the video here:

Listen to the audio here:

3:00 CrowdStrike IT Outage Post Mortem
11:37 AI Immunity Warriors
18:07 AMD Crushes The Datacenter With EPYC and MI300
26:11 HPE Achieves EU Unconditional Regulatory Approval For Juniper Acquisition
33:37 A New Quantum Toolbox
37:49 The Secret Weapon Of The Datacenter

CrowdStrike IT Outage Post Mortem

https://x.com/WillTownTech/status/1818628352749580549

AI Immunity Warriors

https://www.news-medical.net/news/20240731/AI-reprograms-glioblastoma-cells-into-dendritic-cells-for-cancer-immunotherapy.aspx

AMD Crushes The Datacenter With EPYC and MI300

https://www.linkedin.com/feed/update/urn:li:activity:7224395591462612992/

HPE Achieves EU Unconditional Regulatory Approval For Juniper Acquisition

https://www.networkworld.com/article/3480325/eu-clears-hpes-14-billion-juniper-acquisition.html

A New Quantum Toolbox

https://www.quantinuum.com/news/introducing-quantinuum-nexus-our-all-in-one-quantum-computing-platform

The Secret Weapon Of The Datacenter

https://moorinsightsstrategy.com/research-notes/is-arm-neoverse-the-datacenters-secret-weapon/

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.

The post Ep.27 of the MI&S Datacenter Podcast: Talking CrowdStrike, AI, AMD, HPE & Juniper, Quantinuum, Arm appeared first on Moor Insights & Strategy.

Mistral NeMo: Analyzing Nvidia’s Broad Model Support https://moorinsightsstrategy.com/mistral-nemo-analyzing-nvidias-broad-model-support/ Mon, 05 Aug 2024 18:47:16 +0000 https://moorinsightsstrategy.com/?p=41123 The promise of AI in the enterprise is huge—as in, unprecedentedly huge. The speed at which a company can get from concept to value with AI is unmatched. This is why, despite its perceived costs and complexity, AI and especially generative AI are a top priority for virtually every organization. It’s also why the market […]

The post Mistral NeMo: Analyzing Nvidia’s Broad Model Support appeared first on Moor Insights & Strategy.

Nvidia and Mistral AI have partnered to build the Mistral NeMo model, which aims to make AI deployments more efficient and more effective for businesses of all sizes.

The promise of AI in the enterprise is huge—as in, unprecedentedly huge. The speed at which a company can get from concept to value with AI is unmatched. This is why, despite its perceived costs and complexity, AI and especially generative AI are a top priority for virtually every organization. It’s also why the market has witnessed AI companies emerge from everywhere in an attempt to deliver easy AI solutions that can meet the needs of businesses, both large and small, in their efforts to fully maximize AI’s potential.

In this spirit of operationalizing AI, tech giant Nvidia has focused on delivering an end-to-end experience by addressing this potential along with the vectors of cost, complexity and time to implementation. For obvious reasons, Nvidia is thought of as a semiconductor company, but in this context it’s important to understand that its dominant position in AI also relies on its deep expertise in the software needed to implement AI. Nvidia NeMo is the company’s response to these challenges: a platform that enables developers to quickly bring data and large language models together and into the enterprise.

As part of enabling the AI ecosystem, Nvidia has just announced a partnership with Mistral AI, a popular LLM provider, to introduce the Mistral NeMo language model. What is this partnership, and how does it benefit enterprise IT? I’ll unpack these questions and more in this article.

Digging Deeper On Mistral NeMo Technical Details

As part of the Nvidia-Mistral partnership, the companies worked together to train and deliver Mistral NeMo, a 12-billion-parameter language model in an FP-8 data format for accuracy, performance and portability. This low-precision format is extremely useful in that it enables Mistral NeMo to fit into the memory of an Nvidia GPU. Further, this FP-8 format is critical to using the Mistral NeMo language model across various use cases in the enterprise.
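
As a rough, back-of-the-envelope illustration of why the low-precision format matters (my numbers, not Nvidia's or Mistral's, and ignoring the KV cache, activations, and runtime overhead), the weight footprint alone works out as follows:

    # Illustrative weight-memory estimate for a 12-billion-parameter model.
    # Assumes 1 byte per parameter for FP8 and 2 bytes for FP16; real-world
    # memory use is higher once the KV cache and activations are included.
    params = 12e9

    fp16_gb = params * 2 / 1e9   # ~24 GB of weights at FP16
    fp8_gb = params * 1 / 1e9    # ~12 GB of weights at FP8

    print(f"FP16 weights: ~{fp16_gb:.0f} GB, FP8 weights: ~{fp8_gb:.0f} GB")

At roughly 12 GB of weights, FP-8 leaves ample headroom on a single 80 GB H100 and brings the model within reach of the 24 GB GeForce RTX 4090 and 48 GB L40S class of GPUs mentioned later in this article.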

Mistral NeMo features a 128,000-token context length, which enables a greater level of coherency, contextualization and accuracy. Consider a chatbot that provides online service. The 128,000-token length enables a longer, more complete interaction between customer and company. Or imagine an in-house security application that manages access to application data based on a user’s privileged access control. Mistral NeMo’s context length enables the complete dataset to be displayed in an automated and complete fashion.

The 12-billion-parameter size is worth noting as it speaks to something critical to many IT organizations: data locality. While enterprise organizations require the power of AI and GenAI to drive business operations, several considerations including cost, performance, risk and regulatory constraints prevent them from doing this on the cloud. These considerations are why most enterprise data sits on-premises even decades after the cloud has been embraced.

Many organizations prefer a deployment scenario that involves training a model with company data and then inferencing across the enterprise. Mistral NeMo’s size enables this without substantial infrastructure costs (a 12-billion-parameter model can run efficiently on a laptop). Combined with its FP-8 format, this model size enables Mistral NeMo to run anywhere in the enterprise—from an access control point to the edge. I believe this portability and scalability will make the model quite attractive to many organizations.

Mistral NeMo was trained on the Nvidia DGX Cloud AI platform, utilizing Megatron-LM running on 3,072 of Nvidia’s H100 80GB Tensor Core GPUs. Megatron-LM, part of the NeMo platform, is an advanced model parallelism technique designed for scaling large language models. It effectively reduces training times by splitting computations across GPUs. In addition to speeding up training, Megatron-LM trains models for performance, accuracy and scalability. This is important when considering the broad use of this LLM within an organization in terms of function, language and deployment model.
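
The core idea behind this style of model parallelism is easy to illustrate, even though Megatron-LM's actual implementation is far more sophisticated: split a layer's weight matrix across devices, let each device compute its shard, then gather the partial results. The NumPy sketch below is only a conceptual illustration of a column-parallel split, not Megatron-LM code:

    import numpy as np

    # Conceptual sketch of tensor (model) parallelism: a linear layer y = x @ W
    # is split column-wise across two "devices" so that neither one has to hold
    # the full weight matrix. Megatron-LM applies this idea, plus row-parallel
    # splits and pipeline stages, across thousands of GPUs.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 1024))     # a small batch of activations
    W = rng.standard_normal((1024, 2048))  # the full weight matrix

    W0, W1 = np.hsplit(W, 2)               # shard the columns: each "GPU" keeps half

    y0 = x @ W0                            # partial result on device 0
    y1 = x @ W1                            # partial result on device 1
    y = np.concatenate([y0, y1], axis=1)   # the all-gather step reassembles the output

    assert np.allclose(y, x @ W)           # identical to the unsplit computation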

It’s All About The Inference

When it comes to AI, the real value is realized in inferencing—in other words, where AI is operationalized in the business. This could be through a chatbot that can seamlessly and accurately support customers from around the globe in real time. Or it could be through a security mechanism that understands a healthcare worker’s privileged access level and allows them to see only the patient data that is relevant to their function.

In response, Mistral NeMo has been curated to deliver enterprise readiness more completely, more easily, and more quickly. The Mistral and Nvidia teams utilized Nvidia TensorRT-LLM to optimize Mistral NeMo for real-time inferencing and thus ensure the best possible performance.

While it may seem obvious, the collaborative focus on ensuring the best, most scalable performance across any deployment scenario speaks to the understanding both companies seem to have around enterprise deployments. That is, both companies understand that Mistral NeMo will be deployed across servers, workstations, edge devices and even client devices to leverage AI fully. In any AI deployment like this, models tuned with company data have to meet stringent requirements around scalable performance. And this is precisely what Mistral NeMo does. In line with this, Mistral NeMo is packaged as an Nvidia NIM inference microservice, which makes it straightforward to deploy AI models on any Nvidia-accelerated computing platform.
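
For a sense of what "straightforward to deploy" can look like in practice, here is a hedged sketch of querying a locally hosted NIM container from an application. NIM microservices are generally presented as exposing an OpenAI-compatible HTTP interface, but the port, model identifier, and payload fields below are illustrative assumptions rather than a verified reference; consult the NIM documentation for a real deployment:

    import requests

    # Hypothetical request to a locally running Mistral NeMo NIM container.
    # The endpoint path, port, and model name are placeholders for illustration.
    url = "http://localhost:8000/v1/chat/completions"
    payload = {
        "model": "mistral-nemo",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize our open support tickets from this week."}
        ],
        "max_tokens": 256,
    }

    resp = requests.post(url, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

The point of the sketch is the shape of the integration: once the model is packaged as a microservice, application teams consume it over a standard HTTP API instead of managing GPUs, model weights, and inference runtimes themselves.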

The Real Enterprise Value

I started this analysis by noting the enterprise AI challenges of cost and complexity. Security is also an ever-present challenge for enterprises, and AI can create another attack vector that organizations must defend. With these noted, I see some obvious benefits that Mistral NeMo and NeMo as a framework can deliver for organizations.

  • Operational Agility — With the Nvidia NeMo development platform, enterprises can quickly and easily build and customize AI models that drive efficiency. Whether improving internal processes through intelligent automation or developing new AI-driven products and services, NeMo can be the tool that makes AI real.
  • Operational Efficiency — Mistral NeMo’s high accuracy and performance can maximize the efficiency of enterprise applications. For example, customer service chatbots powered by Mistral NeMo can handle complex, multi-turn conversations, providing precise and contextually relevant responses. This reduces the need for human intervention, streamlining customer support workflows and improving response times.
  • Multilingual Capabilities — One of Mistral’s standout features is its depth of multilingual support. This support is critical in a world where the smallest of organizations can have customers from around the globe. In what seems like a recurring theme, Mistral NeMo enables organizations to achieve this level of support quickly, easily and cost-effectively.
  • Security and Compliance — The most valuable data is often the most sensitive. Many enterprises operate under strict security and compliance regulations. Mistral NeMo, deployed via the Nvidia AI Enterprise framework, ensures enterprise-grade security and support. This includes dedicated feature branches, rigorous validation processes and comprehensive service-level agreement support.
  • Cost-Effective Scalability — The ability to run Mistral NeMo on cost-effective hardware like the Nvidia L40S, Nvidia GeForce RTX 4090 or RTX 4500 GPUs makes it accessible to organizations of all sizes.

Closing Thoughts

As an ex-IT executive, I understand the challenge of adopting new technologies or aligning with technology trends. It is costly and complex and usually exposes a skills gap within an organization. As an analyst who speaks with many former colleagues and clients on a daily basis, I believe that AI is perhaps the biggest technology challenge enterprise IT organizations have ever faced.

Nvidia continues to build its AI support with partnerships like the one with Mistral by making AI frictionless for any organization, whether it’s a large government agency or a tiny start-up looking to create differentiated solutions. This is demonstrated by what the company has done in terms of enabling the AI ecosystem, from hardware to tools to frameworks to software.

The collaboration between Nvidia and Mistral AI is significant. Mistral NeMo can become a critical element of an enterprise’s AI strategy because of its scalability, cost and ease of integration into the enterprise workflows and applications that are critical for transformation.

While I expect this partnership to deliver real value to organizations of all sizes, I’ll especially keep an eye on the adoption of Mistral NeMo across the small-enterprise market segment, where I believe the AI opportunity and challenge is perhaps the greatest.

The post Mistral NeMo: Analyzing Nvidia’s Broad Model Support appeared first on Moor Insights & Strategy.
