Melody Brue
https://moorinsightsstrategy.com/author/melody-brue/
MI&S offers unparalleled advisory and insights to businesses navigating the complex technology industry landscape.
MI&S Weekly Analyst Insights — Week Ending January 24, 2025
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-24-2025/
Published Tue, 28 Jan 2025. MI&S Weekly Analyst Insights — Week Ending January 24, 2025. A wrap-up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
I’m not surprised that 2025 has started right where 2024 left off, with AI dominating conversations across tech. The Trump administration has only added to this with the announcement of the $500 billion Project Stargate for AI, which our analysts Paul Smith-Goodson, Matt Kimball, and Will Townsend evaluate from different angles in this week’s updates. Look for more about Project Stargate from us in the days to come.
Cisco CEO Chuck Robbins at the opening session of the Cisco AI Summit (Photo by Will Townsend)
The team published a lot of research last week and did some travel as well. While I was in Davos, Anshel attended Samsung Galaxy Unpacked in San Jose and the MIT Reality Hack in Boston. This week, Robert is in Las Vegas for Acumatica Summit and then NYC for Microsoft’s AI Tour. February, March and April are already shaping up to be busy travel months for the MI&S team. Look for our thoughts on these events in upcoming installments of the MI&S Weekly Analyst Insights.
This past week, MI&S analysts have been quoted in top-tier publications such as Network World, Yahoo Finance, and Venture Beat with our thoughts on Databricks, Intel, Samsung, and Starlink. Robert was a guest on the WBSRocks Analysts Gone Wild Podcast to discuss enterprise software.
When I was spending time with a client last week, it really stood out to me how far we’ve come lately in clarifying AI offerings. Thanks to the breakneck pace of AI development over the past couple of years, the fundamentals of product management and product marketing have sometimes been deprioritized. For example, plenty of product messaging and feature descriptions have been released while still pitched at a notoriously high level across the board. But that is starting to change. I take this as a reminder that early adopters are far more tolerant of a product’s rough edges than the general market. It’s a positive sign that we are now seeing more maturity and wider adoption of enterprise AI software products. I believe it’s also a sign that things will slow down somewhat as pilots and prototypes move toward production.
I have been spending more time with NVIDIA agentic blueprints, and I can say that the detail and effort taken to document how to get the blueprints up and running is pretty impressive. This also stands out because these agents can be deployed on-premises or in a cloud. By contrast, most agentic efforts so far have been limited to a specific cloud or a SaaS platform. I am hoping that this level of deployment choice is a sign of things to come, rather than an exception.
2025 has started off with a lot of AI investing. Whether it’s ServiceNow’s acquisition of Cuein, new VC rounds at AI startups, or even the massive commitment from the Trump administration for U.S. AI data centers, it seems like investors are finally getting off the sidelines. That’s a good thing in general, but it will signal a shift in priorities for existing product teams. If you are a developer, I would expect more focus on ease of use, consumability, and samples versus net-new innovations and APIs.
Lastly, while this news is a bit old, I did want to address the hoopla that was made when Satya Nadella of Microsoft made his comments about the future of SaaS. It seems that in the tech world there is nothing we like more than declaring things dead. However . . . that’s not what he said. What will really happen is that the business-logic and user-experience layers of SaaS will be massively changed by agents. But the overall value proposition of SaaS platforms will likely remain intact.
In the opening days of his new term, President Trump announced a massive $500 billion AI project called Stargate. Supercomputers and AI datacenters will be built by key partners that include OpenAI, Oracle, SoftBank, and MGX. The project will use technology created by Arm, Microsoft, NVIDIA, Oracle, and OpenAI. SoftBank is the lead financial manager, OpenAI will handle model development and training, and Oracle will manage the data aspect. The project’s objective is to maintain U.S. leadership in AI and to create advanced AI in the form of artificial general intelligence. AGI would be able to perform a wide array of tasks with human-like intelligence and potentially revolutionize fields like materials science, medicine, and environmental science.
With an initial investment of $100 billion, construction has already begun in Abilene, Texas, where Microsoft is building an AI supercomputer. The investment is slated to ramp up to $500 billion by 2029. The plan is to establish 10 datacenters of 500,000 square feet each, with intentions to expand to another 10 across the U.S. once further site evaluations are complete. This project should create significant economic and security benefits for the U.S., especially because it emphasizes national and military security, aiming to enhance capabilities in data analysis, surveillance, and cybersecurity to safeguard against strategic threats.
To truly evaluate the project, we need more details. It appears that Microsoft initially planned to build a supercomputer exclusively for OpenAI with a $100 billion price tag, and that plan morphed into the much larger $500 billion national plan complete with supercomputer and multiple datacenters. We will need more information to fully understand the plans, how they evolved, and how they are being implemented.
Brightcove is smartly leveraging Amazon Q to address the complex technical queries from its global client base. (If you need a refresher on Amazon Q, take a look at this writeup we did last year.) By using Amazon Q, Brightcove aims to empower its support team to reduce research time and significantly improve the customer service experience, particularly for intricate issues like video embedding.
While acknowledging the potential of generative AI, Brightcove wisely emphasizes a cautious approach with rigorous testing and expert scrutiny to ensure accuracy and build trust. This focus is increasingly important in today’s AI landscape, where many companies get caught up chasing cost cuts and automation. Brightcove seems to recognize that the true potential of generative AI often lies in augmenting human capabilities and fostering deeper customer understanding.
Equipping its support team with Amazon Q will likely speed up response times and enable more effective problem-solving and stronger customer relationships. This is a strong way to leverage AI for a competitive edge. Brightcove’s strategy, with its emphasis on accuracy and human-centered implementation, highlights how accuracy, trust, and scalability can be key to maximizing the benefits of this technology to improve CX.
Adding on to my colleague’s contributions about Project Stargate, here are a few of my thoughts:
It didn’t take long after this historic announcement for the questions to arise. If Stargate is going to be an AI venture, what exactly is the product or service being offered? Is it an AI cloud? Purely an R&D platform? Ten separate 500,000-square-foot datacenters racked with AI-specific infrastructure sounds an awful lot like a cloud to me, but maybe it isn’t.
Is this more about creating 100,000 high-paying jobs, with the understanding that the market will find use for these datacenters in quick order? While Oracle’s Larry Ellison spoke at a high level about being able to create cancer therapies and vaccines, I’m a little confused about how this plays out specifically.
While the investment of $500 billion over five years is incredible, what is the projected time until the first customer, partner, user, or consumer is actually using Stargate? It seems likely that the ROI on this investment will be a bit further out.
While we know that Oracle, OpenAI, and SoftBank are working in partnership with NVIDIA and Microsoft — what does this environment actually look like?
What does Stargate mean for the cloud market? Anything? Is Stargate reserved only for the largest of large use cases that would typically require an on-premises cluster? Or is the net being spread wider to enable a real incubation across the spectrum — from the largest-of-large to the smallest-of-small companies?
I am a big fan of the government recognizing the need for the United States to stay far ahead of its competitors in any area of technology, and certainly AI is hugely important. As a country, we are already investing far more than the rest of the world combined. However, as a person who is inherently skeptical of anything the U.S. government does in the longer term, I would like to understand better the how, why, what, when, and where of Project Stargate.
When HPE launched silicon root of trust back in 2017, it was a game-changer in the server market. By examining and responding to the millions of lines of code that execute before a server even boots an operating system, the company provided what was then the most secure server in the industry. And by integrating this with its Integrated Lights-Out (iLO) management, HPE created servers that could not only detect malware at the lowest levels, but also take corrective actions to mitigate the impact.
Since 2017, the threat landscape has evolved considerably. Quantum lockers and AI-driven malware kits create a new set of challenges that require a new way of securing platforms. Here’s the question: is the infrastructure supporting our most critical workloads evolving to meet these challenges? We’ve seen the silicon vendors respond. Now as server platforms prepare to refresh with the newest CPUs, I’ll be curious to see whether the baseboard management controllers and hardware-based security mechanisms deliver the required protection.
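As a thought experiment on what that protection involves, here is a minimal Python sketch of the measure-then-verify pattern at the heart of a silicon root of trust: hash the pre-boot firmware image and compare it against a known-good measurement before allowing boot to proceed. The file path and digest below are hypothetical placeholders, and real platforms do this in dedicated hardware below the OS, but the control flow illustrates the same idea.

```python
import hashlib
from pathlib import Path

# Hypothetical known-good measurement; in a real design this is provisioned
# at manufacture and stored in tamper-resistant silicon, not in source code.
KNOWN_GOOD_DIGEST = "replace-with-provisioned-sha384-digest"

def measure_firmware(image_path: Path) -> str:
    """Hash the firmware image, analogous to measuring pre-boot code."""
    return hashlib.sha384(image_path.read_bytes()).hexdigest()

def verify_before_boot(image_path: Path) -> bool:
    """Allow boot only if the measurement matches the known-good value."""
    if measure_firmware(image_path) != KNOWN_GOOD_DIGEST:
        # A BMC with recovery capability could halt boot here and restore
        # a known-good image instead of simply refusing to continue.
        print("Firmware measurement mismatch; refusing to boot")
        return False
    return True

if __name__ == "__main__":
    verify_before_boot(Path("firmware.bin"))  # hypothetical image file
```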
Customer data platforms (CDPs) are like superheroes for enterprises looking to delight their customers. They collect data from various sources and organize it in a centralized location, ensuring that everyone has access to the same insights about each customer. This collaborative approach fosters better teamwork and deepens customer understanding. Armed with this comprehensive data, marketing teams can deliver personalized campaigns that make customers feel valued and encourage repeat business. CDPs serve as treasure troves of customer data, simplifying data sharing and utilization for teams.
But CDPs aren’t without their challenges. They can be tricky to scale up, especially for big companies with lots of data. And if the data isn’t accurate, it can mess up everything. Many CDPs don’t have the best analytics tools, which can make it hard to figure out what’s working and what’s not. And integrating CDPs with old systems can be a real pain.
Despite these challenges, the CDP market is growing fast. It’s expected to reach $72 billion by 2033, which is a huge increase from $7.82 billion in 2024. This growth is happening because businesses want to do a better job of engaging with their customers and making them feel valued. They also want to be able to use data to make smart marketing decisions. And they’re starting to use AI, automation, and machine learning to make CDPs even better.
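Those two figures imply a compound annual growth rate of roughly 28%. Here is the quick back-of-the-envelope check, using only the numbers cited above:

```python
# Implied CAGR from the market figures in the text: $7.82B (2024) -> $72B (2033).
start, end = 7.82, 72.0
years = 2033 - 2024  # 9 years
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 28% per year
```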
If you’re a business considering CDPs, consider your needs and how you’ll use them. Ensure the CDP can handle your data, integrate with your systems, and provide analysis tools for smart decisions. CDPs are promising solutions for business growth and success. Look for my upcoming Forbes article on the state of enterprise data, which will highlight CDPs.
Ericsson recently integrated large language models into its NetCloud management platform. Thanks to this, AI agents can process network data and technical documentation to generate configuration recommendations. What is unique is that the system performs this functionality without exposing sensitive information to third parties, and thus — by design — provides a higher degree of security and control for datacenter deployments. It is also worth noting that the architecture is agentic in nature and employs multiple agents to solve complex tasks, including troubleshooting connectivity issues, automating infrastructure provisioning, and translating business intent and requirements into network policies. Such tasks have required manual intervention in the past, and if Ericsson can successfully execute in this area, it could lead to incremental enterprise networking revenue opportunities for Ericsson’s customers.
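To make the “business intent into network policy” idea more concrete, here is a deliberately oversimplified sketch. The schema and mappings below are invented for illustration only; Ericsson’s NetCloud agents reason over live network data and documentation with LLMs, not a static lookup table like this.

```python
# Hypothetical illustration of intent-to-policy translation. Every key,
# field, and value here is made up for the sake of the example.
INTENT_MAP = {
    "prioritize video conferencing": {
        "traffic_class": "realtime",
        "dscp": 46,                 # Expedited Forwarding marking
        "min_bandwidth_mbps": 10,
    },
    "isolate guest wifi": {
        "vlan": 200,
        "firewall_rule": "deny guest -> lan",
    },
}

def intent_to_policy(intent: str) -> dict:
    """Map a plain-language business intent onto a concrete policy object."""
    policy = INTENT_MAP.get(intent.lower())
    if policy is None:
        raise ValueError(f"No policy mapping for intent: {intent!r}")
    return policy

print(intent_to_policy("Prioritize video conferencing"))
```

The interesting part of an agentic system is precisely that the mapping is not hard-coded: the agents generate and validate candidate policies from context, which is why troubleshooting and provisioning tasks that once required manual intervention become automatable.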
It looks like manufacturers are getting serious about AI in 2025. They’re increasing their AI budgets to become more efficient and competitive. The good news here is that they’re mainly focused on using AI to help their employees, not replace them, aligning with the principles of Industry 5.0. To leverage AI, I suggest manufacturers modernize their ERP systems, improve their data management strategies, and upgrade management processes.
Moving your business to cloud-based ERP systems is key to taking advantage of AI. By modernizing, manufacturers can optimize their investments and reap the benefits of the new technology.
Last week I published a case study that highlights this ERP–AI connection and the importance of modernization for making it work: “Hearst Corporation Modernizes Oracle ERP with Strong Change Management and Data Management Practices.” As enterprises adopt AI-driven solutions, it’s crucial to balance the technology advancements with addressing the human and organizational aspects of transformation. Two key pillars — change management and data management — are essential for achieving actionable outcomes. Change management focuses on organizational and human factors, while data management ensures data completeness and quality, enabling accurate and timely insights. Without both, enterprises may struggle to modernize, integrate workflows, or make informed decisions.
For this case study, I had the chance to sit down with David Hovstadius, senior vice president of finance operations at Hearst Corporation, who emphasized the importance of these principles during Hearst’s transition to Oracle Cloud ERP some years ago — which continues to pay dividends as the company embraces generative AI today. By prioritizing change management and data management, the company laid a foundation that not only facilitated its ERP implementation, but also enabled continuous technological and process improvements as AI technologies emerged. For more details, check out the article linked above.
The new Samsung Galaxy S25 smartphone launch happened last week, and it demonstrated how Google, Samsung, and Qualcomm are working in lockstep not only in mobile but also in XR with Project Moohan. Witnessing the interplay of Gemini with the depth of the Moohan experience clearly demonstrates how the three companies are working together to deliver the best AI experience in mobile. For more context on this partnership, see my coverage of last month’s launch of the Android XR spatial OS.
NXP has announced the EdgeLock A30 secure authenticator. It’s a standalone chip compatible with many MCUs and MPUs, including NXP’s MCX and i.MX products. Its minuscule size (“smaller than a grain of rice”) and standard I2C interface make it easy to fit into small devices, and NXP’s comprehensive EdgeLock 2GO certificate services ease the commissioning process. Developers need integrated solutions that conform to new and upcoming security and privacy regulations — and customer concerns. For example, the EU’s Batteries Regulation (2023/1542) requires using a Digital Product Passport by 2027, including supply chain provenance, and the EdgeLock A30 is the basis for a scalable solution. The chip has a RISC-V processor and 16 kB of NVM for credential storage, is Common Criteria EAL6+ certified, and is available now.
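For a sense of what the I2C integration path can look like from the host side, here is a hedged sketch of reading a block of bytes from an I2C peripheral on embedded Linux using the smbus2 library. The device address and register offset are hypothetical placeholders; a real EdgeLock A30 integration would presumably go through NXP’s middleware and a secure command protocol rather than raw register reads.

```python
from smbus2 import SMBus

DEVICE_ADDR = 0x20   # hypothetical 7-bit I2C address, not the A30's actual one
ID_REGISTER = 0x00   # hypothetical device-ID register offset

# Bus 1 is the usual user-accessible I2C bus on many embedded Linux boards.
with SMBus(1) as bus:
    chip_id = bus.read_i2c_block_data(DEVICE_ADDR, ID_REGISTER, 4)
    print("Device ID bytes:", [hex(b) for b in chip_id])
```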
Last July, I reported that IBM acquired two Software AG properties – StreamSets and webMethods. Software AG’s streamlining continues with the sale of Alfabet and a management buyout of Cumulocity, the company’s IoT division. Cumulocity, founded in 2012, is once again independent after eight years under Software AG. Founder and CEO Bernd Gross told WirtschaftsWoche, “We are moving towards independence as a scale-up,” and “The big IoT boom is still to come.” I’m expecting strategic changes that better align the company with physical AI trends, making it more of a solution enabler than a solution provider.
Verizon’s new AI strategy leans on its strengths in 5G with mobile edge compute (MEC) and fiber. This creates an opportunity for businesses and even cloud providers to move their AI applications as close to the edge as possible using available compute for inference and low-latency applications. I like to see Verizon leaning in this direction because the company has struggled to differentiate its offerings from those of AT&T and T-Mobile.
Cybersecurity researchers at Sophos have discovered that threat actors have exploited Microsoft Teams to spread malicious links and files, potentially leading to ransomware infections. These attackers use AI for social engineering, making the attacks harder to detect. Microsoft has acknowledged the issue and is working on a solution. While these findings highlight specific threats to Teams, they serve as a broader warning about the increasing risk of similar attacks across all collaboration platforms. The problem is likely not isolated to Microsoft and emphasizes the need for heightened vigilance and robust security measures across the board.
My review of the RTX 5090 graphics card found that NVIDIA continues to innovate in AI. While the 5090 is a very large and power-hungry card compared to the 4090, its performance is also considerably higher in 4K with DLSS 4 and 4x frame generation turned on. I also found that the AMP (AI management processor) is a programmable RISC-V core, which shows the allure of RISC-V for such applications.
For the first time, all the subsystems necessary to implement universal and fault-tolerant quantum computation have been combined in a photonic architecture. Xanadu has created a photonic quantum computer named Aurora that is a scale model for universal, fault-tolerant quantum computing. Aurora incorporates 35 photonic chips, 84 squeezers, and 36 photon-number-resolving detectors.
The system achieves 12 physical qubit modes per clock cycle, which means it can handle 12 qubits for each processing step, and it has synthesized a cluster state with 86.4 billion modes — reflecting the vast number of different ways 12 qubits can interact with each other. For error correction, it uses a distance-2 repetition code with real-time decoding; a toy sketch of the repetition-code idea follows the list below. Aurora’s architecture is divided into three stages:
Preparing photons to create quantum states
Adjusting those states and entangling the qubits
Performing the computations on the QPU
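To build intuition for the error-correction piece, here is a toy, strictly classical sketch of the principle behind a distance-2 repetition code: one logical bit is encoded as two physical copies, and a parity check can flag, though not locate or fix, a single error. This illustrates the coding idea only, not Xanadu’s photonic implementation, which operates on photonic modes with real-time decoding.

```python
def encode(bit: int) -> list[int]:
    """Distance-2 repetition code: logical 0 -> [0, 0], logical 1 -> [1, 1]."""
    return [bit, bit]

def syndrome(codeword: list[int]) -> int:
    """Parity check: 1 means the two copies disagree, so an error occurred."""
    return codeword[0] ^ codeword[1]

word = encode(1)
word[0] ^= 1  # flip one physical bit to simulate noise
print("error detected" if syndrome(word) else "looks clean")
# With distance 2, a single error is detectable but not correctable;
# correcting errors requires distance 3 or more (e.g., majority vote of 3).
```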
Aurora operates at room temperature and uses fiber-optic networking, which facilitates scalable quantum computing. Xanadu’s design is focused on fault tolerance and scalability. Compared with other photonic quantum computing efforts from makers such as PsiQuantum and Photonic, Aurora stands out for its comprehensive system design, error correction, and scalability. Optical loss and high qubit error rates remain shared challenges among photonic platforms.
Last week the Trump administration instructed the Department of Homeland Security to disband all advisory committees within the agency, including the Cyber Safety Review Board. The CSRB was created under the Biden administration in 2022 and, interestingly, played a role in investigating China-sponsored cyberattacks against U.S. telecom providers. The clean sweep of all advisory committees may simply be a resetting of the guard and a change in policy direction, but it will be interesting to see whether it impacts cyber defense negatively in the short or long term.
Something subtle that I think has been mostly overlooked: Qualcomm’s Snapdragon 8 Elite for Samsung comes with more than just a frequency bump; it also includes customizations in the Qualcomm DSP for some of the new Samsung imaging features, plus an integrated display controller on the SoC for lower power consumption. This is something probably only Samsung could achieve, but it still clearly grows out of Qualcomm’s understanding that Samsung needs something different and custom.
Technology continues to transform sports in more and more ways. One example is TGL, a tech-enhanced indoor golf league cofounded by Tiger Woods, Rory McIlroy, and Mike McCarley in partnership with the PGA Tour. Recently launched after a year-long delay due to storm damage at its SoFi Center facility in Palm Beach Gardens, Florida, TGL combines virtual and traditional golf. The league features six teams of four players competing in a mix of simulator-based and on-course play, including a morphing 3,800-square-foot green. Its unique format includes nine-hole team matches, head-to-head play, and overtime closest-to-the-pin contests, with scoring determining playoff seeding. Matches air live on ESPN and ESPN+.
This is a pretty interesting way to showcase golf with advanced simulators, mechanically altered greens, and innovative visuals. I believe that TGL does a good job of bringing technology together while creating a unique spectator and player experience.
Meanwhile, other sports continue to try out new tech, such as soccer’s use of semi-automated offside technology (SAOT) to make video assistant referee (VAR) reviews for offsides clearer and faster. But fans aren’t always on board with these changes; VAR in particular has created significant concerns among fans of the Premier League and other top leagues about transparency and how the technology affects the flow and fairness of the game. As I’ve said before, it will always be important for sports to integrate new tech while keeping important traditions alive.
5G mobile and fixed wireless access could play a pivotal role within Project Stargate, the ambitious AI effort announced in the early days of the new Trump administration. As covered elsewhere in this update, the initiative aims to invest $500 billion in infrastructure to build out AI datacenters in the United States. As gen AI becomes more hybrid, spanning from the cloud to network edges, mobility could become instrumental in processing smaller language models hosted in smaller edge data nodes. 5G has been searching for its killer application beyond fixed wireless access consumer services, and given its advantages in low latency, fast throughput, and massive device support, it may have found one in the rollout of AI.
The Enterprise Applications Podcast, Episode 2: Salesforce, Microsoft, SAP, IBM, & More
https://moorinsightsstrategy.com/enterprise-app-podcast/the-enterprise-applications-podcast-episode-2-salesforce-microsoft-sap-ibm-more/
Published Fri, 24 Jan 2025. This week on the MI&S Enterprise Apps Pod, hosts Melody Brue and Robert Kramer talk Salesforce, Microsoft, SAP, IBM, Active Directory, AI agents, and ERP.
Salesforce has halted engineer hiring, sparking discussions about the impact of AI on software development jobs.
Microsoft is boosting workplace productivity with its new AI-powered assistant, Microsoft 365 Copilot Chat.
SAP and IBM are teaming up to help businesses transition to cloud-based ERP systems.
Commvault is making it easier for organizations to recover from Active Directory disasters with its new cloud-based solution.
Watch the video here:
Listen to the audio here:
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice.
Transcripts:
Robert Kramer: Hello and welcome to this week’s edition of the Moor Insights & Strategy Enterprise Apps podcast. Actually, we’re on episode two. I’m Robert Kramer, joined by Melody Brue, both vice presidents and principal analysts at the company. I handle ERP and enterprise data. Mel handles modern work, HCM, and CX, and we cover everything else under the sun that relates to data, the glue that holds everything together. So just for disclaimer purposes, this show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice. So let’s get started. Mel, we had a really busy week last week. The year’s been off to a huge start. We’re going to talk about some great things in this one, and we’re going to kick it off really in your neck of the woods with Salesforce, and then we’re going to cover a bunch of other things. We’ll kind of touch on those as we go through. But what do you think?
Melody Brue: Yes, so it has been like a whirlwind kickoff to the year. Everybody just went from having a quick break to boom. Everything’s happening all at once. So this one’s interesting, and I think this is going to start a very interesting conversation throughout sort of the thread of what we’re going to talk about today, which is how AI is being adopted, how it’s being priced, and just how people are looking at it in terms of jobs. At Salesforce, CEO Marc Benioff recently announced a pause in hiring any software engineers, suggesting that AI could automate a growing portion of development tasks. So this is really motivated by potential cost savings and the productivity gains with Salesforce’s Agentforce AI tool. So I think this raises a lot of questions. There’s the evolving role of tech jobs and how companies might manage this AI-augmented workforce.
And then also, is this too soon? Can AI really replace human engineers? It also brings up this really important key challenge, which is who’s going to manage this digital labor? Is it IT or is it HR? And Cisco just had their big AI summit, and I think it was actually the CEO of Box, Aaron Levie, who was talking about this challenge of it becoming sort of an HR role when it comes to these AI agents, where HR is not only responsible for the procurement, the hiring of these agentic AIs, meaning the creation of them, but also responsible for their outcomes. So that creates a really interesting responsibility beyond what HR is typically responsible for. Like, if you run marketing and you say, I need somebody to run this campaign for me, the HR department is responsible for finding that person and placing that person in that role. They’re not responsible for the output of what that person creates; that falls to whoever that person reports to. So who do these agentic AI agents report to? Is it IT or HR who’s managing this digital labor? That’s another point.
My last point on this is Salesforce showing the example that their AI is so good, they’re using it internally, and Microsoft is doing the same thing. It’s like eating your own dog food: they’re highlighting, basically promoting, their own AI adoption, demonstrating their own ROI through internal cost savings. So you want to sell something, show how well it’s working for you, right? I mean, Microsoft’s doing the same thing. So while I do think it’s really early to start saying that AI is going to be able to replace developers, I think it’s a smart strategy to show that Salesforce is using its own agentic AI to write code to really boost their own productivity. But I still think that we’ve got this digital labor management problem to solve.
Robert Kramer: Well, we definitely have that. What’s the setup required for these companies to be able to take advantage of this and see success? And like you said, the pricing is a big deal. How is that done? And then what’s the success they’re actually having so far at Salesforce with not hiring those types of people and letting agentic AI consume those roles? So I think it’s not put together yet. We’re not able to see the success or failure. My guess is it’s a bold comment from Benioff, and if he can back it up, we’d all love that, because companies would like to take advantage of how agents can strategically handle tasks, not replace people, and he’s making an assumption that they’re going to be replacing. And like you said, who’s managing these agents? How is human resources interacting with it? This is not played out. There’s no governance around it. So there have to be some guardrails to show us how it’s all going to work.
Melody Brue: And I haven’t really seen anything. I know the World Economic Forum is going on right now; Pat’s in Davos, and there’s been so much going on there. There’s usually a lot of conversation around this type of forward-thinking stuff that happens there. He made a lot of bold statements a couple years ago about this digital labor and digital workforce. So we’ll see how this goes on. But interestingly enough, I was on it (you may have been too; they didn’t show other participants), but Microsoft had a briefing this morning and gave some public-facing ROI from case studies from their customers, which we don’t always get to see. We usually get more anecdotal ROI, but they gave some kind of concrete numbers: marketing teams reducing content creation time by 67%, seven months of work completed in seven weeks. They had sellers increase outbound calls by 40% due to process efficiencies, $50 million in annual revenue gains. So these are various companies, various ROI reports from using Microsoft 365 Copilot in the Fortune 500. I appreciate them giving those concrete numbers and ROI, because we do hear, oh, it’ll make you more productive and all of this. So it’s good to see those. I’m sure a lot of it is selective, but still, if these companies are seeing the value from it, that’s good to see that it’s doing what’s promised.
Robert Kramer: Well, I think Microsoft is really betting a lot on Copilot, specifically within enterprise data like you just mentioned, but also within the ERP world. And they’re utilizing it to filter AI and AI agents through the ERP operational transactions it can help with. And so this is a big deal, and it’s something they’re very emphatic about: that it’s going to be kind of a game changer for them to help out with some of these tasks. And obviously the data has to be set up, but it can help with transactions within the financial area, within the inventory area, within the supply chain area, to automate certain things and get the person who’s used to doing things on a spreadsheet out of the loop, because that’s a hazard. Twenty years ago with these ERP systems, and not to get on a tangent, there was always a person doing something outside of the system. And when that happens, that person feels they’re more reliable, or they’re going to hinder the system in some way when they’re not there or something happens to that person. But at the same time, it’s not a global real-time system, because it’s not present tense. So if it’s present-tense, actionable data, these agents can really make a difference for these types of systems, because they’re helping with the strategy at the same time the nuanced transactions are being done. So I’m still worried about the quality of the data, but if that is in gear, these types of Copilot systems should be a big deal for automation purposes.
Melody Brue: Yeah, I think what I heard from Microsoft this morning on the briefing is just the agent versioning and updates, that it can manage the lifecycle of the agents and update them over time. So this isn’t a set-it-and-forget-it thing. There is going to be a lot of work that needs to be done to maintain them. Probably eventually it will be like a self-cleaning oven, where the agents will be able to maintain themselves with the right amount of guardrails and provisions. But I do think that there is probably, with the right training, less risk of the human error that we have just from people entering data. If you enter from one thing to another, even copy-paste, you can miss something; things can get transposed. So I like the oversight of humans on agents and the agent oversight on humans, but it goes both ways. It can’t just be one way or the other.
Robert Kramer: Yeah, I agree. And I also like that the agents can continuously learn and analyze these workflows to improve their logic and help out more as they get familiar with what those are, so that the humans can interact a little bit better and trust the agents.
Melody Brue: Yeah. So on that, both Microsoft and Google made announcements about changes in their pricing for their AI. Microsoft added the consumption-based pricing, which we talked about on our last podcast. And as we were discussing, it’s just going to become necessary, certainly for people who aren’t going to be the power users using AI all the time every day, to get them to start using AI and trying it out. I saw that they’re doing kind of this batch pricing, but I didn’t see (and I don’t know if there was an answer to this) what happens when that batch runs out, whether it can be topped up or what happens next. But I do like the controls on that, because as we talked about on the last podcast, with this consumption-based pricing there has to be the ability to plan for how much you’re going to be spending on AI.
And that’s the real unknown when you have this consumption-based pricing. So I think the way that they’re pricing this allows for that planning. But really, I think both what Microsoft is doing and what Google is doing, whether giving it for free, including it in certain-level plans, or offering consumption-based pricing, is answering two really big issues with AI adoption: one, adoption, getting it into the hands of people, and two, the cost barriers. So it’ll be really interesting to see how much usage we’ll see across enterprises, and whether those numbers in the case studies they’ve shared go up when you see it beyond just these test cases in particular lines of business.
Robert Kramer: I agree. I had a conversation about six months ago with John Case, the CEO of Acumatica, and he does consumption-based pricing for all of his customers. That’s all they do. And I asked him what his method was. They basically interview the teams they work with at the customer, and based on the usage they feel the customer is going to get out of it and the number of users, they have a formula they put together: this is what we are going to bill you, and what you don’t use carries over. If you go over, we’ll bill you on that, but there are expectations based upon those amounts that you gave us. If you over-order, then you pay more; if you under-order, you get it back. So the honesty helps the situation.
Melody Brue: So then each party knows what to expect. I mean, companies need to know what their billings are going to be too, and then the customer needs to know what their bill is going to be like. I think one last thing on this, and then we’ve got your Forbes article and big brainy stuff to talk about, Active Directory. I was like, wait, I need to study up on this before I can have an intelligent conversation with you on this. But I do also think, on the pricing for both Copilot and Google’s Workspace offerings, there’s the question of parity: what’s available in the free and consumption-based versions versus a full subscriber plan. Google is just throwing all of their AI offerings into their Workspace plans. I think that eventually what I would like to see is for customers to choose: do they want the full offering?
Do they want the AI system, all of it, and to be able to pay for it as they like, without having to have the full subscription? I like the flexibility of both plans, but I also think that Microsoft should allow for consumption-based pricing if they get people using it. Some of it is included in the consumption-based Microsoft Copilot Chat, but not all the full functionality is available with the new consumption-based pricing. So I think eventually they might get to that point where they just say, okay, yeah, we can make all of it available, because I think people are going to find that it’s really useful and they want it, but maybe not every single person in the organization needs that full Microsoft 365 subscription plan.
Robert Kramer: So for your company, if you had to go one way or another, do you think consumption would be better for your company if you were the owner? You like the other-
Melody Brue: I think that most companies probably have different users in different groups of the business, kind of power users where you would say, yeah, give them the full subscription. But these other ones may only have four or five tasks per month that they really need it for. So give them the consumption base, but they still want the full functionality. So that’s where I think there might be a mix. The last thing I’ll say on this is that in any scenario, whether it’s consumption-based, subscription, or free, however they give it to them, giving people access in a controlled environment is the best way to go, so people aren’t using their own AI, like their own subscription to ChatGPT or something that’s not controlled with their enterprise data. That adds a security level, beyond the security Microsoft and Google already provide, to the enterprise, so that you don’t have this bring-your-own-AI-to-work situation, because you know that employees are doing it. I mean, we saw it happen early on, when companies were banning people from using AI at work because information was getting leaked to outside models. So companies should be turning this on immediately.
Robert Kramer: I agree.
Melody Brue: All right. That said, if you have nothing to add on that,
Robert Kramer: I’m good.
Melody Brue: Tell me about Active Directory.
Robert Kramer: Well, it’s a little technical, but at the same time it makes sense. So Active Directory is the core piece that handles your authentications and your access to your assets: your databases, your security, your apps, even the way you walk through a door at the company where somebody works. All those access controls are done through Active Directory. So this is a way a lot of companies are getting breached, because the attackers got in there at some point a long time ago. I think the Starwood Hotels breach (with Marriott; they’re one company now), that whole thing was I think four or five years ago, but it started a dozen years ago. That was from an Active Directory breach. And Commvault has made an announcement that introduces backup and recovery for Active Directory. And this makes a lot of sense: it actually puts the pieces back together in a way that’s simplified and structured, where it’s not just mayhem and days of confusion, which is what it could take otherwise. This is an automated process. So I wrote a Forbes article on it, and you can see that and the details around it. But I’m seeing a lot of activity within the data protection and security industries, because this is really a big problem; this is one of ’em. So that’s a really interesting subject, and I think a lot of the companies that are involved in that are being innovative, and I think automation and AI are a big key for some of the companies in this field.
Melody Brue: So would you say that AI is hurting or helping companies in terms of their cyber risks?
Robert Kramer: I think AI is helping them, because of what we talked about with tasks: automating certain components, bringing information to the company quicker, and pointing out anomalies as they happen. At the same time, I do think all the technology in the world has its pros and cons. I mean, we didn’t have phones 30 years ago and it was probably an easier life, but the cyberattacks weren’t as prevalent, and AI is definitely pointing out those problems and correcting them a lot more easily. The agents are playing a role. The companies that are on the forefront are bringing automation, AI, and agents into the picture to bring things together quicker and more efficiently, just because the bad actors are everywhere and they’re able to bring companies to their knees pretty quickly. So Active Directory is where a lot of it can start.
Melody Brue: Don’t you think it’s kind of just crazy when you see how quickly AI has evolved to be able to fool people, even with things like deepfakes? I mean, you can upload a picture of somebody and the AI can make their mouth move and it looks like they’re saying something.
Robert Kramer: You don’t know if it’s real or not.
Melody Brue: Yeah. It’s actually kind of creepy how quickly that’s evolved, from the little JibJabs where you could see it’s so clearly fake to now, when it almost looks real. I mean, it was maybe 30 minutes after the Chiefs game the other night, and there were memes (it was really bad) of Mahomes making out with the refs. They were saying the refs were just favoring him so much, with his flops and all the roughing-the-quarterback calls and everything. But I mean, it’s crazy. I just feel like you’re not safe from anything with this.
Robert Kramer: No, they can really change the way things look very quickly. I am a Kansas City Chiefs fan, just for the record, but I do like Buffalo a lot and would love to see Buffalo win. But Mahomes is a winner; you can’t be jealous of a winner, even though the refs might’ve been slightly in his favor. I would say the same for Tom Brady. What’s the difference?
Melody Brue: Yeah, there was some meme about this with a dog talking about Patrick Mahomes, and he was like, he’s my homie too. He’s my homie too. I like him. We can talk about this in a Game Time Tech conversation, but those are the calls that AI is not going to fix for you, right? When people were accusing a player of flopping and all of that, AI’s not going to fix that and say, oh yeah, that was a flop, or he really was pushed hard enough, or it really was roughing the quarterback. But yeah. Anyway, we digress.
Robert Kramer: So AI is making a big influence on ERP and supply chain as well. And the reason why, and I’ll talk about SAP and IBM continuing their longstanding 50-year relationship with a new partnership, is that companies need to modernize off of these on-premises ERP systems, as I’ve talked about for a long time. IBM and SAP have solidified a new partnership to shift SAP S/4HANA from on-premises to the cloud. To facilitate this migration, the two of them are partnering on a way to move the roughly 10,000 SAP customers that are on IBM Power servers. So they’re collaborating: how do we get these customers modernized onto cloud-based SAP S/4HANA? But this is all to take advantage of what we were just talking about, which is AI and agents and automation. You can’t take advantage of that fully by being on-premises, because you’re on an older system, which is antiquated and not set up for the modernization. And IBM is also (sorry to cut you off) gearing up its consulting practice even more by purchasing AST (Application Software Technology), an Oracle Cloud consulting house, to help with all the migrations that deal with Oracle’s ERP. So you’re seeing this kind of shift where these companies need to modernize to take advantage of this great technology that all the companies are talking about.
Melody Brue: What do you think? So we’ve seen SAP kind of deliver on a lot of the promises they made more than a year ago. And I think it’s actually worth going back to the article that we co-wrote together with Jason Andersen about RISE with SAP and GROW with SAP, to see where they were then and where they are now. Some of the things we brought up were about continuing to support the on-prem customers, and there has been a bit of talk about people staying on-prem or repatriating, and why they would do that. And with edge computing becoming more powerful, why would they not? I mean, I understand the advantages of being in the cloud, but for, say, a smaller to midsize company, that’s a really expensive and long journey. Why would they not stay if they can, if it’s still being supported, and if they can still have those same capabilities at the edge? Is there any reason why they really just should not?
Robert Kramer: Well, I don’t think it’s black and white. I think the companies that are going to stay on-premises have proprietary customizations they’ve made that they would lose, and they think those are difference makers. I think the companies that do move take advantage of modern technology, which is a big deal for long-term success. The companies do have an option, which is a hybrid approach, where you can keep some of those modules, go hybrid with the cloud modules, and have the best of both worlds. And the RISE program is a structured approach to the migration. I think it is a successful program; it’s just a matter of the companies getting on board. Like I said in the past, the companies that are on the SAP system are the complex ones, not necessarily SAP. So the companies have to be supported internally and externally, because I believe SAP is going to cut the support off at some point. They talked about 2027, and they’ve moved it out a little bit, but at the same time, lots of companies don’t want change, and the change involves a gigantic transformation. Your question was more or less, why would they modernize?
Melody Brue: No. Is there any reason why they wouldn’t?
Robert Kramer: The customizations are huge, and they have their workflows and their processes and their proprietary ways that would all have to go away. And the reason for the success of these companies, like Infor or SAP or Oracle or Microsoft, is that they’ve made their systems industry-specific. So for example, SAP might have a couple thousand chemical manufacturers on there. All those forms, all those workflows, all the ways they’re doing business are inherently already in that system from the other ones. So you’re gaining knowledge by going to the new system, but you’re going to have to give up what you were on before. So if it’s proprietary, you might not want to do that, but unless it’s patented, I’d say, hey, go. Because if you don’t modernize, you’re stuck in time. Twenty years ago, companies felt this was the way to do business.
Now you’re being passed up because you don’t have any of these new features and functionality. And I think it just starts with what we’re talking about. There’s an enormous amount of capability within the supply chain and within sustainability, which we’ve talked about with IoT: getting this data from all over the place and being able to use it in real time. I think it’s a huge difference maker, and I don’t know why you wouldn’t do it, unless you don’t have the resources internally to gear up and get it done. So there are major benefits, and they outweigh the benefits of staying on-prem, but you can go hybrid first to get there initially.
Melody Brue: Yeah. Alright,
Robert Kramer: Well no answer.
Melody Brue: We’re going almost at 30 minutes now, so we probably should wrap it up, but I think you’ve got a couple of trips coming up. I’m forgetting I saw some things on the calendar. I’m going to definitely see you at Zoho Day. What do you have before then?
Robert Kramer: So I’m going to Acumatica next week, and Microsoft has an AI Tour. Then after that I’ll be at Zoho and then going to NetSuite the same week. And then the following week I’ll be at SAP. So it’ll be a great time for us to revisit the discussion on SAP and then write something new. Some great trips ahead, and I’m excited. What about you? Where are you going besides Zoho?
Melody Brue: Right after Zoho, and I actually don’t have my calendar in front of me, so I have to do this by memory, RingCentral is having their analyst event, Adobe Summit is coming up, and Enterprise Connect is coming up. There’s a bunch of other things that I’m not looking at, but a lot of travel and some good stuff. But I’m looking forward to Zoho Day: having some conversations with some customers and some executives there, doing a couple of videos, and looking at some new products. So we will have more on that after that trip. For everybody who has joined us, thank you so much. This was a great show with lots of good information. If there’s anything you’d like to hear us talk about in future episodes, please hit us up on X or LinkedIn. Let us know what you want to hear about. Also, give us some comments; we really love feedback. We want to hear what you liked and what you didn’t like. And if you did like it, please hit that subscribe button, and please join us next time.
MI&S Weekly Analyst Insights — Week Ending January 17, 2025
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-17-2025/
Published Tue, 21 Jan 2025. MI&S Weekly Analyst Insights — Week Ending January 17, 2025. A wrap-up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
This week I’m in Davos, Switzerland, at the World Economic Forum, meeting with business leaders from around the globe. Moor Insights & Strategy is also co-sponsoring a special session on “Protecting Press Freedom and Democracy,” moderated by Axios Media. While we at MI&S are not journalists, we rely on the technology press for good information, and in turn are often quoted in press outlets as we contribute our own viewpoints to the public discourse on events unfolding in the tech world.
A free press — definitely including social media — is crucial for a thriving tech ecosystem because it ensures that individuals, businesses, and policymakers have access to the information they need to make informed decisions in an increasingly complex technological landscape. We’re proud to support this event as an expression of our deep-rooted commitment to maintaining the freedom of information flow in the tech sector and beyond.
If you or your company executives will be in Davos and you’d like to connect there, please reach out — we’d love to hear from you.
This past week, MI&S analysts have been quoted in the press about Biden’s AI restrictions, Google drones, international malware security issues, intelligent content management, and as usual, AI. Our insights were included in Fierce Network, Yahoo Finance, Ciso2Ciso, The Deccan Herald, and The Straits Times.
I think everyone realizes the impacts AI is having on a wide range of business activities. So it should not be a surprise to anyone that The World Economic Forum’s 2025 Future of Jobs Report projects that almost 90% of companies expect that AI will redefine company operations by 2030.
AI is reshaping workplace dynamics. It is expected to create a net increase of 2 million jobs, resulting in 11 million new jobs while displacing 9 million. It is not surprising that the titles with the greatest job growth will be data specialists and AI/ML technologists.
What about people being let go from companies because they don’t have the necessary AI skills? It’s not as bad as expected because 75% of companies plan to upskill current employees for AI collaboration. That shows a focus on adapting to AI advancement instead of replacing staff. And 70% of companies plan to hire people who already have AI expertise. So, most companies will do a little of both actions. Along with those statistics, 50% of businesses say they will reorganize around AI opportunities, and 40% will use workforce reductions to handle AI expansion.
Just reading the news daily will tell you how quickly AI is being adopted. It’s an instance of the old “early bird gets the worm” saying: many companies believe that those who integrate AI fast and first will have a competitive advantage over those that don’t.
The message is clear: AI is coming, and it’s coming fast. It is expected to cause the largest workplace shift in decades. Management should establish AI priorities and get ready to implement them as necessary. Go AI, and go fast.
Salesforce CEO Marc Benioff recently announced a pause in software engineer hiring for the company, suggesting that AI could automate a growing portion of development tasks. This move, likely motivated by potential cost savings and reported productivity gains via Salesforce’s Agentforce AI tool, raises questions about the evolving role of tech jobs and how companies might manage an AI-augmented workforce.
While some praise Salesforce’s innovative approach, many remain cautious about AI’s ability to completely replace human engineers soon. This decision also highlights a key challenge: Could existing departments like IT or HR oversee this new workforce, or will companies create new roles specifically to manage this digital labor?
Salesforce’s strategy serves as an interesting example of how AI might reshape business operations. It remains to be seen whether other companies will adopt similar strategies and how these trends could impact the tech job market over time. The shift also underscores AI’s growing potential to reshape industries and redefine workforce needs, with Salesforce demonstrating a key strategy for companies promoting AI adoption: proving its ROI through internal cost savings.
What to make of the outgoing administration’s restrictions on AI chips and models? There are many angles to consider, but the sharing of AI model weights and export controls on semiconductors are the two biggies. While the U.S. government has billed this as diffusing AI innovation, it is at the same time restricting a handful of players that are on the leading edge of AI development.
Does this stifle AI innovation in the United States? I don’t believe so. Perhaps it reshapes some of our collaborative efforts on a global basis, but the semiconductor, hardware, and software ecosystems are going to continue to accelerate at seemingly exponential rates. I just don’t see that slowing down.
Here’s an interesting take: The primary target of these restrictions — China — has been leveraging open-weight models and is using them to try to gain a global footprint. Models like Alibaba’s Qwen have been showing good performance relative to what we have here in the U.S. — especially in multilingual support. And Qwen has found traction in many countries outside of the U.S. and western Europe. Just as Huawei pivoted after its U.S. blacklisting and built a strong global footprint in telecom, Alibaba and others, Huawei among them, can deliver their own AI factories.
One of the questions we have to ask is whether we are truly protecting the U.S. and its allies with these protectionist measures. Or are we accelerating investments from adversary governments into AI that perhaps pay out in the longer term? It’s a tough question to answer.
The big news of the week was Lenovo’s announcement that it will acquire high-end storage provider Infinidat. While Lenovo has long been strong in the low end of the storage market, it has struggled to find a foothold at the high end. Adding Infinidat to the portfolio solves this challenge. However, it will take a bit of rationalization across product, marketing, and sales to find success and compete in the enterprise.
I believe that one of the most important assets in this acquisition is the people who have developed, marketed, and sold Infinidat’s solutions to date. The high-end segment they have sold into — and that Lenovo wants to capture — works differently from the volume/transactional markets (commercial enterprise, SMB) where Lenovo has made its mark in storage. I think those same developers and go-to-market professionals will be essential to the success of this business combination. For more details, check out my full analysis of this deal on Forbes.
Active Directory (AD) is a core piece of enterprise IT, as it handles authentication and access to many important IT assets such as apps, databases, and security systems. Unfortunately, its importance also makes it a prime target for cyberattacks. That makes AD recovery after an attack a high priority, but that’s been a function in need of more innovation. “Recovering Active Directory is foundational to maintaining continuous business after a cyberattack, yet traditional methods are too complex and prone to error,” said Pranay Ahlawat, Commvault’s chief technology and AI officer.
To address this issue, Commvault has recently introduced Cloud Backup & Recovery for Active Directory Enterprise Edition, which aims to make AD forest recovery much simpler and more automated. Read more about this in my latest Forbes article.
Microsoft has introduced Microsoft 365 Copilot Chat, a new AI service for businesses that blends free chat features with consumption-based access to AI agents. This offering leverages AI to help users with tasks like document analysis and process automation. Costs vary with the complexity of the task: simple web searches are free, while more complex actions involving company data are metered. This flexible approach allows organizations to dip their toes into AI without a hefty upfront investment, scaling their usage as needed.
Hewlett Packard Enterprise Aruba Networking recently announced a portfolio of products tailored to brick-and-mortar retailers. The company’s retail portfolio includes a cellular bridge, a smaller-form-factor switch, and wireless access points that can support more sensors and devices — ultimately providing broader coverage. HPE is also partnering with retail device leaders, including Zebra Technologies, to ensure an ecosystem approach to its solution delivery. There is a tremendous opportunity in this market to delight customers with automated shelf replenishment and online-like experiences, as well as to improve operational efficiency tied to better logistics and reduced shrinkage. From my perspective, the company’s retail portfolio and its AI-infused HPE Aruba Networking Central management console are well positioned to deliver value to retailers and customers alike.
SAP and IBM are continuing their 50-year relationship with a partnership to support the shift of SAP S/4HANA from on-premises to the cloud. This offering looks to facilitate the migration of SAP S/4HANA workloads from on-premises IBM Power Systems. For context, SAP has 10,000-plus customers running SAP on IBM Power servers. The collaboration of SAP and IBM focuses on helping organizations modernize their ERP environments and support AI-powered business processes. The RISE with SAP program provides a structured approach to cloud migration, offering outcome-driven services and platforms to assist organizations in reimagining their operating models.
The longstanding familiarity between SAP and IBM makes the shift less daunting, though adoption will depend on factors such as a given customer’s current SAP setup, budget, and readiness for cloud migration. I’ve talked a lot about modernization and the importance of change and data management, which will be key areas to address during these transitions. Transitioning systems isn’t easy, and any change can add complexity. Still, modernizing is crucial for businesses using ERP systems to stay competitive. This is a good opportunity for companies to make the most of their IBM Power server investments and use this collaboration to bring their ERP systems up to date.
IBM Consulting has announced plans to acquire Applications Software Technology LLC. AST brings expertise in Oracle Cloud applications, specifically with public-sector organizations in government and education and companies in manufacturing, energy, and CPG. AST specializes in implementations of Oracle ERP, HCM, Configure, Price, Quote (CPQ), Oracle Cloud Infrastructure (OCI), JD Edwards ERP, and NetSuite. This move fits with IBM’s strategy and builds on its recent acquisition of Accelalpha, which offers Oracle Cloud consulting services. My thought is that this year is the perfect time for ERP modernizations, especially with the AI craze. In that context, IBM Consulting has set itself up to help businesses transform and succeed.
Epicor Prism is bringing AI agents to the supply chain, making it easier for users to gain relevant insights. Integrated with Epicor Kinetic ERP, Prism uses AI agents to handle tasks such as data analysis, demand prediction, scheduling, inventory optimization, and updates. This should allow supply chain teams to save time and cut down on routine manual tasks so they can spend more time on strategic work. This is part of Epicor’s push to modernize its ERP systems in 2025 and could make life easier for businesses using Epicor in manufacturing, distribution, and retail. Definitely something to keep an eye on.
The Nintendo Switch 2 is precisely the device that I expected Nintendo would launch. It’s a combination of generations-old hardware with significantly improved user experience and UI. I think Nintendo understands clearly that it needs to hit the right balance between a certain price point and a certain game experience, which is what the Switch is all about. I think the people who expected the new model to be like a PC gaming handheld are living in an alternate reality. The handheld gaming market will always have the Switch at the entry level, while PC handhelds are distinctly premium products.
AT&T announced a fiber and wireless guarantee that compensates customers for downtime. I’m watching this strategy from my edge/IoT point of view because of its potential applicability in industrial IoT. Specifically, the business case for private 5G adoption rests on delivering reliable, predictable, scalable connectivity in enterprise and industrial settings. However, unlicensed spectrum alternatives (Wi-Fi et al.) are “good enough” for many use cases — at substantially lower costs. 5G’s advantages must deliver quantifiable ROI to justify the higher cost, and service-level guarantees help make the case for buying more 9s of guaranteed reliability.
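To make the “9s” math concrete: each added 9 of availability shrinks the permitted downtime by a factor of ten. Here’s a minimal sketch; the tiers are illustrative round numbers, not AT&T’s actual guarantee terms.

```python
# Minimal sketch: converting availability targets ("9s") into the monthly
# downtime budget they imply. The tiers are illustrative, not AT&T's terms.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_budget_minutes(availability: float) -> float:
    """Monthly downtime allowed by a given availability target."""
    return MINUTES_PER_MONTH * (1.0 - availability)

for label, availability in [("three 9s", 0.999),
                            ("four 9s", 0.9999),
                            ("five 9s", 0.99999)]:
    print(f"{label} ({availability:.3%}): "
          f"~{downtime_budget_minutes(availability):.1f} minutes of downtime per month")
```

Going from three 9s (about 43 minutes of monthly downtime) to five 9s (under half a minute) is a hundredfold tightening, which is why guaranteed reliability gets expensive quickly and why an SLA-backed guarantee is such a useful selling point for private 5G.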
The recent Sonos fiasco teaches a valuable lesson about what can go wrong with long-term support for complicated mashups of device firmware, cloud services, and phone apps. In this case, the company released a major app rewrite last May, resulting in usability issues and a cascade of serious bugs. Sonos could not simply revert to the old apps because upgrades to firmware and cloud services broke backward app compatibility. Among other consequences, this fiasco led to the departure of the company’s CEO.
Here’s my take from an edge/IoT perspective: Software-defined products, including vehicles (SDVs), create technical debt that extends throughout the product’s lifetime. Regression tests aren’t sufficient to catch real-world bugs and usability problems. (Last year’s CrowdStrike outage is another example of a catastrophic testing failure.) The lessons are simple:
Don’t bet the farm on internal tests. Experiential tests on deployed products with real users must be part of the plan.
Avoid forklift updates. If unavoidable, budget for significant testing, roll the update out slowly, and have a rollback strategy ready to go.
In another firmware-related incident affecting a software-defined product, Tesla is recalling more than 239,000 vehicles for a condition in which a short circuit on a computer circuit board causes problems, including loss of the rearview camera image. The fix is a software update that alters the power-up sequence to avoid a potential reverse-voltage situation that causes the short. (Transistors hate reverse voltage.) This is an excellent example of how SDVs can simplify maintenance, because the fix is an OTA update that is transparent to the customer.
After 18 months of preparation, the FCC announced the launch of the U.S. Cyber Trust Mark label for IoT consumer devices. The voluntary security and privacy testing program requires eligible products to pass compliance testing by accredited (FCC-recognized) labs. “Voluntary” is the operative word here. Consumers will only look for the mark if it becomes widely used on mainstream products. That might happen, but I’m not holding my breath.
AT&T’s new service guarantee will fundamentally change how carriers operate over time as consumers start to expect actual service-level agreements with their carriers — and compensation when things go wrong. I expect that Verizon and T-Mobile will follow suit if AT&T’s move successfully retains customers or wins them away from competitors.
Samsung is teasing its next-generation smartphone — the Galaxy S25 line — this week. It will be really interesting to see how Samsung’s new flagship devices perform in the latest benchmarks against the iPhone as well as other Android phones with Snapdragon 8 Elite processors. I am excited to see what new AI features Samsung introduces to differentiate itself from the other Android OEMs and even Apple.
Microsoft has introduced a new consumption-based pricing model for its 365 Copilot Chat alongside its existing subscription-based option. This model allows organizations to experiment with and scale AI usage according to their needs and budget. The consumption-based pricing facilitates controlled experimentation and proof-of-concept projects. However, potential inconsistencies in functionality and updates across the two models in this tiered system could create user experience disparities.
This disparity may be a strategic move by Microsoft to incentivize customers to upgrade. Still, the consumption model’s flexibility could also attract customers who desire the full feature set of Copilot along with consumption-based pricing. The flexible pricing strategy could potentially drive wider AI adoption, but ensuring a consistent and valuable user experience across both models will be crucial for Microsoft. The offering addresses many of the barriers to AI in the enterprise, including cost and adoption, and it promotes better security for companies by discouraging BYOAI.
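For organizations weighing the two models, the choice ultimately reduces to a break-even calculation. Here’s a rough sketch; both rates below are placeholder assumptions for illustration, not Microsoft’s published pricing.

```python
# Rough break-even between per-seat and consumption-based Copilot pricing.
# Both rates are placeholder assumptions; substitute Microsoft's actual
# published pricing before drawing any real conclusions.

ASSUMED_SEAT_PRICE = 30.00  # hypothetical flat per-user monthly license fee
ASSUMED_UNIT_COST = 0.01    # hypothetical metered cost per agent action

def breakeven_actions(seat_price: float = ASSUMED_SEAT_PRICE,
                      unit_cost: float = ASSUMED_UNIT_COST) -> float:
    """Monthly agent actions at which metered spend equals a seat license."""
    return seat_price / unit_cost

print(f"Break-even: ~{breakeven_actions():,.0f} metered actions per user per month")
# Below that line, consumption pricing is the cheaper on-ramp;
# above it, a full per-seat license starts to make more sense.
```

Under these assumed numbers, light users are cheaper on consumption pricing while heavy users justify the full license, which is exactly the upgrade funnel the tiered model appears designed to create.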
Google has announced changes to its Workspace offerings, integrating AI capabilities into its Business and Enterprise plans without requiring additional paid add-ons. Effective last week, this update includes AI assistance in various Workspace applications such as Gmail, Docs, Sheets, and Meet. The new features encompass Gemini Advanced for complex tasks and NotebookLM Plus for research assistance. By incorporating these AI tools directly into existing plans, Google appears to be lowering barriers to entry for businesses interested in AI. This approach, similar to Microsoft’s recent consumption-based model, could facilitate wider AI adoption and allow Google to demonstrate the value of its AI to customers. The strategy may encourage users to engage more readily with AI features within familiar applications, potentially leading to increased productivity and improved work quality. Google states that it has implemented security measures and compliance certifications for these AI features, addressing potential concerns about data protection and information access control.
CES 2025 has been over for a week now, and it’s quite clear looking back that lots of PC OEMs refreshed their lineups to take advantage of the latest chips from AMD, Intel, and NVIDIA, almost all of which focus on AI performance and experiences. It remains unclear whether AI applications will actually take hold this year, but it’s quite clear that they did not in 2024.
Miami University and Cleveland Clinic have created a partnership that will strengthen Ohio’s efforts to become a leader in quantum computing. The partnership will create Ohio’s first college quantum computing degree program. The collaboration will integrate Miami University with the Cleveland Clinic’s on-site IBM Quantum System One, the first quantum computer fully dedicated to healthcare. (Readers with long memories may recall the Forbes article I wrote a couple of years ago about the debut of that computer.)
Miami University will develop bachelor’s, master’s, and doctoral programs in quantum computing. Cleveland Clinic, in turn, will offer internships and research opportunities for Miami students. Aligning a quantum curriculum with actual healthcare applications will open a pipeline that will probably boost Ohio’s economy.
On a personal note, I’m especially glad to see Miami University move into quantum because my three grown daughters all graduated from that institution.
Last week, Microsoft launched its Quantum Ready program to alert business leaders that quantum computing has made significant progress over the past few years — and that they should get ready to take advantage of that progress. In 2024, several significant quantum breakthroughs and important pieces of research moved the technology forward. The field has gone from theoretical mathematical concepts to an emerging technology on the cusp of making major breakthroughs in multiple modalities. These modalities include superconducting, trapped ions, neutral atoms, photonics, and topological quantum computing.
Other factors have also helped improve quantum computing. One important advance is that quantum processors have gotten significantly better over the past five years; current quantum computers have higher-quality qubits, allowing computations that weren’t possible before. Microsoft’s initiative urges business leaders to get ready to harness the transformative potential of quantum computing coupled with AI.
Microsoft’s commitment extends to the global stage. This year, it is partnering with the United Nations, the American Physical Society, and others to celebrate the 2025 International Year of Quantum Science and Technology. This initiative commemorates a century of quantum innovation while fostering awareness of how quantum applications will revolutionize industries. By leading these efforts, Microsoft aims to empower organizations and communities worldwide to embrace the quantum future effectively.
Cisco recently announced its AI Defense platform, which is slated to be generally available in March. One of the challenges associated with securing algorithmic models is that they are not deterministic and can be easily compromised. As modern AI workloads move from the cloud to network edges, attack surfaces will be greatly expanded, making safety and security more difficult. Time will tell if Cisco’s approach is effective, but I believe that AI Defense has the potential to address AI security at scale with automated validation techniques that can dynamically adjust guardrails to an ever-changing threat landscape.
Nokia is making progress towards its goal of becoming an enterprise network services provider. The company has had challenges broadening its reach beyond the cellular market, but its innovation in delivering autonomous networks has great promise. Last week I published my insights on this topic in a Moor Insights & Strategy research paper.
Cisco Desk Pro Review: A Collaboration Device For Hybrid Work
https://moorinsightsstrategy.com/cisco-desk-pro-review-a-collaboration-device-for-hybrid-work/
Tue, 21 Jan 2025 15:00:11 +0000
This all-in-one desktop video collaboration device is designed to boost productivity in hybrid work environments. Although pricey, it delivers on that promise.
With the explosive growth of hybrid and remote work models in recent years, videoconferencing has become an indispensable tool for businesses. As employees and clients are increasingly dispersed, reliable and user-friendly video collaboration is essential. Companies have embraced a diversified approach to videoconferencing, with the majority employing three or more platforms. This multiplatform strategy allows organizations to leverage the strengths of different tools, but critically, it demands the ability to seamlessly transition among the different platforms on a single device for optimal flexibility and efficiency.
In this context, the Cisco Desk Pro is an all-in-one desktop video collaboration device designed to enhance productivity and connectivity in hybrid work environments. Its adaptability empowers organizations to choose the meeting platforms that best suit their needs while maintaining the flexibility to connect users on different services. With a connected calendar, the Desk Pro makes joining meetings on any platform incredibly easy. Users can join Webex, Microsoft Teams, Google Meet and Zoom meetings with one click.
The Desk Pro can also be deployed with a Webex-centric approach, taking advantage of the “Cisco Rooms” functionality to optimize Webex Meetings and provide advanced cognitive collaboration features. Alternatively, the Desk Pro can be configured as a dedicated Microsoft Teams Rooms system, delivering a native Teams experience with optimized features and interface. While this configuration prioritizes Microsoft Teams, it still allows users to join Webex meetings with full functionality.
How does the Cisco Desk Pro stack up to the needs of today’s hybrid worker? In this review, I will examine the Desk Pro’s features, design, performance and overall value, focusing on its suitability for today’s distributed workforce.
Technical Specifications
Display: 27-inch 4K (3840 x 2160) touchscreen with anti-glare coating
Camera: 4K Ultra HD with 12 MP resolution, automatic framing and presenter tracking
Audio: Integrated speaker system with three directional speakers and a subwoofer, plus an advanced noise-canceling microphone array
Other features: Integrated whiteboard, digital signage capabilities, proximity sensor
Design And Aesthetics
The Desk Pro has a sleek design suitable for both individual desks and conference rooms. The 27-inch touchscreen is prominent, providing ample space for video conferencing, content sharing and digital whiteboarding. The all-in-one design minimizes cable clutter and simplifies setup. Indeed, it’s clear that the system was designed with easy setup in mind: from opening the box to being fully set up took less than ten minutes once my company’s Webex administrator provided me with an activation code.
In my office, the Desk Pro serves as a third screen alongside my Apple MacBook Air and a Lenovo Qreator 4K UHD 27-inch monitor. With this setup, I can easily participate in meetings on the Desk Pro while accessing other applications and workflows on my other screens. I can also multitask directly on the Desk Pro, which is especially handy for in-meeting collaboration on whiteboards.
The Cisco Desk Pro (left) in the author’s home office. Photo: Melody Brue
User Experience And Performance
The Desk Pro offers a very smooth user experience. The touchscreen is responsive, and the Webex OS provides a familiar interface for organizations already using Cisco’s collaboration ecosystem. The 4K camera delivers excellent video quality, and the automatic framing ensures that all participants are visible. When the auto-brightness feature is engaged, the camera can be slow to adjust to sudden or drastic changes in lighting. However, this feature can also be toggled off, which I recommend if you are using key lights or bright lighting for a presentation or podcast. One standout feature is the camera’s ability to track the user’s movement. This is particularly useful on a long call or when presenting. If I move around within my office, the camera tracks me, allowing me to get up and stretch or stand during a presentation without adjusting the camera. Camera settings can also be adjusted manually for pan, tilt and zoom. A self-view screen in the top right corner allows users to make any adjustments before joining a meeting.
During the WebexOne event in October, I sat down with Snorre Kjesbu, senior vice president and general manager of collaboration devices at Cisco. He explained that the tracking feature minimizes fatigue during long meetings, allowing presenters to stand up and move around naturally. “The Desk Pro reflects our focus on user experience,” Kjesbu said. “It’s designed to help people work more effectively. The minimalist design keeps the technology out of the way, allowing users to stay focused. We believe the Desk Pro is a great example of how thoughtful design can improve the modern workspace.”
With its three directional speakers and subwoofer, the Desk Pro delivers excellent audio quality, making it easy to distinguish voices in large meetings. The advanced noise-canceling microphone array effectively captures voices and minimizes background noise, which is particularly useful in busy environments. Microphone settings include neutral noise removal to eliminate background noise and an “optimize for my voice” feature to remove background noise along with background speech—a handy setting in an open-seating environment or shared workspace.
Collaboration Features
I found that the Desk Pro excels in its collaboration capabilities. For instance, the integrated whiteboard does a good job of facilitating real-time brainstorming. In my experience, this feature significantly improves engagement during online brainstorming sessions compared to traditional screen sharing.
The wireless screen-sharing feature is particularly convenient, and the ability to share content from a laptop or mobile device also enhances productivity during meetings. The Desk Pro also supports digital signage, allowing the display to show company information or announcements when not in use for video conferencing.
IT Management And Security Features
The Desk Pro integrates with Cisco Control Hub, simplifying device management for IT administrators. This allows for remote monitoring, configuration and troubleshooting across multiple locations, streamlining deployment for IT staff and ensuring consistent performance for end users.
The Cisco Desk Pro incorporates robust security measures to protect data and ensure a secure collaboration environment. It encrypts data transmitted during Webex and Microsoft Teams meetings to safeguard information. Access control features, including single sign-on and role-based access control, restrict unauthorized use and maintain privacy. The Desk Pro also benefits from Cisco network security measures such as firewalls and intrusion detection systems, providing an additional layer of protection.
The Desk Pro Value Proposition
The Cisco Desk Pro is a premium device with a price tag to match. However, its features, performance and all-in-one design make it a strong option for businesses investing in high-quality meeting and collaboration tools. Its integrated display sets it apart from alternatives such as the Logitech MeetUp and the Poly Studio X30, which require users to purchase and connect a separate display. This means that users considering one of those alternatives need to factor in the cost of a compatible screen.
I also happen to use the Poly Studio X30 with a television that I already had in my office. I find the Studio X30 is really useful for participating in and viewing larger-scale meetings. And, there are advantages to having a separate screen. For instance, users can select a display that perfectly fits their space requirements and budget, whereas the Cisco Desk Pro comes the way it comes—and with a price tag of $5,495. (The smaller 24-inch Cisco Desk starts at $2,995, and the 15-inch Desk Mini starts at $1,995; the Cisco website details the full feature sets for each device.) The best choice depends on whether the user prefers the convenience of an all-in-one solution like the Desk Pro or the flexibility of a modular system.
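For anyone running the numbers on an all-in-one versus a modular setup, the back-of-the-envelope comparison looks something like the sketch below. Only the Desk Pro’s list price comes from this review; the videobar and display figures are assumptions plugged in for illustration.

```python
# Back-of-the-envelope comparison: all-in-one vs. modular videoconferencing.
# Only the Cisco Desk Pro list price comes from this review; the videobar
# and display prices are assumed placeholders. Check current street prices.

CISCO_DESK_PRO = 5495    # 27-inch all-in-one, list price cited above
ASSUMED_VIDEOBAR = 2200  # assumed price for a videobar like the Studio X30
ASSUMED_DISPLAY = 500    # assumed price for a 4K display (or $0 if owned)

modular_total = ASSUMED_VIDEOBAR + ASSUMED_DISPLAY
print(f"Modular setup (videobar + display): ${modular_total:,}")
print(f"Cisco Desk Pro (all-in-one):        ${CISCO_DESK_PRO:,}")
print(f"Premium for the all-in-one:         ${CISCO_DESK_PRO - modular_total:,}")
```

Under these assumptions the all-in-one carries a meaningful premium, so the buyer is really paying for integration, setup simplicity, and the touchscreen rather than raw hardware.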
The intuitive interface and all-in-one design of the Cisco Desk Pro make it a useful tool for improving productivity and communication. Unsurprisingly for a device from an enterprise IT vendor of Cisco’s stature, the security functions and the simplified management offered through Cisco Control Hub make life significantly easier for IT departments. Although the price may be steep, I believe the Desk Pro’s comprehensive capabilities will make it a worthwhile investment for many organizations that want to support a productive and connected hybrid workforce.
MI&S Weekly Analyst Insights — Week Ending January 10, 2025
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-10-2025/
Mon, 13 Jan 2025 22:05:05 +0000
MI&S Weekly Analyst Insights — Week Ending January 10, 2025. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
It’s no surprise that my colleagues and I spent much of last week focused on CES. In particular, Anshel Sag—who’s a heck of a device reviewer, besides being a savvy industry analyst—will be publishing a number of pieces this week covering the big PC OEMs, chip makers, and players in the XR industry. Many of my own thoughts from CES made it into Friday’s installment of The Six Five Podcast.
The Cisco Desk Pro (left) is a slick — albeit somewhat pricey — tool for getting more out of your video meetings. Photo: Melody Brue
Plenty of the announcements at CES are about eye-popping (or wannabe eye-popping) consumer devices, but Melody Brue’s review of the Cisco Desk Pro last week is a good reminder of the difference that high-quality enterprise tech can make for individual productivity. This reality is only going to be reinforced by the increasing adoption of AI agents in 2025 to augment the work of corporate employees, from the shop floor to the C-suite.
If you have a piece of new technology that’s changing the way you or your team work in 2025, I’d love to hear about it. What’s your favorite new gadget that’s moving the needle?
This week, Will is at the Cisco AI Summit in Palo Alto and Mel is attending Zoom’s virtual Work Transformation Summit. The rest of us are busy writing, researching, and advising clients. If there is anything we can help you with to start your year off strong, please reach out.
This past week, MI&S analysts have been quoted in multiple syndicated top-tier international publications including CIO, Computerworld, Fierce Electronics, Fierce Networks, InfoWorld, MIT Technology Review, TechTarget, Wired, and others. The media wanted our thoughts on AWS, CES, Dell, HPE, IBM, Intel, Nvidia, Oracle, WordPress, and of course AI, along with some 2025 predictions.
I was quite intrigued by the agentic blueprints that NVIDIA announced last week at CES. But it was not necessarily the use cases—which were pretty commonplace—that were the real story. Much more compelling was the vision of what agentic development could be. The first thing that stuck out was that these are partner-driven solutions. This is in contrast to what we have seen so far, which have been siloed and internally developed agentic solutions. With those, you basically have to use a homogeneous stack of technology to realize the value. And, yes, there is still a need to use the NVIDIA AI Enterprise platform to deploy the blueprints that were announced, but the simple thought of co-development is good to see—and hasn’t been highlighted enough.
Second, I am very interested in how NVIDIA is thinking about agentic AI in the physical world. That is what I consider a second leap from what we are seeing so far. Today’s agents are very much bound to a cloud or a platform. The first leap I am hoping to see is a leap to the on-premises compute world. This could mean collaboration between the AIs on devices such as an AI PC or an iPhone and the cloud (edge AI, so to speak). The second leap is the same idea, but extended to physical devices and robotics. Again, it’s refreshing to see NVIDIA paint such a visionary picture of the agentic world.
Over the past month I have been researching AI development platforms, and I have some research coming out on that very soon. But the deeper I have gotten into the topic, the more I’ve realized that each platform addresses different user roles with different priorities. It is almost as if each vendor started development from a completely different place, yet they all ended up close enough to each other that we now have a new category of solution. This is a very good thing early in a product lifecycle. With a broad base of solutions to choose from, the market will have a better opportunity to judge what ends up being the best use of the technologies.
To that point, I want to mention that the newly released Azure AI Foundry from Microsoft is, to my mind, the first of these platforms to really take on the IT management aspects of the problem set. And while it may not have all of the very coolest developer features we see from the competition, it does highlight an under-represented set of requirements that will be essential for enterprise deployment and success.
Last week a member of the media reached out to me to discuss a topic that got me thinking—how value is being redefined in the age of generative AI. Here’s an example: for as long as we can all remember, having lots of data was critical to drive decision making. And if you had good and exclusive data, that was highly valuable. But now that AI can assemble and infer data so quickly and so well, has the value moved away from mere data possession towards reasoning and prediction? Or will people now go to greater lengths to hoard the best data? I am leaning towards reasoning winning the day, but I do think it’s a great topic for reflection. (I’ll let you know when the article comes out.) At the very least, I expect that we will see a more distinct break between data and reasoning in the business world—like we already do between training and inference within AI.
Sam Altman, CEO of OpenAI, made a very interesting post on his personal blog about when he believes OpenAI could achieve artificial general intelligence (AGI), an advanced level of AI that can perform at human levels. It was only two years ago that OpenAI made the historic launch of ChatGPT. Just two months after its release, ChatGPT had 100 million active users, heralding AI’s potential as one of the most powerful technologies ever created. Beyond that, the launch transformed OpenAI from a small research lab into a major AI player. Today, an advanced version of the GPT platform handles more than one billion queries daily.
In the blog post, Altman shared personal anecdotes, including his unexpected firing from the company and the governance issues that followed, offering lessons learned in leadership and company management. The rapid growth of OpenAI required him to build the company culture and infrastructure almost from scratch. That led to both successes and setbacks. Altman admitted to his own failures in governance, particularly around his firing. In another lesson learned, he emphasized the importance of having diverse and experienced board members.
Now, here’s the most interesting part of his post. Altman predicts that AGI will be achieved in 2025 in the form of AI agents that will impact workforce productivity. Altman said, “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Moving beyond AGI, Altman talks about focusing on superintelligence, envisioning a future where AI could dramatically enhance human capabilities and societal prosperity.
From my perspective, the trajectory towards incrementally more advanced AI capabilities looks doable. However, AGI needs a level of human-like reasoning and adaptability that can be applied over a wide range of tasks. That is a very complex goal. We are not there yet by the strict definition of AGI, but AI agents in the workforce with a limited form of AGI might be doable this year.
I believe that superintelligence is not possible at this stage or at any time within the next several decades, if ever. While AI has made tremendous advancements over the past decade, superintelligence involves numerous unknowns, including abstract reasoning, creativity, conscious thought, and problem-solving at the highest level. None of those is on the horizon yet—let alone all of them.
Oracle has launched the Exadata X11M data management platform, with a focus on driving extreme performance across three key workloads—AI (vector search), online transaction processing (OLTP), and analytics. Exadata is a combination of tuned hardware and software that enables organizations to accelerate performance of these key workloads while enabling greater levels of consolidation in the datacenter.
I like what Oracle is doing. For decades, Oracle Database has been the data management platform of the enterprise (97% of Fortune 500 companies run Oracle). It only makes sense that the company would take its IP and better enable core workloads that power the enterprise. The numbers are quite compelling across the board: Oracle claims vector search performance increases of up to 55% on storage servers and 45% on compute servers, along with 25% faster OLTP and analytics performance relative to the X10M platform. And this performance is delivered on-prem, in any major cloud, or in hybrid environments.
It can be very difficult for legacy infrastructure companies to pivot and maintain relevance as the market shifts around them. Oracle is unique in how it has smartly pivoted and taken full advantage of its footprint in enterprise data.
The data protection software market grew in 2024, driven in no small part by advanced cyberthreats and stricter regulations. Unsurprisingly, AI tools became much more important for automating governance, ensuring compliance, and detecting threats. Vendors such as Cohesity, Commvault, Rubrik, and Veeam Software improved their market presence through acquisitions, partnerships, going public, and adding new features to their platforms. Observability tools also progressed, integrating system monitoring with data protection for proactive solutions. Read more in my latest Forbes article about what I see ahead for data protection in 2025.
Zoho Analytics has grown into a full-fledged, AI-driven business intelligence platform. Its September 2024 release included more than 100 updates, with a big emphasis on expanding access to data analysis across different job functions. Considering its advancements in AI and machine learning, Zoho Analytics now competes with established BI solutions, enabling a broad range of users in different industries to make more informed decisions. Check out the recent MI&S Research Brief about Zoho Analytics from Melody Brue and me for more.
Extreme Networks launched its Extreme Platform One in early December. Platform One is positioned to allow IT professionals to manage and secure networks faster and more efficiently. The company claims that the offering has been developed based on customer feedback and aims to unify connectivity experiences with a single composable workspace and high degrees of AI-powered automation, and to deliver a simplified licensing structure. I believe that when Platform One becomes available in the second half of 2025, it will allow Extreme to compete with the likes of Cisco, HPE, and others more effectively given its historic focus on providing commodity connectivity infrastructure.
At the National Retail Federation’s show, SAP rolled out some new features for the retail industry. These include the SAP S/4HANA Cloud Public Edition, designed for retail, fashion, and related businesses, as well as an AI-powered shopping assistant. The company also shared plans for a loyalty-management solution for retailers and consumer goods companies, which is set to launch in late 2025. The updates are geared toward helping retailers work more efficiently and better connect with their customers.
In 2025, AI agents are expected to change the game in retail by enabling personalized customer experiences, flexible shopping options, and sustainability initiatives. At NRF, Microsoft highlighted tools such as Copilot and Dynamics 365 ERP agents that can handle routine tasks, improve operations, and make real-time decisions. This gives employees more time to focus on what matters most while improving efficiency, reducing costs, and helping build relationships with customers and suppliers.
Key statistics from Adobe’s 2024 Holiday Shopping Report reveal significant trends in online retail during the holiday season. Online retail spending reached a record $241 billion, representing an 8.4% increase compared to 2023. Additionally, spending on buy now, pay later (BNPL) options grew substantially, exceeding $18 billion and peaking at $993 million on Cyber Monday, which set a new single-day record. Mobile revenue accounted for 53.2% of online shopping, totaling $128 billion. This shift towards mobile spending and the increasing popularity of BNPL options highlight changing consumer preferences in digital payments and financing.
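A quick sanity check, using only the numbers quoted above, shows how the pieces fit together:

```python
# Sanity-checking the holiday figures cited above, using only the
# numbers from Adobe's report as quoted in this paragraph.

total_online = 241e9   # total online holiday spending ($241 billion)
mobile_share = 0.532   # mobile's share of online revenue (53.2%)
bnpl = 18e9            # buy now, pay later spending (over $18 billion)

print(f"Implied mobile revenue: ${total_online * mobile_share / 1e9:.0f}B")
print(f"BNPL share of total online spend: {bnpl / total_online:.1%}")
```

The implied mobile figure (about $128 billion) matches the report, and BNPL already accounts for roughly one in every thirteen online holiday dollars.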
Indeed has released its 2025 U.S. Jobs & Hiring Trends Report, which includes interesting data points and trends for the workplace and workforce in the new year. Two significant trends stood out to me as poised to reshape the workforce in 2025, presenting challenges and opportunities for businesses and workers alike.
1. Demographic shifts and labor shortages: The U.S. is experiencing a decline in its prime working-age population, a trend with profound implications for labor supply. This demographic shift suggests that future workforce growth could hinge on immigration, potentially leading to persistent labor shortages across various sectors. Companies may need to reevaluate their talent acquisition strategies to focus on upskilling existing employees, embrace remote work to access wider talent pools, and implement aggressive retention initiatives.
2. The dual nature of AI: Artificial intelligence is rapidly transforming the workplace, potentially automating existing jobs while creating new roles. While estimates suggest that AI could automate millions of jobs, it’s also projected to generate millions of new positions requiring a blend of technical expertise and uniquely human skills such as empathy, creativity, and critical thinking. This duality underscores the growing importance of adaptability and continuous learning for workers at all levels.
The convergence of these trends presents a complex landscape. A shrinking workforce may accelerate AI adoption to address labor shortages, potentially increasing productivity but also raising concerns about job displacement. To thrive in this evolving environment, businesses and individuals must proactively adapt, embrace learning, and cultivate a workforce equipped for the demands of the future.
After last year’s CES, I predicted that Matter would reach its tipping point in 2025, becoming the preferred connectivity standard for new smart home product designs. I’m doubling down on that prediction this year because Matter ecosystems (platforms) are maturing, and consumer adoption is finally taking off.
1. Matter ecosystems: Certifying products and developing product-specific apps are becoming much easier.
Easy product certification — The “Works with” compatibility programs from Apple, Google, and Samsung have agreed to accept Matter interoperability test results. Apple is already accepting these lab results, and Google and Samsung plan to do the same later this year. What about Amazon? Stay tuned. My take: This announcement validates Matter’s “universal interoperability” brand promise and encourages more device makers to get on board. Meanwhile, the economics are compelling—one interoperability test replaces three or four.
Easy app development — As promised earlier this year, Google is opening up Google Home as a developer platform. The company just announced a new set of Matter Home APIs for Android developers, with iOS support coming in a few months. These APIs link partner apps with Google Home hubs to control devices and automation experiences. The apps connect directly to the Google Home runtime package on local, on-premises hubs. The runtime controls Matter devices without a round-trip to the cloud, reducing latency while improving reliability and privacy. Google Home’s installed base is over 40 million hubs, including Nest, Chromecast, Google TVs, and some LG TVs. My take: This is a big deal. Today, Matter standardizes connectivity, but CE manufacturers often require product-specific features at the ecosystem level. Creating new ecosystems is complicated and costly, and consumers don’t want a separate ecosystem for each product, so Google is on the right architectural path here. Google Home hubs connect local devices without round trips to the cloud, and APIs let partners extend the ecosystem with product-specific features and experiences. Other ecosystems (Apple, Amazon, and Samsung) already have comparable APIs, and could add on-premises control logic to their hubs. I hope ecosystem companies consider standardizing APIs or at least using similar design patterns.
2. Matter products: The tech news outlets will review all the new Matter products that debuted at CES, but here are my short takes on a few that caught my eye.
Resideo (Honeywell Home) announced the Honeywell Home X2S Matter-enabled smart thermostat ($79.99 MSRP). My take: The low price point proves that adding Matter is cost-effective.
GE unveiled two new wall-mounted Matter-based “Cync” dimmer switches with several interesting innovations, including single-device three-way circuits ($44.99 and $25.99). My take: It’s great to see major consumer brands support Matter with innovative, mainstream, reasonably priced products.
LG’s over-the-range microwave oven with a 27-inch touchscreen and full Matter support created considerable interest at CES. It’s a smart TV, Matter hub, Thread border router, and home control panel. Oh yeah, it also microwaves food and has three cameras to show it cooking. This product is part of an industry trend to use touchscreens as the UI for appliances, from light switches to washing machines. My take: Some analysts dismiss this trend as silly, but we should take it seriously. LCD panels are inexpensive peripherals for smart appliances, so the question isn’t whether to use them, but how to use them. For instance, I see a rough road ahead for CE companies that envision these screens as advertising billboards or sales tools.
Aqara is going all-in with dozens of Matter products and variants. Examples include control panel hubs, dial-based touchscreen controllers, touchscreen switches, light switches, dimmer switches, presence sensors, climate sensors, a doorbell camera, and a Matter hub. My take: Aqara is beating established brand names to the punch with a broad Matter product portfolio.
Locks — Several companies introduced innovative Matter-enabled smart locks. Schlage’s first Matter product is the Sense Pro Smart Deadbolt. It uses UWB for hands-free unlocking. ULTRALOQ’s Bolt Fingerprint and Bolt Mission locks have Matter support, and the latter has UWB spatial awareness. My take: I’m pleased to see house locks get some of the great features we’ve had in car locks for years. I’m also bullish on UWB.
3. Industrial automation: There were hundreds of industrial announcements at CES. Here are three examples.
NVIDIA — For IIoT and edge tech, the quote of the week was from Jensen Huang: “The ChatGPT moment for general robotics is right around the corner.” He defined three kinds of robots that require no special accommodations to put them into service—agentic (because they’re information workers), self-driving vehicles (because roads are already in place), and humanoid (because they fit directly into our world). I think enterprise and industrial operations technology is a fourth AI embodiment. Industrial IoT systems are increasingly autonomous and adaptive but lack the uniform connectivity and interoperability needed to act “robotic.” This is the new definition of industrial IoT—enabling robotic physical infrastructure.
NXP — The company has agreed to acquire TTTech Auto in an all-cash transaction valued at $625 million. NXP plans to integrate TTTech Auto’s MotionWise software into its CoreRide platform, accelerating the shift from hardware-based designs to software-defined vehicles (SDVs). TTTech stands for time-triggered technology, a set of techniques for synchronizing and scheduling events across distributed systems. My take: This savvy acquisition ensures the CoreRide platform can use standard networks for tightly timed, safety-related distributed automotive applications. CoreRide and TTTech technologies could also apply to manufacturing and other industrial applications, but NXP hasn’t confirmed that.
Ceva-MediaTek collaboration — Imagine wearing VR headgear, turning your head, and the audio space remains fixed relative to the 3-D video space. Ceva’s RealSpace immersive spatial audio integrates this and other advanced audio processing techniques into MediaTek’s Dimensity 9400 mobile chipset. My take: Locking the audio space to the virtual visual world is very cool, and not just for gaming. For instance, spatial audio adds realism to industrial digital twins.
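The core trick behind world-locked audio is easy to illustrate: counter-rotate every sound source by the listener’s head orientation so the source stays fixed in room coordinates. The sketch below is my own yaw-only simplification of the general technique, not Ceva’s implementation, which adds full 3-D orientation tracking and HRTF rendering.

```python
import math

# Yaw-only sketch of world-locked spatial audio: counter-rotate each
# source by the head's rotation so the sound stays fixed in the room.
# This illustrates the general technique, not Ceva's RealSpace itself.

def world_to_head(source_xy, head_yaw_rad):
    """Rotate a world-space source position into head-relative coordinates."""
    x, y = source_xy
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * x - s * y, s * x + c * y)

source = (0.0, 1.0)  # a sound 1 m directly ahead, in room coordinates
for yaw_deg in (0, 45, 90):  # head turning counterclockwise (to the left)
    hx, hy = world_to_head(source, math.radians(yaw_deg))
    print(f"head yaw {yaw_deg:3d} deg: source at ({hx:+.2f}, {hy:+.2f}) in head frame")

# As the head turns left, the fixed source drifts to the listener's right,
# which is exactly the cue that keeps the audio scene anchored to the visuals.
```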
IonQ recently completed its acquisition of Qubitekk, a quantum networking firm. The acquisition provides IonQ with advanced networking technology and a large number of new patents, bringing IonQ’s portfolio to over 600 patents. It also acquired an important networking asset in Qubitekk’s EPB Quantum Network, the first commercial quantum network in the U.S. That will enhance IonQ’s quantum networking capabilities and remote ion-ion entanglement. The integration of Qubitekk’s technology will likely provide IonQ with faster quantum network deployment, which will enhance its secure communications and distributed computing capabilities. This should push IonQ into a leadership position in quantum networking, which could result in new partnerships and/or contracts. Combining IonQ’s quantum expertise with Qubitekk’s networking experience could result in significant advancements in security and computational power. This acquisition is strategic for IonQ, given its dependence on networking for how it plans future scaling of qubits.
Recent negative news related to the Palo Alto Networks Expedition firewall migration tool may be overblown. The tool was offered as a free utility to migrate configurations from third-party firewalls to Palo Alto Networks’ next-generation firewall platform, but it was never intended for production deployments. There is no evidence of active exploitation, and even though the company retired the tool last year, it has issued patches and provided production migration tools to its customers.
In 2025, sports tech is sure to keep evolving as part of the ongoing transformation of how fans experience games and connect with their teams. For example, the platform Cosm and Meta’s Xtadium app are bringing sports into virtual reality. Meanwhile, streaming services—as we saw in 2024 with Peacock during the Summer Olympics, Netflix with boxing and the NFL, and AWS with the NFL—are expected to expand with AI features that include personalized highlights and real-time stats. This tech is also branching into other areas of entertainment and music, with AI shaping everything from songwriting to virtual concerts and even influencing events like the Grammys. We can expect platforms like TikTok and YouTube to continue blending sports, music, and entertainment, giving creators and fans new ways to connect and engage.
Last week, Dell announced substantial upgrades to its AI PC portfolio, highlighting enhancements in performance and sustainability. This initiative reflects the increasing importance of sustainability in business, a trend expected to continue influencing industry strategies through 2025. Dell’s approach includes implementing circular design principles, such as modular components and greater use of recycled materials, to extend product lifecycles and minimize e-waste. The company’s initiatives to improve energy efficiency, battery life, and repairability likewise underscore its commitment to addressing environmental concerns while catering to the performance demands of the AI PC market.
AT&T recently announced a customer guarantee for consumers and small businesses that use its wireless and fiber networks. Any customer who experiences a fiber outage of 20 minutes or more or a wireless outage of 60 minutes or more will receive compensation in the form of a billing credit. Additionally, the company is setting a goal for its customers to reach a call-center technical expert within five minutes or receive a callback at a chosen time, as well as a commitment to send a field technician the same day or next day for unresolved issues. AT&T Guarantee is a significant move for the operator, given that it’s the first of its kind for consumers, and I expect many of AT&T’s competitors will respond with similar commitments.
MI&S Weekly Analyst Insights — Week Ending January 3, 2025
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-january-3-2025/
Mon, 06 Jan 2025 18:00:42 +0000https://moorinsightsstrategy.com/?p=44775MI&S Weekly Analyst Insights — Week Ending January 3, 2025. A wrap up of what our team published during the last week.
Welcome to the annual tech trends edition of our Analyst Insights newsletter. 2024 was a year of rapid advances and unexpected developments across the technology landscape, and 2025 promises to bring even more surprises. As we embark on a new year, we’ve gathered insights from all of our MI&S analysts across their specialty areas to provide you with an overview of the key trends that shaped the past year and what our experts anticipate for the year ahead.
It was a pleasure working with you to navigate the transformative trends of 2024, and we look forward to providing valuable insights into the forces shaping the technology landscape in 2025 and beyond.
As always, if there is anything you would like to discuss as you plan for the year ahead, please reach out. Many of us will be in Las Vegas for CES next week—we’d love to connect with you there!
Since our last newsletter, MI&S analysts have been quoted in top-tier international publications including OpenTools and Yahoo Tech. Reporters wanted our thoughts on AWS, Google Pixel 9, Nvidia, and smartwatch and wearable trends in 2025.
My biggest surprise of 2024 was learning how developers have embraced AI assistance. Developers are a smart and often skeptical group of people. But time and again, I heard stories about devs paying out-of-pocket for assistant technologies to speed up their work. I expected more cynicism about AI’s ability to help with coding — which tells me that the technology must be pretty good.
I have two predictions for 2025:
Agentic development will continue to be big in the first half of the year, especially since we now have some highly viable agentic development platforms including Bedrock (AWS), AI Foundry (Azure), and Agentspace (Google). I also expect to see non-cloud competitors to these platforms this year. (Red Hat, can you hear me??)
AI governance and controls will be a massive challenge. We are already seeing technologists grapple with the implications of AI usage and apps. But once line-of-business professionals get comfortable with pervasive AI use, we will see IT and legal departments flex their muscles in a meaningful way.
I believe that in 2025, 5G will become an accelerator for AI and gain more prominence as a key enabler of it. While many vendors talk about edge AI and running models on devices, the reality is that many models simply cannot run on the device, so hybrid AI will remain the norm for a long time to come. The only way for hybrid AI to work effectively is with an always-on connection; that is easy for smartphones but more challenging for PCs, and we might actually see 5G PCs grow this year as a result. Additionally, XR is an excellent interface for AI, and—conversely—AI is an accelerant for XR capabilities and growth. I believe we will see the new Android XR spatial OS serve as a proof point for that interconnection in both MR and AR products and solutions.
AI is a snowball that gathers more material and grows bigger and more capable every day. In fact, it is accelerating in functionality and scope. Every morning when I open my inbox, it is filled with news of larger models, new features, new and larger funding rounds, new startups, better reasoning, and so on.
I was curious about how much information is distributed about AI on a daily basis. I thought Google’s Gemini search might give me a general idea, but after spending a few paragraphs explaining why it couldn’t offer a hard number of publications about AI, Gemini said, “However, I can offer some informed speculation: Considering the immense volume of information Google indexes, the widespread interest in AI, and the constant stream of new content, it’s safe to say the number of publications is extremely large. We’re likely talking about millions, perhaps even tens of millions, of pages.”
So without using any specific numbers, here’s how I see AI’s growth. Most everyone has seen videos comparing the Earth to the size of other objects in the universe. It starts like this: a basketball-sized Earth is shown next to a stadium-sized Sun. Then the giant star Betelgeuse appears on screen, dwarfing the Sun and the Earth; on this scale, Betelgeuse is as big as a city block. But the really big objects are yet to come. When the massive star VY Canis Majoris appears, Betelgeuse shrinks in comparison: what was once a giant star is now an insignificant sandbox next to an entire beach. Finally, a supermassive black hole covers the screen. Relative to it, the Earth and Sun are nearly invisible specks.
Today, AI is like Earth in the video, but it will grow to the size of the bigger stellar objects over time. At least that’s how I envision its long term growth—AI of today is a speck compared to what it will likely become in 25 or 50 or 100 years. Let’s hope humanity has the wisdom and ability to use it wisely.
In 2024, despite the rise of AI, customer service saw a surprising trend: a renewed emphasis on human interaction. 77% of customers said they preferred an immediate connection with a person, and 81% would rather wait for a live agent than interact with a bot. While businesses strategically blend AI with human agents to enhance efficiency, customers overwhelmingly prefer connecting with real people for a more nuanced and practical experience. In 2025, AI-powered voice data analysis will become crucial, enabling hyper-personalized experiences by detecting emotions and predicting needs in real time. While omnichannel remains necessary, companies must prioritize voice interactions and leverage AI to extract valuable insights from this channel.
Meanwhile, CRM trends in 2024 revealed a shift towards user-friendly, self-service solutions, empowering businesses of all sizes. This trend will continue into 2025, with “CRM à la carte” and low-code/no-code platforms allowing for easier customization and simplified data entry. To combat data silos, companies are increasingly unifying teams under a single CRM system, streamlining communication, reducing errors, and enhancing data-driven decision-making.
2024 saw a dynamic in the compute silicon space that somewhat parallels the storage market: a bifurcation of silicon along AI versus non-AI lines. Bespoke silicon for bespoke workloads and functions has existed for as long as semiconductors themselves. However, AI is different. The needs of AI have led to a renewed focus on semiconductors and on startups such as Cerebras, Tenstorrent, Untether AI, and many others. Further, the market has accelerated growth in the custom silicon space, as companies like Broadcom and Marvell have benefited greatly from the needs of hyperscalers, which have very specific computational and power requirements around training, inferencing, moving, and securing data. So while NVIDIA has commanded the AI silicon market overall, it has been somewhat surprising to see how much VC funding has gone into the silicon startup space.
I believe that 2025 will see this trend continue. AI inference will take center stage alongside AI training, with increasing focus on the many startups serving this market. Additionally, smaller functions along the AI data journey that currently add significant latency will spawn a new wave of silicon innovation to drive better performance and security. As in recent years, I expect to see significant VC funding going to seed startups that help in the collection, storage, preparation, and movement of data in the AI pipeline.
As in the silicon market, 2024 saw something of a bifurcation of the storage market as high-performance storage vendors such as VAST Data, Weka, and DDN pivoted to address the AI data pipeline and data management. While storage is a critical element of the AI equation, gathering, cataloguing, and readying enterprise data is where the complexity of AI becomes concrete for business and IT leaders alike as projects move from conceptual to operational. The early-mover status achieved by VAST and other high-performance computing storage players is logical, as these companies have long focused on the more advanced functions of storage systems for the sake of accelerating workload performance. This is why we saw VAST’s valuation skyrocket through 2024, at the same time that the profiles of Weka and DDN rose considerably.
I believe 2025 will see the storage market shift in both technology and messaging as these upstarts continue to increase awareness, share, and valuation. We have already seen NetApp begin its evolution, and both Dell and HPE have quietly made moves that better position their respective portfolios. While AI-washing in terms of messaging is no surprise (because every vendor tries to exploit market trends), the investment in technology being made by these companies is the real tell; it signals that they see AI as fundamentally shifting enterprise IT organizations, in terms of both operations and technology consumption.
One partial outlier in this equation is Pure Storage. While the company continues to broaden its support for enterprise AI through its portfolio, it has not lost sight of the enterprise storage needs that exist outside of this one significant workload. The company seems to be taking a more measured approach, allowing the market to come to it and meeting customers where they are. I believe it is this approach that has led the company to consistently post roughly 10% year-over-year growth in its quarterly financial reports.
The enterprise application market experienced significant growth from 2023 to 2024, with the market size increasing from $335 billion to as much as $363 billion, depending on the source. This represents an approximate growth rate of 8.4% year over year. However, despite this growth, customer dissatisfaction with enterprise software vendors rose in 2024. This dissatisfaction primarily arose from perceived unfair pricing strategies and a lack of clear value delivered by vendors. This indicates a market shift in which customers are demanding greater transparency and a better ROI from their software purchases.
In 2025, I expect deeper integration of AI within application ecosystems. At the same time, customer companies will prioritize trust and demonstrable value, seeking clearer ROI and more flexible pricing models from vendors. Unlike the 2024 emphasis on feature expansion, in 2025 we will see more focus on efficiency, interoperability, and user-centric design. This shift reflects a maturing market in which vendors must adapt to the needs of discerning customers who have multiple buying personas and significant budget constraints.
ERP systems got a shake-up in 2024 with AI and vendor modernization efforts. But what really stood out was the shift in mindset—enterprises realized adopting ERP isn’t just about new tech. It’s also about getting their data organized and making sure their teams are ready for change. Functionality matters more than features. If anything, 2024 made it clear that ERPs aren’t just about keeping the lights on—they’re a critical tool for businesses to grow, adapt, and stay competitive.
The payoff for those who got it right was obvious. Modern ERPs centralize data across departments, automate routine tasks, and deliver sharper insights with improved analytics. Cloud technology—pushed hard by vendors—has made these systems more flexible, mobile, and user-friendly, while also being easier for vendors to support. Even so, with an estimated two-thirds of enterprises still using on-premises setups, there’s an emphasis on moving to hybrid or fully cloud-based systems.
Looking ahead, I see data strategies and ERP systems shaping up to be even more important in 2025. Cloud adoption will keep growing because it’s flexible, cost-effective, and lets enterprises keep their locations connected while enabling their people to work from anywhere. Managing data will still be a big deal, with businesses focusing on keeping the data clean, secure, and well-governed, with better tools for protecting sensitive info.
In many industries, supply chain management will also stay front and center. With IoT increasing in functionality and especially getting better at providing data, ERPs will get better at real-time tracking and analytics, making it easier to handle inventory, logistics, and demand planning. (See Bill Curtis’s “IoT and Edge” entry in this newsletter for more on how this part of the data landscape is changing.) I expect pricing to move toward consumption-based models (versus user-based) to make it easier to bring more employees onto each system.
We’ll also continue to see ERP systems designed specifically for different industries. But here’s the thing: all this technology only works if businesses manage change well. Teams need support to adapt to new workflows, or it won’t stick. And finally, sustainability will be a bigger part of the picture, with ERPs helping businesses track environmental goals and ethical sourcing.
In 2024, operational data was the unsung hero of digital transformation. While LLMs, generative AI, and agentic AI captured headlines, operational technologies (OT) quietly emerged as critical enablers of enterprise digital transformation. Enterprises with significant physical assets discovered that fusing OT and IT data into a company-wide, multimodal, real-time data estate transforms AI-enhanced ERP, SCM, and BI applications from reactive to proactive. (Robert Kramer’s entry on ERP and SCM elsewhere in this newsletter gives more perspective on trends affecting those software vendors and their customers.) This profound change upgrades decision-making, enhances process efficiencies, and provides a holistic context for advanced industrial automation. The rapidly growing ROI for OT-IT integration projects creates insatiable demands for OT data.
However, despite compelling integration business cases, most OT data remains inaccessible due to the complexity, cost, and security risks of connecting OT systems with mainstream enterprise applications. This is the OT-IT gap—the chasm between the uniform, managed world of IT and the heterogeneous, chaotic world of industrial IoT (IIoT).
Motivated by AI-driven demand for operations data, enterprise software suppliers that are scrambling to find more efficient ways to bridge the OT-IT gap are adopting a straightforward “data first” approach. Instead of trying to manage devices from end to end, just grab the data. Replace complicated, costly, hard-coded, application-specific device-to-cloud connectivity and device management solutions with simple cloud interfaces for data, events, and status. This approach provides immediate access to IIoT machine data and enables OT software to evolve independently from cloud-native IT systems. Multimodal AI applications can use many types of unstructured IIoT data as is, further reducing device-side software complexity.
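To make the “data first” idea concrete, here is a minimal device-side sketch in Python, assuming an MQTT broker as the cloud ingestion endpoint and the paho-mqtt client library; the broker host, topic hierarchy, and sensor fields are illustrative placeholders, not any vendor’s actual interface:

import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # hypothetical cloud ingestion endpoint
TOPIC = "plant7/line3/press/telemetry"  # hypothetical topic hierarchy

# paho-mqtt 1.x constructor; 2.x adds a callback-API version argument.
client = mqtt.Client()
client.connect(BROKER_HOST, port=1883)
client.loop_start()

while True:
    reading = {
        "ts": time.time(),
        "temperature_c": 71.4,   # would come from a real sensor in practice
        "vibration_mm_s": 2.9,
        "status": "running",
    }
    # Publish data, events, and status as plain JSON; the cloud side decides
    # how to store, correlate, and feed it to AI applications.
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(10)

The point of the pattern is that the device’s only job is to emit data, events, and status; everything else (storage, correlation, AI consumption) lives on the cloud side and can evolve independently.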
Recent announcements from AWS, Google, Honeywell, Microsoft, Qualcomm, and other major cloud frameworks and ERP suppliers confirm this trend. The goal is clear: feed the rapidly growing market for AI-enhanced business transformation with massive amounts of OT data via standard protocols and simple APIs. In other words, simplify getting OT data from IIoT devices.
For 2025, I’m watching three enterprise edge trends and one consumer trend.
CSP and ERP frameworks simplify and accelerate OT data collection, processing, and correlation for AI-powered enterprise applications. AI is now IIoT’s “killer app.”
IIoT devices transition from customized, end-to-end mashups to scalable platforms supporting multiple enterprise frameworks via simple interfaces.
Middleware companies fill the gaps, providing industry-specific connectivity, data, edge analytics, and device management services.
For “smart home” consumer applications, 2025 is the year Matter reaches its tipping point, with significant design wins and increased adoption. Other vertical industries are carefully watching Matter’s standardization efforts, learning from its successes—and mistakes.
In 2024, platforms such as Zoom, Microsoft Teams, and Webex evolved into essential all-in-one communication and collaboration business tools, integrating features including whiteboarding, collaborative documents, and project management functions. This trend towards unified business platforms will accelerate in 2025, combining previously separate tools. Expect deeper integrations, such as what we’ve seen this year with Adobe Express within Box and the ability to create Jira tickets in the Grammarly extension.
Inclusivity will also be a significant focus in 2025, with accessibility features such as real-time translation and closed captioning becoming standard. Companies such as Ava are leading the way with tools specifically designed for individuals who are deaf or hard of hearing, while companies such as Google continue to prioritize accessibility.
As hybrid work persists and collaboration becomes more complex, security concerns remain paramount. Organizations will demand collaboration tools with robust security features, including end-to-end encryption and compliance with evolving data protection regulations.
With Qualcomm’s introduction into the PC market as a chipset vendor, we’ve seen new levels of competition in the space—something that I don’t think we’ve seen in probably the last 25 years. While the introduction of Copilot+ PCs with Qualcomm’s Oryon-based Snapdragon X Elite processors wasn’t necessarily the smoothest (lots of Arm app compatibility needed to get worked out), it did present an alternative offering that pushed the incumbents to accelerate their roadmaps and improve their execution; as a result, the PC market is now far more competitive and faster paced. I expect that this trend will continue to accelerate in 2025 as PC OEMs continue to negotiate with the chip vendors for better products and pricing, which I believe will ultimately benefit the consumer and accelerate the uptake of the AI PC.
Over the past five years, quantum computing has made significant progress—with 2024 a big contributor to that progress. IonQ has become a public company. IBM has created a roadmap with corresponding technologies to push superconducting qubits past the 1,000 mark. Atom Computing has firmed up its neutral-atom technology and has begun pushing aside barriers with its own 1,000-qubit machine. Quantinuum’s H2 quantum processor has achieved a remarkably high quantum volume. Microsoft is advancing topological qubits, and its collaboration with Quantinuum has demonstrated reliable logical qubits. Finally, the ecosystem has made several breakthroughs in quantum error correction. In fact, Google’s latest Willow chip actually reduces the error rate as more qubits are added.
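For readers who want the underlying scaling law, the textbook expectation for a distance-d surface code is that, once the physical error rate p is below the threshold p_th, the logical error rate per cycle falls exponentially with code distance:

\epsilon_d \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{(d+1)/2}

so increasing the code distance by two (i.e., adding qubits) divides the logical error rate by a roughly constant factor \Lambda = \epsilon_d / \epsilon_{d+2} > 1. Willow’s significance is demonstrating this below-threshold regime on real hardware; here A is a fitting constant, and the exact exponent convention varies slightly across the literature.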
In 2025, the trajectory of quantum computing will continue to be shaped by technological breakthroughs, increased investment, and the integration of quantum into broader technological ecosystems. We’ll also see IonQ begin to network its quantum processors together for increased power and scaling. IBM will continue to move forward with advancements in post-quantum cryptography. (More on that in this article.)
I also expect to see some early movement in using quantum for financial applications, such as applying QAOA (the Quantum Approximate Optimization Algorithm) for portfolio optimization and possibly some real-time analysis. JPMorgan Chase has a large portfolio of financial operations where quantum computing could replace parts of classical systems.
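For those curious about the shape of the math, a simple portfolio-selection problem can be written as a QUBO over binary variables x_i ∈ {0,1} (hold asset i or not), minimizing risk minus return under a budget penalty:

\min_{x \in \{0,1\}^n} \; q\, x^{\top} \Sigma x \;-\; \mu^{\top} x \;+\; \lambda \left( \sum_i x_i - B \right)^2

where \mu is the vector of expected returns, \Sigma the covariance matrix, q a risk-aversion weight, B the number of assets to hold, and \lambda a penalty weight. QAOA approximates the minimizer by mapping each x_i to a qubit and alternating cost and mixer unitaries,

|\gamma, \beta\rangle = \prod_{k=1}^{p} e^{-i \beta_k H_M} \, e^{-i \gamma_k H_C} \, |+\rangle^{\otimes n}

with the angles \gamma and \beta tuned by a classical optimizer. This is the standard textbook formulation, not a description of any bank’s production system.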
Meanwhile, PsiQuantum and Photonic are well on their way to creating photonic quantum computers. We will also see the beginnings of real supercomputers that integrate AI, quantum computing, and HPC.
Overall, I expect 2025 to be a year of early proofs of concept.
The CrowdStrike global IT outage in 2024 was a seminal moment not only for the cybersecurity industry but also for developer operations in general. The million-dollar question—more likely a billion-dollar question, given the collateral damage—is what could have prevented such a devastating occurrence. Modern continuous integration and continuous delivery/deployment pipelines coupled with test environments are designed to provide a failsafe mechanism that catches bad code and allows rollback before catastrophe strikes. Integrations will continue among software platforms to provide the highest levels of endpoint security. I believe the CrowdStrike incident will serve as a learning experience for other IT solution providers.
Cybersecurity in 2025 will be defined by its ability to adapt to a rapidly evolving threat landscape. Identity access solutions will embrace zero trust architectures, automation, and seamless integration. Endpoint security will rely on AI-powered analytics and lightweight architectures. Cisco’s ongoing momentum with its security cloud, plus recent innovations from Microsoft, Okta, Palo Alto Networks, and others, demonstrate an industry move toward unified, scalable, and AI-enhanced platforms. Organizations must stay ahead of bad actors by investing in modern cybersecurity infrastructure, employing a culture of security awareness, and adopting an integrated approach to cyber defense that facilitates improved security outcomes.
2024 saw both progress and contradictions in tech sustainability. While green datacenters and energy-efficient AI emerged, the industry’s footprint remained significant. “Greenhushing” highlighted the need for transparency as companies became more cautious about publicizing their environmental efforts. In 2025, sustainability will shift from an optional good deed to a core business imperative driven less by a sense of virtue and much more by the energy demands of advanced technologies, regulatory pressures, and investor scrutiny. Companies must integrate sustainability into all operations, as it will become a key differentiator, separating leaders from laggards.
RESEARCH PAPER: Evaluating the New Zoho Analytics
https://moorinsightsstrategy.com/research-papers/research-paper-evaluating-the-new-zoho-analytics/
Mon, 06 Jan 2025 16:17:46 +0000https://moorinsightsstrategy.com/?post_type=research_papers&p=44758This report explores the evolution of Zoho Analytics into a sophisticated, AI-powered business intelligence (BI) platform.
Since its 2009 launch as Zoho Reports, Zoho Analytics has evolved into a sophisticated, AI-powered business intelligence (BI) platform. Its September 2024 release marks a significant leap forward, showcasing Zoho’s commitment to democratizing data analysis through over 100 enhancements. By heavily investing in AI and machine learning, Zoho has become a strong competitor in the BI market, challenging established players and empowering users to make more informed, strategic choices.
Click the logo below to download the research paper and read more.
Microsoft Fabric Simplifies Data Management For AI
https://moorinsightsstrategy.com/microsoft-fabric-simplifies-data-management-for-ai/
Fri, 27 Dec 2024 22:25:12 +0000https://moorinsightsstrategy.com/?p=45000Microsoft has introduced new AI-driven features within the Microsoft Fabric data platform to accelerate application development and improve other enterprise functions
Microsoft has introduced new AI-driven features within the Microsoft Fabric data platform to accelerate application development and improve other enterprise functions. (Image: Microsoft and Robert Kramer)
As 2025 approaches, managing data effectively is becoming increasingly important for unlocking the potential of generative AI and AI agents. Over the past year, I’ve written extensively about the need for clear data strategies to make AI practical and impactful. At the 2024 Ignite event, held November 19 to 21, Microsoft shared updates on its work with data and AI, providing insights into how it addresses these challenges.
The event highlighted Microsoft’s efforts to integrate AI into its tools, tackle data management challenges, enhance security, expand cloud functionality and make its systems more practical for enterprise teams and frontline workers. Particularly important from a data perspective is the newly expanded Microsoft Fabric, and here I want to dig into Fabric’s role in data management, especially to enable faster and better AI development.
Microsoft Fabric offers new ways to manage and use data, helping streamline workflows, enhance productivity and improve AI integration. (Image: Microsoft)
Key Components And Features Of Microsoft Fabric
Microsoft Fabric is a data platform that provides tools covering the entire data lifecycle, from integration and engineering to storage, analysis and reporting. Arun Ulag, corporate vice president for Azure data at Microsoft, described Fabric as “Office 365 for data,” highlighting the company’s intent for Fabric to offer a unified, comprehensive approach to solving data challenges.
Fabric combines several components into a cohesive platform. Data Factory is a tool for data ingestion, integration and orchestration, enabling the creation of data flows and data pipelines from multiple sources. Data Engineering focuses on the preparation and transformation of that data to make it more usable. Scalable storage is handled through the Data Warehouse module, and the Data Science component allows users to perform various data science workflows, including data exploration and the creation of machine learning models. Real-Time Intelligence enables working with live data streams from IoT devices, applications and logs. (The Databases component is addressed in detail below.) Microsoft Fabric also provides industry solutions across sectors such as manufacturing, healthcare, financial services and retail, with specialized functions available for sustainability as well. Customers can also use Power BI for robust data visualization and reporting.
Underlying these components, OneLake offers a universal data lake, enabling users to access and query data from various sources without needing to migrate it. It supports open data formats such as Apache Parquet, Delta Lake and Iceberg, which ensures compatibility and avoids vendor lock-in. Microsoft has also added the OneLake catalog within Fabric to simplify data management and governance. It includes key features such as the Explore tab, which helps users find and access data, and the Govern tab (slated to launch in preview soon), which provides tools for data owners to manage and protect their data. The catalog integrates with Microsoft 365 apps such as Excel and Teams, allowing users to access data directly within frequently used tools. Its connection with Microsoft Purview adds more governance capabilities, including global data catalogs, policy enforcement and data quality management, which are essential for managing diverse data sources effectively.
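As a concrete illustration of the “no migration” point, here is a minimal sketch of reading a lakehouse table through OneLake from a Spark notebook. The workspace, lakehouse, and table names are placeholders, and the exact path format should be checked against Microsoft’s current documentation:

from pyspark.sql import SparkSession

# In a Fabric notebook a session already exists; getOrCreate() returns it.
spark = SparkSession.builder.getOrCreate()

# OneLake exposes lakehouse tables at an abfss:// path, stored in open
# Delta/Parquet format, so standard Spark readers work without copying data.
path = ("abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
        "SalesLakehouse.Lakehouse/Tables/orders")

orders = spark.read.format("delta").load(path)
orders.groupBy("region").sum("revenue").show()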
All in all, these capabilities simplify data discovery and governance, particularly for organizations that already use Microsoft tools. Arun Ulag stated, “Fabric is designed to meet the customers’ needs, to support their data journey.” Overall, Fabric is equipped to address data management challenges by improving data connectivity and reducing unnecessary complexities.
Microsoft Fabric Databases — Enabling AI Developers
The release of Microsoft Fabric Databases addresses a major pain point for developers: the complexity of integrating operational data with AI capabilities. Essentially, it makes it easier to work with data and AI models within the same platform, which should simplify the process of building AI applications. The integration of key functionality—such as native vector support, API connections to various AI models and compatibility with development environments such as Visual Studio Code and GitHub—means developers can focus more on building and less on managing infrastructure. The initial offering in Fabric Databases is Azure SQL, with plans to include Azure Cosmos DB and Azure Database for PostgreSQL in the future.
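To show what this pattern looks like in practice, here is a hedged sketch of operational rows and vector similarity search living in the same SQL database. The DSN, table, and column names are hypothetical, and Azure SQL’s vector support (including the VECTOR type and the VECTOR_DISTANCE function) was in preview at the time of writing, so treat the query shape as illustrative rather than final syntax:

import json
import pyodbc

# Placeholder connection; in Fabric the database is provisioned for you and
# cloud authentication and encryption are handled automatically.
conn = pyodbc.connect("DSN=fabric_sql_db")

# An embedding produced by whatever AI model API the app is wired to.
query_embedding = json.dumps([0.12, -0.43, 0.88])

rows = conn.execute(
    """
    SELECT TOP 5 product_id, description
    FROM products
    ORDER BY VECTOR_DISTANCE('cosine', embedding, CAST(? AS VECTOR(3)))
    """,
    query_embedding,
).fetchall()

for product_id, description in rows:
    print(product_id, description)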
From a strategic standpoint, Microsoft is looking to reduce friction for developers by providing a unified environment where they can easily access both data and AI services. This also brings security into the fold with automatic features like cloud authentication and encryption, so developers don’t have to worry about setting up those aspects separately.
In terms of market impact, I think Microsoft is positioning itself to attract a broader audience by making AI development more accessible, even for those without deep expertise in data management. The idea is to lower the technical barriers to AI adoption and, in the process, build tighter integration into the Azure ecosystem. As developers can now replicate data from different sources directly into OneLake, it reduces the complexity of managing multiple data platforms, which could be a significant advantage for organizations looking to scale AI-driven applications more efficiently.
Ultimately, Microsoft seems to be betting on a more seamless, end-to-end solution that could drive adoption of both its AI tools and the Azure infrastructure. If successful, this approach could set the company apart from competitors by providing a smoother, integrated experience for developers working across both data and AI.
Microsoft Fabric Addresses Business Challenges
Microsoft Fabric is a valuable tool for organizations that are incorporating AI into their operations. Its integration with Azure AI Foundry and Copilot Studio supports the development of AI agents and applications, enabling developers to transform data into automated workflows. Fabric’s tools also simplify data preparation tasks such as classification, summarization and extraction with minimal code, making AI development faster and more accessible to a broader range of developers.
Fabric addresses key data and AI challenges businesses often face. OneLake can be used to eliminate data silos by providing access to data from various sources without requiring migration. The platform also simplifies complex data pipelines, further easing integration and transformation tasks. Its serverless compute model optimizes resource use, reducing infrastructure costs. Fabric also helps businesses gain insights faster by accelerating data analysis. In one example of its value cited by Ulag, a U.K. consumer goods company reduced its data spending from $165 million to $45 million after transitioning 15 products to Fabric over a year.
Microsoft plans to enhance Fabric with expanded database integrations, improved AI tool alignment and better scalability and governance—all of which should make it even more capable of addressing data management challenges and improving AI development processes. Staying informed about new features as they are released will be crucial for organizations and developers to utilize the platform fully.
Competitive Landscape
From a high-level perspective of the data-to-AI pipeline, Microsoft and its peers among the cloud service providers are positioning themselves to offer integrated platforms that enable enterprises to make the most of their data for driving AI-powered applications and insights. Microsoft Fabric is a critical piece of Microsoft’s data and AI strategy. It integrates a data lakehouse, data engineering, machine learning, analytics and business intelligence into a unified platform that connects important tools and services both inside and outside the Microsoft ecosystem.
AWS maintains a strong presence in this space with its own suite of services, including the Redshift cloud data warehouse for analytics, S3 for storage and the newest generation of SageMaker, which brings together machine learning and analytics capabilities. The new SageMaker Lakehouse unifies S3 and Redshift data to enable rapid AI development. Customers can also draw upon the AI modeling capabilities within AWS Bedrock.
Google Cloud has built its own ecosystem around the BigQuery data warehouse, complemented by tools such as Dataflow for streaming analytics and Dataproc for data processing, as well as the Vertex AI development platform. Meanwhile, IBM’s data fabric solutions provide a platform for managing and integrating data with AI capabilities. Cloud Pak for Data serves as the core, supporting data solution development and deployment. The platform also includes DataStage for data integration and transformation, Db2 for creating and managing data lake tables, and Watson Knowledge Catalog for organizing and governing data. Guardium Data Protection enforces security, while watsonx facilitates AI-driven analytics and machine learning. These components work together to ensure that data is accessible, secure and prepared for AI applications across hybrid cloud environments.
Microsoft Fabric also competes with smaller, more specialized vendors. One of these is Databricks, which offers a data lakehouse platform with ETL and governance features to support enterprise AI efforts. It enables collaboration between data scientists and engineers and provides tools for big data processing and advanced analytics. Another competitor, Snowflake, provides a cloud-native data platform with an architecture that separates storage and compute, enabling flexibility and scalability for large-scale analytics and AI applications. Snowflake also emphasizes data sharing and governance features, catering to businesses managing and analyzing extensive datasets across cloud environments.
Cloudera provides enterprise data cloud solutions designed for hybrid and multi-cloud deployments. Its platform supports the entire data lifecycle—from ingestion and processing to analysis and AI model deployment—and helps customers manage complex data environments across cloud and on-premises infrastructures. Informatica offers a distinct approach by providing generative AI blueprints for platforms such as AWS, Databricks, Google Cloud, Microsoft Azure, Oracle Cloud and Snowflake. This strategy aims to simplify and accelerate the development of enterprise-grade generative AI applications, with a strong emphasis on data integration and management.
The choice of platform ultimately depends on an organization’s specific needs. Factors such as the complexity of the data environment, machine learning requirements and the preference for integrated tools will guide decision making. While Microsoft Fabric could be ideal for many organizations already using Microsoft data products, other platforms may better address specialized or diverse use cases.
Harnessing Data To Streamline Enterprise AI
Microsoft Fabric addresses critical challenges that organizations face as they try to harness their data for meaningful uses of AI. By unifying data workflows and integrating tools such as OneLake and Fabric Databases, Microsoft seeks to simplify data access and management, enabling the creation of AI-driven solutions while reducing operational complexity. By integrating with tools like Power BI and Azure services, Fabric also offers a cohesive environment for data engineering, analytics and business intelligence. Its low-code/no-code interface broadens accessibility, allowing teams with diverse technical expertise to participate in data projects. The platform’s serverless computing model also helps manage costs effectively, making it an appealing choice for organizations already using the Microsoft ecosystem.
Microsoft has begun addressing multi-cloud use cases through initiatives such as Azure Arc, which extends Fabric’s reach to hybrid and multi-cloud environments. However, Microsoft could further enhance Fabric’s appeal by expanding support for native integrations with other major cloud providers and offering more versatile multi-cloud features.
For organizations considering Fabric, it’s crucial to evaluate how its capabilities align with existing systems and long-term AI goals. While the platform offers significant cost and workflow efficiencies, its reliance on the Azure ecosystem might not meet the needs of businesses requiring broader multi-cloud compatibility. Microsoft’s ongoing enhancements in this area could make Fabric an even more compelling choice in the future.
RingCentral And Verint Partner To Elevate CX With AI
https://moorinsightsstrategy.com/ringcentral-and-verint-partner-to-elevate-cx-with-ai/
Wed, 25 Dec 2024 21:31:16 +0000https://moorinsightsstrategy.com/?p=44967The new collaboration aims to enhance RingCentral’s contact center solutions by using Verint AI bots injected into customer service workflows.
This new partnership aims to enhance RingCentral’s contact center solutions by using Verint AI bots injected into customer service workflows. (Image: BiancoBlue, ID 324922337, via Dreamstime.com)
RingCentral and Verint have partnered to integrate Verint’s AI-powered workforce engagement management and CX automation products with RingCentral’s RingCX platform. This collaboration aims to enhance RingCentral customers’ contact center interactions and improve operational efficiency by using AI bots injected into workflows.
AI is increasingly critical in customer engagement solutions as businesses face pressure to meet customer expectations for personalized and efficient experiences. RingCentral’s offerings are designed to enhance customer support, increase productivity and reduce costs. RingCentral founder, chairman and CEO Vlad Shmunis said the integration with Verint is a “key component of delivering a complete customer experience platform” for RingCentral. Shmunis noted his company’s strong market traction and said that he sees the partnership as an opportunity to “further enhance the value proposition of RingCX in the mid-sized and enterprise segments.”
What Do Verint Bots Do?
Verint’s AI-powered bots are part of the Da Vinci AI and Analytics solution and are designed to automate routine tasks and support customer interactions across multiple channels. These bots use natural language processing and machine learning to respond to customer inquiries in various channels, including voice, chat, e-mail, and social media; this helps companies provide consistent responses across different platforms.
Each bot performs one specific task and is injected directly into workflows. Contact centers can use bots for individual tasks or layer them to weave together multiple workflows. Each bot is tied to an outcome and incorporates specific features, including contextual understanding and escalation capabilities. Bots can interpret customer intent and sentiment, which enables them to provide responses that align with the customer’s needs. If a query is too complex, the bots can transfer the interaction to a human agent and supply relevant context for a smooth handoff. Through machine learning, Verint bots adapt and refine their responses over time, learning from each interaction to improve performance.
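The handoff logic described above is worth sketching, if only to show how simple the contract can be. This is a generic illustration of the pattern, not Verint’s implementation; the intents, thresholds, and field names are invented for the example:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BotResult:
    reply: Optional[str]
    escalate: bool
    context: dict

def handle_message(text: str, intent: str, confidence: float,
                   sentiment: float) -> BotResult:
    # Escalate when the intent model is unsure or the customer is clearly
    # upset; the 0.7 and -0.5 thresholds are arbitrary for this example.
    if confidence < 0.7 or sentiment < -0.5:
        return BotResult(
            reply=None,
            escalate=True,
            # Context travels with the handoff so the human agent does not
            # start from zero.
            context={"transcript": text, "intent": intent,
                     "sentiment": sentiment},
        )
    # A task-specific bot handles exactly one outcome.
    if intent == "check_balance":
        return BotResult(reply="Your current balance is ...",
                         escalate=False, context={})
    return BotResult(reply="Sorry, could you rephrase that?",
                     escalate=False, context={})

The design point is that the bot never discards what it knows: whatever context exists at the moment of escalation travels with the conversation, which is what makes the handoff feel smooth to the customer.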
Key Applications Of The RingCentral/Verint Partnership
Based on what the companies have said about this partnership and my own background in enterprise software and HRM tools, I see several areas where the newly combined capabilities could be useful for customers. These include:
Workforce Engagement Management — Verint’s extensive suite of bots includes tools for quality management, interaction analytics, forecasting and scheduling. Quality management features could be used to automatically score agent interactions based on predefined criteria, helping identify top performers and areas for improvement. Interaction analytics can help detect trends in customer inquiries, allowing managers to address recurring customer issues proactively. One of my favorite Verint WEM bots is the TimeFlex Bot, which gives contact center agents unlimited flexibility for schedule changes without going through approvals.
AI Automation — Routine administrative tasks such as updating customer profiles or processing simple refund requests can be automated, reducing the burden on human agents. In many cases, AI can also categorize and route inquiries to the appropriate department or agent based on message content. Verint’s Interaction Wrap-Up Bot, for example, can handle post-call summarization, freeing up agents to focus on more complex tasks.
Intelligent Virtual Agents — Verint’s IVA, powered by its Digital and Voice Containment Bots, could handle basic account inquiries, such as reviewing balances or recent transactions, without involving a human agent. Customers could also use an IVA to schedule appointments or make reservations outside regular business hours. For e-commerce businesses, an IVA could guide customers through the steps to track an order or initiate a return.
Knowledge Management — Verint’s Knowledge Automation Bot could help customer service agents quickly access a centralized knowledge base during customer calls, potentially improving response times and accuracy. Using a single search bar, agents could instantly access information across multiple knowledge sources, reducing or eliminating the need for multiple time-consuming searches. The bot can then use generative AI to analyze and summarize search results, delivering a single, concise answer curated explicitly for the contact center environment.
The Bot(tom) Line
The success of this partnership will depend on several factors, including ease of integration and demonstrable improvements in customer satisfaction and operational efficiency. Successfully integrating AI often requires significant changes to workflows and processes. While AI adoption is increasing rapidly, many organizations are still in the early stages of implementation and may lack the expertise to get the most out of their AI investments. A recent survey by IBM found that only 35% of companies currently use AI in their business operations, while another 42% are actively exploring its potential. This suggests that the widespread adoption of AI will require organizational change management, AI skills development and a principled approach to addressing concerns about data security and ethical implications.
HP Study: Why Work Isn’t Working And What Can Fix It
https://moorinsightsstrategy.com/hp-study-why-work-isnt-working-and-what-can-fix-it/
Tue, 17 Dec 2024 20:52:30 +0000https://moorinsightsstrategy.com/?p=44725Despite technology advancements and evolving workplace dynamics, people’s relationship with work remains strained, as shown in HP’s second annual Work Relationship Index.
Despite technological advancements and evolving workplace dynamics, people’s relationship with work remains strained. (Image: Fizkes via Dreamstime.com)
Is work working? Despite technological advancements and evolving workplace dynamics, people’s relationship with work remains strained. A new worldwide study from HP reveals a disconnect between employees and their jobs despite many companies rethinking workplace tools and offering employees more flexibility. HP’s second annual Work Relationship Index, a survey of over 15,600 individuals across 12 countries, finds that only 28% of knowledge workers report a healthy relationship with work.
The new WRI shows a consistent trend in employee engagement drivers, with scores remaining relatively stable compared to 2023. For example, the scores for workspace (defined as flexibility and trust in where employees work) and tools (the right technology to drive employee engagement) both remained flat at 25 and 26 out of 100, respectively. People-centricity, now at 24, and leadership, at 26, each went up by one point. Skills—meaning building confidence by tapping into the enthusiasm to learn new skills—also increased one point, but its overall score of 32 indicates that it is a stronger driver of a healthy work relationship.
While the stability of these drivers suggests that the fundamental elements of a positive work experience remain the same, it also indicates that not much has changed in the past 12 months regarding building a better worker experience. This is especially clear in the answers to questions about trust in senior leadership and how workers want to be seen as individuals.
The key takeaway? Work isn’t working because the way we work is outdated. Companies haven’t adapted to the needs of today’s workers, who crave individuality and tailored experiences instead of being treated like cogs in a machine. But there’s hope. The WRI identifies two potential solutions for fostering healthier work relationships: personalized work experiences and AI. The need for these solutions is becoming increasingly urgent, as reflected by the nearly three-quarters of leaders surveyed who agreed that it is critical to redefine the world’s relationship with work.
The Personalized Employee Experience
Large majorities of knowledge workers (68%), business leaders (75%) and IT decision makers (76%) said they desire personalized work experiences—including having access to the right technologies and workspaces and the ability to shape a flexible schedule. Well over half of the knowledge workers surveyed stated that having these needs met would make them more invested in their company’s growth, enhance their overall well-being and incentivize them to stay with their current employer. In fact, the WRI indicates that knowledge workers who feel their work experience is customized to their preferences are more than twice as likely to have a healthy relationship with work.
The WRI shows that this issue is important enough that employees are willing to put their money where their mouths are about it. A staggering 87% of workers globally report being willing to forgo a portion of their salary—14% on average—to achieve a more personalized work environment. Gen Z workers report being willing to give up as much as 19% on average to have a customized work environment.
The Rise Of AI And Its Impact On Work-Life Balance
Can AI help with customizing the workplace? The past year has seen an explosive surge in AI adoption, particularly in the workplace. In 2023, 38% of knowledge workers, 68% of business leaders and 63% of IT decision makers had used AI at work. In 2024, as AI started to shift from hype to realized benefits, knowledge workers (66%) and ITDMs (91%) each reported dramatic 28-point increases in AI usage, while business leaders (88%) showed a significant 20-point jump.
Workers who use AI report greater job satisfaction and work-life balance. AI tools automate tasks, streamline workflows and give employees more time and opportunities to enjoy their work—so much so that 60% of WRI respondents who use AI credit it with contributing to a better work-life balance, while 68% say AI opens up new opportunities for them to enjoy their work. This correlation between AI usage and work-life balance highlights an urgency to integrate AI into the workplace, especially considering the growing fear of job displacement among non-AI users.
It’s no accident that HP sponsored this research—and asked so much about AI—given the technology it deploys in the workplace. Like other big PC makers, HP has integrated AI into its latest generation of PCs, with features designed to optimize user experiences, boost productivity and enhance collaboration. For instance, HP offers an AI Companion to help users with tasks such as refining workflows and optimizing PC settings; in theory, this should make customizing work experiences much simpler without IT involvement. For remote collaboration, HP’s Poly Camera Pro leverages AI to enhance webcam capabilities, for example by intelligently blurring and replacing backgrounds on video calls. Features such as these can improve the quality of virtual interactions, which can be crucial considerations for remote workers.
Computer OEMs are counting on rapid enterprise adoption of AI PCs, with many industry watchers predicting major upgrade cycles from now through 2025, and some industry insiders projecting that AI PCs will dominate the PC market by the end of this decade. If this does come to pass, it will be interesting to see if these technology upgrades have an impact on future versions of the WRI.
The Leadership Gap And The Rise Of Female Leaders
While the WRI highlights the importance of strong leadership in cultivating a healthy work relationship, it also reveals a concerning trend: only 44% of leaders express confidence in their human skills (sometimes called “soft” skills), such as empathy and communication. This is concerning because 78% of employees highly value empathy from their leaders, yet only 28% see it consistently. A lack of empathy in leaders can lead to misunderstandings, decreased morale and a decline in overall productivity.
However, this gap in leadership presents an opportunity for positive change. The study found that female business leaders are significantly more confident than their male counterparts in both the “hard” technical skills and the human skills required to succeed at work. On average, women are 10 points more confident in their hard skills and 13 points more confident in their soft skills compared to their male counterparts. This suggests a potential shift in leadership paradigms, one in which more women can rise to leadership positions.
This shift may also present a crucial challenge for men in leadership roles. To thrive in this evolving workplace, men must actively develop and embrace these essential human skills, which could create a more balanced and effective leadership environment for everyone.
Looking Ahead: Embracing AI And Personalization
Regardless of the specific leadership or technology developments that unfold in the future, it is clear that employer and employee expectations have evolved, and that leveraging both smarter management practices and smarter technology is essential to meeting the needs of today’s workforce. In particular, AI will shape the future of work by creating solutions and experiences that foster business growth while supporting personal and professional fulfillment.
The findings of the HP Work Relationship Index serve as a wake-up call for organizations worldwide. By embracing AI, prioritizing personalization and fostering empathetic leadership, companies can cultivate a work environment where both employees and their organizations thrive.
MI&S Weekly Analyst Insights — Week Ending December 13, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-december-13-2024/
Mon, 16 Dec 2024 22:42:24 +0000https://moorinsightsstrategy.com/?p=44544MI&S Weekly Analyst Insights — Week Ending December 13, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
Last week Time Magazine published its big “CEO of the Year” feature about Lisa Su of AMD. I was quoted in the article, which I recommend to you as a good refresher about the state of play in the chip market today—and how far AMD has come in the decade since Su became CEO. To me it’s one more reminder of the accelerated pace of change in the tech industry, with generative AI only adding more fuel to the fire. For more on my own views of Su and AMD, take a look at my deep dive on Forbes from last month.
Moor Insights & Strategy principal analyst Anshel Sag was one of just two industry analysts invited to preview Google’s new Android XR spatial OS, which could help unify the XR industry.
This week I also want to shine a spotlight on the analysis of the brand-new Android XR spatial computing OS written by our own Anshel Sag. Anshel’s been covering the world of spatial computing and XR for more than a decade, and for my money he’s as good as any analyst in the world in this area. Google must think so, too, because he was one of only two analysts invited to preview Android XR. His analysis was published on Forbes last Thursday, a few minutes after Google officially announced the OS.
If you have a CEO we should be talking with or a big new launch you think we ought to know about, please don’t hesitate to let us know.
The holidays are coming soon, but we’re still not quite ready for a break from conference season. Jason, Mel, Robert, and I will attend the Salesforce Agentforce 2.0 virtual event this week. We’re all wrapping up research and advisory sessions and will enjoy a quick respite with our families before CES in January.
Last week, I was in Northern California for the Lattice Developer Conference and then Marvell’s Industry Analyst Day, where Matt joined me. Robert and Jason were in Boston with IBM, and Anshel attended T-Mobile’s Analyst Summit. Will was in New York at HP’s Security Analyst Summit. Mel, Jason, and Robert tuned into the ServiceNow Global Industry Analyst Digital Summit.
Read more about significant tech trends and events in this week’s Analyst Insights.
Over the past week, MI&S analysts have been quoted in multiple syndicated top-tier international publications, including Time, PC World, Tom’s Hardware, Benzinga, and UC Today. The media wanted our takes on AMD, Arm, AWS, and Intel. Mel made an appearance on RingCentral’s State of AI in Business Communications webinar, and her UC musings were listed in UC Today’s Top 10 Predictions for 2025.
Google has just announced Agentspace, which is a no-code-ish environment geared towards personal work productivity. Many of Google’s competitors are also playing in this market, so it’s no surprise Google is getting into it. Each of the major cloud vendors has both an AI development platform (in Google’s case Vertex AI) and associated tools for different personas. Google’s foray into power users is quite interesting in that it leverages both Vertex AI and the viral NotebookLM project. That said, it’s different from many of the other agentic approaches out there. In some ways it may be ahead of its time. Stay tuned for more on this topic soon.
Last week I got to spend some time with Matt Gierhart, who leads the custom app dev practice at IBM Consulting. While we spoke a bit about tools and AI assistants we both like, we then shifted focus to IBM Garage, which is a collaboration space for delivering projects with IBM clients. What stood out most was how IBM can quickly present multiple scenarios for a customer’s decision-making process (for example, “Why should we do one feature versus another?”). Developing these scenarios often takes time and data, but using gen AI is a way to accelerate the gathering and preparation process.
Finally for now, I think we have hit a point in the maturity of AI platforms where we can start to define and compare them in a meaningful way. What’s interesting is how much the diversity of tools has driven both the awareness of the need for a platform as well as the functionality. In 2025, I predict that competition at the AI platform level will overtake the LLM wars. More to come in this area, particularly looking at Amazon Bedrock, Microsoft Azure AI Foundry, and Google Vertex AI.
Synopsys has become the first semiconductor IP provider to introduce IP for UALink, the new scale-up interconnect specification that can connect up to 1,024 accelerators in support of LLM training, HPC, and other workloads. This is a significant announcement, as it gives considerable weight to the recent launch of the consortium’s version 1.0 spec, which we covered in detail last month.
I expect this means UALink-ready solutions could hit the market by mid-2026. On that front, I will be tracking companies like AMD, Intel, Arm, and Astera Labs, along with any developments in NVIDIA’s NVLink connectivity spec.
Is storage cool again? Along with the rush of AI adoption comes an extreme focus on data. And of course, data management is highly dependent on storage. Because of this, the market has seen the arrival of a number of storage companies that index heavily on data management. And we also see a lot of traditional storage players evolving their products and messaging to orient around data management and data protection.
Throughout the second half of 2024, I’ve been in several engagements advising storage vendors on everything from product strategy to positioning and messaging. I mention this to highlight how much these companies are gearing up for the data wars of 2025 and beyond.
Each engagement ends with a similar set of takeaways: remove complexity, drive toward an autonomous state, ensure scale, and consider (and speak to) the full range of enterprise requirements—not just AI. Some companies do this better than others, and we see the results as they continue to grab market share.
Cohesity has completed its acquisition of Veritas’s enterprise data protection business. Cohesity’s president and CEO, Sanjay Poonen, notes, “This deal combines Cohesity’s speed and innovation with Veritas’ global presence and installed base.” The combined entity will serve over 12,000 customers, including 85 of the Fortune 100, with projected revenues of around $2 billion for the 2025 fiscal year. You can read more in the announcement about the deal’s completion.
Adobe’s 2025 Creative Trends Forecast predicts four major design trends for the upcoming year. “Fantastic Frontiers” emphasizes surreal and imaginative visuals influenced by AI and gaming. “Levity and Laughter” underscores the growing importance of humor in engaging audiences. “Time Warp” blends futuristic and historical elements to create a nostalgic yet modern aesthetic. And “Immersive Appeal” focuses on multisensory experiences that combat screen fatigue and prioritize deeper brand engagement.
These trends reflect consumer desires for both escapism and authentic connection. The predictions are informed by a notable rise in experiential spending, illustrating how these trends resonate with the longing for adventure and genuine experiences. Adobe’s data insights provide a solid foundation for these predictions, and I look forward to seeing how creative trends play out in the new year.
The introduction of CameoX, a new onboarding policy from the fan-connection app Cameo, aims to make it easier for content creators, including YouTubers, to join the platform. By simplifying enrollment to a basic form plus identity verification, Cameo hopes to attract a broader range of talent, potentially offering an alternative revenue stream for creators who may not be able to sustain themselves solely on platforms like YouTube, Instagram, or Twitch. However, it remains to be seen whether this will be enough to lure creators away from the established audiences and revenue streams they have built on those platforms. The success of CameoX will depend on its ability to provide significant financial incentives and unique engagement opportunities that differentiate it from other popular creator platforms. With over 31,000 new creators joining through CameoX and contributing to millions in earnings, the platform is off to a promising start. Still, its long-term impact on attracting and retaining top talent is yet to be seen.
At Microsoft Ignite 2024, significant updates to Microsoft Fabric stood out for me. These updates improve data management for faster AI development, but that’s just the tip of the iceberg. During Ignite, I had the chance to sit down with Arun Ulag, corporate vice president for Microsoft Azure Data, about how tools such as OneLake and Fabric Databases unify workflows and simplify data access and support AI solutions. Read more of my analysis on Microsoft Fabric in my latest Forbes article.
HPE’s recent Q4 earnings rang in a record for top-line quarterly revenue, and total revenue, earnings per share, and free cash flow for the full fiscal year all came in above the company’s guidance. Networking remains an area with opportunities for improvement, but I expect that the completion of the Juniper Networks acquisition—expected in early 2025—will bring material synergies thanks to a fortified engineering effort and combined IP portfolio. If the company can crack the code on delivering more sustainable AI infrastructure across the board, especially by leveraging its applied research with Hewlett Packard Labs and Juniper’s Beyond Labs, it could provide significant tailwinds for future top-line revenue and margin improvement.
I attended IBM’s Strategic Analyst Forum in Boston last week. One of the highlights was how IBM is partnering with competitors such as Oracle, SAP, Microsoft, AWS, and others to help customers achieve IT transformation success. I was particularly impressed with IBM’s Garage methodology, an approach to digital transformation designed to develop solutions that address real business needs. It stresses building systems step by step. By working closely with IBM and its partners, clients can create solutions that effectively address challenges and are ready for real-world implementation. One area I felt could benefit from more focus is effective change management—a topic you can read more about in an article I wrote earlier this year.
One of the most significant 2025 IoT tech trends is the maturation of embedded application development. Traditional embedded developers constructed custom platform software stacks, including board support, OS, I/O, network, security, connectivity, and device management. However, the days of building complete software stacks for IoT devices are ending as silicon suppliers and independent software vendors offer complete platforms that support application development right out of the box. Developers can now begin writing application code immediately after unboxing the development kit. And building those applications using standards-based open-source components further reduces undifferentiated overhead. My advice: “If it’s not differentiated, don’t build it—buy it.”
I see more evidence every week that this “platform-based IoT” trend is accelerating. Here are three examples. First, Synaptics posted a demo of a contextual AI voice assistant application operating entirely on-device with no cloud dependencies. The demo uses an off-the-shelf Synaptics Astra Machina SL1680 development board. Developers access all the software required for this demo via GitHub, including the Yocto Linux OS and all necessary support software. According to the company, a developer can get the demo up and running “in a day.” This is a great example of a silicon supplier providing a complete software stack that lets developers focus on applications immediately and avoid writing undifferentiated code.
Second, Nordic Semiconductor just launched an impressive prototyping platform with a catchy name—the “Thingy:91 X.” This board features the nRF9151 system-in-package cellular module (LTE-M, NB-IoT, DECT NR+, GNSS positioning) with an Arm Cortex-M33 system processor. The nRF7002 companion chip adds SSID-based Wi-Fi location tracking. Expansion options from Qwiic, STEMMA QT, and Grove plug right in. The battery-powered board comes bundled with preloaded SIM cards from Onomondo and Wireless Logic, so it’ll connect to nRF Cloud right out of the box. Nordic supports developers with a comprehensive SDK and courses from its Nordic Developer Academy. Getting started with cellular IoT development is a rough road, but this “thingy” promises to make it smoother.
Finally, Matthias Bösl, head of hardware engineering at Tado (a home energy management company), posted this insightful comment: “In the past, a solid 30 percent of our development team was occupied with connectivity and the platform alone. The open source concept of Matter and Thread plus standardization ensures that we can concentrate better on things that offer our customers real added value.” Off-the-shelf IoT platforms also reduce the technical debt associated with long-term support, so the total cost savings are probably much higher in the long run.
For the past several years, the MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework has provided an analysis of cybersecurity threat actors’ tactics, techniques, and procedures. In the process, the associated ATT&CK Evaluations have measured how well endpoint security solutions detect and prevent cyber threats. This year’s Round 6 focuses on ransomware emulation and macOS infiltration by a North Korean threat actor profile, including adversary behaviors and defensive capabilities.
Palo Alto Networks performed exceptionally well compared to other participants this year. The company’s Cortex XDR set a record as the first participant to achieve 100% detection with technique-level detail and no configuration changes for a second year in a row. Additionally, it prevented eight out of 10 attack steps while maintaining zero false positives with that newly introduced metric. Those are impressive results, and they reinforce my positive impressions of the company’s cybersecurity platform strategy, execution, and Unit 42 Threat Research Center capabilities after my meetings with the executive leadership team in November.
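For readers less familiar with how those headline numbers roll up, here is a small illustrative Python calculation of detection, technique-level, and prevention rates from per-substep results. The inputs are invented for illustration; they are not MITRE’s or Palo Alto Networks’ actual Round 6 data.

```python
# Illustrative only: how ATT&CK evaluation headline metrics are derived.
# All figures below are made up, not actual Round 6 results.
substeps = [
    {"detected": True, "level": "technique"},
    {"detected": True, "level": "technique"},
    {"detected": True, "level": "general"},
]
prevented_steps, total_steps = 8, 10   # step-level prevention test
false_positives = 0                    # benign activity incorrectly flagged

detection_rate = sum(s["detected"] for s in substeps) / len(substeps)
technique_rate = sum(s["level"] == "technique" for s in substeps) / len(substeps)
prevention_rate = prevented_steps / total_steps

print(f"Detection: {detection_rate:.0%} (technique-level: {technique_rate:.0%})")
print(f"Prevention: {prevention_rate:.0%}, false positives: {false_positives}")
```

The technique-level distinction matters because a “general” detection tells an analyst something happened, while a technique-level detection tells them what and how, which is why 100% technique-level detection is a strong result.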
As I think back about the best uses of technology in sports this year, my mind keeps returning to Intel’s involvement in this summer’s Olympic Games. Intel’s AI technologies played a significant role at the 2024 Paris Olympics, enhancing the event in multiple ways. The Athlete365 platform employed AI to provide real-time, multilingual support for athletes, facilitating communication and knowledge sharing. AI-powered systems also automatically generated personalized highlights for fans, creating a more engaging viewing experience. Furthermore, Intel’s technology enabled the creation of 3-D videos and AR clips, offering interactive media experiences. The company’s processors facilitated 8K live streaming with low latency, ensuring high-quality broadcasting. These innovations showcase the growing potential of AI not only to enhance sporting events but also to transform various aspects of daily life by optimizing workflows, accelerating innovation, and improving performance across diverse fields.
I appreciate Intel’s transparency in showcasing the technology behind these advancements. As Robert Kramer and I often discuss on the Game Time Tech podcast, understanding how technology shapes significant events like the Olympics is essential. With these sports technology partnerships, it’s refreshing to see the story behind the tech portrayed in an accessible and relatable way. This level of transparency also helps educate the public about the increasing impact of AI on our lives.
What are the top sports technology trends to look for in 2025? Throughout this year on our Game Time Tech podcast, Melody Brue and I have explored how the sports industry is adopting new technologies to improve athlete performance, reduce injuries, engage fans, and modernize team management. Looking ahead to 2025, some key developments include AI coaching apps and computer vision for real-time movement analysis and injury prevention, wearable devices that monitor performance data, and AR-enhanced broadcasts that provide real-time stats and multiple viewing angles. Other advancements include improvements to the use of video assistant referees (VARs) for fairer in-game decisions, AI systems such as the NFL’s Digital Athlete for injury prevention, data-driven tools for scouting talent, and VR simulations that offer realistic training environments. In 2025, Mel and I will continue discussing how these technologies are shaping both professional and amateur sports. Read more about 2025’s top trends, and be sure to check out our latest GTT podcasts.
AMD has shared updates on its progress toward achieving its 30×25 energy-efficiency goal for AI and HPC processors by 2025. The company reports significant advancements in chip architecture, such as 3.5D CoWoS packaging and high-bandwidth memory, bringing it close to reaching its target. The report emphasizes the critical role of software optimizations, particularly through AMD’s ROCm open software stack, in enhancing both performance and energy efficiency.
AMD has adopted a comprehensive energy-efficiency approach that optimizes both hardware and software to advance AI development. Key hardware innovations improve performance and facilitate the use of larger AI models. On the software side, the ROCm open software stack continually optimizes performance and energy efficiency by supporting lower-precision math formats, leading to substantial performance gains. This combined approach results in higher performance, greater accessibility to AI, more efficient training and inference, and a reduced environmental impact. AMD says it is confident that it will surpass its ambitious 30×25 energy efficiency goal—and that it is actively seeking additional improvements at the system level.
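As a quick sanity check on what a 30x gain by 2025 implies, the sketch below is simple compound-growth arithmetic on AMD’s stated goal (a 30x node-level energy-efficiency improvement from its 2020 baseline); none of it is AMD data beyond that goal.

```python
# Back-of-the-envelope math on AMD's 30x25 goal: 30x efficiency gain
# over five years implies roughly a doubling of efficiency every year.
baseline_year, target_year, target_gain = 2020, 2025, 30
years = target_year - baseline_year

annual_factor = target_gain ** (1 / years)  # ~1.97x per year
print(f"Implied improvement per year: {annual_factor:.2f}x")

for year in range(baseline_year, target_year + 1):
    cumulative = annual_factor ** (year - baseline_year)
    print(year, f"{cumulative:5.1f}x cumulative")
```

That near-doubling cadence is why the report leans so heavily on combined hardware and software gains; neither process technology nor packaging alone delivers 2x per year.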
Huawei used the Ultra Broadband Forum held in Istanbul this fall to announce its autonomous mobile network platform. The embattled infrastructure provider is positioning it as a level-four offering, analogous to the stage of autonomous driving in which a vehicle can navigate without the intervention of a human driver. It is another example of the company making bold claims with little substantiation. To no one’s surprise, AI factors heavily into Huawei’s claims of a latency-aware topology, but removing human operators from the loop is not a realistic scenario for any mobile network operator deployment.
Six Five On The Road: The Critical Role of High-Capacity Storage in AI
https://moorinsightsstrategy.com/the-six-five/six-five-on-the-road-the-critical-role-of-high-capacity-storage-in-ai/
Fri, 13 Dec 2024 17:35:18 +0000
https://moorinsightsstrategy.com/?post_type=six_five&p=44525
Dive into the critical role of high-capacity storage in AI's advancement with Solidigm and industry leaders, discussing QLC technology and Solidigm's new 122TB drive
The introduction of Solidigm’s new 122TB drive and the implications of QLC-based high-cap SSDs in the AI realm
Tech transitions over the years from the era of minicomputers to GenAI, focusing on how each shift impacts the compute-memory-storage-networking spectrum
The challenges and solutions surrounding storage and data management for AI, and the critical need for power and space efficiency
How Solidigm and its partners are addressing the surging demand for energy-efficient AI infrastructures and the benefits of high-capacity SSDs and QLC technology
Forward-looking thoughts on growth areas in AI, storage innovation, and the role of efficient data centers in sustainable technology advancement
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is On The Road here in New York City. We are at Solidigm’s 122-terabyte launch event. It is exciting to be here, Mel. And as a recovering product person, I love product announcements.
Melody Brue: It is exciting.
Patrick Moorhead: It is for sure. And as analysts, we have to reflect on the industry and how this announcement fits in. It is pretty clear that with every major inflection point we’ve seen, whether it’s minis to client-server, the PC revolution, the web and e-commerce, social-local-mobile, or generative AI, we always put pressure on the infrastructure subsystems, whether that’s CPUs for compute, networking, memory, or storage. We’ve heard a lot about GPUs with generative AI. We’ve heard a lot about photonics with networking. We’ve heard a lot about HBM with memory on GPUs. But there’s just not a lot of conversation about the storage element. What really excites me about this announcement, and I think it should excite everybody out there in the tech world, is that it not only brings higher performance. It brings higher reliability. It brings higher sustainability. And with energy costs hitting us big time, that’s important.
Melody Brue: It is important, and we really do have to address both the space and the power efficiencies.
Patrick Moorhead: That’s absolutely right. So without further ado, let’s dive in. We’re going to have a conversation. You’ve heard what we think about this, and you’ve heard what Solidigm thinks about this. Let’s go talk to some of their partners. Why don’t we start with Chloe; I noticed everybody, I think, is in alphabetical order here. Just maybe give your name, who you work for, and what your company does. I know a lot of people know what your company does, but just for the sake of the audience, since we are getting this on video.
Chloe Jian Ma: Sure. My name is Chloe Jian Ma and I run go-to-market for Arm’s IoT and Embedded business. Strangely, actually, from our line of business, we cover the storage sector. Arm went public in New York about a year ago; on September 14th, 2023, we became a public company on NASDAQ. Arm is a compute platform company, and our roots are really in embedded and edge computing. I don’t know about you, but I cannot live without my phone. And if you are using one of these smartphones, there’s a 99% chance you are on Arm. But we have made a lot of progress from the edge to the core and data centers. Most recently, since about 2018, you’ve probably heard about AWS Graviton from gen 1 to gen 4, Microsoft Cobalt, and Google Cloud’s Axion; these are all hyperscaler-developed, Arm-based server CPUs. And in the AI era, NVIDIA’s Grace Hopper and Grace Blackwell, that Grace CPU is also based on the Arm architecture.
So we’ve been making a lot of progress from edge to cloud. Arm is the most pervasive compute architecture. And not only from cloud to edge: even within the cloud and data center, we’re in compute, and we’re in networking and interconnect; the NVIDIA BlueField DPU has Arm processors inside. And we’re also in storage. A lot of storage controllers are based on Arm. I think that leaves a lot of room for imagination. And this pervasiveness of Arm allows data to be processed where data gravity dictates.
Patrick Moorhead: Excellent. We’re just going to right down the line. Roger.
Roger Cummings: Hi, everyone. My name’s Roger Cummings. I’m the CEO of PEAK:AIO. We help enterprises be successful in AI and HPC workloads. How do we do that? You folks know this: you invest in GPU infrastructure, and then you find out that your legacy storage infrastructure isn’t keeping up with that GPU investment. So we have a software-defined layer that turns every common utility node into a supercharged AI and HPC server, so you can take full advantage of that investment. We center on three things: performance, density, and power. And it really correlates to Solidigm’s message as well. You see the physical footprints getting smaller and smaller. We live at that edge. We provide intelligence at that edge. We can not only run your models but run the inference associated with them, where we have some great QLC technology that we offer as well. We’ve got some wonderful use cases with Solidigm, and we have many more to come, so I look forward to working with you guys.
Jacob Yundt: My name is Jacob Yundt. I’m the director of Compute Architecture at CoreWeave. I usually introduce myself as the server guy, so if you see me around, the server guy. If you’re not familiar, CoreWeave is a specialized cloud service provider focused on accelerated compute; you can kind of think of it as HPC as a service. Right now, we’re focused on building the biggest, baddest training clusters the world has ever seen, along with other types of accelerated compute like inference. Similar to the messages we’ve been hearing today, we’re focusing on power efficiency and scaling, and the story of this high-cap QLC drive is tied to that.
Sophie Kane: And I am notably not Dylan Murphy.
Melody Brue: I was going to say, that’s a lot to go after and also you’re not Dylan.
Sophie Kane: Yes. Yes. Dylan couldn’t be with us tonight. He got stuck on a train on the way from Boston, but I’m happy to step in. So I’m Sophie Kane, and I’m the director of Growth Marketing and Business Development for Ocient. Ocient is a data analytics software solutions company. We specialize in providing analysis for always-on, compute-intensive workloads for both data and AI. And we do that by taking advantage of what we’re going to talk about today, which is putting the compute next to the storage. On average, we typically decrease cost, energy, and footprint size by 50 to 90%.
Patrick Moorhead: All right, Renen.
Renen Hallak: I’m Renen, founder and CEO of VAST. VAST is eight years old now. We built a new type of data platform that has extreme levels of capacity, performance, resilience, cost efficiency, and ease of use, and primarily we are used, as you may have guessed, for AI workloads these days. We built a data store for unstructured data, file, and object; a database for structured data; and now we’re adding a data engine for the compute aspects of it. It’s that software infrastructure layer that sits above the hardware and underneath the application.
Patrick Moorhead: All right. Great introductions there. Let’s go.
Melody Brue: All right. Chloe, we’re going to start with you. You already kind of talked a little bit about your pervasive footprint from cloud to edge. But I want to talk a little bit more about the energy-efficient foundations and the importance of power-efficient hardware such as compute and storage in addressing those types of challenges.
Chloe Jian Ma: Well, first, I want to thank Solidigm for quoting Arm’s CEO about data center power consumption and the urgent need for us to design more power-efficient AI infrastructure. Basically, for the last 20 years or so, data center and cloud infrastructure have become more efficient. In terms of data center power consumption, for the last 10 to 20 years it has stayed roughly flat, because PUE, the measurement of data center infrastructure efficiency, has kept improving, so we’re not consuming a whole lot more data center power. But that’s going to change with the latest round of gen AI since the LLM was born. Currently, data centers as a whole consume about 460 terawatt-hours of power, roughly equivalent to the power consumption of Germany as a country. And it’s going to increase significantly.
For example, I think Meta is building a 100,000-unit H100-based cluster to train Llama 4, and the power consumption is about 370 gigawatt-hours, equivalent to powering about 34 million American households. So we have to think about new ways to make data centers more efficient. I saw some stats at one of the energy-related conferences: out of AI data center power consumption, about 40% is on AI compute, another 40% is on liquid cooling just to cool that AI compute, and the rest, maybe around 20%, is in networking and storage. So seemingly, storage doesn’t consume a lot of power to start with. But the GPUs and the AI compute are like the engine, and storage and networking are feeding the fuel, the data, into that engine. You don’t want to keep the engine idle; you want it running at maximum efficiency. That’s why storage and networking are very, very important to making AI infrastructure more efficient. I’m very excited to see the launch of this biggest SSD ever.
Patrick Moorhead: Yeah, Chloe, a great history lesson too. I’ve been chronicling Arm’s move into the data center. And there was a day when people said Arm would never make it in the data center. I think you first came in storage, then you moved to offload, and then, maybe I’m a little compute-biased, but you were at the big table with CPUs, and you’ve done a full run of all the hyperscalers with that. And I think we may even have worked on analysis that said, “If Arm were inside every server in every data center, there might be 30% of power left over to do other things.” But anyways, thanks for those comments.
Renen, you’re up next here, buddy. Hey, congratulations on your big Cosmos announcement, and also on being part of the big 100,000-node xAI cluster in Tennessee. I think that’s super cool. And again, as an analyst firm, we’ve been chronicling how you’re really breaking all the rules here by compressing the stack. We were joking in the green room (okay, maybe the fourth floor up there), “Well, wait a second. Is it a storage company or a data company? A storage company? You don’t actually sell storage. You sell software that people run on their storage devices.” But I want to ask you, in the context of this announcement, where do QLC’s value and performance fit into what you are trying to deliver to your customers? Why are you on stage?
Renen Hallak: Sure. They invited me, and so I came. The reason I came is because Solidigm has actually been with us for seven years now. The company is eight and a half years old. Ever since they were Intel, they believed in us when we were very, very small. And even though we were very, very small, they saw our vision and aligned to it. And in many ways, they are responsible for a big part of our success. So thank you for that, and thank you for inviting me up here today. We are definitely a storage company and a data company. We are proud of our storage roots. That’s where we started. And on top of that, we added a lot more over the years. Today, I sometimes use the analogy of an operating system versus a storage system. That middle-of-the-sandwich software infrastructure stack that abstracts hardware away from these new AI applications, but not within a computer, not even within a data center. It’s a global machine. We have what we call the VAST DataSpace that allows us to build one global namespace across geographies. And yes, we get used by the biggest of the big. One of our partners is here on stage, Jacob. And I think wherever you’ll find a big deployment, VAST is there, and I think also CoreWeave is there. But I won’t speak for Jacob.
Jacob Yundt: It’s okay for me. It’s fine.
Renen Hallak: In terms of QLC, Solidigm was the first one with QLC drives. And I remember when it wasn’t entirely clear that there would be a market for QLC drives because they didn’t have enough write cycles. And we told them, “Don’t worry about that. As many as you can deliver, we will sell.” We were this tiny, and so it was hard to believe us back then. But I think we fulfilled that promise over the years. And over the years, we’ve been growing and asking for larger and larger drives, which is why having the 122 is so exciting. It enables a move away from hard drives. I think you saw that the previous speaker said 90% of data storage is still on hard drives. That is not for long. This new drive and these types of drives will definitely cause that to shrink down to, in my opinion, nothing. We will still have tape somewhere in a warehouse, but there is no need for the hard drive anymore.
And especially as you move to these new AI applications, they require much larger capacities and much faster access because we’re no longer analyzing numbers. It’s now pictures and video and sound, genomes, natural language. Somebody gave the example of when cell phones switched from text messages to multimedia messages, and how much more storage capacity they required. We’re now seeing the exact same thing in AI as we switch from large language models to multimodal. And I think, or I know that these GPUs are very, very hungry for information. And so you need it to be large. You need it to be fast. That’s why we’re called VAST.
Patrick Moorhead: Oh, I love that. By the way, I really appreciate the origin story there; I didn’t know about that intersection. But just a follow-up here. What do larger capacities actually mean to you? Obviously, people can store more, but does it change the way you architect your product?
Renen Hallak: It does not. We architected it from very early days to be able to sustain extreme levels of capacity and density. Most architectures fall over because the blast radius becomes too large. But we have a shared-everything architecture where all of the nodes can see all of the devices, and that means there’s no problem going up to 120. Now, we’re asking Solidigm for 240, and it’ll keep going. I’m sure we’ll see a petabyte drive out there in the not-too-distant future.
Patrick Moorhead: I don’t know. I’m putting this date on my calendar for next year. So, Solidigm, don’t let me down here. You’re on a roll. I appreciate that. Let’s move to CoreWeave. Jacob, your customers aren’t advertised or public, but they are some of the most important names out there in the AI business. Congratulations on that. You’re scaling like crazy. You said you were the compute guy. Can you talk about some of the challenges of designing for things that almost contradict each other: performance, efficiency, and scalability, all the way down to the component level?
Jacob Yundt: Yeah, definitely. Also, Renen and Greg stole all my good lines and content for stuff. But yeah, we are absolutely scaling like crazy. Basically, every time we have either a forecast or we have some sort of plan, we get it wrong. The good news is that we have customers that start consuming our platform and they’re like, “Oh, this is good. We like it. Good.” We’re like, “Okay, we’ve designed storage accordingly for this customer. We’re good. We got it.” And then they come back and they say, “That’s great. Now, we want all of it.” I’m like, “What do you mean all of it?” They’re like, “We want all of it.”
And so then we are scrambling because it’s like, “Oh, now we need a bajillion more racks of storage.” And high-cap drives, what we are doing wouldn’t be possible without these high-cap drives. We’re talking about HDDs and how they’re just terrible and how the legacy hyperscalers are still using them. None of these things would be possible if we were trying to do any of this stuff with hard drives.
And so we have already scaled internally from 16TB to 30TB to 60TB drives, and I can’t wait for this drive because we need it essentially yesterday. And that’s primarily because our customers are saying, “Hey, again, I need all of it,” and we need to scale the storage accordingly. Another part of this is the power discussion. We’ve talked a lot about power, about how the power demands for AI are just bonkers. And this gets a little bit nerdy, but if I can have a 20- or 25-watt drive and double the capacity on it, and I’m still just trying to get a performance profile that’s faster than HDDs, why wouldn’t I do that? That’s a no-brainer. If I have a footprint that’s designed for X amount of racks at Y power density, and now I can just double the storage capacity of that? Absolutely, give me that drive. Go get me the 240. Tell me when I can get a petabyte drive. So everything we do is designed to scale. It’s designed around efficiency. It’s designed to feed these just insane GPU workloads. And truly none of that would be possible unless we had a super high-cap drive like this. And the roadmap is just going more dense, which is good.
Patrick Moorhead: Jacob, by using more efficient drives, can you actually pack more GPUs into the same area? Is that how you look at the world sometimes?
Jacob Yundt: A little bit. The way that we look at this is that we would normally need to have X amount of racks to drive Y amount of GPUs. And if we can increase the density of that and say like, “Hey, now I don’t need X amount of racks for Y… I mean, X divided by two,” or something like that. Yeah, that leaves more power that I can have for either more GPUs or like Greg’s slide had it, more storage, which is great. Because when customers show up and they say, “We love your product, I need all of it.” We don’t have to panic and scramble and figure out how we’re going to go get a bajillion more racks in there to support their storage needs.
Patrick Moorhead: How far out do you plan? Meaning are you looking 26, 27 right now, and you’re seeing where everything’s falling together and you have power budget at the rack level, the fleet level, and the entire data center?
Jacob Yundt: I’ll say yes. But again, we get it wrong every time. So we looked at our 2025 roadmap, and we’re like, “Oh, we got this. We know what the power density is going to be. We know what our GPU count is going to be. We understand the customers we’re targeting.” And then we get it wrong. I know I keep saying this, but this story is so consistent among all of our customers. Someone’s like, “Great, I want to buy so many GPUs from you.” And then they’re like, “Oh, we realize that GPUs are not fungible with all of these other cloud providers. I want all of your GPUs.” And so that’s how we get these forecasts wrong, because we need to get hundreds more megawatts now, or we need to go increase power density because customers are asking for slightly different products or whatever. But yes, we are planning out through 2027 and beyond. I suspect we’ll be changing that all the time because everything’s just changing all the time.
Patrick Moorhead: Are you getting your nuclear engineering certification anytime soon?
Jacob Yundt: I don’t remember if that was in the media training, but I think no comment is the official answer.
Patrick Moorhead: No, that’s good. Good sport on that last one. I appreciate that.
Melody Brue: That’s good. All right. We’re going to shift to Sophie, who is not Dylan. We’re going to take a little bit of a turn toward the solutions approach here. We talked a little bit about this prior to you coming on stage as not Dylan. Efficient hardware is essential for infrastructure, but how does a comprehensive solution approach, including software, address your customers’ challenges?
Sophie Kane: Yeah, great question. So, as I mentioned, Ocient is a data analytics software solutions company, solution being the key word here. When we’re on stage with Solidigm, one of the key messages is that it’s hardware plus software; it’s a better-together story. And one of the key things that we’re seeing, obviously on stage tonight but also in the market, is that efficiency and sustainability are top of mind. The reason we know this, in part, is that we run an annual survey of data and IT professionals. And what we’re seeing is an emerging concern: over half have very real fears that energy is becoming a problem they can’t get a handle on.
And this is new. This isn’t something we’ve seen before. So again, going back to the software and hardware play, what we’re talking about here is that software sits on top of the hardware. And when the overall solution, software plus hardware, is working, it’s great. When it’s not, we see very real concerns around cost, very real concerns around footprint, and very real concerns from our customers around energy consumption. And that has huge implications in the innovation game.
Melody Brue: What kind of efficiencies are your customers seeing?
Sophie Kane: Yeah, another great question. At Ocient, we typically work with a number of industries: ad tech, telecommunications, government, vehicle telematics (including geospatial), and financial services, to name a few. And again, tonight we’re here with Solidigm. You are an incredibly innovative company; you just made the big announcement, and your innovations allow us to be more innovative. The efficiencies that we’re seeing, when we couple this hardware-plus-software solutions approach, are typically decreases of 50 to 90% across the board with all of our customers in cost, energy, and footprint. Bottom line, it goes back to the innovation game. It allows our customers to do a lot more than what they’ve been able to do in the past.
Melody Brue: Talking about efficiency, PEAK addresses some unique efficiencies and challenges at the edge due to data growth. And we just read today that PEAK announced… PEAK:AIO, sorry, I’m not saying the whole thing. You announced that you achieved 400% growth in US sales over the past year. Congratulations. That’s amazing.
Patrick Moorhead: Congratulations.
Roger Cummings: Thank you. Thank you.
Melody Brue: That’s huge. And this expansion was driven by your high-performance energy-efficient solutions, right?
Roger Cummings: Correct. Yeah. There are a ton of examples where we’ve been very efficient. These edge devices are getting more and more intelligent, and the ability for Solidigm to give us the density that we need gives the applications we run the potential to take on many, many more workloads across the AI lifecycle.
Patrick Moorhead: And just a quick follow-up. Edge means a lot of things. We heard the edge was the smartphone.
Roger Cummings: Yeah.
Patrick Moorhead: Chloe, thank you very much. But are we talking about retail stores? Are we talking about manufacturing? Warehousing? Is that the edge you’re talking about?
Roger Cummings: That’s a great question, because it’s interpreted differently in every industry we go into. It’s anything from an MRI machine, to a camera taking pictures of something (you’ve seen the Solidigm story about taking pictures of hedgehogs), to a drone, to a box sitting behind a Jeep kind of scenario. Edge is proliferating across all of AI, and we’re seeing examples across a myriad of different verticals that we work with today. The ability to pull information from various sources and not only run the algorithm but understand and make decisions at the edge is becoming the norm. And in some verticals, the data can’t move around, so how do you keep the data in place while moving some of the inference associated with it? Those edge devices need to be very intelligent. And to be very intelligent, you need the density associated with it.
Melody Brue: Roger, what are you seeing as the real drivers of growth at the edge and what are some of the challenges that that presents?
Roger Cummings: Well, at the edge, more and more data is being collected and more and more information is being gathered, so it’s truly a huge data problem. Solidigm is overcoming some of those challenges with density; you can imagine how the ability to put petabytes at the edge will overcome a lot of the data challenges we have right now. And not only running the model, but collecting the inference and doing something with that data at the edge, that’s here today, and it’s going to become more and more complex. How do we overcome that? By building infrastructures that are intelligent enough to communicate with various edges, because that’s what we’re doing now; we’re just talking about it from a multi-node environment. But really, that intelligence sometimes, and many times, has to stay at the edge, and then the inference needs to travel across those multi-nodes.
Melody Brue: Pat asked this question before. What do you want to see a year from now?
Roger Cummings: I’m sorry.
Melody Brue: Pat asked this question before. So what do you want to see a year from now?
Roger Cummings: A year from now, I would love to see that we’re making intelligent decisions at the edge, whether it be in the home or elsewhere, with ethics, with governance, and with understanding of how those decisions are made. Because we can apply the technology and have all the infrastructure we want, but there’s still a human element to understanding how a decision is being made and having visibility into it. And I think once we get there, and once people are comfortable with that, we’ll be far, far along in the growth of AI. We’re seeing that today, personally, within life sciences and the healthcare space. And I think if we can get more visibility around that, we’ll be in a good place.
Patrick Moorhead: I just have a follow-up here. I mean, if you look in the last 40 years of history of these different paradigms, ultimately, the compute makes its way to the point of origin, and the storage has to come with that and the memory has to come with that. With generative AI, there don’t seem to be a lot of people who are talking about generative AI on the edge. I have seen it starting to get baked into the silicon, getting baked into the GPUs and the SDKs. I’m curious, how are you looking at specifically generative AI at the edge? And I’m curious, does storage density and performance and efficiency have an impact on that curve?
Roger Cummings: Yeah. I can answer from my perspective and in our environment, living at the edge. For what we see right now, I’ll be honest, people are gathering a tremendous amount of information and not really knowing what to do with it yet. They’re bringing it to the cloud or to another platform to run their models against. And I think that’s great; it’s like the practice of long ago, when we used to move data around to analyze it. But in my opinion, we’re still very early in the practice of AI. The practice of how we measure, how we manage, how we understand what decisions are being made and how they’re being made is still in its infancy. And I think as that matures, we’ll have better practices. Now, if you want to go technical about it, my counterparts here could probably do a much better job from a technical perspective, but that’s what I see. It’s just the maturity of AI, and I think we need a best practice associated with it.
Patrick Moorhead: And enterprise AI, I mean, all the data we have clearly shows that we’ve exited that experiment stage, and we’re in POCs, but we are not even close to scaling enterprise AI at all. And the reason there’s so much action in the hyperscaler data centers is the big training models.
Roger Cummings: Training.
Patrick Moorhead: We have to come in and have that happen. We also believe as an analyst firm that the models will be more specific, whether you want to call them vertical models or smaller horizontal models that enterprises can do RAG off of.
Roger Cummings: One thing I’d have to say too is that it’s not just that we’re running these training algorithms; often we don’t have the right data. Just understanding what data you have, how complete that data is, and how that data is changing over time is really important. I think a lot of companies are running these models with data, and they don’t know if it’s out of compliance, if it’s insecure, or if it’s complete enough. So that level of insight, I think, is a level of maturity we need to get to as well.
Patrick Moorhead: Data is the number one impediment data management-
Roger Cummings: Yeah, by far.
Patrick Moorhead: … to enterprise adoption of AI. My firm doesn’t always get it right, but we get most of the big ones right. It was kind of heresy to say two years ago that data would be the biggest impediment. But now, everybody wants to talk about data, and I think everybody on this panel is seeing this. I mean, Renen, you’re knee-deep. Ocient, I mean you’re making it happen there. So folks, really appreciate your time, Dylan, thank you. Just kidding.
Sophie Kane: Happy to be here.
Patrick Moorhead: Yes, great to be here. I really appreciate you bringing the context to… I think it’s great. We can bring big-picture context to a 122-terabyte drive here. This is great, Mel. It was great hearing from all the partners up here to really, what’s the right word, accentuate everything that Solidigm said today, and hopefully validate a lot of what you and I discussed upfront.
Melody Brue: Yeah, I really liked hearing about the sustainability benefits that Solidigm is delivering to its partners.
Patrick Moorhead: Yeah, I just love what this announcement means to the overall industry. Like I said, storage doesn’t get enough conversation when it comes to the generative AI conversation out there. But when you combine the right storage solutions with the right compute, the right memory, and the right networking, amazing things happen. Performance, reliability, and efficiency for the optimal power draw for those data centers. So I just want to thank everybody for tuning in to The Six Five. Check out all of our content for Solidigm. We’ve had a couple great interviews over the past year, diving into other announcements, other product lines, other interviews with the executives. Hit that Subscribe button. Take care.
MI&S Weekly Analyst Insights — Week Ending December 6, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-december-6-2024/
Tue, 10 Dec 2024 01:30:21 +0000
https://moorinsightsstrategy.com/?p=44280
MI&S Weekly Analyst Insights — Week Ending December 6, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
The time between Thanksgiving and Christmas is hectic but rewarding for me, with lots of events and client meetings between the rounds of holiday cheer with the family. Last week, I was fortunate to be with a few members of my team in Las Vegas for AWS re:Invent. Matt, Jason, Robert, and I were all there covering our various specialty subjects—and figuring out how it all fits together for a company with the reach of AWS. It’s always great to see the analysts in action, and Jason recorded his first Six Five video. You can check that out, along with all the other re:Invent Six Five coverage, here.
Andy Jassy presents at re:Invent. (Photo: Patrick Moorhead)
While we were in Vegas, Will was in Dallas for AT&T’s Analyst & Investor Day, and Anshel traveled to a couple of NDA meetings with clients that will feed some of his future articles. This week, I’m in Northern California for the Lattice Developer Conference and then Marvell’s Industry Analyst Day, where Matt will join me. Robert and Jason will be in Boston with IBM, and Anshel will be attending T-Mobile’s Analyst Summit. Mel, Jason, and Robert will all be tuning into the ServiceNow Global Industry Analyst Digital Summit.
It’s a lot of travel, but also a lot of learning and many chances to connect with and advise our clients. We wouldn’t have it any other way. We’ll be starting 2025 with a bang, too, as many of us attend CES. If you are going to be there and we don’t already have something scheduled, please reach out and let’s set something up.
Over the last two weeks, MI&S analysts have been quoted in multiple syndicated top-tier international publications such as Barron’s, ComputerWorld, Fierce Network, Investor’s Business Daily, The Register, and VentureBeat. The media wanted MI&S’s take on Avaya, AWS, Intel, and more. Pat made several network television appearances, including on CNBC to discuss Intel’s CEO departure.
Last week was all about the Amazon Web Services re:Invent conference, for which 60,000 people descended on Las Vegas. It was quite an event, and AWS came out swinging with announcements up and down its stack. Here were the biggest news items on the developer side:
My personal favorite announcement of re:Invent was the new transformation capabilities in Q Developer. For context, Chapter 1 (out of 4) in my career was in systems integration and consulting; the big takeaway from that chapter was that migrations are hard—and maybe I should consider moving on to Chapter 2. One key lesson learned was that anything that can alleviate the pain of migrations and upgrades will make enterprises more secure and more efficient, and give them more time and money to innovate. AWS’s solid use case shows the enterprise potential of agentic apps and gen AI. Check out the piece that I published in Forbes about AWS agents.
For a Six Five on the Road videocast, I got to interview Sherry Marcus, Ph.D., about gen AI and AWS Bedrock. I have met Sherry before and was so happy that this time we got it on video. Bedrock has significantly expanded its capabilities in the newest release to support agentic applications. Bedrock is looking a lot like a great enterprise solution, but it is not alone, given the release of Microsoft’s AI Foundry at the Ignite conference a couple of weeks ago. What we are starting to see is the formation of a new type of middleware category that I am calling agentic development frameworks. This is the type of technology that will get us past productivity agents operating within application platforms (which still do have their place) and get us into integrated high-scale agentic solutions. Want to know more? Check out the Forbes piece I wrote about these agentic frameworks.
While AWS generated a lot of focus, re:Invent is also something of an ecosystem show for the cloud. So, partners and partner announcements were everywhere as well. I got to speak to product leaders from IBM, which made multiple announcements of their own products running upon and in some cases integrating with AWS cloud. It’s an interesting combination, as IBM is a major champion for hybrid cloud, and its tools could help create a bridge for management, governance, and observability anywhere that applications are deployed.
There was also a lot of talk about AWS SageMaker’s new unified toolset. At first glance, it may seem like it’s simply a means to unify the data-scientist experience, but after doing some digging I was able to find out that what was announced this week was only step one. I think we will be seeing a great deal of work integrating SageMaker with Amazon Bedrock in 2025.
Amazon’s newly announced Amazon Nova is a line of three new foundation models: Nova Pro, Nova Lite, and Nova Micro. These models offer frontier intelligence and can handle difficult language tasks, as shown by benchmarks such as MMLU and VATEX. Their key advantages are speed, agentic workflows, and the ability to be customized. With these models, Amazon has focused on price-performance, which could disrupt the AI market by setting a new standard and democratizing advanced AI capabilities in smaller models.
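For developers wondering what adoption looks like, the Nova models are invoked through Bedrock’s standard Converse API. A minimal sketch follows; the model ID reflects the launch announcement and is an assumption, so check the Bedrock console for the IDs available in your region.

```python
# Minimal sketch: invoking an Amazon Nova model via the Bedrock Converse API.
# The model ID is assumed from launch coverage; verify it for your region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",
    messages=[{"role": "user",
               "content": [{"text": "Summarize Q3 revenue drivers in two sentences."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```

Because Converse is model-agnostic, swapping between Nova Micro, Lite, and Pro to trade cost against capability is a one-line change, which is the price-performance story in practice.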
Similar to Dell’s AI Factory, Microsoft announced Azure AI Foundry to facilitate AI implementation. The foundry includes an SDK that integrates Azure AI capabilities with GitHub and Visual Studio. Its objective is to simplify AI implementation through a unified platform that streamlines development. Azure AI Foundry also includes tools to help determine AI’s effectiveness and monitor ROI, with the goal of measuring AI’s impact and ensuring it is focused on the proper business objectives.
Verint has introduced a new automated Scoring Bot tool to give organizations a more accurate and efficient method to assess the quality of their customer and employee experiences. This bot likely employs advanced analytics and AI technologies to analyze data from various interactions, such as customer feedback, service calls, and employee engagement metrics. By automating the scoring process, the bot can reduce manual effort and increase the speed at which insights are generated. This should enable businesses to make more informed decisions and swiftly implement improvements.
Integrating customer and employee experience scoring into a single tool reflects the growing recognition of the interdependence between the two. Engaged and satisfied employees are often more likely to deliver better customer service, enhancing overall customer satisfaction.
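Verint hasn’t published the Scoring Bot’s internals, but the core idea of blending customer and employee signals into a single quality score can be sketched in a few lines. Everything below, including the fields, weights, and formula, is hypothetical and for illustration only.

```python
# Hypothetical sketch of blended CX/EX scoring; not Verint's actual method.
from dataclasses import dataclass

@dataclass
class Interaction:
    csat: float              # customer satisfaction signal, 0-1
    resolution: float        # first-contact resolution signal, 0-1
    agent_engagement: float  # employee engagement signal, 0-1

def experience_score(i: Interaction) -> float:
    """Blend customer and employee signals into one 0-100 quality score."""
    weights = {"csat": 0.5, "resolution": 0.3, "agent_engagement": 0.2}
    raw = (weights["csat"] * i.csat
           + weights["resolution"] * i.resolution
           + weights["agent_engagement"] * i.agent_engagement)
    return round(100 * raw, 1)

print(experience_score(Interaction(csat=0.82, resolution=0.9, agent_engagement=0.7)))
# -> 82.0
```

The value of automating this is less the arithmetic than the coverage: scoring every interaction rather than a sampled handful is what makes the insights timely.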
I want to talk about a key announcement out of re:Invent—Q Developer for modernization. This agent does exactly what one would think: it modernizes IT environments. It focuses on three of them—.NET, mainframe, and VMware. While all are compelling, the VMware agent is incredibly disruptive, and maybe a little provocative.
Here’s the gist: through Q Developer, organizations can take their on-prem VMware environments and workloads and migrate them to cloud-native environments—on AWS, of course—all with a “few clicks.” This includes everything from re-mapping networks to workload migration.
Do I think this tool is going to be as simple as entering a few bits of data and clicking the mouse? No. But if this gets an organization, say, 75% of the way to successful migration, we are talking many months of saved time. And many, many dollars.
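To put rough numbers on that, here is a simple back-of-the-envelope calculation. Only the 75% figure comes from the hypothetical above; the baseline effort is an invented example.

```python
# Rough illustration of the time-savings argument for automated migration.
baseline_months = 18          # assumed manual VMware-to-cloud migration effort
automated_fraction = 0.75     # portion the tooling might handle, per the text

remaining = baseline_months * (1 - automated_fraction)
print(f"Manual effort: {baseline_months} months")
print(f"Remaining after automation: {remaining:.1f} months "
      f"(saves {baseline_months - remaining:.1f} months)")
```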
I’m curious to see how this will take off with customers; I can’t wait to see those first success stories and understand what was really involved in getting across the finish line.
One of the stars of AWS re:Invent was silicon—specifically Trainium2. This AI training silicon is the company's answer to demand for cost-effective, performant training instances; a single Trn2 instance with 16 Trainium2 chips delivers 20.8 petaflops of peak performance. In fact, the company claims a 30% to 40% price-performance advantage over its own GPU-based training instances running NVIDIA's H200. Trainium2, now generally available in the Trn2 instance family, is also built into an AWS UltraServer (four Trn2 instances, 64 chips) and UltraCluster (hundreds of thousands of chips).
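For context on how customers consume this silicon, here is an illustrative boto3 sketch that requests a Trn2 instance. The instance type reflects AWS's announced Trn2 family; the AMI ID is a placeholder (in practice you would use a Neuron-enabled AMI, and capacity for these instances is often reserved):

```python
# Illustrative sketch only: requesting a Trainium2 (Trn2) instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder; use a Neuron-enabled AMI
    InstanceType="trn2.48xlarge",     # announced Trn2 instance size
    MinCount=1,
    MaxCount=1,
)
```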
So what is AWS doing? Is it taking on NVIDIA and other GPU players with Trainium (and Inferentia)? People at the company will tell you no. They will tell you that they are simply providing customers with choice, just as they have with Graviton. I understand this position, and the logic behind it. Further, I have no doubt the company is simply providing choice across the AI journey.
That said, I also believe that AWS is going to see success with Trainium like it has with Graviton: a first-generation part that served more as a proving ground than anything else, a second generation that delivered significant price-performance improvements, and a third (and fourth) generation that continued to build adoption. And one day we will all take note that about half of the workloads on AWS run on AWS silicon.
Trainium will be a little harder to grow as aggressively as Graviton, however, because migrating a workload to a new virtualized CPU is considerably easier than porting training workloads to new accelerator silicon. Still, I believe that success will come. And there is no loser here: NVIDIA's astronomical growth in the training space will not be slowed.
Salesforce’s AgentForce platform has made a strong initial market entry, as shown by contracts with prominent companies such as FedEx and IBM within its first quarter since launch. CEO Marc Benioff projects significant growth, estimating that Salesforce customers will deploy one billion AI agents within the next year. This expected expansion in AI integration has led to increased revenue forecasts for Salesforce.
The platform’s success will likely depend on Salesforce’s ability to provide reliable model training and address potential issues such as hallucinations or memory inconsistencies, which it seems to be managing effectively. However, the widespread adoption of AgentForce involves more than just technological reliability. Organizations face the complex challenge of integrating a new digital workforce into their existing structures and workflows. Successfully implementing a digital workforce requires a considerable change-management effort. This entails a shift in mindset, organizational culture, and operational processes. Organizations must be ready to invest the necessary time, resources, and leadership commitment to navigate this transition and unlock the full potential of AI agents.
I look forward to seeing more progress from Salesforce and Agentforce during the Agentforce 2.0 event in mid-December.
Data has always been essential for businesses, but it’s not enough to just collect it—you need to act on it. At re:Invent, AWS CEO Matt Garman said, “The next big leap in value is not just about getting great data, but about taking actions and doing something with it.” That idea stuck with me because it gets to the heart of what businesses need today.
AWS just introduced the next generation of Amazon SageMaker, which brings data, analytics, and AI into one platform. It includes tools such as SageMaker Unified Studio for accessing and working with data, SageMaker Catalog for managing and finding data, and SageMaker Lakehouse for combining analytics and AI. The existing SageMaker service has been renamed SageMaker AI, focusing on building, training, and deploying AI and ML models. It can still be used on its own, but it’s also part of the larger integrated platform for those who need everything in one system.
These updates show that AWS is innovating to simplify things for businesses. By unifying these tools, AWS looks to guide enterprises in turning data into results.
As we all know, data is essential for innovation and business transformation. This fall, I attended events hosted by Teradata, AWS, Infor, and LogicMonitor, each offering a different approach to data management. Although their methods differed, they shared a consistent perspective on the importance of data management—which is often overlooked in discussions dominated by AI. Read my latest Forbes article, where I highlight these companies' views on how data management supports IT integration, analytics, real-time monitoring, and AI-based processes.
Broadcom’s acquisition of VMware and subsequent price increases have introduced significant challenges for enterprises reliant on VMware’s virtualization solutions. The financial and technical concerns associated with these changes dictate a reassessment of IT strategies, especially to avoid vendor lock-in. Deep data and network observability can serve a critical function in managing hybrid workloads, optimizing resources and utilization, and ensuring seamless operations across public and private clouds.
Over time, Broadcom’s pricing strategy for VMware may spur innovative approaches as its current customers adopt competitive offerings and diversify their virtualization solutions. As businesses adjust to a new normal, I believe this evolution will shine a light on the importance of network observability solutions within a migration journey for those customers that wish to explore alternatives. For more insights on this, take a look at my recent Forbes contribution on the role of deep observability in VMware migrations.
At AWS re:Invent 2024, SAP introduced GROW with SAP on AWS, designed to make it easier for enterprises to adopt SAP S/4HANA Cloud ERP. As you may know, ERP transformations can be challenging. The goal of GROW with SAP is to reduce the upfront costs and complexity of adopting cloud ERP, enabling businesses to complete deployments in months instead of years. This solution integrates SAP data with AWS's generative AI tools—especially Amazon Bedrock—to improve operations. It also uses SAP's Joule AI copilot to work across SAP applications to make processes more efficient.
I’m interested to see how GROW with SAP on AWS plays out when it is released soon. I understand the promise of easier access to resources, but knowing how challenging ERP transformations can be, especially in terms of both data management and change management, I’m cautious about how well it will deliver.
PayPal and Venmo are updating their platforms to remain competitive in the digital payment market. PayPal is introducing features such as money pooling to enhance the user experience and attract a broader audience. This strategy also indirectly promotes the PayPal brand to younger Venmo users, potentially fostering future loyalty to the larger PayPal ecosystem.
At the same time, PayPal is facilitating online holiday shopping with its new “Fastlane by PayPal” feature. Launched in August 2024 and now available throughout the U.S., Fastlane aims to simplify guest checkout by using e-mail recognition and one-time passcodes to fill in shipping and payment information automatically. This reduction in manual entry could lead to faster checkouts and increased sales. Currently, Fastlane is available to U.S. merchants and shoppers using U.S. dollars and integrates with various e-commerce platforms, including Adobe Commerce, BigCommerce, and Salesforce Commerce Cloud. These developments indicate a two-pronged approach by PayPal, improving the user experience across different platforms and transaction types to boost engagement and expand market share.
SoFi Invest has expanded its offerings through a partnership with Templum, providing accredited investors with access to private-market investments. This move includes new funds such as those focused on SpaceX, Pomona Investment Fund, and StepStone Private Markets Fund. The partnership leverages Templum’s technology to facilitate these alternative investments, aligning with SoFi’s strategy to enhance its investment options and cater to the growing demand from retail investors interested in privately held companies.
There are rumors that Sony is working with Apple to bring games from PlayStation VR to the Vision Pro. This would be a huge win for both companies, since Sony ultimately doesn't care about the hardware as much as it does about selling games, and Apple desperately needs more immersive, better-quality games for its headset. A big part of this rumored partnership is that Apple would ensure that Sony's PSVR controllers work with the Apple Vision Pro. This would also mean that developers could develop for the Vision Pro and PSVR using the same control scheme. I believe that this is partially an admission from Apple that shipping a headset without controllers and expecting people to game on it is a fool's errand.
It appears that Valve is not only building a VR headset codenamed Deckard, but that it is also creating new VR controllers and a living room console to accompany the Steam Deck as part of its upcoming VR hardware offerings powered by SteamOS. I talked about Valve’s efforts in the XR space in my recent State of XR Part 2 report, but it seems that Valve’s hardware ambitions are both deeper and broader than originally anticipated. That said, “Valve time” is a very real thing, and these products could launch in a few months or a few years—it’s anyone’s guess.
AWS re:Invent attendees had to look behind the session titles to find IoT news. That's because CSPs (including AWS) and enterprise ERP providers are following the money—using AI to extract significant business value from the operations data collected by IoT systems. Here are two examples from re:Invent: using AI to monitor and analyze IoT data, and using Greengrass to deploy machine learning on edge devices.
AWS announced the general availability of IoT SiteWise Assistant in November and demonstrated its capabilities at re:Invent. IoT SiteWise Assistant adds a layer of intelligence on top of the SiteWise Monitor Dashboard to simplify industrial data collection, organization, and monitoring. The new Assistant enables enterprises to gain actionable insights into complex operational situations involving disparate data sources and types. Operators can ask simple, natural-language questions to identify problems, troubleshoot root causes, and take corrective actions.
Digging deeper, here’s why this is a game-changer for IoT: Using AI to look for patterns across heterogeneous data lowers one of the most significant industrial IoT solution barriers—data ingestion and transformation. The trend is to use data pretty much as-is rather than remodeling it into a unified, centralized, managed data fabric. This approach fits with AWS’s “data as a product” strategy, combining the benefits of mesh and fabric data architectures.
The re:Invent workshop “Unleash edge computing with AWS IoT Greengrass on NVIDIA Jetson” demonstrated deploying ML models directly on edge devices, facilitating on-device intelligence with data paths connected to AWS services. The workshop used the NVIDIA Jetson Orin platform to accelerate complex AI workloads such as image recognition, Edge Impulse to streamline cloud-to-edge AI workflows, and Greengrass to connect the data and orchestrate software deployment on fleets of devices. I’m doubling down on my prediction that AI at the edge is a 2025 “megatrend.”
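To make the Greengrass piece concrete, here is a minimal sketch of a Greengrass v2 deployment pushed from the cloud to a fleet of edge devices via boto3. The component name and thing-group ARN are hypothetical placeholders, not AWS-published names:

```python
# Minimal sketch: pushing software to a device fleet with Greengrass v2.
import boto3

gg = boto3.client("greengrassv2", region_name="us-east-1")

gg.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/JetsonFleet",
    deploymentName="edge-vision-rollout",
    components={
        # hypothetical ML inference component built for the Jetson targets
        "com.example.ImageClassifier": {"componentVersion": "1.0.0"},
    },
)
```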
Apple’s 5G modem ambitions are no secret to anyone; the company wants to ramp its 5G modem capabilities with low-cost and low-volume products. This is meant to protect the company from any potential defects or delays that could arise. I also believe it gives Apple time to refine its modem design and add capabilities while shrinking die space and power consumption. We could even see a 5G RedCap modem from Apple for the Apple Watch or other wearables like an AR headset. I would expect a more powerful modem inside new MacBooks or iPads that’s more in-line with what Apple will use in the iPhone.
A comprehensive global study, the 2024 HP Work Relationship Index, reveals that only 28% of knowledge workers have a healthy relationship with their work. This represents a mere one-point increase from 2023. However, the study highlights two promising solutions to improve this dynamic: integrating artificial intelligence and offering personalized work experiences.
The study’s key findings include:
AI usage among knowledge workers has surged to 66% in 2024, up from 38% last year, with AI users reporting 11 points higher satisfaction with their relationship to work.
Approximately two-thirds of workers desire personalized work experiences, with 87% willing to forgo part of their salary to achieve this.
AI is crucial for making jobs easier, improving work-life balance, and opening new opportunities for enjoyment and career advancement.
Microsoft finally pushed its Recall feature to x86 Copilot+ PCs with chips from Intel and AMD that have sufficiently capable NPUs. This update came through the Windows Insider Dev channel, so it's not quite a broad release, but it is good to see Microsoft deliver on its promise of bringing Copilot+ to x86 PCs. I did think it was a bit odd that Recall didn't come as part of the November update that brought Copilot+ to x86 PCs, but at least it wasn't a long wait. I believe that we'll continue to see a feature delta between Snapdragon-based systems and x86 systems, given Qualcomm's six-month head start.
IonQ recently announced IonQ Quantum OS, a quantum operating system to increase the efficiency and scalability of quantum computing. It reduces classical and cloud overheads, provides improved qubit calibration, and increases security for enterprise-level applications. IonQ is currently using the OS in the IonQ Forte system and plans to use it in the IonQ Forte Enterprise in Switzerland.
IonQ also announced the IonQ Hybrid Services suite, which is designed to blend quantum and classical computing. It contains a new tool called the Workload Management & Solver Service, which makes cloud integration of quantum tasks easier. It also announced “Sessions” for optimized QPU time management.
Nile Secure’s latest Trust Service leverages its core architecture to simplify zero trust access and policy management with continuous updates as well as monitoring and enforcement capabilities. I also like its SSE integrations with Microsoft Security, Palo Alto Networks, and Zscaler for extended cloud workload protection. The company’s approach to delivering secure networking as a service with SLA guarantees is somewhat unconventional, and this latest announcement has the potential to provide additional value for its customers.
At this year’s AWS re:Invent, there was of course a lot of great technology to discuss. But one thing that stands out to me is how these tools and solutions aren’t just geared toward enterprises—they’re also making an impact in sports. I attended an AWS Sports session that highlighted work being done with the National Football League, National Hockey League, PGA Tour, and Deutsche Fußball Liga.
This session gave context to our Moor Insights & Strategy Game Time Tech podcast. Melody Brue and I had a chance to catch up with Julie Neenan Souza, head of global sports strategy at AWS, to discuss how AWS technologies are impacting organizations, players, and fans. Check out this great conversation.
Renewable energy company RWE has chosen Hewlett Packard Enterprise’s Private Cloud AI to enhance its weather modeling and energy resource management. This collaboration is part of HPE’s NVIDIA AI Computing portfolio and will enable RWE to leverage advanced AI to improve forecast accuracy and optimize its global renewable energy operations.
RWE’s AI Research Laboratory will use the AI-optimized private cloud—which HPE says can be deployed in just a few clicks—to evaluate, fine-tune, and deploy weather models. The solution’s ability to handle large datasets and automate processes should streamline this work. It will integrate with the HPE GreenLake cloud, which should allow RWE’s researchers to concentrate on model development and accelerate their time-to-market.
This initiative aligns with RWE’s growth strategy, “Growing Green,” which aims to expand its renewable energy portfolio and achieve net zero emissions by 2040. The advanced AI capabilities should also give RWE a competitive advantage in the renewable energy market.
I had the opportunity to attend AT&T's Analyst & Investor Day in Dallas. The company's continued significant fiber investment serves as the bedrock for providing converged network services that include highly performant broadband and mobility at scale. In the process, it is bridging the digital divide and providing digital literacy through its network of connected learning centers. It's a model to follow for other companies that have the capital and operational resources to marshal. One statistic shared at the event stood out for me: converged services are lifting the operator's lifetime subscriber value by 15%. That flies in the face of the conventional wisdom that subscribers bundle services purely to get discounts. This positions AT&T to offer additional adjacent solutions with great potential for lifting ARPU over time.
Solidigm Aims To Improve Data Center Sustainability With 122TB SSD
https://moorinsightsstrategy.com/solidigm-aims-to-improve-data-center-sustainability-with-122tb-ssd/
Thu, 05 Dec 2024 19:04:27 +0000
https://moorinsightsstrategy.com/?p=44509
The AI boom has led to an energy crunch in datacenters, but Solidigm plans to significantly lower the environmental impact of datacenters with its high-density storage.
Industry panel discussion at Solidigm's 122TB launch event. Photo: Solidigm
The ongoing AI revolution brings with it massive demands for data and data processing. Unfortunately, this data explosion presents significant business challenges in the form of rising energy costs and strained infrastructures. Datacenters are experiencing an energy crunch, and the financial strain could hinder innovation and eat into profitability.
At a launch event in New York City last week, Solidigm, a NAND flash memory maker, unveiled a robust new answer to help solve this problem: the 122TB D5-P5336 SSD. This solid-state drive offers a new level of storage density and efficiency, promising to ease sustainability pressures in datacenters and unlock the full potential of AI. The launch event brought together media, analysts and key partners, including executives from Arm, PEAK:AIO, VAST Data, Ocient and CoreWeave, underscoring how broadly Solidigm's ecosystem has embraced the advancement.
Datacenter Efficiency In The Age Of AI
The pressing task is to rein in datacenter power consumption. Greg Matson, senior vice president of strategic planning and marketing at Solidigm, noted at the event that “Analysts expect datacenter power [consumption] to increase threefold by the end of the decade, with a huge part of that being driven by AI.” This creates an urgent need for more efficient storage solutions. The D5-P5336 directly addresses this challenge, consuming up to 84% less power in network-attached storage deployments than traditional hybrid HDD and TLC solutions. At the edge, it delivers 3.4 times more terabytes per watt than 30TB TLC drives, significantly improving power density. This translates to significant cost savings for businesses, allowing them to redirect resources toward AI development and other strategic initiatives.
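A quick back-of-envelope calculation shows how capacity density drives the terabytes-per-watt claim; the wattages below are illustrative assumptions rather than Solidigm datasheet figures:

```python
# Back-of-envelope check on the terabytes-per-watt comparison.
# Wattages are assumed for illustration, not measured datasheet values.
cap_qlc_tb, power_qlc_w = 122, 30.0  # assumed active power, D5-P5336
cap_tlc_tb, power_tlc_w = 30, 25.0   # assumed active power, 30TB TLC drive

tb_per_watt_qlc = cap_qlc_tb / power_qlc_w  # ~4.1 TB/W
tb_per_watt_tlc = cap_tlc_tb / power_tlc_w  # 1.2 TB/W

print(f"advantage: {tb_per_watt_qlc / tb_per_watt_tlc:.1f}x")  # ~3.4x
```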
During an executive panel at the event—moderated by Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, and me—Chloe Jian Ma, vice president at Arm, provided further context. “Currently, datacenters as a whole are consuming about 460 terawatt-hours of power,” she explained. “That’s about equivalent to the power consumption of Germany as a country, but it’s going to increase significantly.” She cited the example of Meta’s new AI cluster, which consumes enough power to supply 34 million American households. “We have to think about new ways to make datacenters more efficient,” Ma urged.
Making Every Watt And Square Inch Count
The drive’s high storage density also translates to substantial space savings. It enables up to a 4:1 reduction in NAS footprint compared to legacy solutions. In space-constrained edge installations, it stores four times more data versus 30TB TLC drives. This efficiency allows datacenters to store massive amounts of data in smaller footprints, reducing capital expenditure and operational costs. Matson emphasized the importance of both energy and space efficiency, noting that “We’re in an era right now where every watt and every square inch counts.”
The D5-P5336’s high capacity and low power consumption make it particularly well-suited for edge computing environments, where space and energy resources are often limited. As Roger Cummings, CEO of PEAK:AIO, pointed out, “Edge devices need to be very intelligent. And to be very intelligent, you need that density associated with it.”
Sustainable AI Development: A Business Imperative
As AI adoption accelerates, the demand for high-capacity, energy-efficient storage solutions continues to grow rapidly. A recent Microsoft and Carnegie Mellon study revealed that storage power accounts for a significant portion of datacenter emissions. To take one example from the study, storage causes 33% of operational emissions and 61% of embodied emissions in Microsoft Azure’s general-purpose cloud. This highlights the potentially staggering environmental impacts as datacenter usage increases to accommodate AI and other uses.
Efficient storage ensures that AI infrastructure operates at peak performance. As Ma explained, “GPUs and AI compute are like the engine, and storage and networking are feeding it data. You don’t want to keep the engine idle.”
Companies such as PEAK:AIO are developing software solutions to further optimize AI workloads. “Our software-defined approach allows organizations to build high-capacity, densely configured nodes, significantly reducing the overall infrastructure required,” Cummings explained. This synergy between efficient hardware such as Solidigm’s 122TB SSD and intelligent software is crucial for addressing the evolving needs of data-intensive AI workloads while minimizing environmental impacts.
Industry Perspective
“We’re seeing very real concerns from our customers around energy consumption,” Sophie Kane, director of growth marketing and business development at Ocient, told me. “Over half of the data and IT professionals we recently surveyed expressed fears about energy becoming a problem they can’t get a handle on. This has huge implications for innovation.”
Travis Vigil, senior vice president of product management in the Infrastructure Solutions Group at Dell Technologies, offered a complementary perspective. “The AI opportunity does not come without challenges, and organizations today are making unprecedented infrastructure power and space decisions,” he said. “Dell Technologies believes that higher density provides the path to maximizing storage energy efficiency while minimizing datacenter footprint.”
Sustainability By Design
Solidigm’s commitment to sustainability extends beyond the D5-P5336 SSD’s efficiency and 122TB capacity. The drive incorporates OCP-compliant thermal management, ensuring efficient cooling even under demanding workloads. This extends the SSD’s lifespan and reduces datacenter cooling needs, further lowering overall energy consumption.
The D5-P5336 is engineered with longevity in mind so it can maintain consistent performance throughout its operational life. It uses smart software for lower write amplification, has a large capacity to absorb data writes and is built for workloads with larger files and sequential writes—all of which reduce stress on the drive. Datacenter operators should be able to rely on these drives for extended periods without significant degradation in speed or capacity. Minimizing the need for frequent replacements reduces electronic waste, resource-intensive manufacturing and the carbon footprint associated with transportation and installation.
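The metric underlying that "lower write amplification" claim is the write amplification factor: the ratio of bytes physically written to NAND versus bytes the host requested. A small sketch with illustrative numbers:

```python
# Write amplification factor (WAF): bytes physically written to NAND divided
# by bytes the host asked to write. Numbers here are illustrative only.
def write_amplification(nand_bytes: float, host_bytes: float) -> float:
    return nand_bytes / host_bytes

# Large sequential writes map cleanly onto NAND blocks, so the controller
# does little extra garbage-collection work:
print(write_amplification(nand_bytes=1.1e12, host_bytes=1.0e12))  # ~1.1

# Small random writes force data relocation, inflating the factor:
print(write_amplification(nand_bytes=3.0e12, host_bytes=1.0e12))  # 3.0
```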
Building A More Sustainable Future For Datacenters
Solidigm’s 122TB SSD marks a significant advancement for meeting the increasing demand for sustainable data storage solutions. By combining high capacity with energy efficiency, this technology could significantly lower the environmental impact of datacenters. Its compact size and reduced power consumption are also beneficial for various edge computing applications, helping to minimize resource use and electronic waste wherever data is processed. As the industry deals with the challenges and opportunities presented by AI, solutions like the D5-P5336 point to a future where technological innovation and environmental responsibility can work hand in hand.
Game Time Tech with Special Guest Julie Souza, AWS Global Head of Sports Technology
https://moorinsightsstrategy.com/game-time-tech/game-time-tech-with-special-guest-julia-souza-aws-global-head-of-sports-technology/
Tue, 03 Dec 2024 23:10:26 +0000
https://moorinsightsstrategy.com/?post_type=gametimetech&p=44233
Hosts Melody Brue and Robert Kramer are joined by special guest Julie Souza, head of AWS's Global Sports Strategy.
Tune in to the latest episode of the Game Time Tech Podcast as Melody Brue and Robert Kramer host special guest Julie Souza, head of AWS's Global Sports Strategy.
In this episode, we explore:
The three pillars of sports innovation: Performance Data & Analytics, Media Management, and Fan Engagement – and how AWS is driving progress in each area.
AWS and the NFL: Cool tech collaborations, from player health and safety (including the Digital Athlete portal) to Next Gen Stats and the exciting potential of generative AI.
Impact on the field and beyond: How cloud technology is changing the game for teams, broadcasters, and fans, including injury reduction, rule changes, and new ways to engage with the sport.
The Seattle Seahawks partnership: A deep dive into how the Seahawks are using generative AI to enhance content creation and fan engagement.
The future of sports tech: What’s next for AWS in the world of sports, including globalization and live cloud production.
Behind the scenes at AWS Sports: Julie shares unique insights and a fun story from the early days of implementing tracking technology in the NHL.
Key Takeaways:
Data is king in sports, and AWS is helping teams and leagues unlock its full potential.
AI is transforming everything from player safety to fan engagement.
Cloud technology is making sports more efficient, accessible, and sustainable.
Partnerships between AWS and sports organizations are driving innovation and creating new possibilities for the future of sports.
Watch the episode here:
Or listen here:
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice.
Box Debuts AI-Driven Content Management Solutions At BoxWorks 2024
https://moorinsightsstrategy.com/box-debuts-ai-driven-content-management-solutions-at-boxworks-2024/
Sat, 30 Nov 2024 02:23:09 +0000
https://moorinsightsstrategy.com/?p=44502
The AI-focused announcements at BoxWorks reflect a significant shift in the content management landscape, with the aim of realizing the full value of enterprise data.
Box cofounder and CEO Aaron Levie presents at BoxWorks 2024. Photo: Box livestream
Box held its annual BoxWorks conference this week in San Francisco, announcing a series of AI-driven innovations. The focus was squarely on how Box aims to help organizations manage content and extract more value from their data.
The company has taken advantage of a significant shift in enterprise data management that has been driven by advancements in large language models; this has enabled Box to continually add AI capabilities to support the content lifecycle. Box estimates that 90% of the data in an enterprise is unstructured, with the majority of that being content. Traditionally, systems have not been able to realize the full value of enterprise data, relegating it to silos and “black holes.”
Aaron Levie, cofounder and CEO at Box, explained this is changing because of the company’s Intelligent Content Management platform. Customers will be able to “leverage the data inside their files to drive innovation, automate processes and secure their most important information at a fraction of the cost of legacy systems,” according to Levie.
Box AI Studio: No-Code Agents
Box introduced Box AI in mid-2023 and has since integrated its AI features across the platform, including the recently announced general availability of Box Hubs. Now, Box AI Studio sits at the heart of the company’s AI strategy. Introduced at BoxWorks and available in January 2025, this no-code feature will allow organizations to tailor AI to their specific needs. Administrators can select from various AI models (including those from Microsoft, AWS and Google), create AI agents for workflows and customize them with prompts and parameters—all without coding. Box AI Studio also includes tools to test, deploy and track the performance of different AI agents to ensure effectiveness for various use cases. Security features enable administrators to control access and permissions.
As an example, an HR department could create a “Talent Agent” to optimize its recruitment process for long-term outcomes. A company could customize this agent to align with its hiring guidelines and organizational needs. It could automatically screen resumes, engage with candidates and even help manage employee performance reviews. An agent like this might also identify skills gaps and career paths, helping to create a cohesive employee lifecycle experience.
At the event, Box also introduced Box Apps. This no-code application development environment allows users to create custom applications with features such as dashboards, specialized content views and automated workflows by leveraging AI-powered metadata extraction. By automatically identifying and tagging critical information within documents, Box Apps can help automate complex processes such as contract management, invoice processing and employee onboarding.
Box also announced two new features currently in beta: Box Forms for creating web and mobile forms and Box Doc Gen, which automates document creation by populating templates with data from various sources, including Box Forms, third-party applications and metadata extracted from content within Box. These solutions should help eliminate manual data entry and ensure consistency in document creation.
These advancements demonstrate Box’s commitment to delivering on the promises of its strategic acquisitions. The company recently acquired Alphamoon, an intelligent document processing company, and Crooze, a no-code workflow automation platform. Alphamoon’s technology integrates into the AI-powered metadata extraction capabilities of Box Apps, while Crooze’s influence appears in streamlined workflow automation features. These acquisitions have clearly been a driver in Box’s push from content storage and management to the broader market play of intelligent processing and workflow automation.
Enhanced Security And Compliance
The company made two new security and compliance feature announcements that will be available in beta in January 2025. Box Archive is designed for long-term content preservation, which is crucial for meeting many regulatory requirements. The new Content Recovery capability aids in rapid recovery from ransomware attacks. It provides admins with a dashboard that details the attack's impact and identifies compromised files. It automatically syncs to Box Drive so businesses can get their content back quickly. With 30% of ransomware demands exceeding $5 million, Content Recovery could be a vital safety net against increasingly sophisticated cyberattacks.
The company has a strong foundation of security with features such as AES 256-bit encryption, granular access controls, AI-powered data-leak prevention and a zero-trust architecture. Box does not directly compete with data protection leaders like Commvault, Veeam or Cohesity. However, it is strategically—and aggressively—advancing its security offerings. These enhancements introduce valuable overlapping features, giving users strong data protection and recovery capabilities within their existing Box environment.
AI And The Box Impact Fund
At BoxWorks, Box also announced the fourth annual installment of the Box Impact Fund, which provides six $25,000 grants to nonprofits pursuing digital transformation projects. (I was a judge for last year’s grants; I wrote about that experience here.) This year, the fund will prioritize projects leveraging AI in child welfare, crisis response and environmental protection. Nonprofits can apply between November 12 and December 9, 2024. This initiative reflects Box’s recognition that nonprofits require the same digital tools and capabilities as enterprises to maximize their impact.
The Future Of Intelligent Content Management
Box’s announcements at BoxWorks 2024 signal a significant shift in the content management landscape, and I believe Box is well-positioned in the sector. The company’s strategic direction shows it is addressing the growing need for organizations to manage and extract value from increasing content volumes while tackling the challenges of data sprawl, security and regulatory compliance. The company’s commitment to empowering both enterprises and nonprofits with these advanced tools looks toward a future where data is not merely stored, but truly understood and leveraged for maximum impact.
Episode 39: Microsoft Ignite, Solidigm, AWS, Canva, Zoom, and Panzura
https://moorinsightsstrategy.com/hot-desk-pod/episode-39-microsoft-ignite-solidigm-aws-canva-zoom-and-panzura/
Sat, 23 Nov 2024 00:53:34 +0000
https://moorinsightsstrategy.com/?post_type=hot_desk_pod&p=44210
Join Robert Kramer and Melody Brue as they talk about key developments from Microsoft Ignite, Solidigm, AWS, Canva, Zoom, and a company to watch: Panzura.
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice.
MI&S Weekly Analyst Insights — Week Ending November 22, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-22-2024/
Fri, 22 Nov 2024 20:02:20 +0000
https://moorinsightsstrategy.com/?p=44170
MI&S Weekly Analyst Insights — Week Ending November 22, 2024. A wrap-up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
MI&S CEO Patrick Moorhead and MI&S analyst Melody Brue (far right) moderate a panel of ecosystem partners at the launch event for Solidigm’s new 122TB SSD for datacenters. Photo: Solidigm
As I mentioned last time, the firm took a week off from publishing this Analyst Insights roundup so we could devote a whole day to an all-company strategy session held in Austin. It was a great opportunity for us to review our projects and performance in 2024 and to hone our focus for 2025. We capped off the day in a private room at a downtown restaurant where we welcomed spouses, family members, and friends of the firm for a celebratory dinner.
This week, I was at Microsoft Ignite, where, in addition to my regular analyst duties—including an exclusive briefing with Microsoft chairman and CEO Satya Nadella—I filmed several Six Five videos with key executives to break down the Ignite news. Anshel was in New York for Qualcomm’s Investor Day, Matt was at SC24 in Atlanta, and Will was in Tokyo at the NTT R&D Forum.
Read more about significant tech trends and events in this week's Analyst Insights, including happenings from the week you didn't hear from us, such as BoxWorks (Mel) and the Veeam Analyst Summit in Scottsdale (Robert).
For those of you in the U.S., the team at MI&S wishes you a happy Thanksgiving! And a wonderful week to all!
Over the last two weeks, MI&S analysts have been quoted in multiple syndicated top-tier international publications, including VentureBeat and Wired, with our thoughts on Amazon, Dell, Intel, NVIDIA, Microsoft, low-code/no-code, semiconductors, chips, AI, and more. Patrick Moorhead appeared on Yahoo! Finance and CNBC to discuss NVIDIA's Q3 2025 earnings.
This week Microsoft hosted its Ignite event in Chicago and, as expected, it was very heavy on AI agents in terms of announcements. This makes a lot of sense, and I intend to do a deeper research piece on all of the announcements very soon. What stands out so far is the breadth of the announcements. Microsoft is a massive technology company providing value to many different stakeholders. So, it was not a surprise to see agentic aspects throughout its developer tools such as Copilot Studio, as well as embedded into standard products such as Office 365. What I wonder is how quickly we will see the agent category start to fracture into sub-categories. This will likely be a healthy thing, since my conversations with sales and marketing leaders tell me that education around agents and AI in general is seriously lacking. One click down into how this stuff works—and the implied business value of different types of agents—might do wonders for everyone.
This week I got the chance to meet the founding team at Zavvis Technologies to discuss their approach to agentic applications. The discussion provided a cool perspective on startups in the age of AI. The access to LLMs via hyperscalers and horizontal tooling for data science and application development enables new approaches to innovating. What Zavvis is doing is digging into how agents can improve the CFO function in a company. To be specific, it’s not really about automating financial operations and processes, but more a set of agents to help the CFO understand options and see opportunities in a mass of data. To me, it’s a very interesting approach for a couple of reasons. First is simply the fact that a company can now use technology to take AI deep into a very specific functional area. Second, the agents should help blend together structured and unstructured data, which may in fact enhance the CFO’s role. That said, this approach is a bit tricky, given that the CFO role in particular is known for risk aversion—which suggests a different sort of go-to-market strategy for Zavvis. I am excited to hear what the company learns as it gets moving.
In October, IBM announced the availability of its Granite 3.0 models; since then it has been engaging many different ecosystems talking about the open source value proposition for LLMs. While IBM is not the only player here, it does have an evolved take on how transparent vendors can be about LLMs. But one item that got lost in the shuffle was IBM introducing one Granite variant specifically to implement guardrails on speech and bias. IBM has a notion of this LLM sitting in front of other LLMs. For an analogy, think of when someone in a TV studio “bleeps” out a word they are not allowed to broadcast. I was talking to another client about this today and I started to think about the utility of purpose-built LLMs being used as a front-end. It’s an interesting notion, and whether it’s going to win out against other rules-based guardrails and security measures is unknown. But it also suggests that we are about to enter into some really serious architectural discussions for AI in 2025.
Microsoft just released Magentic-One, a multi-agent AI system that can handle open-ended tasks common to daily life. The system's multiple agents each have specialized functions and are controlled by an orchestrator agent that acts as a monitor and supervisor. The open-source system is novel in that it is active rather than passive: it can provide recommendations and execute tasks. According to Microsoft, Magentic-One excels at software development, data analysis, and navigating the internet.
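For readers unfamiliar with the pattern, here is a conceptual sketch of how a supervising agent can delegate steps to specialized agents. This illustrates the general orchestrator/worker idea, not Microsoft's actual Magentic-One API:

```python
# Conceptual sketch of the orchestrator/worker pattern described above;
# the agents here are trivial stand-ins, not real Magentic-One components.
from typing import Callable, Dict, List, Tuple

def web_agent(step: str) -> str:
    return f"[web results for: {step}]"

def coder_agent(step: str) -> str:
    return f"[generated code for: {step}]"

class Orchestrator:
    """Supervising agent: routes each step of a plan to a specialist."""
    def __init__(self, workers: Dict[str, Callable[[str], str]]):
        self.workers = workers

    def run(self, plan: List[Tuple[str, str]]) -> List[str]:
        # Delegate each (worker, step) pair and record the result.
        return [self.workers[name](step) for name, step in plan]

orchestrator = Orchestrator({"web": web_agent, "coder": coder_agent})
print(orchestrator.run([("web", "find the data source"),
                        ("coder", "write the analysis script")]))
```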
My biggest takeaway from the Microsoft Ignite conference is how much the company has invested in its infrastructure. It has added an HSM for crypto management and a DPU for networking and storage acceleration to complement its Cobalt CPU, Maia AI accelerator, and existing security platform. Effectively, Microsoft has joined AWS and Google Cloud in developing custom silicon to deliver a full compute experience.
In addition to this, the company worked with AMD to develop a custom chip—the EPYC 9V64H—to support virtualized HPC workloads. This chip will be outfitted with HBM3 memory and double the Infinity Fabric bandwidth. While this is an incredibly powerful compute platform, what is perhaps more interesting is the dominant position AMD has taken in the cloud space. Custom chip work for the CSPs was once the domain of Intel and Intel only.
Finally, Microsoft has expanded its partnership with Oracle by activating an additional 12 regions for Oracle Database@Azure and making the environment managed and governed through Azure Resource Manager and Purview, respectively. Effectively, Oracle Database@Azure is now fully integrated as a native service.
Part 1: SC24 came and went, and boy was it a ride! There are so many storylines to trace from the big supercomputing conference, so let me just share a few.
My biggest takeaway is a little bit esoteric. When I attended SC15 in Austin in 2015, it felt like I was walking through a science fiction magazine because the technology was so disconnected from what was happening in the enterprise datacenter at that moment. In particular, it was very focused on the big national and academic labs. At that event, I saw and talked about topics that would eventually become common in the enterprise—but not for years.
By contrast, walking through the show floor this week in Atlanta, I could immediately tie all of the innovations I saw to uses by the average enterprise trying to operationalize AI. Put more succinctly, over the past decade the pace of innovation has drastically accelerated from the innovators to the deployers.
Here’s another big takeaway: holy liquid cooling, Batman! I have been watching the liquid cooling market for some time. Some of the earlier players such as Vertiv, LiquidStack, CoolIT, GRC, and Motivair are now part of a much larger market peppered with logos—both the familiar and the new. I saw a total of 50 companies listed on the exhibitor list, including 22 for liquid cooling. And this doesn’t count the likes of Delta, Schneider Electric, and some of the power and infrastructure companies that have either already joined the market or are looking to enter it. The (smart) acquisition of JetCool by Flex is a good example of this.
Overall, I’m quite impressed with the amount of liquid cooling I’ve seen from cooling vendors and OEMs alike. Lenovo has been pushing Neptune for a long time, and we saw HPE start aggressively telling its liquid cooling story this summer at its Discover conference. Now Dell is really starting to jump in the game (for instance, the XE9712 racks they are shipping to CoreWeave are liquid-cooled).
With this said, I think we are still very early in this cooling game, and what we are seeing in today’s market is kind of like the days of discovering fire and inventing the wheel. As warm-water cooling is starting to find a place in the market, look for two-phase direct-to-chip (D2C) cooling to play a bigger role, as it is so much more able to address the heat density that we see on chips. Longer term, I think immersive technologies will be niche in application and will eventually bridge to cooling technologies we aren’t even covering today.
The last big takeaway is about the silicon innovation going into this market across the entirety of the data journey. I have been in the tech industry for over 30 years and I’ve never seen so much innovation in the silicon space. Most of us see NVIDIA’s biggest threat coming from AMD, Intel, and Arm—or maybe even a Qualcomm or Marvell. However, don’t overlook the many, many innovation engines in the chip industry like NeuReality, Tenstorrent, Cerebras, Untether, or others.
Part 2: The line from supercomputing to enterprise computing has become short and straight.
We’ve seen HPC-like requirements creeping into the enterprise for some time. First it was the larger enterprise organizations with workloads like crash simulation and high frequency trading. Big data, EDA, and data analytics really pushed this requirement for accelerated compute and more bespoke storage and networking to populate the enterprise datacenter. But AI has totally disrupted the game and, yes, it has brought supercomputing into the enterprise in a major way. And to the edge—and wherever else there’s data. This is why we see such big market sizing and CAGRs associated with AI. It’s not just about the chips, servers, storage, and networking; it’s about the cost of deploying, tuning, and managing these environments. And because of its nascency, there is so little knowledge to share—certainly no institutional knowledge or “muscle memory.” Because of this, I see the consulting companies playing a big role in the AI journey.
Like I said when I was talking about the SC15 conference earlier, the line from supercomputing to mainstream enterprise used to be long and crooked. That line is now very short and very straight. No longer is technology being developed and then kind of iterated on for eventual broader consumption. The ability for technology to be broadly adopted (and used) in a commercial way is now a primary concern for any startup playing in this space.
Canva recently appointed Kelly Steckelberg, former Zoom CFO, to the same position within its organization. Steckelberg brings a wealth of experience, having successfully steered Zoom through its IPO and a period of rapid growth. Canva is currently valued at approximately $32 billion, with more than $2 billion in annual recurring revenue. It has seen significant success in expanding into the enterprise market, with 95% of Fortune 500 companies as users. Although Canva states there are no immediate plans for an IPO, Steckelberg’s appointment and the company’s strong financial performance suggest a public offering could be on the horizon. Steckelberg says she sees tremendous opportunity at Canva, and I believe the company is very fortunate to have her. At a recent Zoom event, she expressed confidence in her successor, Michelle Chang. Chang joins Zoom from Microsoft, where she served as corporate vice president and CFO of the Commercial Sales and Partner Organization. Chang will be front and center next week as Zoom presents its earnings. Analysts project Zoom to deliver year-over-year earnings growth driven by higher revenues when it releases its Q3 2025 financial results for the quarter ending October 2024. Chang will be a critical part of Zoom’s next growth phase as the company moves from a video conferencing company to an AI-driven collaboration and productivity platform.
Microsoft Fabric announced significant updates to its unified data platform at Ignite 2024. Fabric Databases, initially including SQL Database, now include transactional capabilities with built-in security, vector search, RAG support, and Azure AI integration, enabling the development of AI-optimized applications. OneLake, the platform’s multi-cloud data lake, now has enhanced multi-cloud and on-premises data integration with Azure SQL DB Mirroring. Several workload-specific updates were also announced, including sustainability data solutions, AI functions for text analysis in notebooks, and a GraphQL API for simplified data access. AI capabilities expanded with conversational AI tools, Azure Event Hubs KQL database support, and integration with Azure AI Foundry.
These updates strengthen Microsoft’s position against competitors, enhancing Fabric’s appeal as a unified platform for data management and AI development. By addressing enterprise requirements, Fabric reinforces Microsoft’s ability to compete with other major players in data management and AI such as Google Cloud and AWS.
IBM has modernized Db2 with a new AI-powered database assistant. As data demands grow, database systems must evolve to keep pace, and last week IBM released Db2 12.1, incorporating a slew of AI features. The new release addresses key challenges faced by database administrators and introduces the Database Assistant, developed using IBM watsonx. This assistant delivers instant answers, real-time monitoring, and intelligent troubleshooting. Explore these innovations further in my latest Forbes article, which features Miran Badzak, IBM’s program director for databases.
LogicMonitor is advancing in the hybrid observability industry with an $800 million investment to integrate AI into datacenter operations. Led by CEO Christina Kosmowski, the company focuses on helping businesses reduce costs and scale AI while improving efficiency and meeting sustainability goals. This funding looks to strengthen LogicMonitor’s role in supporting modern datacenter management and AI-driven operations.
Cloudera announced the acquisition of Octopai's data lineage and catalog platform, expanding its data catalog and metadata management capabilities. With this move, Cloudera will be able to provide customers with visibility across data solutions, allowing them to use trusted data for AI, predictive analytics, and decision-making tools. Key benefits include improved data discoverability, quality, governance, and migration support.
Change is in the air as we approach the new year: Amazon Web Services has brought in Julia White as CMO. White was most recently the Chief Marketing and Solutions Officer at SAP. Before that, she was at Microsoft for two decades, including as corporate vice president for Azure product marketing. Her expertise spans cloud services, AI, and product messaging, making her well-suited to AWS's strategic needs. With this move, AWS looks to strengthen its position in the competitive cloud market against Microsoft Azure and Google Cloud. With AWS quarterly profits surpassing $10 billion for the first time, White's leadership is expected to enhance AWS's focus on cloud computing and AI innovation. This leadership change follows recent executive departures at AWS, including former CEO Adam Selipsky and VP of AI Matt Wood. Wood has since joined PwC as the firm's first Commercial Technology and Innovation Officer.
Microsoft launched Flight Simulator 2024 with significantly enhanced features and new flying jobs that enable you to pilot virtually any kind of flying craft, from hot air balloons to helicopters and dirigibles. Unfortunately, Microsoft didn’t adequately anticipate the demand for the game, and servers crashed; the company apologized to gamers for not being prepared for the launch to be such a hit. I believe part of the success came from the game being included with Game Pass. Flight Simulator is the second big title this month to launch on Game Pass after Call of Duty launched earlier in November.
Qualcomm Investor Day — Nakul Duggal (Qualcomm’s group GM for automotive, industrial, and cloud) presented Qualcomm’s IIoT deployment model for AI at the edge. The company foresees a $50 billion market opportunity for edge intelligence by 2029 and has defined a path to achieve that goal. The path includes new edge computing chips (the Qualcomm IQ series) designed to support a comprehensive edge deployment architecture. The architecture aligns with industry-wide trends that make IIoT much more scalable.
Here’s how it works. Qualcomm customers develop AI-powered applications in the cloud for deployment on both cloud and local platforms. AI-accelerated on-premises “AI edge boxes” run cloud-native computing software environments on appropriately scaled compute platforms. The development model is “build in the cloud, deploy on the edge” using the same software infrastructure. However, IT-managed, cloud-native platforms do not extend all the way down to the chaotic world of OT (operational technology) devices. These small embedded platforms are often highly customized and optimized for specific tasks. OT devices require unique software stacks, device management services, and connectivity schemes. The result is a line of demarcation that separates small OT platforms from large IT systems running distributed cloud environments. Mr. Duggal explained the Qualcomm IQ Series processors and subsystems in terms of this model, enabling a new generation of on-premises compute platforms with the power and scale to address a wide range of vertical industries.
MY TAKE: AI at the edge is the new IIoT north star. I’ve advocated this three-tier architecture (cloud, distributed cloud, device) for years, and it’s great to see Qualcomm and other big suppliers follow the same pattern.
Microsoft Ignite — Not surprisingly, Ignite focused on AI in the cloud and at the edge. From an IIoT standpoint, Azure’s Adaptive Cloud approach was the star of the show, enabling AI to work across sites, clouds, distributed computing, and devices. The Azure IIoT model is consistent with my “fractal” view of IIoT intelligence, with cloud-native environments scaling from global clouds to local on-premises servers. Microsoft’s AI enablement products, data fabric, event grid, event hubs, storage, Power BI interfaces, and other services run on the whole range of platforms, while Azure IoT Operations (enabled by Azure Arc) implements device data interfaces and manages the OT data at the edge. The IIoT devices are, essentially, peripherals communicating via standards-based protocols.
MY TAKE: Azure IoT Operations is emerging as the interface between the chaotic world of OT devices and the structured world of Microsoft’s AI-enhanced IT systems. Other hyperscalers and platform suppliers are moving in this same direction, allowing enterprise applications (e.g., ERP) to immediately scale up OT-enabled, AI-powered solutions with minimal dependencies on IIoT device systems. Even though Ignite didn’t have many IoT-specific sessions, the industrywide trend to separate chaotic device development from high-growth, AI-driven business transformation came across loud and clear.
Shure, a company known for high-quality audio equipment, has partnered with Microsoft by integrating its products with the Microsoft Device Ecosystem Platform (MDEP). This collaboration allows Shure to develop new audio solutions for Android devices for Microsoft Teams Rooms. What this means for Shure is enhanced security measures that meet Microsoft’s high standards, improved compatibility with Microsoft Teams Rooms, and the opportunity to tap into new markets. For example, government agencies that rely on Microsoft Teams for secure communication could now outfit their conference rooms with Shure microphones and audio processors that integrate with their existing systems. This partnership paves the way for Shure to deliver enhanced audio experiences to a broader range of users who depend on Microsoft Teams for collaboration.
Box held its annual BoxWorks event in person for the first time in several years and announced a suite of new AI-powered tools focused on helping businesses unlock the value of their content. These include AI Studio for building custom AI agents, Box Apps for automating workflows, and enhanced security features. Box also supports nonprofits using AI for social good through its Impact Fund. I provide more detail about BoxWorks and the company’s strategy for addressing organizations’ growing need to manage and extract value from increasing content volumes in this Forbes contribution.
Amazon announced the new Echo Show 21, its biggest and most capable smart home display with a built-in smart home hub. This new smart home hub includes support for both Matter and Thread wireless standards, making it an anchor for your smart home and putting Amazon at the center of that experience. While I’m not sure I have the need or space for this, there is also a 15-inch version which may be easier to fit into smaller kitchens. Amazon has designed the Echo Show 21 as a wall-mounted display; I may have to test it myself to understand the utility of having such a large display in the kitchen.
At Microsoft Ignite, the company announced a few updates to Copilot and Windows 365, including a thin client called the Link. The Link appears to be Microsoft’s way of satisfying the market need for low-cost products while driving Windows 365 virtual PC usage. While many of Microsoft’s partners like HP, Dell, and Lenovo already offer thin clients, this appears to be Microsoft’s own approach, focused solely on Windows 365. Microsoft also announced the new Copilot Actions, which provides prompt templates to help automate repetitive tasks. While this isn’t quite a way to enable scripting, I do think it should help to improve Copilot usage.
Microsoft and Atom Computing successfully entangled 24 logical qubits using neutral atoms under the control of Atom Computing’s lasers. This sets a record for the number of entangled logical qubits. Logical qubits are constructed from multiple physical qubits, and they allow complex quantum algorithms to be run. The system demonstrated high gate fidelities: 99.963% for single-qubit gates and 99.56% for two-qubit gates, making this the highest neutral-atom two-qubit gate fidelity in a commercial system.
A characteristic of neutral-atom quantum computers is that atoms tend to be lost during operations. The team developed a method to detect and replace lost atoms without disrupting computations. As a benchmark, the researchers ran the Bernstein-Vazirani algorithm, which identifies a hidden binary string. The 20 logical qubits (created from 80 physical qubits) found the secret code in a single attempt, outperforming classical counterparts, which must query the oracle many times to find all the bits.
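To make that comparison concrete, here is a minimal classical sketch of the Bernstein-Vazirani problem (my illustration, not the team’s code): a classical solver must query the hidden-string oracle once per bit, whereas the quantum algorithm recovers the entire string in a single query.

```python
# A minimal classical sketch of the Bernstein-Vazirani problem (illustrative
# only, not the Microsoft-Atom Computing implementation). The oracle hides a
# binary string s inside f(x) = s . x mod 2; classically, learning s takes
# one query per bit, while the quantum algorithm needs just one query total.
import random

n = 20  # mirrors the 20 logical qubits described above
secret = [random.randint(0, 1) for _ in range(n)]

def oracle(x):
    """The only allowed probe: f(x) = s . x mod 2."""
    return sum(si * xi for si, xi in zip(secret, x)) % 2

# Classical strategy: probe with the unit vectors e_0 ... e_{n-1},
# since f(e_i) = s_i. That costs n oracle queries.
recovered = [oracle([1 if j == i else 0 for j in range(n)]) for i in range(n)]

assert recovered == secret  # classical cost: n queries; quantum cost: 1
```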
This is good news for quantum computing. Great progress is being made with logical qubits; in fact, the Microsoft-Atom Computing team has doubled the number of entangled logical qubits in just a few short weeks.
Qualcomm held its investor day—the first in three years—and updated analysts and investors on the progress of its product diversification strategy over that period. The company also stated that it expects its smartphone business to account for just 50% of revenue by 2030. This is a significant goal because the handset business currently generates 75% of the company’s revenue. In addition to that lofty goal, the company revealed that it is planning an even more affordable PC processor soon, which will enable $600 Copilot+ PCs. Additionally, it said that the next generation of its X Elite processors will be powered by the third generation of its Oryon CPU cores, which have done exceptionally well in benchmarks against Apple.
The whole industry eagerly awaited NVIDIA’s earnings, and the company beat on revenue and profit and guided slightly above expectations, yet still got punished in after-hours trading. The reality is that people’s expectations of NVIDIA are simply unrealistic and heavily tainted by retail hype, even though the company is now generating $35 billion in revenue and almost $20 billion in profit per quarter. NVIDIA is basically printing money compared to almost everyone else in the industry and dwarfs many of its nearest competitors. I believe that we are still very much in the early phases of AI, and while some of the AI model builders may be hitting a bit of a wall with training, NVIDIA says that demand for training chips remains high, and it is convinced that it is also well-positioned with chips for inference.
IBM has partnered with the Ultimate Fighting Championship to become UFC’s Official Global AI Partner. The UFC Insights Engine, built with IBM watsonx, utilizes data and AI to analyze live bouts, fighter tendencies, projected match outcomes, and methods of victory. This provides fans with detailed information to deepen their interest in the sport. It is another great example of sports technology enriching a sport and driving fan engagement.
Solidigm has launched a new 122TB SSD designed to reduce energy consumption in data centers, which are facing increasing demand and costs due to the rise of AI. This new drive offers significantly higher storage density and efficiency, which should lead to lower energy bills and a smaller physical footprint. This is crucial for sustainability and allows companies to invest more in AI development. Read more about Solidigm’s 122TB drive in my recent Forbes article.
Microsoft Ignite, November 18-22, Chicago (Patrick Moorhead, Robert Kramer – virtual, Will Townsend – virtual, Melody Brue – virtual, Jason Andersen – virtual)
Super Computing, November 18-22, Atlanta (Matt Kimball)
NTT R&D Forum, November 19-23, Tokyo (Will Townsend)
MI&S Weekly Analyst Insights — Week Ending November 8, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-8-2024/
Tue, 12 Nov 2024 17:47:55 +0000
https://moorinsightsstrategy.com/?p=44035
MI&S Weekly Analyst Insights — Week Ending November 8, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
Moor Insights & Strategy is headquartered in Austin because that’s where I’ve lived for a long time; although more of our staff lives in the Austin area than anywhere else, we are a fully virtual firm with members who live in California, Florida, Massachusetts, and Virginia as well as Texas. For as much as I believe in the power of remote and hybrid work, there’s also no substitute for spending time with each other face to face. That’s why I was glad to attend the annual Dell Tech Analyst Summit last week with my colleagues Matt Kimball, Paul Smith-Goodson, and Anshel Sag, analysts from different generations who represent a wealth of experience, and who each bring their own perspectives to the work we do.
MI&S analysts Matt Kimball, Paul Smith-Goodson, and Anshel Sag. Photo: Patrick Moorhead
This week, Mel and I will be in New York for a BIG event we are sworn to secrecy about for now, but that we will let you in on soon. We’ll also be catching the BoxWorks conference virtually. Robert will be in Scottsdale for the Veeam Analyst Summit.
As touched on above, last week, Matt, Anshel, Paul, and I attended the Dell Tech Analyst Summit in Austin. After the Dell event, Anshel hopped on a plane across the pond to spend some time with Qualcomm in Manchester, England, to get an update on Manchester United’s Snapdragon partnership. Meanwhile, Jason was in sunny San Diego for the Apptio TBM Conference (discussed below) and some tacos.
At the end of this week, our entire team is convening in Austin for a company-wide strategic planning session focused on the new year. (Because of this, it will be two weeks until our next installment of Analyst Insights.) As we grow, we’re committed to delivering the highest caliber research and advisory services in the industry. In the coming months, we’re excited to introduce you to new team members and share updates on our progress. We always value your feedback and look forward to your continued partnership.
Read more about strategically significant tech trends and events in this week’s Analyst Insights. I hope you have a rewarding week!
Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with their thoughts on Broadcom, Celona, Freshworks, Intel, Nvidia, AI, earnings, and the Dow Jones Industrial Average change in the semiconductor segment. Patrick appeared on Yahoo! Finance Morning Brief to discuss Qualcomm Q3 2024 earnings.
Last week I attended the annual TBM Council Conference in San Diego. TBM stands for Technology Business Management, a movement to demystify and optimize IT spending, leading to improved business results. While that may sound like it’s strictly FinOps, it goes well beyond FinOps to cover the business and resource metrics that facilitate a transparent and predictable digital supply chain. What makes it so great is that TBMC is a user conference that keeps the focus on the present state and the solution space; it is not a product and technology extravaganza. What stood out was the common origin story of how companies adopted TBM solutions. It seems to commonly start with a surprise budget-busting bill that nobody can quite account for. However, once the right level of discovery happens, not only are financial surprises minimized, but other improvement areas also emerge, such as resolving cross-project resource conflicts or smarter product architecture. The takeaway was twofold. For users, TBM as a framework is worth checking out. For vendors like Apptio, establishing a very clear view of a solution space and driving engagement on it makes it possible to position their solutions in the TBM market more effectively.
Anthropic is having its moment in the sun with developers. Last week I mentioned that both GitHub and AWS made announcements around their respective AI-powered IDEs. But within those announcements, it was also revealed that Anthropic’s Claude 3.5 Sonnet was being used as either the LLM or a major new LLM option. Sonnet is a deeper-thinking model that doesn’t sacrifice performance relative to the previous generation. It’s another example of LLMs being packaged to meet specific situations; for developers, it should be a welcome offer since devs are prioritizing accuracy over speed. And for those who need speed, Anthropic happens to have a different model available.
JFrog has been a key partner of GitHub for a while. The combination of curating source code and binaries tees up what is a very complementary partnership. But at GitHub Universe last week, JFrog and GitHub announced they are working together on a more security-driven GitHub Copilot IDE. JFrog has launched an integration with GitHub Autofix so devs can scan their code pulls as part of the development workflow using JFrog’s own SAST security capabilities. This is a great way to enable devs to intercept problems even earlier in the process, reducing the cost and complexity of future remediation. For more on this, you can read JFrog’s blog post about the integration here.
Konecta and Google Cloud have established a three-year strategic partnership to integrate Google Cloud’s AI and cloud technologies into Konecta’s customer experience solutions. This collaboration seeks to enhance Konecta’s Digital Unit with AI-powered tools and services, including implementing Google Cloud’s Customer Engagement Suite and generative AI solutions. As part of the partnership, Konecta will transition its workforce to Google Workspace and certify up to 500 engineers in Google Cloud technologies. These initiatives are designed to improve operational efficiency, enable the development of advanced CX offerings for clients, and strengthen Konecta’s position as a provider of AI-driven customer service solutions. Konecta and Google Cloud expect this alliance to facilitate more personalized and efficient customer interactions for businesses and contribute to their digital transformation objectives.
Twilio has partnered with Google Cloud to integrate generative AI into its customer engagement platform. This enables Twilio users to deploy AI-powered solutions such as virtual agents and interactive chatbots to enhance their customer service capabilities. By leveraging Google Cloud’s AI tools, including Dialogflow, businesses can automate responses to common inquiries, provide 24/7 support, and efficiently escalate complex issues to human agents. Early results from this collaboration are promising. Google says that a major automotive manufacturer using its “Destination Assist” feature has reported an 18% to 20% reduction in agent call times. While Google did not include resolution rates for this particular use case, this type of reduction in call times and always-on support will likely lead to higher resolution and happier customers.
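For a sense of the underlying plumbing, here is a hedged sketch of a detect-intent call using Google’s Dialogflow Python client. This is illustrative only; the project ID, session ID, and query text are placeholders, and it is not Twilio’s actual integration.

```python
# Illustrative Dialogflow detect-intent call (placeholder IDs, not Twilio's
# production integration). Requires: pip install google-cloud-dialogflow
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-gcp-project", "session-123")

text_input = dialogflow.TextInput(text="Where is my order?", language_code="en-US")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)

# The agent's reply, ready to hand back to a chatbot, SMS flow, or IVR.
print(response.query_result.fulfillment_text)
```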
Freshworks has announced a significant restructuring that will result in layoffs affecting approximately 13% of its global workforce, equating to about 660 employees. This move surprised many, particularly since the company recently reported strong fourth-quarter financials for 2024, demonstrating a 22% year-on-year increase in revenue. Most of the positions affected by these layoffs are based in India and the United States. CEO Dennis Woodside described this decision as a difficult but necessary step to streamline operations and focus on the company’s key strategic areas: employee experience, artificial intelligence, and customer experience. Following the announcement, the company’s share price surged by 15%, and Freshworks unveiled a substantial $400 million stock buyback program, indicating a strong financial position despite the layoffs.
While Freshworks is framing this as a strategic realignment, the increasing role of AI could be a contributing factor to these layoffs. Overall, this decision reflects a broader trend in the tech industry, where companies are optimizing their workforces in response to changing business objectives and the rise of AI. It will be interesting to observe how this restructuring impacts Freshworks’ growth trajectory in its key strategic areas moving forward.
I write this as I am finishing up the final day of Dell’s analyst event—Dell Tech Summit. While this was an NDA event, so the details are secret, there are a few general takeaways I want to share:
If you thought AI hype had peaked—think again. However, unlike some other hype cycles, AI is driving a lot of revenue and completely disrupting enterprise IT organizations.
Dell’s AI journey is an amazing story. I’m convinced this experience puts the company in a stronger position for helping its customers evolve.
Partners are critical to successfully delivering AI transformation projects to the market.
We are still very early in the AI game. Very, very early.
There is opportunity for every company in the AI supply chain. Dell partners. Dell suppliers. Frankly, Dell competitors.
2025 is going to be a fun year in tech and AI. If you are an IT solutions vendor: better understand where you fit in the enterprise AI journey and start aligning portfolios and GTM efforts. If you are an enterprise organization: don’t start this AI journey on your own. Learn from those that have gone through this before.
Most of us in tech are familiar with software-defined storage, software-defined networking—even software-defined infrastructure. How about software-defined silicon? NextSilicon, a semiconductor company based in Tel Aviv and Minnesota, has launched its Maverick-2 intelligent compute accelerator (ICA). This silicon, which in earlier versions has been powering HPC clusters for a number of years, has been tuned to also support AI and vector database operations, delivering a performance-per-watt advantage of 4x over GPUs while cutting operational costs in half.
Isn’t this just a fancy name for an FPGA, you say? Not really. With Maverick-2, real-time application telemetry feeds the silicon, tuning its characteristics for optimal performance. If you are familiar with Kalman filtering for voltage/current regulation, a very crude comparison can be drawn.
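For readers unfamiliar with it, here is a one-dimensional Kalman filter sketch that unpacks the analogy. The noise values are assumptions of mine, and this is not NextSilicon’s algorithm; like Maverick-2’s telemetry loop, it continuously folds noisy samples into a refined running estimate.

```python
# A 1-D Kalman filter smoothing a noisy "voltage" signal (illustrative only;
# noise parameters are assumed, and this is not NextSilicon's algorithm).
import random

true_v = 3.3                 # the steady value being tracked
x_est, p = 0.0, 1.0          # state estimate and its variance
q, r = 1e-4, 0.04            # process noise and measurement noise (assumed)

for _ in range(50):
    z = true_v + random.gauss(0, 0.2)  # one noisy telemetry sample
    p += q                             # predict: uncertainty grows slightly
    k = p / (p + r)                    # Kalman gain: how much to trust z
    x_est += k * (z - x_est)           # update: blend the sample in
    p *= 1 - k                         # uncertainty shrinks after the update

print(f"estimate after 50 samples: {x_est:.3f}")  # converges toward 3.3
```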
My thoughts: There is a really interesting play here with Maverick-2. As datacenters struggle with power and space constraints, such a piece of silicon enables a bit of platform flexibility that can optimize itself in real time for a variety of workloads. NextSilicon is not going to use Maverick-2 (or -3, -4, etc.) to displace the likes of NVIDIA or AMD for AI training. Nor is it going to replace those companies or Qualcomm, Untether, or others for AI inference. However, NextSilicon has found a niche for itself. Further, the way the company is approaching what I would call autonomous programmability will have a longer-term impact on the silicon market in general.
The Ultra Accelerator Link (UALink) Consortium was officially incorporated last week. This group, which includes some of the biggest tech companies in the world, is focused on establishing standards and specifications for GPU-to-GPU connectivity. What problem is this solving? In larger AI and HPC environments where GPUs are used to perform complex computations, the design of legacy infrastructure can force GPUs to rely on CPUs that can become bottlenecks. As you can imagine, this introduces a lot of latency into the equation.
To solve this, NVIDIA developed a fabric called NVLink to enable its GPUs to bypass the CPU as a controller or head node. This allows NVIDIA-based environments to be more performant. However, NVLink was developed by NVIDIA for NVIDIA. UALink aims to take this concept and make it open for all: a universal set of standards and specifications that enable accelerator-to-accelerator connectivity at scale.
My thoughts: The broad support for UALink (from AMD, Intel, Meta, HPE, Astera Labs, AWS, Microsoft, and others) tells me that this consortium has legs and will find adoption among the customers that have immediate needs. I expect other big players such as Dell and Broadcom to join in as contributors at some point. See my full analysis on Forbes.
Should enterprises be concerned about cyberattacks from AI agents? AI’s advanced capabilities can enhance the sophistication and scale of malicious attacks, making them more difficult to detect and defend against. This is especially concerning given AI’s ability to automate and personalize attacks on a large scale. Potential threats include AI-powered phishing, deepfakes, and malware that can bypass traditional security measures.
Here are some key reasons why AI poses a cyberthreat to enterprises:
Advanced Attack Vectors — AI can analyze large datasets to exploit vulnerabilities and deliver highly targeted attacks.
Automated Attacks — AI enables rapid, large-scale attacks that are difficult to contain due to automation.
Evasion of Detection Systems — AI can create and modify malware to bypass traditional security measures, complicating detection.
Data Manipulation and Poisoning — Attackers can use AI to corrupt training data, impacting security models and resulting in missed threats or false positives.
Accessibility to Malicious Actors — The availability of AI tools allows even less-skilled attackers to deploy AI-powered attacks.
In response, here are some mitigation strategies for enterprises:
Develop AI-powered defenses to detect and respond to sophisticated threats in real time.
Monitor AI systems closely, particularly the behavior of AI agents, to detect potential malicious activity.
Implement employee awareness training to help staff recognize and report AI-driven threats.
Strengthen data protection and privacy measures to safeguard sensitive information.
It’s important to stay updated on this topic, as the integration of AI agents into various applications will continue to expand, bringing with it new challenges.
By all measures, Juniper Networks is executing well financially, beating expectations for both top-line revenue and profitability with its recent 3Q 2024 earnings announcement. Chief executive Rami Rahim credits the company’s support of front-end and back-end AI networking for its recent success. Juniper’s reinvigorated focus on enterprise networking and its depth in AI help explain why HPE is acquiring the company, with the deal expected to close soon. The combination of both companies’ engineering resources and complementary portfolio coverage of campus, branch, and data center is powerful, and should allow the combined entity to compete more effectively with Cisco and others.
NetSuite has examined emerging trends and made some predictions about ERP as we head into the new year. Enterprises are increasingly prioritizing ERP modernization, with the cloud ERP market projected to expand from $72.2 billion in 2023 to $130.5 billion by 2028. Because ERP systems are so essential for so many enterprise processes, modernizing them can reshape how enterprises operate.
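As a quick back-of-envelope check on that projection (my arithmetic, using only the figures above), the implied compound annual growth rate works out to roughly 12.6%:

```python
# Implied CAGR for the cloud ERP projection cited above: $72.2B (2023) to
# $130.5B (2028), i.e., five years of compounding.
start, end, years = 72.2, 130.5, 5
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~12.6%
```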
Many enterprises face the challenge of modernizing their operations by transitioning to cloud ERP solutions. Software provider IFS saw strong performance in Q3 2024, with a 30% increase in ARR, driven by a 71% rise in IFS Cloud usage and 46% growth in cloud revenue. IFS added 90 new clients, introduced AI features in IFS Cloud 24R2, and launched a new module for sustainability management.
Matter, the smart home automation standard from the Connectivity Standards Alliance, just released version 1.4. The new version has some much-anticipated enhancements, including support for many new device types. When the CSA released Matter 1.0 in October 2022, the organization’s leadership promised a regular six-month release cadence, and that’s exactly what they have done. The team has shipped four releases in two years, each with significant improvements in device coverage, ease of use, and functionality. The cumulative expansion of device types is impressive, with version 1.4 adding support for energy management products including solar power, energy storage, heat pumps, water heaters, electric car charging, and time-based orchestration. The CSA was smart to focus device development on a single high-level use case—energy—rather than adding a hodgepodge of random devices.
But the most significant version 1.4 enhancement is automating “multi-admin” configurations. Here’s why. Matter’s value proposition rests upon interoperability, and that means enabling devices from any manufacturer to work with any combination of home automation ecosystems—e.g., HomeKit, Google Home, Alexa, or SmartThings. Although previous Matter versions support this feature, the configuration process has been complicated and often confusing. Version 1.4 automates multi-admin configuration by allowing the user to grant permission just once; Matter then adds the new ecosystem automatically. Thus, “Alexa” and “Hey Google” can both turn down your thermostat. This fulfills a fundamental brand promise for Matter. Between now and CES, I’ll write more about Matter’s progress. I’m also doubling down on my prediction that Matter will hit its tipping point in early 2026, becoming the leading smart home connectivity standard for new designs.
A new security feature in iOS 18.1 has made it more difficult for police and other agencies to snoop inside iPhones they’ve confiscated for investigations. This feature is an inactivity timer that reboots iPhones into a more secure state when they haven’t been unlocked for a while. This is a welcome feature for many privacy and security advocates because so many warrantless searches have been conducted on people’s phones. And Apple isn’t the first to implement this, as GrapheneOS for Google Pixel phones already offers this capability.
Workvivo, an employee experience platform acquired by Zoom in 2023, has launched a new suite of tools called “Employee Insights.” This suite is designed to measure and improve employee engagement. Integrated directly into the Workvivo platform, Employee Insights enables organizations to deploy pulse surveys, monitor engagement across 12 key drivers, and analyze results using real-time dashboards.
This solution provides a centralized way to gather employee feedback, particularly from frontline workers who may have limited access to e-mail. Workvivo emphasizes that the platform facilitates actionable insights based on the “listening” data collected, aiming to foster a cycle of continuous improvement. Future updates are expected to include integration with Zoom AI Companion for enhanced industry data analysis and benchmarking capabilities. This launch comes at a significant time for Workvivo, as it was recently named Meta’s preferred migration partner for the discontinued Workplace platform.
I served as a judge for UC Today’s inaugural UC Leaders Awards 2024, which aims to highlight the top professionals in the unified communications (UC) and collaboration space. The judging process was enjoyable, and I learned a great deal about the individual and team contributions to the companies and industries I cover. There were many outstanding applicants this year.
UC Today has announced the UC Leaders finalists, including Craig Walker from Dialpad and Eric Yuan from Zoom for the Innovator of the Year award. Eric Yuan is also nominated alongside Vlad Shmunis from RingCentral for the UC Leader of the Year award. The finalists for the Women in UC Leadership Award include Smita Hashim, chief product officer at Zoom; Christina Hyde, VP of revenue at SkySwitch; Kira Makagon, chief innovation officer at RingCentral; and Aruna Ravichandran, SVP and chief marketing and customer officer at Cisco Webex—quite a powerhouse lineup!
Additional award categories include Industry Influencer of the Year, Rising Star of the Year, The Editor’s Choice Award, and the UC Team of the Year. The award ceremony will be held on November 21, 2024, and will be streamed live on the UC Today website, LinkedIn, and X at 4:00 p.m. GMT / 11:00 a.m. EST. I will be presenting the awards in an Oscars-style format, and I hope you will join us!
People are really loving Apple’s new M4 Mac Mini thanks to its extremely compact size and reasonable price. With an M4 chip, it may be one of the most inexpensive and powerful computers on the market. One of its biggest downfalls, however, is that memory upgrades are extremely expensive, so much so that in some cases it’s cheaper to buy a second Mac Mini than it is to upgrade the RAM. The one upside of this new design is that the storage is user-upgradable; however, there are no known third-party storage products for the device yet, although I expect OWC will offer one fairly soon.
North Korean threat actors have successfully used a combination of phishing emails and social engineering schemes to target cryptocurrency-related businesses. Cybersecurity solution provider SentinelOne has named the campaign “Hidden Risk”; this complex attack also employs a seemingly benign PDF with fake cryptocurrency news headlines and offers of employment to infect Apple macOS users with a malicious payload. It is a highly sophisticated campaign, one that is difficult to defend against, especially given the extensive grooming of targets over social media. The silver lining in this latest cyberattack is that security operators have access to an incredible amount of real-time threat intelligence shared by Infoblox, Microsoft, Palo Alto Networks, SentinelOne, Zscaler, and others to enable stronger security postures.
AMD’s new 9800X3D CPU is officially on the market and has already sold out. This new CPU is the first to use AMD’s second-generation 3-D V-Cache, which has faster CPU clock speeds, better memory bandwidth, and significantly improved thermals because now the CPU cores sit atop the memory instead of the other way around. While I haven’t had a chance to test this chip fully, the consensus is that its gaming performance is industry-leading, and its handling of multitasking and productivity workloads isn’t far behind—which wasn’t necessarily the case with past processors in this family.
Globant has opened a new office within Intuit Dome, home of the Los Angeles Clippers. This move is part of the two organizations’ ongoing partnership to enhance the fan experience through digital transformation initiatives. The new office includes a “Digital Playground” technology showcase, which the company says it hopes will foster creativity and attract talent while strengthening Globant’s presence in Southern California. This strategy aligns with a growing trend of businesses creating experience centers to demonstrate the potential of their products and services in an engaging environment. These centers provide tangible, interactive experiences beyond traditional marketing, allowing consumers to interact directly with technology and see various real-life use cases.
IBM is partnering with Ferrari’s racing team starting in 2025. This partnership will leverage IBM’s advanced data analytics to improve engagement for fans of Scuderia Ferrari through personalized content and insights that “bring racing enthusiasts closer than ever to the racing team.” IBM will also support Ferrari by using cutting-edge data analysis to enhance performance on and off the track. This newly announced partnership shares several similarities with IBM’s long-standing collaborations with the US Open, Wimbledon, and The Masters. All of these partnerships focus on enhancing fan engagement with premium sporting events—now including Formula 1—through digital platforms, leveraging advanced AI and data analytics to process vast amounts of data for real-time insights. These partnerships also serve as high-profile demonstrations of IBM’s capabilities on a global stage.
The National Football League’s data-driven approach to reducing injuries is showing progress. By analyzing player data with Amazon Web Services, the NFL provides insights to help teams improve training and safety. The league and its teams have also used computer vision and sensors to track head impacts, helping coaches implement injury-prevention strategies.
“When you can integrate and aggregate data across all 32 [teams] for all 53 [players], you have more power in the data that you are generating to model,” said Jennifer Langton, NFL Player Health & Safety Innovation Advisor. With AWS, the NFL can track injuries in real time, automatically linking incidents to specific plays. The league is also working on new tech for full-body tracking to better prevent injuries in the future.
As one example, the redesigned kickoff rule, created in collaboration with AWS, has reduced injury risk. The rule change has led to 32% fewer injuries overall on kickoffs, including no ACL or MCL tears, as player speeds have decreased.
US Cellular informed the FCC earlier this fall that T-Mobile’s acquisition of its wireless operations was essential in ensuring uninterrupted service for its dwindling subscribership. A sticking point is regulatory approval of the transfer of 30% of US Cellular’s spectrum assets to T-Mobile. However, that may be less of an issue now, given the breaking news that AT&T has agreed to purchase over $1 billion worth of US Cellular’s spectrum licenses.
It is an unfortunate situation for US Cellular, signaling the company’s inability to keep pace with necessary infrastructure deployments to support its primarily rural customer base. It is extremely challenging to provide mobility services in remote areas, given lower population densities and longer timelines to recoup investment in radio access network infrastructure. T-Mobile’s acquisition makes sense in many ways, especially given its success in connecting rural America with 5G access that leverages its lower-band spectrum assets.
Episode 38: Hot Desk Podcast – Google, Alphabet, SAP, Zoom, Zoho, & More
https://moorinsightsstrategy.com/hot-desk-pod/episode-38-hot-desk-podcast-google-alphabet-sap-zoom-zoho-more/
Thu, 07 Nov 2024 17:53:14 +0000
https://moorinsightsstrategy.com/?post_type=hot_desk_pod&p=43925
Join Robert Kramer and Melody Brue as they break down Alphabet’s recent earnings, SAP’s latest updates to SuccessFactors and more.
It’s Episode 38 of the MI&S Hot Desk Podcast — Join Robert Kramer and Melody Brue as they break down Alphabet Inc.‘s recent earnings, SAP’s latest updates to SuccessFactors and its new Sustainability Data Exchange, Zoom‘s AI Companion, Zoho’s AI partnership with NVIDIA, and the growing influence of AI agents in ERP.
Watch the full episode here:
Disclaimer: This show is for information and entertainment purposes only. While we discuss publicly traded companies on this show, its contents should not be taken as investment advice.
Miro Redefines Collaboration Tools With AI-Driven Innovation Workspace
https://moorinsightsstrategy.com/miro-redefines-collaboration-tools-with-ai-driven-innovation-workspace/
Wed, 06 Nov 2024 18:24:41 +0000
https://moorinsightsstrategy.com/?p=44421
Miro touts its new Innovation Workspace as the most significant product update in the company’s history. If successful, it could reposition Miro within its market.
Miro, the online collaboration software provider, recently introduced Innovation Workspace, which it touts as the most significant product update in the company’s history. This AI-powered platform is designed to optimize the product design and innovation process; more than that, it signals a big shift in Miro’s strategic direction and its position within the broader collaborative workspace market.
According to the company, Miro’s aim with Innovation Workspace is to address the “massive disconnect between how organizations want to work and how they’re actually working” by providing a centralized, intelligent canvas for the entire innovation lifecycle. This is meant to reduce reliance on multiple tools and overcome challenges such as information silos and communication barriers. All features that were announced as part of the launch are currently in private or public beta and will become GA on a rolling basis before the end of Q4.
Functional And Competitive Implications For Miro
In my assessment, this launch is significant in two big ways. First, on a functional level, Innovation Workspace directly tackles common organizational challenges that impede innovation, such as fragmented workflows and communication breakdowns. As a former CMO in a product-led company, I’ve worked closely with product and development teams, and I know these obstacles can significantly hinder productivity and delay time to market. By providing a centralized platform with AI-powered tools for prototyping, feedback collection and project management, Miro aims to foster a more streamlined and efficient innovation environment. Specifically, Miro is incorporating AI features such as Sidekick, which offers contextual suggestions and automates tasks, and Catch-up, which uses AI to summarize important changes and decisions made on the board to help everyone stay informed.
AI features such as these will likely be a critical factor in driving adoption. These features are designed to be easily integrated into the user workflow, minimizing the learning curve and maximizing efficiency. Miro emphasizes making collaboration “more interactive, engaging and inclusive” through features such as interactive stickers, new intelligent widgets and integrated video calls.
Second, on the competitive front, this launch could reposition Miro within its market. Miro has a strong reputation for visual collaboration and innovative features. However, with Innovation Workspace, Miro could be evolving from a whiteboard provider into a comprehensive solution for managing the whole innovation lifecycle. This shift could be a crucial differentiator to distinguish Miro from competitors such as Mural, Lucidspark and Figma’s FigJam, which concentrate primarily on visual collaboration tools. While the competitors just mentioned excel at visual brainstorming and collaboration, they seem to lack the depth of project management and AI-driven features that Miro is now offering. For example, Mural’s AI features primarily focus on mind-map generation and clustering, while FigJam’s AI is limited to design suggestions. Conversely, Miro leverages AI across a broader range of activities, from project planning and task management to automated feedback analysis.
By bridging the gap between visual collaboration and structured project management, Miro has put itself in a unique position in the market with Innovation Workspace. Its strength lies in its highly visual and flexible platform, which helps users with real-time brainstorming, ideation and collaborative design. Its extensive template library also supports this by offering pre-built structures for various innovation methodologies, including design thinking, agile development and customer journey mapping. This “canvas-first” approach, along with integrations for popular development tools such as Azure DevOps, GitHub and Trello, make Miro a valuable asset for development teams.
While Miro excels at visual planning and collaborative design thinking, it’s not a replacement for dedicated project management tools with granular task management features such as Jira or Trello. Instead, Miro can be a useful complement to these tools, allowing teams to visually collaborate and generate ideas that inform the tasks managed in more structured platforms. This combination could boost engagement and clarity during the early stages of development, leading to a greater sense of purpose and understanding as teams execute their tasks.
If organizations require that granular, task-level detail within Miro, many of these leading project management tools can be integrated directly into the canvas or through third-party integrations such as Zapier. Still, addressing Miro’s project management limitations through add-ons can lead to a complex ecosystem. Organizations may need to integrate multiple vendors’ solutions and manage an expanding tech stack. The need to navigate data synchronization challenges across platforms and balance the cumulative costs of these additional tools could potentially offset the initial appeal of Miro’s visual collaboration capabilities.
Miro’s Potential Benefits And Success Factors
According to Miro, the objective of the new functionality is to deliver an enterprise-grade platform for teams involved in developing and launching new products and services. The potential benefits for businesses are substantial: increased efficiency, enhanced collaboration and accelerated time-to-market. For example, by centralizing communication and reducing the friction of task management, Innovation Workspace could reduce meeting times and streamline workflows, leading to faster product development cycles. Early user feedback suggests that the platform has helped teams shorten project timelines by 15% and increase cross-functional collaboration by 20%.
Other parts of Miro’s offering play key roles, too. The Miro Developer Platform provides a robust set of APIs and SDKs that allow developers to extend Miro’s functionality, build custom integrations and embed Miro boards into other applications. This open-platform approach and support for more than 160 integrations and 6,000 private apps should enable businesses to tailor Innovation Workspace to their specific needs by connecting Miro with their existing technology stacks.
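As a hedged illustration of what building on that platform looks like, here is a minimal call to Miro’s REST API that creates a board. The token is a placeholder, and this sketch omits the error handling and OAuth scope setup a real integration would need.

```python
# Minimal Miro REST API v2 sketch: create a board (token is a placeholder,
# and production code would add error handling and OAuth scope checks).
import requests

MIRO_TOKEN = "YOUR_OAUTH_TOKEN"  # assumption: issued via Miro's developer console

resp = requests.post(
    "https://api.miro.com/v2/boards",
    headers={"Authorization": f"Bearer {MIRO_TOKEN}"},
    json={"name": "Innovation Workspace sandbox"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # the new board's ID, usable in subsequent API calls
```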
As an example of its open-platform approach, just this month, Miro announced the integration of Adobe Express into its platform. With a public beta release slated for late November 2024, this integration will allow users to access various Adobe design tools without leaving the Miro environment. The goal is to streamline the creative process by enabling image creation, graphics editing and the use of templates within the collaborative workspace, eliminating the need to switch between applications. This initiative reflects Miro’s ongoing effort to broaden its platform’s functionality. It gives users the benefit of remaining focused on the task at hand while using tools that are likely familiar to them.
To gain further insight into Innovation Workspace, I solicited hands-on feedback from my colleague Jason Andersen, vice president and principal analyst for application platforms at Moor Insights & Strategy. Here’s what he had to report.
A Hands-On Test Drive Of Miro
Miro was nice enough to give us access to its platform so we could try things out. The experience was quite good. The interface was understandable to anyone with a background in drawing applications, and access to the AI features was prominent. Two things really stood out during our time on the platform.
First, the AI features are a great launching point for achieving higher-quality outputs in less time. For example, I built a quick mind map for a basic application, shown in the first picture below. Frankly, it was only an okay first draft; it would need many more iterations to make it usable by a team of product managers and architects. But by taking my basic thoughts and putting them into the AI generator, I produced a much better revised draft, which is shown in the second picture below. Developing the prompt for the AI Generator took less than a quarter of the time compared to making the map myself, and it was something I could easily modify and hand off to a product team.
A mind map for a new software application created by Jason Andersen in the Miro platform. (Miro / Jason Andersen)
An AI-generated mind map for the same software application, created by Jason Andersen in the Miro platform. (Miro / Jason Andersen)
Second, the templates and integrations are great. The platform comes with several preloaded templates to structure deliverables, as well as integrations with toolsets from Jira, Azure and AWS. All of this makes documenting contextual processes, flows and infrastructure very straightforward. I would say that while this is a great way to augment common work tools, Miro could help users by taking the concept further. I would like to see user-defined flows to walk teams through a whole innovation/ideation process that can then load the relevant content into the integration points where work is done and measured.
Some of this seems to be possible based on quick testing. For example, I was especially pleased when I could generate an entity relationship diagram from the mind map above. But I am thinking about a more agent-driven approach like we are starting to see on other platforms such as Salesforce and ServiceNow. It could be a great way to extend what Miro has already done very well.
Drawing Conclusions
Andersen’s feedback indicates that Innovation Workspace delivers on its promise of enhanced efficiency and collaboration through AI and integrations. However, it also highlights the potential for Miro to further evolve the platform by deepening its workflow automation and creating a more integrated experience that connects the entire innovation lifecycle.
It will be particularly interesting to observe how Miro leverages its canvas-first approach and extensive integration capabilities to cater to the increasingly sophisticated demands of enterprise clients. Miro’s commitment to security and compliance, evident in its advanced security features and Enterprise Guard add-on, will likely be another key factor in its adoption. The market response to Innovation Workspace may hinge on Miro’s ability to capitalize on its strengths in AI, user-centric design and platform extensibility. If it can effectively do this, Miro is positioned to be a strong contender in shaping how product teams approach the innovation lifecycle, from ideation to execution. As the lines between different categories of work software continue to blur, Miro appears well-prepared to navigate this evolving landscape.
MI&S Weekly Analyst Insights — Week Ending November 1, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-november-1-2024/
Tue, 05 Nov 2024 01:18:26 +0000
https://moorinsightsstrategy.com/?p=43834
MI&S Weekly Analyst Insights — Week Ending November 1, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
Each autumn, Qualcomm hosts its Snapdragon Summit on the island of Maui to showcase its newest technologies. As longtime Moor Insights & Strategy analyst Will Townsend noted in his writeup of Qualcomm’s new mobile networking chips, “It’s an awe-inspiring setting for a technology conference.” And yet the gorgeous tropical environment makes for a wonderful backdrop rather than a distraction because Qualcomm and the attendees are all so engaged with the business at hand.
Qualcomm CEO Cristiano Amon presents at the Snapdragon Summit in Maui. Photo: Will Townsend
This time of year, the seemingly endless sequence of tech conferences can feel like a grind at times. How could it not, with so many different flights, hotels, and restaurant meals? But I wouldn’t trade it, because that’s where the action is for the newest technology just coming to market. And because sometimes it’s precisely when you’re in some distant metropolis (or Hawaiian island) that you can look at technology—new and old—with fresh eyes.
This week, Matt, Anshel, Paul and I will attend the Dell Tech Analyst Summit in Austin, while Jason will attend the Apptio TBM Conference in San Diego.
Last week, I was at AWS HQ in Seattle for an exclusive re:Invent preview, then spent some time in Los Angeles for the analyst session at Cisco’s Partner Summit. (Don’t miss the Six Five On the Road interviews with Cisco executives from the event.) Mel tuned into SAP SuccessConnect Virtual to dig into everything happening in HCM. Jason (virtually) and Matt attended the Red Hat Analyst Day in Boston while Will was in Riga at 5G Techritory. Will moderated two panels at the event on topics ranging from road connectivity to bridging the gap between industry, research, and public understanding of 5G innovation. Robert hosted a webinar with Sam Gupta, principal consultant at ElevatIQ, to discuss 2025 ERP trends and how businesses can prepare for digital transformation.
Read more about these events in the respective analysts’ insights.
Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with their thoughts on Amazon, AMD, AT&T, Celona, Google, Intel, SAP, agentic AI, earnings, wearables, and more. Patrick appeared on CNBC Asia and Yahoo! Finance to discuss Intel Q3 2024 earnings.
Last week AWS announced inline editing for Amazon Q Developer, among a series of other new capabilities. Inline editing is very much in line (ha!) with AWS’s commitment to an efficient pro developer experience. Instead of the AI working in a window next to the coding workspace, you can now inject AI recommendations right into your work. What’s nice is that it needs to be invoked via a command key instead of just looming there all the time. Initial findings suggest that developers like using inline editing when they are truly stuck on something where they may lack familiarity. Think of something like a whole routine versus just something at the line level. The intended benefit is that devs will get richer and more verbose assistance, leading to even greater productivity. It will be very interesting to see how this approach moves the needle in metrics such as accepted changes, which is something that AWS is tracking and publishing regularly.
GitHub hosted its Universe event last week and made two major announcements. The first is that GitHub Copilot will now support the use of multiple LLMs. While this is not a new concept, given that many other developer tools do this, it’s interesting since GitHub has been all-in on using parent company Microsoft’s Copilot LLM until now. Additionally, GitHub announced a new no-code tooling and runtime offering called Spark, which is geared towards citizen and business developers. My initial thought is that this tool is geared towards business users who want to build simple forms-based applications, as opposed to power users who want to incorporate workflows and processes. Again, this is a capability that many other companies already have. And to be honest, it’s interesting to see GitHub branching out to attract new users, especially since a shared set of repositories and services could be of value to enterprises. What will be more interesting is to see how this introduction shapes Microsoft’s low-code and no-code strategy, which currently spans multiple projects.
Red Hat hosted an analyst event in which its leadership walked through its AI strategy and other product updates. However, just like at IBM’s Analyst Summit a couple of weeks ago, Red Hat could not resist talking about its major push to move customers from VMware to its OpenShift Virtualization offering. To be candid, it’s not cool, sexy, or even new, since Red Hat has been in the virtualization game for decades now. But, frankly, it’s a sneaky-brilliant monetization play. Red Hat has a very strong history of executing pricing disruptions in commoditized and semi-commoditized markets (Unix to RHEL, BEA to JBoss). And Red Hat typically needs a lot of time to spin up in new markets (like AI); this is very typical of low-cost players in any marketplace. So while Red Hat talks the AI talk, my sense is that continued financial results will come from ripping-and-replacing hypervisors for a while. Red Hat will then attempt to parlay those customer savings into consideration for more modern solutions from Red Hat or parent IBM down the road.
Osmo.ai is able to capture and digitally recreate a scent. That means a scent can be teleported from one location to another. The company uses gas chromatography-mass spectrometry (GCMS) to analyze a given scent so it can be uploaded to Osmo’s cloud-based Primary Odor Map, which uses AI to predict the molecular composition. Next, a formulation robot creates the scent by mixing different molecules. Osmo produces high-fidelity scent replication, and it can capture even the subtlest nuances of a scent. The system is almost fully automated, and human assistance is needed only at the input and output stages. As a result of its experimentation, Osmo has accumulated the largest AI-compatible scent databank, which is now used for training the AI.
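To make the pipeline concrete, here is a purely conceptual Python sketch of the capture, predict, and formulate flow described above. Osmo’s actual models and interfaces are proprietary, so every function name, field, and value here is hypothetical.

```python
# Conceptual sketch only: Osmo's real models and APIs are not public.
# All names and numbers below are hypothetical illustrations of the
# capture -> predict -> formulate pipeline described above.
from dataclasses import dataclass

@dataclass
class GcmsPeak:
    retention_time: float   # minutes
    mass_to_charge: float   # m/z
    abundance: float        # relative intensity

def predict_composition(peaks: list[GcmsPeak]) -> dict[str, float]:
    """Stand-in for the AI model that maps a GCMS trace to odor molecules."""
    # A real system would run a trained model over the full spectrum;
    # this stub just illustrates the input/output shape.
    return {"linalool": 0.62, "limonene": 0.31, "vanillin": 0.07}

def formulate(composition: dict[str, float]) -> str:
    """Stand-in for the formulation robot's mixing instructions."""
    return ", ".join(f"{frac:.0%} {molecule}" for molecule, frac in composition.items())

peaks = [GcmsPeak(4.2, 136.1, 0.8), GcmsPeak(7.9, 154.2, 0.5)]
print(formulate(predict_composition(peaks)))  # e.g. "62% linalool, 31% limonene, 7% vanillin"
```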
Osmo has the opportunity to create a number of new use cases. Perfume retailers could digitally transmit a perfume’s scent to potential customers. Because scent has an important role in memory and emotion, Osmo believes the technology could be used for PTSD or dementia therapy by recreating comforting or significant scents. All in all, the technology could blend the physical and digital worlds and offer a new way to experience and share sensory information.
Can we stop and take a moment to look at the novel ways Oracle is building OCI’s relevance? Through Alloy, Database@[CSP], HeatWave, Cloud@[customer], and a number of other avenues, OCI is becoming an indispensable part of the cloud marketplace. Rather than solely focusing on delivering better services at a lower price point than the competition (which it also does), Oracle is looking at embracing traditional competitors to better meet the needs of enterprise customers.
I attended Red Hat’s analyst summit in Boston last week, and there were two themes front and center: AI and VMware. While focusing on these may sound somewhat boring to analysts, this is a smart move as these are the top two infrastructure focus areas for any IT organization of size across the industry.
On the VMware front, Red Hat has a solid competitive story with OpenShift. In fact, right now the post-VMware reality for many IT organizations comes down to a two-horse race—Nutanix or Red Hat. I think Red Hat’s depth in cloud-native/containerization combined with its virtualization makes for a compelling story. Further, its long reach into the enterprise datacenter perhaps gives it a little bit of an advantage in pursuing the larger opportunities. With that said, I don’t believe that Red Hat has been aggressive enough in establishing and amplifying its dominant position. My expectation is that we will see the volume turned up in 2025.
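For a sense of what landing on the Red Hat side of that race looks like in practice, here is a minimal sketch of defining a virtual machine on OpenShift Virtualization, which is built on the upstream KubeVirt project, using the Kubernetes Python client. The manifest is simplified and the names are hypothetical; a real VMware migration would typically go through Red Hat’s Migration Toolkit for Virtualization rather than hand-built resources.

```python
# Minimal sketch: a VM on OpenShift Virtualization is just a KubeVirt
# custom resource. Names ("migrated-web-01", "legacy-apps") are hypothetical.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "migrated-web-01", "namespace": "legacy-apps"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "4Gi", "cpu": "2"}},
                },
                "volumes": [{"name": "rootdisk",
                             "persistentVolumeClaim": {"claimName": "migrated-web-01-root"}}],
            }
        },
    },
}

# Create the VM as a namespaced custom object
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="legacy-apps", plural="virtualmachines", body=vm,
)
```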
Looking at AI, Red Hat seems to be in a little bit of a muddled place in the market as it tries to focus on RHEL AI and OpenShift as foundational elements of the enterprise AI equation. While I can see the story after spending a day with the company, I believe this is another marketing opportunity for Red Hat to educate and engage with enterprise IT. It is critical to be a part of the early AI discussion and project planning if Red Hat hopes to be successful.
It was earnings week for AMD and Intel. Both companies demonstrated growth and progress. However, the markets certainly treated each of them differently.
In the case of AMD, the company showed strong datacenter performance for its fiscal third quarter with revenue of $3.5 billion, representing a 122% year-over-year increase, and operating income of $1.04 billion, a whopping 240% YoY increase. Overall, AMD beat expectations for revenue and met them for EPS. Interestingly, the market was down on AMD (about 15% since earnings) due to the company’s conservative guidance and its weakness in other contributions to revenue (e.g., gaming). Realistically, I don’t believe that investors are viewing AMD negatively; rather, they are normalizing after a hype period that followed the company’s very aggressive take on the AI market.
As an ex-AMD employee, I don’t think there is a single person in Austin or Santa Clara (or Markham) who is disappointed with a $141 share price—save Dr. Lisa Su and the CFO. However, I do believe we will continue to see a little less stability in the stock price as the AI chip market continues to evolve, even though I believe that AMD is making all the right long-term bets for the AI future.
Intel also showed growth in its datacenter segment, with revenues of $3.3 billion up both year over year and sequentially. Its $300 million in operating income represents margins of 10.4%—also up sequentially. Following the company’s earnings, the market responded favorably as the stock moved up about 2%. Further, a number of institutional investors have raised their price targets for Intel’s stock.
Intel’s financials were a lot more nuanced than what we saw from AMD. The company is going through a major restructuring, with many of the restructuring costs impacting bottom-line numbers. It seems investors are accounting for this and showing confidence in the work that CEO Pat Gelsinger and team are doing to right the ship.
Stock prices are about expectations: meeting, beating, or failing to meet. Intel has done a good job of managing the investor community as it continues its multi-year turnaround.
Cisco recently held its partner summit, using it to announce a revamped compensation and training initiative. What I really like about the Cisco 360 Partner Program is its long-lead-time deployment, which should ensure a smooth introduction in 2026. The company is also making a substantial $80 million investment in training to give its channel sellers meaningful opportunities to expand their expertise across AI, networking, security, observability, and more.
As regulatory demands for environmental accountability grow, enterprises need to track and manage carbon footprints across supply chains. SAP’s new Sustainability Data Exchange (SDX) offers a platform for standardized data sharing and improved emissions accounting. Ahead of the COP29 meeting in November 2024, SDX provides enterprises with tools to support international climate goals, particularly in sharing Scope 3 emissions data across the value chain. Integrated with SAP S/4HANA Cloud ERP, SDX enables precise data exchange of carbon emissions among businesses and their suppliers and customers. It helps address the issues created by outdated tools, data inaccuracies, and inconsistent calculations, helping enterprises move from estimations to actual emissions data provided by suppliers. Read more in my latest Forbes article on SAP’s Sustainability Data Exchange.
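As a rough illustration of the kind of record such an exchange standardizes, here is a hypothetical supplier-to-buyer product carbon footprint payload and a buyer-side roll-up. The field names are illustrative only and are not SAP’s actual SDX schema.

```python
# Hypothetical illustration of a product-level carbon record a supplier
# shares with a buyer; field names are illustrative, not SAP's SDX schema.
# The value of an exchange like SDX is standardizing this hand-off so
# buyers get supplier actuals rather than industry-average estimates.
supplier_pcf = {
    "productId": "urn:gtin:04012345123456",
    "productName": "Aluminum housing, type B",
    "declaredUnit": "kilogram",
    "reportingPeriod": {"start": "2024-01-01", "end": "2024-12-31"},
    "pcfKgCo2ePerUnit": 7.42,          # measured, supplier-reported
    "primaryDataShare": 0.85,          # fraction from actuals vs. estimates
    "scope3Category": "1 - Purchased goods and services",
}

def roll_up(components: list[dict], quantities_kg: dict[str, float]) -> float:
    """Buyer-side roll-up: weight each supplier PCF by purchased volume."""
    return sum(c["pcfKgCo2ePerUnit"] * quantities_kg[c["productId"]] for c in components)

print(round(roll_up([supplier_pcf], {"urn:gtin:04012345123456": 12_000}), 1), "kg CO2e")
```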
AI agents are becoming integral to ERP systems, creating a balance between automation and human oversight. Leading ERP providers are embedding these agents as a core component of their recent innovations. AI agents offer real benefits by supporting task management and improving operational efficiency in areas such as forecasting, customer satisfaction in sales and marketing, and collaboration across the supply chain. By managing routine tasks, these agents simplify ERP interactions, making the systems easier for users to navigate and apply across various business functions. As I’ve suggested in the past, strong data management strategies are necessary for these benefits to be attainable—AI agents rely on accurate, up-to-date data to be effective.
Blue Yonder’s Q3 2024 results show that the company added 31 new customers, including BJ’s Wholesale Club, PepsiCo’s Latin American unit, and Sainsbury’s. Its recent acquisition of One Network Enterprises expands Blue Yonder’s capabilities with real-time collaboration and data sharing across supply chains. Blue Yonder’s latest features include the Intelligent Rebalancer for real-time order adjustments, the Fulfillment Sourcing Simulator for optimizing fulfillment, and tools for automating warehouse and yard tasks. Updates to Cognitive Demand Planning improve forecasting and planning, providing clients with more accurate, flexible supply chain management solutions.
Blue Yonder has identified some key SCM industry trends this quarter, including rising food costs pushing grocery retailers toward value-focused inventory strategies, increasing demand for traceability due to U.S. and EU regulations, and labor and logistics challenges leading companies to cross-train staff and relocate distribution centers closer to U.S. markets. Additionally, manufacturers are adopting generative AI tools to enhance supply chain efficiency, customer service, and cost management. I see these trends as critical real-world challenges that Blue Yonder addresses to support its clients’ complexities and to align with today’s operational and regulatory demands.
As announced last week at the Money 20/20 conference, NVIDIA has introduced an AI-powered workflow for fraud detection running on AWS. With financial losses from credit card fraud projected to reach $43 billion by 2026, this solution is critical. The workflow leverages advanced algorithms and accelerated data processing to identify fraudulent transactions more accurately than traditional methods, potentially improving detection by up to 40%. It takes advantage of NVIDIA’s AI Enterprise software platform, GPUs, and tools including RAPIDS AI libraries and the Morpheus application framework to enhance fraud detection models and streamline their deployment.
For context, financial institutions are increasingly adopting AI and accelerated computing to combat fraud. This new workflow aims to provide a comprehensive solution for fraud use cases beyond credit card transactions, including new account fraud, account takeover, and money laundering. While fraudsters can also exploit AI to develop new and sophisticated schemes, AI equips the good guys with powerful tools to analyze vast amounts of data, detect subtle patterns, and adapt to evolving threats to fight back.
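To illustrate the underlying pattern (without claiming to reproduce NVIDIA’s packaged workflow), here is a minimal sketch of GPU-accelerated fraud scoring using RAPIDS cuDF for data handling and XGBoost for classification. The dataset and column names are hypothetical.

```python
# Not NVIDIA's packaged workflow itself: a minimal sketch of the pattern it
# accelerates. Load transactions into GPU memory with RAPIDS cuDF, then train
# a GPU-backed gradient-boosted classifier. Columns below are hypothetical.
import cudf
import xgboost as xgb

tx = cudf.read_parquet("transactions.parquet")     # hypothetical dataset
features = ["amount", "merchant_risk", "tx_per_hour", "geo_distance_km"]

model = xgb.XGBClassifier(tree_method="hist", device="cuda", n_estimators=300)
model.fit(tx[features], tx["is_fraud"])

tx["fraud_score"] = model.predict_proba(tx[features])[:, 1]
flagged = tx[tx["fraud_score"] > 0.9]              # route to human review
print(len(flagged), "transactions flagged")
```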
Across three days last week I attended multiple SAP SuccessConnect virtual sessions, during which several key trends in human capital management surfaced. I noted a strong emphasis on strategic workforce planning, particularly the need for proactive approaches to address skills gaps, enhance employee competencies, and support career transitions. Speakers frequently highlighted the importance of integrating HR and financial data to drive growth. Many sessions showcased and acknowledged the potential of AI in HCM, but also emphasized the importance of responsible AI development and deployment to prevent bias. I observed a recurring theme around the critical role of data analytics in understanding workforce trends and informing effective HCM strategies. SAP’s acquisition of WalkMe appears to be generating positive results within SuccessFactors, with reports of improved user experience, increased task completion rates, and higher overall satisfaction.
Amal Clooney’s keynote address was a highlight of the event for me. She offered a compelling look at real-world use cases of AI in human rights and the workforce, providing valuable insights into the broader societal impact of these technologies.
I often write about the increasing practicality of using AI and machine learning techniques in small, low-power edge devices. NXP’s new AI-enhanced chips and high-productivity edge AI software development tools align with this trend.
NXP i.MX RT700 — NXP is doubling down on intelligence for small devices with AI-enhanced SoCs, new AI development tools, and eIQ Neutron, an internally developed family of NPUs (neural processing units). The company recently introduced the i.MX RT700 “crossover” SoC, designed for ultra-low-power smart devices. (In NXP parlance, crossover MCUs are simple, low-power processors with MPU-like performance.) I don’t usually dive into the technical details of SoC designs in these weekly summaries, but this chip is noteworthy because of its heterogeneous design. There are two Cortex-M33 compute subsystems, each with a DSP (Tensilica HiFi). One M33 is the main processor for the chip, and the other is a low-power subsystem for always-on applications such as keyword recognition. The chip also has a modest graphics subsystem, a dedicated I/O processor (RISC-V), and an advanced memory architecture optimized for multiprocessor partitioning. The chip’s most disruptive compute subsystem is its eIQ Neutron N3-64 NPU. NXP claims the NPU provides a 172x performance boost and a 119x reduction in per-inference power compared with “general-purpose processors.”
NXP eIQ tools — This week, NXP announced two new software enablement tools for the eIQ NPU family: eIQ Time Series Studio (TSS) and eIQ GenAI Flow. TSS automates machine learning workflows, streamlining time-series-based machine learning model development and deployment across MCU-class devices such as the i.MX RT700. Applications include anomaly detection, classification, and regression for many types of sensor data. The development model is BYOD (bring your own data), and the TSS development flow simplifies model tuning, optimization, and deployment. GenAI Flow provides building blocks for large and small language models (LLMs, SLMs) that power applications for generation and reasoning. These large models run on NXP’s i.MX MPU processor families. GenAI Flow makes generative AI applications accessible on these devices and supports retrieval-augmented generation (RAG), a technique for securely grounding a model’s outputs in domain-specific knowledge and private data.
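Since NXP has not published GenAI Flow’s APIs in detail, here is a generic sketch of the RAG pattern it supports: retrieve relevant domain documents by embedding similarity, then ground the model’s answer in what was retrieved. The documents and embedding model are illustrative choices, not NXP’s.

```python
# Generic RAG sketch (not NXP's API): embed a small domain corpus, retrieve
# the closest document to a question, and build a grounded prompt for the
# on-device language model.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Error E42 on the RT700 dev board means the low-power domain failed to wake.",
    "Firmware updates are applied over USB using the secure bootloader.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small model suited to edge use
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                  # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("What does error E42 mean?"))
prompt = f"Answer using only this context:\n{context}\n\nQ: What does error E42 mean?"
# The prompt would then be passed to the on-device SLM/LLM for generation.
print(prompt)
```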
Efficient application development for complex chips such as the i.MX RT700 requires more than a board support package (BSP) on GitHub. These devices are complete computer platforms, so developers need hardware, software, and tools that “just work” with mainstream OS distributions right out of the box, including support for all the accelerators and specialized function blocks on the chip—AI, DSPs, low-power compute subsystems, I/O, graphics, security, memory management, connectivity, and networking. I’m pleased to see NXP enabling application developers to focus on applications rather than system code.
Apple has expanded its relationship with Globalstar via a $1.1 billion investment, which gives Apple a 20% stake in the company and guarantees it a new satellite constellation for satellite messaging and other satellite services. The new constellation will have 85% of its capacity dedicated to Apple, while Globalstar can allocate the remaining 15% to its own customers. This effectively makes Apple its own service provider, a milestone I expected it to reach with a 5G network slice well before it did so via satellite. That said, this new network won’t replace terrestrial connectivity, but it does guarantee iPhone users nearly global coverage and likely enhanced satellite services once the new constellation goes up.
The OnePlus 13 has been announced in China and is already showing some stellar specs and performance, including a 6,000 mAh battery, an under-screen fingerprint sensor, and a Snapdragon 8 Elite SoC. Chinese reviewers who have already gotten their hands on the OnePlus 13 are reporting it as the new king of performance on tests in the AnTuTu benchmarking tool. OnePlus did this all while making the phone thinner and adding IP69 water resistance, which is higher than the previous IP65 rating and even higher than devices that proudly proclaim IP68 water resistance. I will be attending the company’s U.S. launch later in December and will give my thoughts after I’ve seen this device for myself.
RingCentral has announced that its AI Assistant is now included at no additional cost for RingEX users. This provides access to features like live transcription, closed captioning, meeting summaries, real-time call note capture, text and chat message writing, editing, and translation across different tiers. This strategic move allows RingCentral to demonstrate the value of its AI capabilities, potentially driving wider adoption of AI-powered solutions across its product portfolio. By giving users a taste of the efficiency gains and communication enhancements possible through AI, RingCentral encourages further exploration and integration of these technologies within its ecosystem.
Google’s Q3 2024 results indicate the growing influence of AI on its modern work solutions and overall Google Cloud business. Introducing the Customer Engagement Suite and its adoption by prominent clients such as Volkswagen of America show a strategic focus on enhancing AI-driven customer experience. The company reported improvements in work quality among Google Workspace users leveraging Gemini AI—and as a user myself, I would agree with this sentiment. While Google does not provide specific revenue breakdowns for these segments, the company’s overall solid revenue growth for Google Cloud (35% year-over-year, reaching $11.4 billion) and its attribution of this growth to increased generative AI adoption indicate that AI is a significant driver of its Cloud business. This data suggests that Google’s AI-powered tools have attracted new customers, facilitated more significant deals, and increased product adoption among existing users.
Apple announced a series of new Mac computers based on the new M4, M4 Pro, and M4 Max chips. These include the Mac mini, MacBook Pro, and iMac. Apple’s M4 series now starts with a 10-core CPU/10-core GPU configuration and ranges up to an M4 Max with 16 CPU cores and 40 GPU cores. Apple also finally admitted defeat over its claim from last year that 8GB of memory is adequate for AI applications; it simply isn’t enough. Copilot+ PCs and now almost all new Macs include a minimum of 16GB of memory, including the 15-inch M3 MacBook Air and 13-inch M2 MacBook Air. This is a huge win for consumers.
The new Kindle Colorsoft has officially launched, and it’s everything you would’ve hoped for from a color Kindle. It has all the advances of the latest Kindle Paperwhite, including solid water resistance and a bright screen; it simply adds a color display that looks great—and it still delivers an enjoyable Kindle experience.
Microsoft has unfortunately delayed Recall once again, this time pushing the feature’s release out to December. Realistically, this means that Recall won’t reach most users until next year, given that the initial release will be via the Preview channel; Microsoft said the feature would be there for a while as the company works out bugs and other issues. This is truly disappointing, and I believe these delays are hurting the entire Copilot+ PC narrative and the rollout of AI PCs in general. Microsoft took the narrative by the horns with Copilot+ PCs, but seems to have tripped over itself repeatedly since then.
The U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) has announced $30 million in funding for the Quantum Computing for Computational Chemistry (QC3) program. The objective is to develop quantum algorithms that outperform classical computing for chemistry and materials science simulations. The ultimate goal is to design new industrial catalysts, discover superconductors for efficient electricity transmission, and improve battery chemistries.
Classical computing is unable to handle the full complexity of simulating the chemistry and materials needed for energy research and development. However, quantum computing has the potential to overcome these limitations. Project teams will be assigned specific problems in chemistry or materials science where quantum computing can be applied to reduce greenhouse gas emissions. The objective is to obtain a 100x improvement over classical methods or a scalable approach to achieving this, validated on existing quantum hardware.
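For readers curious about what the “variational” quantum chemistry algorithms behind programs like this actually optimize, here is a toy, classically simulated sketch of the idea underlying methods such as VQE: choose a parameterized trial state, compute the Hamiltonian’s expected energy, and minimize. The two-level “molecule” is invented purely for illustration.

```python
# Toy, classically simulated illustration of the variational principle behind
# VQE-style algorithms: E(theta) = <psi(theta)|H|psi(theta)> >= E_ground.
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-level "molecule": H = 0.5*Z + 0.8*X (Pauli matrices)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = 0.5 * Z + 0.8 * X

def energy(theta: np.ndarray) -> float:
    # One-parameter ansatz: |psi> = cos(t/2)|0> + sin(t/2)|1>
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    return float(psi @ H @ psi)

result = minimize(energy, x0=[0.1])
exact = np.linalg.eigvalsh(H)[0]
print(f"variational: {result.fun:.4f}, exact ground state: {exact:.4f}")
```

On real hardware the energy evaluation is done by sampling a quantum circuit rather than by matrix algebra; the classical optimizer loop stays the same, which is why these hybrid methods are considered near-term candidates for chemistry workloads.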
CrowdStrike and Fortinet recently announced a partnership that aims to integrate AI-native endpoint security protection from the CrowdStrike Falcon platform into Fortinet’s FortiGate next-generation firewall portfolio. It seems like an unconventional partnership, given that both companies compete for the same set of customers. However, the collaboration has the potential to marry best-of-breed cybersecurity protection capabilities, enhancing threat protection and delivering optimized security outcomes.
AT&T recently announced a new fiber and 5G fixed wireless access gateway. On the surface, the solution could be attractive for branch operations where downtime equates to immediate loss of revenue and goodwill. Other connectivity infrastructure products offer cellular redundancy, but this converged gateway provides automatic failover to 5G in the rare event of a fiber cut or outage. Marrying the company’s depth in both fiber and FWA broadband in a single form factor that’s easy to provision and deploy could be a game changer—especially for small and medium-sized businesses that have limited IT staff resources.
Adobe Debuts Free Web App To Fight Misinformation And Protect Creators
https://moorinsightsstrategy.com/adobe-debuts-free-web-app-to-fight-misinformation-and-protect-creators/
Thu, 31 Oct 2024 21:13:06 +0000
https://moorinsightsstrategy.com/?p=44397
Adobe has announced the upcoming launch of a free web application designed to address the growing threats of misinformation and concerns about unauthorized content use.
Adobe has announced the upcoming launch of a free web application designed to address the growing threats of misinformation and concerns about unauthorized content use. The Adobe Content Authenticity app allows creators to easily embed Adobe’s Content Credentials into their digital work, including images, audio and video files. These credentials function as a form of verifiable metadata, providing information about the creator and the content’s history, akin to a “nutrition label” for digital content.
The full web app is expected to launch in public beta in Q1 2025; creators can currently use Content Credentials within existing Adobe Creative Cloud applications such as Photoshop, Lightroom and Firefly. The beta version of the Content Authenticity extension for Google Chrome is currently available for early access.
Adobe Content Authenticity App Key Features And Functionality
The Adobe Content Authenticity app offers a user-friendly interface for applying Content Credentials. It allows creators to batch-process their work and include information such as their name, website and social media links. The app enables creators to specify whether they consent to their content being used to train generative AI models. This last feature, designed to function across various AI platforms, is in line with Adobe’s stated commitment to responsible AI development and should allow creators to control how their work is used.
Beyond empowering creators, Adobe also provides tools for end users to verify content authenticity and further enhance transparency. This is the rationale behind the Content Authenticity extension for Google Chrome mentioned above. The extension allows users to view Content Credentials associated with online content, even if the content has been reposted or captured via screenshot. This functionality ensures that provenance information remains persistently linked to the content.
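Content Credentials are built on the open C2PA standard, so provenance travels with the file as a signed manifest of assertions. The sketch below is a simplified, illustrative manifest, not Adobe’s exact schema, and the AI-training assertion label is hypothetical.

```python
# Simplified, illustrative C2PA-style manifest (not Adobe's exact schema).
# "stds.schema-org.CreativeWork" and "c2pa.actions" are real C2PA assertion
# labels; "adobe.ai-training" is a hypothetical label for the consent signal.
manifest = {
    "claim_generator": "Adobe Content Authenticity (illustrative)",
    "title": "sunrise_final.jpg",
    "assertions": [
        {"label": "stds.schema-org.CreativeWork",
         "data": {"author": [{"name": "Jane Creator",
                              "url": "https://janecreator.example"}]}},
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.edited",
                               "softwareAgent": "Photoshop"}]}},
        # The generative-AI opt-out travels with the file across platforms
        {"label": "adobe.ai-training",
         "data": {"allowed": False}},
    ],
}
```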
Addressing The Growing Threat Of AI-Generated Misinformation
These initiatives are particularly crucial now, given the rise of AI-generated content—especially deepfakes—which pose a significant threat to the trustworthiness of digital media. By providing a standardized way to verify the origin and the edit history of a piece of content, Adobe aims to foster a more trustworthy digital ecosystem. Beyond the threat to public trust, the proliferation of AI-generated content has alarmed many creators. The new app addresses the concerns of creators who face unauthorized use and misattribution of their work. A recent Adobe study revealed that 91% of creators seek reliable methods for attributing their work, and 56% are worried about their content being used to train AI models without their consent.
Building Trust And Transparency Across The Creative Ecosystem
To counter these threats and foster a more trustworthy digital environment, Adobe is championing a new era of transparency. In situations where the truthfulness of content is paramount, such as news reporting or documentary photography, Content Credentials offer a mechanism for establishing trust and accountability. Conversely, the new app enables transparency in situations where creators are engaging in artistic expression or experimentation, allowing them to indicate if a piece has been heavily edited or incorporates AI-generated elements.
“Content Authenticity is about transparency and trust in digital content,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. “It’s about empowering creators with control over their work and ensuring they have proper attribution.” He added, “It also equips companies and consumers with the information they need to make informed decisions about the content they encounter. This shared benefit cultivates a digital environment where provenance is readily available, allowing creativity and innovation to thrive within a framework of accountability and ethical engagement.”
To make this vision a reality, Adobe is actively working to expand the reach of the Content Authenticity Initiative, which now has more than 3,700 members. To do this, the company partners with key players across various platforms and industries to encourage widespread adoption. This collaboration includes integrating credentials into the products and services of major tech firms such as Microsoft and OpenAI, camera manufacturers including Sony, Leica and Nikon, and media platforms such as TikTok. Adobe also works with news organizations including the Associated Press and advertising companies such as Publicis Groupe to promote the use of Content Credentials in journalism and creative industries.
Through these partnerships and its Content Authenticity Initiative, Adobe aims to establish Content Credentials as the industry standard for digital content transparency and attribution.
My Analyst Perspective
Tools such as Content Credentials have the potential to significantly address the challenges posed by AI-generated content and the erosion of trust in digital media. By providing a standardized method for verifying content origin and authorship, Adobe’s initiative could contribute to a more transparent and accountable online environment. The industry’s positive response to Content Credentials suggests a growing awareness of the need for such solutions; widespread adoption could lead to a significant shift in how digital content is created, shared and consumed.
However, the success of Content Credentials hinges on industry partnerships, technological advancements and user adoption. Creators must actively embrace and integrate Content Credentials into their workflows for them to become a genuinely effective standard. Furthermore, continued development by OEMs beyond Adobe’s initial partnerships, including makers of cameras and other capture devices with built-in Content Credentials functionality, will be crucial for ensuring widespread accessibility and ease of use.
Further development of the relevant software could expand the reach and functionality of Content Credentials. This could include firmware updates for cameras, tools for retroactively applying credentials to existing content, enhancing credential management with advanced features and extending support to a broader range of devices.
Regardless of the specifics, the effectiveness of Content Credentials will depend on a collaborative effort. My hope is that, by promoting transparency and accountability and equipping users with the necessary tools and knowledge, technology companies and users can work together to build a more trustworthy and authentic online experience.
Six Five On the Road – Fem.AI: Igniting Women Into the AI Workforce
https://moorinsightsstrategy.com/the-six-five/six-five-on-the-road-fem-ai-igniting-women-into-the-ai-workforce/
Tue, 29 Oct 2024 16:35:51 +0000
https://moorinsightsstrategy.com/?post_type=six_five&p=43785
Nicole Johnson, Global Director of Social Impact and President at Cadence, joins Melody Brue to share insights on Fem.AI and its mission to usher more women into the AI workforce.
Did you know October is National Women in AI Month? Start by learning about opportunities for women in tech and AI on this episode of Six Five On the Road. Host Melody Brue is joined by Cadence Design Systems’ Nicole Johnson, Global Director of Social Impact and President of the Cadence Giving Foundation, for a conversation on empowering women through technology and initiatives to integrate them into the AI workforce more efficiently.
Their discussion covers:
The vision and goals of Fem.AI
The current landscape for women in AI and technology
Strategies and programs Cadence Design Systems employs to support women in AI
The impact of social impact initiatives on corporate culture and the wider community
Future directions for Fem.AI and how individuals and organizations can get involved
Disclaimer: Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Melody Brue: Welcome to the Six Five. I’m Melody Brue, Vice President and Principal Analyst at Moor Insights & Strategy. Today, we’re going to explore the fascinating world of AI and its impact on society. We’re going to dive into a crucial topic, the role of women in the AI workforce, and the numbers paint a bit of a stark picture. AI is projected to add trillions to the global economy and millions of jobs by 2030, yet women currently hold only a fraction of those roles and even fewer hold leadership positions. So this disparity demands our attention, and that’s why we are thrilled to have Nicole Johnson join us today. Nicole is the Global Director of Social Impact and the President of the Cadence Giving Foundation or Cadence. She is really helping to lead the charge to empower women in AI through initiatives like the Fem.AI Alliance. Welcome, Nicole.
Nicole Johnson: Thank you, Mel. Thanks for having me.
Melody Brue: So glad to have you, and I loved being with you and so many other amazing speakers and guests and participants at the Fem.AI Summit. And I know that was just the beginning, just the kickoff. We talked about so many things there, and one of the things, I think, that really stuck out was that on one hand, there are tons of entry points of women in tech and AI that we have to address, but then there’s the place where there’s the dropout. I think that was something that was really important that was talked about there. Where do you see women most dropping out of the pipeline?
Nicole Johnson: Yeah, so as we’ve started to embark on understanding this problem and where we can start to integrate in solutions, we’ve identified three leakage points where we see women most commonly drop out. The first is actually completing a STEM degree. So 45% of women who start college with a STEM major don’t graduate as a STEM major, which is fascinating. And so, that first one is how do we keep women in school and in STEM majors? The second is accessing a first job. Women have a harder time than men accessing that first job, should they complete a degree. And that becomes even more significant if there’s intersectionality with being a first-generation college student, with being BIPOC, with being LGBTQ+. So those can all contribute to, again, that first job access that is so critical to get you on your career trajectory.
And then, the third is rising to leadership levels. So you mentioned it, but I’ll just give the statistic. Right now in AI, we see women making up about 20% of the AI workforce and only about 10% of leadership. So once women are in those roles, we actually see about 50% of women dropping out of the workforce before they hit 12 years in tech versus about 20% in other industries. So how do we keep them in the workforce? How do we keep them engaged? How do we keep them wanting to rise to leadership levels so that other young women who are coming up can see role models and have that as an example of what a career trajectory can look like.
Melody Brue: Yeah, those are important points in all of those steps. How do you see AI differing from tech on a broader scale?
Nicole Johnson: Sure. So to me, there’s two key things. So the first is where we are right now in time. And so, when we were kind of doing a landscape analysis of where can we make an impact as a company that sits at the intersection of AI as a foundation that’s working on big problems, the AI workforce and the AI economy really came to light because we weren’t intentional about the internet revolution. The internet happened. Nobody was thinking like, “We’ve got to make sure… I mean, if you can remember back in the late ’80s, early ’90s, late ’90s, I mean, nobody was thinking, “Who’s in the workforce?” And so, we think that right now is a really key moment in time to start inserting some intentionality around this function, so that’s one.
The second is when you look at the type of roles that we believe are going to be available in AI, it’s actually roles that women are, and I don’t want to gender everyone, but typically, what we’ve found is that women like to work on impact issues, big problems. They want to make that impact with their career. And where we see AI actually is kind of blossoming is bringing in liberal arts majors, bringing in subject matter experts. So these are all places that we think that women can take really full advantage of being involved in this new AI workforce.
Melody Brue: Yeah, so it seems like that’s not going to happen, just like you said, with the internet and technology in general. I remember early in my career being like, “Oh, I’m the only woman in the company,” and sort of being okay with it and not really realizing that that’s something that people need to pay attention to. How do you see the kind of interventions that can be taken to ensure that that’s not okay, that we can ensure full participation?
Nicole Johnson: So this is where we really look to our nonprofit partners as the experts in these specific areas they work on. And so, we’ve partnered with seven organizations to date that are on the ground doing this work with women every day, ensuring that women, again, addressing those three leakage points of ensuring women stay in school. I’ll give you an example. Last Mile Education Fund is one of our partners that works on this. They do emergency grants for people, but it is really skewed towards women in their numbers that are within four semesters of graduating with a CS or EE major that need emergency loans to make it through. And it is shocking. The dollar amount that these women need, it is usually $1,300 or less, and that actually keeps them in school and gets them to graduate. And so, I mean, it’s those kinds of partnerships that we have been fostering and working on to really ensure that these on the ground solutions are being implemented.
Melody Brue: Those partnerships are so important, and you also have a bunch of corporate partnerships as well as part of your Fem.AI alliance.
Nicole Johnson: Yeah.
Melody Brue: So let’s talk about the Alliance and then also some of the very impressive and very exciting partnerships that were just announced.
Nicole Johnson: Sure. So when we started embarking on this… And you were generous with your time to attend our summit, thank you. But when we started embarking on this, we recognized that this is not a problem we could solve on our own, and we really needed to bring industry together to work on this issue congruently. So we launched an alliance, and we’ve had some really impressive partners already sign up. So to date, we have Nvidia. We have Workday. We have NetApp. We have Equinix. We have breaking news to you that we haven’t even announced yet. We have Vodafone, and we have Unity Technologies, so we’re moving quickly. And I think what’s been interesting about that is seeing how many companies are excited about addressing this issue and want to.
And so, our plan for this alliance is to bring them together with, again, some of the stakeholders out in the community. We are not under the impression that we can solve everything. And so, again, we look towards those partnerships, whether it’s with nonprofits, whether it’s with universities, whether it’s with scholarships, whatever that partnership looks like that best fits those leakage points. We’re going to work in collaboration with those experts on how to ensure that, again, women are not dropping out along those leakage points.
Melody Brue: I love that we have some breaking news here, too. That was a nice surprise. And Cadence started this out with a $20 million funding. I mean, that in itself is super impressive. So I just want to close this out with… I mean, I’ll give you an opportunity to share any of your last thoughts. I think from a broad outlook perspective, you have a huge job in terms of being in charge of corporate responsibility and all of that. And that feels to me like it must be a huge honor, but also in a way, I don’t want to say a burden, but it’s like that must weigh really heavy on you at times to be like, “There are so many problems to solve.”
And I look at what Cadence does in so many different ways, the $50 million investment in the racial equity fund and other initiatives. And I think you’re so fortunate to be a part of that, but also that’s a lot to take on, and to have those partners join you and everything, it must be great. How do you function daily, just deciding what do I invest in, what do we do? Because you guys do a lot, but there are a lot of things to do.
Nicole Johnson: There are a lot of things to do. So to be perfectly candid, there are days where you’re like, “Oh my gosh, I’ve been doing this… I mean, I’ve been working in this space or the nonprofit space for all of my career, and sometimes it can feel like, “Has anything changed?” But at the end of the day, I mean, again, I’m going to call out Last Mile Education Fund just because they were kind enough to bring one of their students that had been impacted by their funding to an event that we had recently. And she is a junior at Stanford. She just transferred to Stanford from Houston Community College, and it was that grant that she got from Last Mile Education Fund that allowed her to finish her community college courses and get into Stanford, and she’s a junior in computer science.
Melody Brue: That is awesome.
Nicole Johnson: I mean, there is impact, and I will also say I have an amazing team. First of all, they are completely committed to this work as well, as well as our board, so our Cadence Giving Foundation Board includes our CEO, our CFO, our CHRO, and many others. And so, they’re passionate about it, so I don’t feel like I’m going at it alone. I will say one of the things that keeps me sane is I also have a group of social impact leaders from other tech companies that I talk to on a regular basis. In fact, I’m going to do yoga with several of them tonight.
Melody Brue: Nice.
Nicole Johnson: And so, that network and those relationships can really help keep you grounded and sane when you’re dealing with all of the things that are coming at you.
Melody Brue: I’m sure that helps, too. I think this is something also that I’ve seen Cadence do is really not just do the things as Cadence is doing this, but also say, “Hey, I want you to join us. I want this company to join us,” and really kind try to set this as more of a movement, not just Cadence is doing this and that’s it, but really bring people in to do that, which is awesome. Is there anything that we didn’t cover that you want to share either about the alliance, or I mean, one, definitely would love for you to share how people can get involved, where to go to get more information about anything else?
Nicole Johnson: So we do have a website. It’s Fem.AI, and there is information there on either joining the alliance on learning more about women in AI. We do plan on updating that page with opportunities for trainings and other things that women can take advantage of. So that is a work in progress, but there is some great stuff there right now. And I would also say I would just really encourage women to look at this space that’s developing right now, look at this economy, look at this workforce, and think about ways that they might be able to fit into it. Again, I see this as a huge opportunity space for women, and I want them to be able to take advantage of it because my thinking is when women are successful, the whole workforce, everything lifts up. The economy lifts up, families lift up, and so it is an opportunity space. And so, take advantage.
Melody Brue: That’s great. Super inspiring and thank you for all your work and Cadence for all the work that you all do. It really is truly inspiring, and I’m so happy to have been a part of the summit and been able to kind of see all of the work that you do. Thank you for joining us today on The Six Five, and we look forward to seeing all of the goodness that the Fem.AI initiative continues to do and all of the partners who join along with you. And thank you for joining us, everybody who’s watching. And again, you can go to the Fem.AI website to get more information and hopefully join along. Thanks for joining us, and we will see you next time on The Six Five.
MI&S Weekly Analyst Insights — Week Ending October 25, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-25-2024/
Mon, 28 Oct 2024 18:16:58 +0000
https://moorinsightsstrategy.com/?p=43721
MI&S Weekly Analyst Insights — Week Ending October 25, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features the key insights our analysts have developed based on the past week’s events.
A screenshot of the AI sidekick in Miro Innovation Workspace.
Our analyst Melody Brue joined the firm in 2020 to cover fintech, but her versatility means that she is now responsible for other areas of enterprise software, plus our “Modern Work” practice—the tools, policies, and strategies that affect things like working from home (or being required to RTO). Last week she wrote a detailed analysis of the new AI-driven Innovation Workspace product from Miro. Fittingly, to round out her coverage of this collaboration app, she worked with our enterprise software development analyst, Jason Andersen, who put Innovation Workspace through its technical paces. We’re always on the lookout for opportunities like this to draw on the different flavors of expertise of Moor Insights & Strategy analysts.
Last week, Patrick and Will were in gorgeous Maui at Qualcomm’s Snapdragon Summit, Melody was back in Florida for Cisco’s WebexOne, and Matt attended the RISC-V Summit virtually.
This week, Patrick will be at AWS HQ in Seattle for an exclusive re:Invent preview, then off to Los Angeles for the analyst session for Cisco’s Partner Summit. Mel will be tuning into SAP SuccessConnect Virtual. Jason (virtually) and Matt will be attending the Red Hat Analyst Day in Boston. Will is off to Riga to spend Halloween at 5G Techritory.
After several weeks of back-to-back travel, our team is looking forward to a brief respite at our desks and home offices as we head into the holidays. We’d love to connect with you over the next several weeks to touch base and discuss your plans for 2025 and beyond. Reach out to schedule some time!
Over the last week, MI&S analysts have been quoted in multiple top-tier international publications with our thoughts on IBM, Qualcomm & Arm, Google, Nvidia, the CHIPS Act, and more. Patrick was a guest on Prof G Markets to break down the state of play in the chip industry. He also made appearances on CNBC Power Lunch and Yahoo! Finance.
Last week IBM hosted its TechXchange event in Las Vegas. While there has been a lot of attention on the new Granite models (which was the big announcement), the show itself was pretty interesting. Now in its second year, TechXchange focuses on IBM technology practitioners. So by intention, it’s not going to have major product announcements. It also had two other notable differences. First was an incredibly broad span of technologies. There were sessions on the latest AI innovations and DevOps tools, but also quite a bit on IBM mainframes and older technologies such as DB2. Second, IBM very intentionally managed multiple programs to better engage this important part of the IBM community. For instance, IBM has a program to name IBM Champions, who are peer-nominated customer ambassadors. The Champions were easy to find because they had special blue sweatshirts, and there was programming just for them. As an ex-IBM salesperson, I always found that the internal technical sponsor was a big key to growing IBM footprint at a given client. So, I think IBM increasing its focus on that community is a really smart move.
While at TechXchange, I also met Dave Nielsen from IBM, who is a leader of the AI Alliance that IBM co-chairs with Meta. As two of the biggest open source players in AI, IBM and Meta are natural collaborators, and I was quite interested to hear how they are working together to make AI more transparent and safe. I’m looking forward to a follow-up conversation on this soon.
I also had an opportunity to check out UiPath’s Forward and TechEd shows, which ran concurrently last week. UiPath has been a leader in business process automation and robotic process automation for some time now. CEO Daniel Dines is not unique in pitching generative AI’s potential to expand the UiPath universe; what is unique is how many pieces its existing platform has that could truly accelerate agentic programming. In fact, I was introduced to UiPath after publishing my article on AI Agents last month in Forbes. UiPath is certainly a candidate for the first-mover status I mention in the piece. Also, I was quite taken with the leadership at UiPath. After meeting SVP and general manager Mark Geene and CTO Ragu Malpani, I saw a team that was open, collaborative, and in many ways humble in its approach. This is not something I get to see every day.
As mentioned in the headnote for this week’s updates, last week my colleague Melody Brue (with some help from me) published this great piece on Miro Innovation Workspace on Forbes. I got to try out the product, using an application development process as a test run of its functionality. Quite frankly, I was impressed with how much the GenAI helped me create a better set of deliverables and how Miro could help overall. While methods like agile and extreme programming push a code-first agenda, I find that in many cases there is still value in taking some time for product owners within the organization to develop a cohesive set of documents to help drive purpose and customer sentiment to the development teams. Now, I am not talking about hundreds of pages of requirements like we see in older methods, but something in between—maybe even just a sprint timeframe for POs and architects to collaborate and get the team on board. What’s nice is that Miro can improve and accelerate that Sprint 0 activity and then feed the data into familiar project management tools like Jira or Monday. It’s worth a look if you are encountering teams that are overly focused on tickets and not the journey itself.
IBM unveiled the third generation of Granite LLMs—the Granite 3.0 models—featuring Granite 3.0 2B Instruct and Granite 3.0 8B Instruct. These open-source models were trained on 12 trillion tokens in 12 human languages and 116 programming languages. The 3.0 models can be used for RAG, summarization, entity extraction, and editing. According to IBM, by the end of 2024 Granite 3.0 models will be capable of understanding documents, interpreting charts, and answering questions about a GUI or product screen.
Agentic use cases are among the new capabilities. Agents can proactively identify needs, utilize tools, and initiate actions without human intervention. This advancement marks a significant step forward in the functionality and autonomy of IBM’s language models.
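Because the models are open source, experimenting with them takes only a few lines of Hugging Face Transformers code. A minimal sketch follows; the repository ID below is the one IBM uses on Hugging Face, but verify it before running.

```python
# Minimal sketch of running a Granite 3.0 Instruct model locally via
# Hugging Face Transformers. Verify the repo id on huggingface.co first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16,
                                             device_map="auto")

chat = [{"role": "user", "content": "Extract the company names: IBM and Meta co-chair the AI Alliance."}]
inputs = tokenizer.apply_chat_template(chat, return_tensors="pt",
                                       add_generation_prompt=True).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```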
Anthropic announced some important model upgrades—Claude 3.5 Sonnet and Claude 3.5 Haiku, plus a new “computer use” feature. Anthropic has improved Claude 3.5 Sonnet’s coding and tool use. Claude 3.5 Haiku has better performance, but at the same cost and speed as the previous version. With “computer use,” AI can control computer interface actions such as cursor control and typing information with human-like precision. These improvements show even more ways that AI has the potential to automate complex tasks and improve productivity.
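For reference, here is a minimal sketch of calling the upgraded Sonnet model through Anthropic’s Python SDK. The model string reflects Anthropic’s naming at launch, and the “computer use” capability requires additional beta tool definitions not shown here.

```python
# Minimal sketch of a call to the upgraded Claude 3.5 Sonnet via Anthropic's
# Python SDK. "Computer use" needs extra beta headers/tools not shown here.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=512,
    messages=[{"role": "user",
               "content": "Write a Python function that validates an IBAN."}],
)
print(response.content[0].text)
```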
I shared a car service last week with a gentleman from a competing analyst firm. This man, who is based in Latin America, was talking about how he would be on the road for four straight weeks covering Huawei (yes, that Huawei) events. This got me thinking a little bit. I remember that when I first started at MI&S, Huawei was a company that most analyst firms covered—and for good reason. It had solutions that fit in every technology category: silicon, hardware, software, cloud; client, server, networking, storage, and phone; consumer and commercial; and even consulting services. I was invited to the company’s analyst meeting and walked away fully in awe of the breadth and depth of its portfolio.
Fast forward seven or so years and Huawei is everywhere—except the U.S. and Canada. If you chat with U.S. government officials or major U.S. OEMs, you’ll hear that this all has to do with the company being under the control of the Chinese government. If you ask U.S. IT executives, you’ll hear much of the same.
Is this concern real? Or is there more to the story? I think it’s the latter. First, there was the controversy about—and banning of—China-based ZTE in the United States after the company was found to be illegally shipping technology to Iran and North Korea. After that, there was a very strong anti-Chinese sentiment in the United States (and some Western European countries) with regards to technology. In that vein, we eventually saw server manufacturer Inspur get placed on the U.S. Entity List and get hit with a patent infringement lawsuit.
Second, I believe U.S.-based server vendors saw a real threat from Huawei and put a lot of money into lobbying against the company in D.C. What better way to protect against the market threat of this company than to amplify a “national security threat” concern with lawmakers and policy folks?
I am not an advocate of Huawei, and I’m not a geopolitical expert. However, when a product or technology is used pretty much everywhere except in one or two countries—I figure something is amiss.
Can’t we all just get along? There is a lot of noise around the Arm and Qualcomm licensing dispute. And there are a lot of opinions. The licensing issue originates with Qualcomm’s acquisition of Nuvia—a company that included many ex-Apple chip designers focused on developing Arm-based chips to compete with x86. Effectively, they wanted to create a commercial version of the M-Series chips that we see in the Apple MacBook. Here’s the rub: Arm says that the Qualcomm acquisition nullified the Nuvia architectural license. So, in effect, those Oryon cores that are inside the Snapdragon CPU should not be shipping to Dell, HP, Lenovo, Microsoft, and others. And oh yeah, there is more Nuvia-derived IP that hasn’t yet come to market but could also be considered in violation of the architectural license.
Qualcomm says “no way” and that Arm is employing anti-competitive practices.
It’s easy for pundits to point fingers and pick a side. But these architectural licenses are very complex—and very specific. Enough so that they are far beyond my (and most folks’) cursory understanding of IP law and the specifics of this agreement.
Here’s what I do know. Regardless of what the outcome of a court case may be, nobody wins here. If Arm follows through with its termination of the agreement—nobody wins. If Qualcomm prevails on its own terms—nobody wins. There has to be a settled agreement between the two parties, one in which both walk away feeling good about the relationship and in which neither feels overly emboldened. This isn’t about only Arm Holdings or only Qualcomm winning. It’s about Arm maintaining a strong market position relative to x86. This licensing issue dragging out and becoming overly burdensome on one side or not profitable enough on the other side is going to have a long-lasting negative impact across the entire Arm market.
McAfee and Yahoo News are teaming up to fight deepfakes in the news with an AI-powered detection tool. This tool, driven by McAfee Smart AI, analyzes images for signs of AI manipulation and flags them for review by Yahoo’s editors. This effort is similar in spirit to Adobe’s Content Authenticity Initiative, which allows creators to attach “nutrition labels” to their digital content, providing details about how it was created and edited. While Adobe’s initiative promotes transparency across various digital media, McAfee focuses on protecting the integrity of news media, a critical area of concern as deepfakes become increasingly sophisticated. This collaboration could prove essential to preserving trust and credibility in the news, especially during critical events where misinformation can have significant consequences.
Both McAfee and Adobe are utilizing AI to combat misinformation and foster trust in digital content, but their approaches differ. McAfee’s new partnership with Yahoo News focuses specifically on detecting deepfakes in news media, while Adobe’s Content Authenticity Initiative aims for broader transparency across various digital content through “nutrition labels” detailing creation and editing history. This difference reflects their specific priorities and target audiences. However, with growing support for content authenticity across the board, I suspect that a collaboration between the two tech companies could be on the horizon, potentially leading to even more robust solutions for verifying digital content.
Jira, long known for helping software developers manage projects, has been branching out to support marketing teams. This is happening at a time when marketing workflows are becoming increasingly complex, with higher-velocity campaigns, more stakeholders, and a wider range of channels to manage. This complexity makes staying organized and on track more challenging than ever. Jira’s tools should help marketers streamline their work, improve communication, and track progress more effectively. This push to be more than a dev tool is evident in its new “Jira for Marketers” series, a live learning session set to show marketing professionals how to use Jira to manage campaigns, content creation, and events. It’s a smart move by Jira, recognizing that many teams, not just software engineers, can apply agile development and project management principles.
Zoho announced a partnership with Nvidia to boost its AI capabilities, specifically in developing and deploying LLMs for its business software. This significant move for Zoho shows its commitment to providing robust, business-focused AI solutions. Zoho plans to use Nvidia’s AI Enterprise software and accelerated computing platform to build LLMs tailored for various business needs, focusing on privacy and providing contextually relevant information. The company aims to help businesses see a fast return on their investment by using AI to speed up operations and reduce delays. This partnership allows Zoho to accelerate its LLM deployment and optimize performance. It’s clear that Zoho is serious about AI and is making strategic moves to become a leader in enterprise AI for its market.
IBM has acquired Prescinto, a company that makes software for renewable energy asset management. The acquisition will enhance IBM’s Maximo Application Suite by enabling it to manage renewable assets such as solar panels and wind turbines. This expansion adds to Maximo’s existing enterprise asset management (EAM) capabilities for managing physical assets such as buildings and infrastructure, inventory, work orders, and maintenance. Prescinto’s AI-driven tools should improve Maximo’s analytics and predictive maintenance features, enabling users to optimize the management of renewable energy assets. By consolidating renewable energy asset management into Maximo, IBM aims to simplify customers’ operations by eliminating the need for separate systems.
Spirent recently published a report highlighting the opportunity for Ethernet to benefit from the growing adoption of next-generation AI applications. It’s not a surprising conclusion, especially given recent efforts by AMD in productizing its silicon for Ultra Ethernet Consortium network interface cards that utilize RoCEv2 to power back-end data center interconnect fabrics. (For more details on that, see my recent research brief that covers the AMD NIC.)
SAP has launched its Sustainability Data Exchange, a SaaS application designed to help enterprises achieve their net-zero goals by enabling standardized carbon data sharing across supply chains. Gunther Rothermel, chief product officer and co-general manager of SAP Sustainability, said, “Managing carbon to accelerate a net-zero future makes measurability critically important. That is where technology and innovation can make a real difference. With SAP Sustainability solutions and our ERP-centric, cloud-based, AI-enabled approach, we support our customers to use integrated sustainability data and embed it holistically into their core business processes.”
One specific advantage of this SAP application is that it assists enterprises in transitioning from estimates to actual emissions data. The platform ensures accurate carbon footprint tracking by integrating with SAP’s ERP ecosystem and supporting industry standards such as Catena-X and PACT. With sustainability being a key focus for many businesses in 2025, this solution demonstrates SAP’s commitment to providing an intelligent sustainability platform for enterprises. You can read more on this in my upcoming article on sustainability practices enabled by ERP.
SAP also reported strong Q3 2024 financial results, demonstrating growth in its cloud-based ERP solutions. Total revenue increased 9% year over year to €8.47 billion, with cloud revenue rising 25% to €4.35 billion. The Cloud ERP Suite saw a 34% revenue increase, indicating its continuing appeal for helping businesses manage operations more efficiently in the cloud. The company’s operating profit and free cash flow also grew, improving by 29% and 44%, respectively. SAP has made a big commitment to supporting customers’ digital transformation efforts, an approach enhanced by its acquisition of WalkMe (which has already begun to contribute to the company’s backlog of cloud business). Overall, SAP’s results show the success of its strategic efforts to shift customers toward cloud-based ERP systems.
Microsoft has launched autonomous agents for Microsoft Copilot Studio and Microsoft Dynamics 365, the company’s ERP/CRM platform. Microsoft has introduced ten new AI agents for Dynamics 365, focusing on sales, customer service, finance, and supply chain operations. These agents are designed to automate routine tasks, improve workflows, and increase efficiency. Of course, there are challenges, including ensuring data security and privacy, integrating with current systems, and maintaining accuracy.
Agents are a key advancement for Microsoft Dynamics 365 Finance and Supply Chain, bringing flexibility to data management and task execution. The company says that the supply chain agents can help identify bottlenecks and disruptions, suggest improvements, and optimize order fulfillment. In finance, AI agents can support payment processing and compliance and provide real-time data for better financial planning. More to come on this topic.
Call of Duty: Black Ops 6 is out, and I’ve already had a chance to play it. What makes this year’s version stand out isn’t that it’s a particularly new or exciting game—it’s that the game is available for the first time via Microsoft’s monthly game subscription service, Xbox Game Pass. This service is how Microsoft plans to continue to grow its gaming business into a recurring revenue source, and in that context Call of Duty was a big part of why Microsoft paid $69 billion for Activision Blizzard, which makes the game.
That said, the game still has to be attractive and exciting enough to keep people subscribing, and I believe that Microsoft could ultimately convince gamers of the value of subscribing to Game Pass—with its many titles—rather than just buying an individual game. I haven’t played the new Call of Duty enough to decide whether it’ll be a success or a flop, but I do know that a lot of people are playing it right now. If they still are in a few weeks, that’ll be a good sign for Microsoft.
AWS announced the end-of-life of AWS IoT Device Management Fleet Hub, a service that sits on top of AWS IoT Device Management and overlaps some features in AWS IoT Console. Conceptually, it’s a dashboard generator that creates applications (“single panes of glass”) through which customers can monitor large numbers of IoT devices. AWS will permanently decommission Fleet Hub a year from now, on October 18, 2025.
I see two reasons for this abrupt EOL. (1) Application generators have evolved considerably since Fleet Hub’s launch four years ago, so it’s time to refresh or rewrite. (2) Extreme device diversity requires extensive (and brittle) interface customization, which often costs more than the benefits of orchestration. The EOL press release says it this way: “As technology and customer needs continue to evolve, we have made a decision to discontinue the feature.” In other words, the solution needs to be rewritten, and doing so doesn’t make sense because the device interface diversity problem remains unsolved.
This is the right decision for AWS. Device diversity should be managed and simplified in IoT middleware, not the user-facing application layer. Smarter middleware with simple APIs that enable high-level device management makes fleet management practical. My take: Efficient IoT device orchestration requires a consistent model for middleware. A few visionary companies are already working on that, and I’ll offer a deeper analysis in an upcoming paper.
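To make that architectural point concrete, here is a minimal, purely hypothetical sketch of what a consistent middleware device model could look like. This is not any vendor’s actual API; the class and field names are my own illustrative assumptions. The idea is that vendor adapters absorb interface diversity so the fleet application is written once against a common model.

```python
# Hypothetical middleware sketch (not an AWS or vendor API): a uniform
# device contract plus per-vendor adapters that normalize quirky interfaces.
from abc import ABC, abstractmethod


class Device(ABC):
    """Common contract every vendor adapter must satisfy."""

    @abstractmethod
    def health(self) -> dict:
        """Return normalized status, e.g. {'online': True, 'battery': 0.8}."""


class AcmeSensorAdapter(Device):
    """Adapter translating one (fictional) vendor's interface into the model."""

    def __init__(self, raw_client):
        self.raw = raw_client  # handle to the vendor's own SDK

    def health(self) -> dict:
        status = self.raw.get_status()  # vendor-specific call
        return {
            "online": status["conn"] == "UP",
            "battery": status["batt_pct"] / 100,
        }


def fleet_summary(devices: list[Device]) -> dict:
    """The dashboard layer sees one model, regardless of device diversity."""
    online = sum(1 for device in devices if device.health()["online"])
    return {"total": len(devices), "online": online}
```

The payoff of this pattern is that orchestration logic written against the common model does not break when a new device type joins the fleet; only a new adapter is required.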
This week, Honeywell and Google Cloud announced a collaboration focused on developing AI-based solutions to two big problems common to many industries: (1) a looming talent shortage and skills gap in the industrial sector, and (2) the need for on-device AI to enable autonomous operation. Honeywell and Google aim to address the talent gap by making industrial processes increasingly autonomous, leveraging Google’s Vertex AI to customize, train, and deploy ML models and AI applications.
Suresh Venkatarayalu, Honeywell’s CTO and president of Honeywell Connected Enterprise, says, “We’re moving from automation to autonomy. Our goal is to equip companies with AI agents that assist workers in real time—on factory floors and in the field. With AI running both in the cloud and at the edge, we’re making sure that systems work smarter and more efficiently.” In addition to using autonomy to reduce dependencies on scarce talent and skills, the companies are extending Honeywell Forge, a massive database of industrial knowledge, with Vertex AI and LLMs. The idea is to create AI “coaching” agents that deliver helpful information when and where employees need it.
Industrial processes require continuous operation, even when the Internet is down or cloud services are unavailable. Google’s Gemini Nano addresses this problem by providing AI services at the edge of the network, enabling devices like scanners, cameras, sensors, and controllers to operate autonomously. Honeywell’s first solutions built with Google Cloud AI will hit the market in 2025.
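For readers unfamiliar with the Vertex AI workflow mentioned above, here is a minimal sketch using Google’s google-cloud-aiplatform Python SDK. The project ID, bucket path, model name, and serving container are placeholder assumptions of mine, not details of the Honeywell deployment; treat it as the general shape of the customize-train-deploy loop rather than a definitive implementation.

```python
# Minimal Vertex AI sketch: upload a trained model artifact, deploy it to a
# managed endpoint, and query it. Placeholders: project, bucket, container.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register a model artifact (e.g., from a custom training job) with Vertex AI.
model = aiplatform.Model.upload(
    display_name="coaching-agent-model",  # hypothetical name
    artifact_uri="gs://my-bucket/model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy behind an autoscaling endpoint that applications can call.
endpoint = model.deploy(machine_type="n1-standard-4")

# A worker-facing "coach" application would call the endpoint like this.
prediction = endpoint.predict(instances=[[5.1, 3.5, 1.4, 0.2]])
print(prediction.predictions)
```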
My take: Last year, Siemens and Microsoft made a similar deal. Big industrial suppliers are pairing up with CSPs to accelerate the development of advanced AI-based solutions. This is how AI gets a blue-collar job and starts working for a living.
At the same time Qualcomm launched its newest mobile and automotive platforms in Hawaii last week, Bloomberg reported that Arm had decided to terminate Qualcomm’s v8 architectural license, escalating the two companies’ ongoing IP dispute. I consider this to be the nuclear option, and it seems like a very odd move considering that the two are expected to be in court in less than 60 days. I believe that this is a mistake on Arm’s part, especially since Qualcomm is one of its biggest partners, and it won’t bode well for how other vendors see Arm. Additionally, the entire RISC-V ecosystem is salivating at the prospect of having a company like Qualcomm backing its efforts, especially considering China’s appetite for RISC-V. I believe that Arm and Qualcomm are mostly fighting over egos rather than a few million dollars here or there for either company. This will hurt the ecosystem that Arm claims to be protecting.
AT&T and Verizon both say they are seeing reduced excitement around the iPhone 16 series, even with generous trade-in offers. People just aren’t sold on Apple Intelligence as a reason to upgrade, especially since Apple’s AI product hasn’t properly launched yet and won’t be fully available until next year. While Apple will absolutely continue to market these features, the reality is that consumers won’t be convinced that they are real until they are all available and functioning outside of beta. That might not be until Q2 of next year, which is why analyst Ming-Chi Kuo has said that Apple has reduced its orders of iPhone 16 by 10 million over the next few quarters.
AST SpaceMobile continues to be on a roll with the news that it has won a U.S. government contract qualifying it as a prime contractor for the Department of Defense, which enables it to win more federal contracts. Additionally, the company has successfully unfolded its first five commercial satellites, which it recently launched into LEO with SpaceX. AST SpaceMobile may soon be a viable alternative to Starlink from SpaceX, which also recently announced that it would deliver commercial direct-to-cell service with T-Mobile by the end of the year.
Cisco launched its new Ceiling Microphone Pro, designed to enhance audio quality and flexibility in meeting rooms, at WebexOne last week. The microphone uses beamforming technology to capture sound from a specific area, minimizing background noise to ensure clear audio for both in-room and remote participants. It offers unidirectional and omnidirectional modes, adapting to different room sizes and configurations. It’s designed for plug-and-play installation and integrates with Cisco’s Room and Board Series endpoints. IT can also manage it remotely through an administrator portal.
I saw this microphone firsthand at Cisco’s WebexOne conference in the Miami area. The ceiling-mounted design minimizes clutter and provides a clean aesthetic that looks very modern but not cold. Notably, this microphone is the first product to showcase Cisco’s new design language, which prioritizes sustainability with a soft, organic shape contrasted with sharp, defining lines. It’s constructed entirely of aluminum, a highly recycled and recyclable material, and even the speaker grille is a structural element, not just a cosmetic cover. Cisco also considered the manufacturing process in its design, placing a seam between components on the top surface to make assembly more forgiving. Furthermore, it’s the first Cisco product to ship with zero plastic packaging, reflecting Cisco’s increasing commitment to environmentally conscious product development.
Cisco also launched Workspace Designer, an online tool that uses Cisco’s collaboration technology to simplify the process of planning and equipping meeting rooms. Users can choose from various room sizes and layouts, experiment with different device configurations and furniture placements, and even receive recommendations on the best technology for their needs. The tool aims to reduce the complexity often associated with designing effective meeting spaces, which previously might have required extensive consultations and a lot of guesswork. Workspace Designer also provides helpful warnings and tips, such as flagging potential speaker or camera placement issues that could impact audio or video quality. This allows users to proactively address potential problems and optimize their meeting rooms for effective communication and collaboration. Cisco’s goal with all of its collaboration devices and technology is to reach what it calls “Distance Zero”—where every participant feels no distance from the other attendees, regardless of location.
Zoom has launched AI Companion 2.0, a significant upgrade to its AI assistant, with expanded capabilities for summarizing meetings and chat threads, generating content like emails and meeting agendas, and automating tasks. AI Companion 2.0 works across various Zoom products, including Team Chat, Whiteboard, Mail, and Meetings. While Zoom’s AI strategy is generally strong, it’s often underestimated. Many users may not fully grasp the sophistication of Zoom’s AI capabilities, even though they use features such as AI summaries and noise cancellation regularly—and those features are outstanding.
In my opinion, Zoom needs to better articulate the value of its AI features, even those offered for no additional charge. By clearly demonstrating the advantages of its AI-powered tools, Zoom can increase user appreciation for these capabilities and potentially drive the adoption of more advanced, paid AI features. This clearer communication is essential for Zoom to fully capitalize on its AI investments and remain competitive.
Apple has announced that this week will be full of Mac news, which I believe will be Apple’s way of releasing the M4 chip and all its variants across desktop, laptop, and mini form factors. This will also give Apple an opportunity to (try to) reassert the M4’s performance leadership over Intel, AMD, and Qualcomm—which I believe it could do, considering the M4 iPad’s thermal and power limitations. It will likely be an interesting week of cherry-picked benchmarks with questionable scaling and no labels on any graphs. Nevertheless, we’ll get plenty of talk about AI and Apple Intelligence, I’m sure.
IBM has opened its first quantum datacenter in Europe, located in Ehningen, Germany. The datacenter has two quantum computers to support the growing demand from European businesses, research institutions, and government agencies.
The establishment of this datacenter is part of IBM’s broader strategy to advance quantum computing technology and foster a robust quantum ecosystem in Europe. The datacenter will also facilitate compliance with European data sovereignty requirements, ensuring that sensitive data remains within the region. See my full writeup on Forbes for more details.
At its Oktane 2024 conference, Okta announced new capabilities tied to securing generative AI applications. GenAI is poised to reimagine consumer and enterprise applications, but it creates security risks given its use of personal and shared data, underlying algorithms and large language models, API calls, and more. To address these challenges, Okta announced a new product within its Customer Identity Cloud portfolio: Auth for GenAI.
Auth for GenAI enables developers to build next-generation AI agents and applications securely while not introducing unnecessary constraints that could stifle innovation or create a cumbersome customer experience. Okta’s ability to facilitate security by design for GenAI developers is potentially powerful, anchored by its leadership in identity and access management. See my Analyst Insight piece for more details.
Last week Qualcomm announced products in the mobile and automotive categories at its annual Snapdragon Summit. All the announced products leverage Qualcomm’s new second-generation Oryon CPU-based SoCs. These new chips significantly improve upon the first generation, delivering mind-melting performance gains north of 40% on CPU and GPU while also bringing real competition to Apple. Qualcomm is also the first vendor to hit 4 GHz on an Arm architecture for Android. The new Snapdragon 8 Elite mobile processor features the new Oryon CPU cores, as do the new Snapdragon Ride Elite and Cockpit Elite. These products set an entirely new standard for mobile and automotive compute that will heat things up against Apple—and likely find their way into the next generation of Snapdragon X Elite platforms for Windows PCs.
The world of sports technology is constantly evolving, and we knew early on that our Game Time Tech podcast and sports technology advisory practice had to stay ahead of the curve. These days, it’s not just the tech conferences buzzing about sports technology—I’m also starting to see dedicated sports tech tracks emerge at finance conferences. This signals a shift in how these innovations are perceived, especially as operations teams demonstrate how technology can drive operational efficiencies and contribute directly to the bottom line. With CFOs increasingly recognizing the financial benefits, things are about to get even more interesting. The days of marketing teams lobbying for sports sponsorship dollars or IT teams justifying tech spend might be over. I’m excited to see how finance and accounting teams drive this next wave of sports innovation as the ROI of these technologies becomes increasingly apparent. Stay tuned for a finance-focused GTT pod coming soon!
Meta’s new Meta Quest back-of-shirt partnership with Wrexham AFC—both the men’s and women’s teams—should bring some exciting opportunities for fans. Through virtual reality, supporters can enjoy virtual stadium tours, behind-the-scenes views, and interactive gameplay. This collaboration opens new avenues for fans to engage and connect online and potentially in person using Meta’s cutting-edge VR technology.
“We’re so excited to welcome Meta Quest as our back-of-shirt sponsor,” said Wrexham AFC co-chairmen Rob McElhenney and Ryan Reynolds. “Meta Quest allows you to immerse yourself in new worlds and experiences and is all about connection—something that resonates with us at Wrexham AFC.”
Meta’s involvement further enhances Wrexham’s visibility, aligning with the club’s growing fanbase following the popular Welcome to Wrexham series on FX. Wrexham supporters may also get to benefit from special offers, such as discounts on Meta Quest headsets.
AT&T and T-Mobile published their respective 3Q 2024 earnings this week. AT&T continues to build momentum for its fiber franchise with an impressive 19 consecutive quarters of 200,000 or more fiber net adds. Broadband continues to be a bright spot for the company, balancing flat mobility top-line revenue. I also expect that AT&T’s relationship with AST SpaceMobile will facilitate monetization of new rural mobility applications in agriculture technology as that commercial low earth orbit (LEO) satellite constellation matures.
T-Mobile continues its impressive financial performance, buoyed by significant net income growth. A key contributor is its 5G fixed wireless access business. I also believe the company will enjoy continued revenue upside in broadband services as it readies an aggressive push with fiber.
Game Time Tech Hosts Melody, Robert & Anshel Talk the Hottest Topics in Sports Tech This Week!
https://moorinsightsstrategy.com/game-time-tech/game-time-tech-hosts-melody-robert-anshel-talk-the-hottest-topics-in-sports-tech-this-week/
Sat, 26 Oct 2024 18:27:55 +0000
https://moorinsightsstrategy.com/?post_type=gametimetech&p=43711
Melody Brue, Robert Kramer, and Anshel Sag are back for the Game Time Tech Pod to discuss the hottest topics in sports tech.
Melody Brue, Robert Kramer, and Anshel Sag are back for the Game Time Tech Pod to discuss the hottest topics in sports tech: money ball, marketable athletes, plus Meta, Lenovo, Qualcomm, and T-Mobile’s latest partnerships.
Join us for insights on the dollars and sense behind technology in the sports industry!
The Game Time Tech Podcast is a Moor Insights & Strategy Six Five Media production.
Disclaimer: This show is for information and entertainment purposes only. While we may discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.
Cadence Launches Fem.AI To Close The AI Gender Gap
https://moorinsightsstrategy.com/cadence-launches-fem-ai-to-close-the-ai-gender-gap/
Fri, 25 Oct 2024 20:12:53 +0000
https://moorinsightsstrategy.com/?p=44377
The Fem.AI initiative created by Cadence and joined by Nvidia, Workday and other companies aims to bring more women into technical and leadership roles in AI.
The Fem.AI initiative created by Cadence and joined by Nvidia, Workday and others aims to bring more women into technical and leadership roles in AI. (Image: Cadence Design Systems)
Cadence Design Systems recently launched Fem.AI, an initiative to address the persistent underrepresentation of women in artificial intelligence. Cadence has committed $20 million over the next decade to the program. The program aims to create a more equitable tech sector by focusing on education, career pathways and leadership opportunities for women in AI.
The Fem.AI initiative comes at a time when the AI field is experiencing rapid growth yet continues to struggle with a significant gender gap. Cadence’s initial research indicates several “leakage points” in the pipeline where women are disproportionately lost. These include low rates of women in North America graduating with computer science degrees—only 20% in 2021—and difficulties securing AI-related jobs after graduation. Furthermore, 56% of women leave the tech industry before reaching the midpoint of their careers.
“Achieving true innovation in the current AI revolution requires the full participation of talented individuals, and too many women are slipping through the cracks of the AI pipeline,” said Dr. Anirudh Devgan, president and CEO of Cadence. “Cadence is committed to shrinking the gender gap in AI through Fem.AI, and we hope to be a catalyst within the tech industry by calling for gender equity in the AI workforce and beyond.”
Nicole Johnson, president of the Cadence Giving Foundation, moderates a panel at the Fem.AI Summit in Menlo Park, California, on October 1, 2024. (Photo by Melody Brue)
Cadence Kicks Off National Women In AI Month At Fem.AI Summit
To further solidify its commitment to gender equity in AI, Cadence hosted the inaugural Fem.AI Summit on October 1 in Menlo Park, California, coinciding with National Women in AI Month. I attended this event, which brought together leaders from various sectors, including AI experts such as ethicist Dr. Joy Buolamwini and other prominent figures like Dr. Chelsea Clinton, to foster cross-sector collaboration and address the critical need for greater gender diversity in the field.
Throughout the event and in panel discussions, the overall sentiment was clear: diverse teams are better equipped to identify and mitigate biases in AI systems, leading to fairer outcomes. Many of my conversations and observations at the event underscored the belief that diverse perspectives foster more robust AI solutions and inclusive development practices that ensure AI technologies serve all demographics. The importance of AI literacy for policymakers and developers, along with the need for international standards to ensure responsible AI development, were also key themes discussed. Ultimately, the Fem.AI Summit served as a powerful call to action for the tech industry and beyond, with a clear message: inclusivity in AI is crucial for both innovation and ethics.
Cadence’s initiative bolsters efforts such as UNESCO’s Women4Ethical AI, which strives to ensure women’s full participation in the development and use of technology. UNESCO’s research shows that women and girls are 25% less likely than men to have basic digital skills, four times less likely to know programming and 13 times less likely to hold ICT patents. This disparity shows up in the AI field, with women occupying just 20% of technical roles in machine learning, 12% of global AI research positions and 6% of software development jobs.
Despite efforts to improve gender balance in the tech industry, progress remains slow. Fem.AI aims to combat these challenges through a multipronged approach. The initiative will focus on building partnerships with organizations that promote women in STEM, providing resources (including mentorships and returnships) to support women in AI careers and funding scholarships for women pursuing AI-related education. Cadence also plans to leverage its existing academic network to create further opportunities for women in the field.
Propelling An Industrywide Movement With The Fem.AI Alliance
To maximize impact, Cadence has actively encouraged tech industry involvement via the Fem.AI Alliance. Nvidia, Workday, Equinix and NetApp are among the first companies to join, signaling a potential trend of industrywide commitment to addressing gender imbalance in AI. This collaborative approach is crucial in driving meaningful change and creating a more inclusive environment.
The potential consequences of failing to address the gender gap in AI also cannot be overlooked. Cadence’s senior vice president of global human resources, Tina Jones, warned, “If women continue to be left out, the effects will hinder innovation and exacerbate current labor shortages in the high-tech sector.” Jones emphasized that Fem.AI is not just a standalone initiative but rather “part of Cadence’s ongoing commitment to propelling women in STEM and other inclusion initiatives at the core of the company.” She reiterated the company’s belief that “innovation thrives when diverse perspectives come together.”
Taking The Fem.AI Movement Forward
Fem.AI represents a significant step towards addressing the gender gap in AI, and I found the Fem.AI Summit inspiring. With a substantial financial commitment, strategic partnerships and a focus on industry collaboration, the initiative has the potential to create meaningful change. By investing in programs that support women throughout their AI careers, from college education all the way to leadership roles, Cadence is putting its money where its mouth is to increase female representation and foster a more inclusive AI landscape. The same goes for its partner organizations, and broad industry participation will be crucial to ensure long-term success as Fem.AI’s programs address the complex challenges women face in this field. The initiative’s commitment to ongoing assessment and adaptation will also be key to maximizing its lasting impact.
This is more than just an initiative; it’s a spark that could ignite a much-needed transformation in the AI field. I eagerly anticipate witnessing the progress of Fem.AI and the ripple effect it creates, inspiring other industry leaders to embrace the power of inclusivity and unlock the full potential of women in AI and beyond.
MI&S Weekly Analyst Insights — Week Ending October 18, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-18-2024/
Mon, 21 Oct 2024 16:47:42 +0000
https://moorinsightsstrategy.com/?p=43554
MI&S Weekly Analyst Insights — Week Ending October 18, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our Weekly Analyst Insights roundup, which features key insights our analysts have developed based on the past week’s events.
Last week the CEOs of AMD (Lisa Su, right) and Intel (Pat Gelsinger, left) did something unexpected: they joined forces to launch the x86 Ecosystem Advisory Group. The new entity aims to boost interoperability, smooth out integration, and generally simplify life for developers, ISVs, OS makers, and OEMs in the x86 space. Our own CEO and chief analyst, Patrick Moorhead, has known Gelsinger and Su for years. He had the opportunity to conduct a 1-on-2 interview with the two of them a day ahead of the public announcement, which led to this full-length analysis on Forbes.
The busy autumn conference season continues for our team. Last week, Melody was at AdobeMAX in Miami, and Patrick, Matt, Paul, and Anshel were in Bellevue, Washington, for Lenovo’s Global Analyst Summit & Tech World. Will was at BlackBerry’s Analyst Day, and Matt, Robert, and Jason participated in IBM’s Analyst Day—all in the Big Apple.
Over the last week, our analysts have been quoted in numerous top-tier international publications with our thoughts on AMD, Nvidia, Intel, Apple Vision Pro, Pure Storage, UL Solutions, the smartphone market, and of course, chips and AI.
Last week I attended the IBM Analyst Summit in New York. As typical at these events, much of what was covered was under embargo. The good news is that the embargo will be lifted this week in concert with the IBM TechXChange event in Las Vegas. However, I was pleasantly surprised to hear what was going on in IBM Consulting. This was in terms of both the nature of their work and how they are delivering projects for clients. I will be doing more in-depth research on this topic in the coming weeks.
Over the past couple of weeks, I have gotten a great deal of feedback and dialogue regarding my recent Forbes column on Agents. The conversations have led to two very interesting pieces of feedback. First, there seems to be a bifurcation between the agent development method and the focus of tooling vendors. To wit, low- and no-code tool makers are pumping out hundreds of general-purpose agents to help knowledge workers empower themselves. This is exemplified by recent announcements from Oracle, Salesforce, ServiceNow, and others. Meanwhile, the vendors more aligned to pro-code tools are more focused on specific problems. A good example of this is AWS with application modernization. AWS is not alone, as other pro-code vendors are lining up around more specific use cases and will unveil their visions this year. This recent turn of events is leading to the second point of feedback, which is how customers should approach starting to use agents. This seems like a great piece of future research, so stay tuned.
On a more personal note, I am experimenting with how I do my research. I have always been a note-taker, carrying my pads, pens, and pencils wherever I go. The physical writing process helps my brain cement ideas in place, and my constant doodling helps me see patterns in my research. I have never been able to type my notes, and while Patrick Moorhead (among others) is a big fan of meeting transcription, I think I still need the physical act of writing things down.
However, after decades of using this method, I might be making a change—or at least an evolution. After trying to go digital for years, AI-empowered notebooks may be the catalyst to push me over the top. The challenge of using pen and paper is retention over longer periods. Locating things in old notebooks is a pain, and not available when I am on the road. The idea of adding LLMs to digital notebooks means I can combine my notes with other artifacts and start to really dig into areas of interest and be more efficient as I develop research. I must be clear: I am not using LLMs to do my writing in any way. But they are a good way to organize thoughts and possibly prompt me to look at some areas in a more nuanced way. So far, so good on making notes; there is a learning curve, but I am starting to get it. I am still testing out what will become my digital notebook, and I’ll keep you posted here. My setup is in the “New Gear” section below.
Dario Amodei, CEO of Anthropic, has written a very long (but interesting) paper on what he believes will be the ultimate gifts of AI. His idea is that after powerful AI is developed, we will, within a few years, make all the progress in biology and medicine that we would have made in the entire 21st century without AI. Amodei said, “I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.” Here’s a list of what he believes AI-enabled biology and medicine will give us in five to 10 years that would otherwise take 50 to 100 years without AI:
Reliable prevention and treatment of nearly all infectious disease
Elimination of most cancer
Very effective prevention and effective cures for genetic disease
Prevention of Alzheimer’s
Improved treatment of most other ailments
Biological freedom (To explain this, he wrote, “I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control.”)
Doubling of the human lifespan
The list seems doable based on the work being done at the intersection of AI, medicine, and healthcare. If you want to read the entire paper, you’ll find it here.
Robots seem to be a hot topic now, probably thanks to Elon Musk. Boston Dynamics and Toyota Research Institute announced a robotics research partnership to combine their expertise in AI and robotics. The partnership plans to accelerate the development of humanoids by integrating TRI’s large behavior models with Boston Dynamics’ Atlas robots. The robots will be the platform for implementing TRI’s advanced AI systems. TRI has expertise in computer vision, and LLM training will also be important for developing a multitasking foundation model for robotic manipulation. Once we develop AGI and incorporate it into a humanoid, we might be approaching the danger zone. Then again, that might be a lot of fun.
At the GITEX Global conference in Dubai, Avaya showcased its latest AI-powered solutions aimed at enhancing customer experience and streamlining operations. These included “Amna,” an AI virtual assistant developed in partnership with Sestek and Cognigy for Dubai Police to handle public inquiries. Avaya also introduced a “Virtual Operations Manager” concept, demonstrating how AI can analyze contact center data and provide actionable insights to improve performance and customer journeys. Furthermore, Avaya highlighted a real-time translation solution created with Transcom and Sabio, leveraging Avaya Experience Platform’s open APIs to enable agents to communicate with customers in more than 100 languages. This solution aims to improve scalability and reduce business costs by up to 65% in specific markets and use cases.
I attended GITEX with Avaya last year, and I’ve seen firsthand how the company leverages this event to highlight its commitment to innovation in the CX space. While I couldn’t be there in person this year, I have observed how these announcements reinforce Avaya’s focus on delivering solutions that address real-world challenges. It’s particularly noteworthy to see the company’s emphasis on practical applications of AI, such as the virtual assistant for Dubai Police and the real-time translation solution, which have the potential to significantly impact customer service and operational efficiency.
Although I had to sit in my hotel room and watch the Lenovo Tech World 2024 livestream due to bronchitis, there was a lot of news to digest from this company, which I believe doesn’t get enough credit for its AI programs and the other strong work it has done in the market. One of those areas is an enabling technology: Neptune liquid cooling. Although this technology really has its roots in the IBM era, Lenovo has done a lot to advance it—and indeed has been on the forefront of the liquid cooling trend. In this vein, Lenovo made two major announcements at Tech World:
The new Lenovo N1380 Neptune chassis is designed for 100% heat removal on a greater than 100kW rack consumption, without any specialized air conditioning.
The ThinkSystem SC777 v4 Neptune server supports the NVIDIA Blackwell GPU and platform.
It used to be that a 15kW power budget for a whole server rack was high. Thanks to the accelerated adoption of higher-power-consuming CPUs and GPUs, it is not unusual anymore to see 15kW being consumed by a single server. Liquid cooling is quickly moving from niche usage in the datacenter to much broader adoption. In response, Lenovo has played to its strength in liquid cooling quite effectively.
Speaking of liquid cooling, infrastructure giant Schneider just secured a controlling interest in liquid cooling player Motivair for $850 million. Schneider does a lot in the datacenter market—electrical distribution, UPS kits, racks, enclosures, etc. This investment is a natural and smart expansion for the company. What makes it more interesting to me is the amount of money pouring into the liquid cooling market—and the innovation that is coming out of these startups.
One of the more interesting liquid cooling companies I’ve seen is JetCool out of Maynard, Massachusetts. This company, founded by a scientist from the MIT Lincoln Laboratory, has been securing more and more partnerships and recently secured tens of millions in investment from Bosch.
To think of liquid cooling as mere plumbing is silly. It has moved from low-tech to high-tech seemingly overnight as it has gone from pumping fluids to advanced physics. Look for a research note from me on this topic in the near future.
Are we living in the era of real-life science fiction? When Oracle CEO Larry Ellison talked about using nuclear power to light up his datacenters, people kind of laughed and thought of it as “Larry being Larry.” Fast-forward a month or two, and Microsoft wants to reactivate Three Mile Island to power its Azure datacenters, while Google and AWS have committed to acquiring and deploying small modular reactors (SMRs) that can deliver up to 300 megawatts of power per datacenter.
The power crunch is very real—and very limiting. If SMRs can be deployed and managed properly, they could deliver highly reliable and very clean energy for next-generation datacenters. Hopefully, U.S. regulators will come to their collective senses and catch up with the rest of the world in enabling nuclear power.
Adobe announced GenStudio for Performance Marketing at its AdobeMAX conference in Miami last week. This generative AI-powered application aims to help brands and agencies accelerate the creation and delivery of personalized marketing campaigns, allowing marketers to quickly generate variations of on-brand content for channels such as paid social, display ads, and e-mail. The platform integrates with Adobe Experience Cloud and with popular advertising platforms such as Google, Meta, and TikTok, offering performance insights and streamlined workflows.
This was just one of several announcements at AdobeMAX, including new Adobe Express integrations with popular enterprise apps such as Box and Miro. I’ll soon provide a more detailed analysis of these announcements and my experience at the conference.
Cloudera’s partnership with Snowflake provides enterprises with an open, unified hybrid data lakehouse powered by Apache Iceberg. The goal of this collaboration is to enable enterprises to consolidate their data, analytics, and AI workloads into a single platform, eliminating data silos. This combination could give enterprises a single source of truth for their data, enabling faster queries, real-time insights, and streamlined workflows while maintaining data integrity. “By extending our open data lakehouse capabilities through Apache Iceberg to Snowflake, we’re enabling our customers to not only optimize their data workflows but also unlock new opportunities for innovation, efficiency, and growth,” said Abhas Ricky, chief strategy officer of Cloudera.
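To illustrate why an open table format is the linchpin of that pitch, below is a minimal PySpark sketch of the Apache Iceberg idea: a table written once whose data and snapshot metadata are readable by any Iceberg-aware engine, whether that engine belongs to Cloudera, Snowflake, or plain Spark. The catalog name, warehouse path, and table are my own illustrative assumptions.

```python
# Minimal Apache Iceberg sketch in PySpark. Requires the matching
# iceberg-spark-runtime package on the Spark classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Write once: any Iceberg-aware engine sees the same table and metadata.
spark.sql(
    "CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, payload STRING) "
    "USING iceberg"
)
spark.sql("INSERT INTO demo.db.events VALUES (1, 'hello'), (2, 'world')")

# Read anywhere: snapshots give time travel and consistent cross-engine reads.
spark.sql("SELECT * FROM demo.db.events").show()
spark.sql("SELECT * FROM demo.db.events.snapshots").show()
```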
Last week I attended IBM’s Analyst Summit in New York, which provided valuable insights into IBM’s vision for the future of data and AI, with an emphasis on accessing enterprise data. Beyond exploring IBM’s impressive new offices, I had the opportunity to hear from key leaders, starting with CEO Arvind Krishna, who outlined IBM’s strategy for AI adoption, sustainability, partnerships, and data management. SVP of software and CCO Rob Thomas detailed IBM’s software approach to leveraging data and AI, while SVP and director of IBM Research Dr. Dario Gil highlighted innovative AI research from IBM labs. I was especially interested to learn more about how IBM’s consulting services are helping clients navigate digital transformation and customer readiness, get the most out of structured and unstructured enterprise data using IBM’s data fabric solutions, and adopt sustainable data practices in anticipation of future regulations.
AMD’s recent announcements of its new DPU and Ultra Ethernet Consortium-ready NIC represent a one-two punch supporting front- and back-end networking that’s optimized for AI workloads. The AMD Pensando Salina DPU marries high-performance network interconnect capabilities and acceleration engines aimed at providing critical offload to improve AI and ML functions. AMD claims that Salina will provide a twofold improvement in overall performance over its prior DPU generations, and if it delivers on this promise, it could further the company’s design wins with public cloud service providers eager to capitalize on the AI gold rush.
The Pensando Pollara 400 NIC is purpose-built for AI workloads, with an architecture based on the latest version of RDMA that can directly connect to host memory without CPU intervention. AMD’s new NIC design could position it favorably relative to Broadcom 400G Thor, especially since the company is the first out of the gate with a design optimized for UEC performance. Both the AMD Pensando Salina DPU and AMD Pensando Pollara 400 NIC are currently sampling with cloud service and infrastructure providers; commercial shipments are expected in the first half of 2025.
SAP and UiPath have formed a partnership to integrate the UiPath enterprise automation platform with SAP Build Process Automation. Among other benefits, this move aims to enable customers to automate more of their business processes, plus it should make things easier for enterprises that are transitioning to SAP S/4HANA Cloud. UiPath touts the collaboration, offered as an SAP Solution Extension starting this month, as providing a holistic view of process automation across both SAP and non-SAP environments to enhance operations and efficiencies.
This is an option worth exploring for enterprises transitioning to SAP S/4HANA Cloud and automating processes across their IT landscape. Both partners have a focus on enabling enterprises to carry out successful business transformation projects while improving data management and reducing risks.
NXP introduced the S32J family of 80 Gbps Ethernet switches, which share a common switch core (NETC) with the NXP S32 automotive processing platform. Designed for high-speed in-vehicle networks, the switch integrates with NXP’s CoreRide platform to provide production-grade network solutions pre-integrated with software and tooling.
Sonatus, a leader in software-defined vehicle technologies, won Autotech Breakthrough’s “Connected Vehicle Innovation of the Year” award for the Sonatus Collector data collection system. Only a fraction of the massive amount of vehicle-generated data is relevant for optimizing customer experiences, improving quality, managing fleets, and ensuring safety. The Collector is a policy-based system that reduces data processing overhead and upload costs by gathering, storing, and uploading only targeted information. This solution is truly innovative, and other industrial applications should use similar design patterns.
Blecon, a new startup out of Cambridge, England, punched above its weight class last week at Embedded World NA with a simple middleware solution that connects Bluetooth Low Energy devices to cloud services—without pairing. The company just closed a $4.6 million seed round led by U.K.-based MMC Ventures. I like simple connectivity schemes, and this is a good one.
Agtonomy closed its $32.8 million Series A round, positioning the company to accelerate AI-driven agriculture automation and expand into autonomous industrial equipment. Agtonomy’s Sonatus-like business model combines advanced software with OEM partnerships to rapidly develop various autonomous, software-defined offroad products.
In recent conversations with various tech vendors, it’s become clear to me that while enterprises are eager to adopt AI, they face many of the same key challenges. IBM recently highlighted five truths about AI adoption, emphasizing the need for:
Targeted AI solutions
Hybrid cloud flexibility
Robust governance
A focus on value-driven use cases
High-quality data
These points resonate with my own observations and are further validated by a recent Cisco study that revealed a disconnect between what tech companies think their customers need and the customers’ actual challenges and needs.
This misalignment is particularly evident in infrastructure scalability, data security, and access to skilled talent. While partners are understandably enthusiastic about the growth potential of the AI market, they need to better understand and address these customer pain points to capitalize on this opportunity. It’s not just about selling the “shiny new object” of AI, but about providing practical solutions that deliver real business value and foster trust in AI systems.
Lenovo held its Tech World 2024 event in conjunction with its global analyst conference. At the event, the company had a Who’s Who of tech executives on stage, including Intel’s Pat Gelsinger, AMD’s Lisa Su, and NVIDIA’s Jensen Huang. (Qualcomm’s Cristiano Amon and Microsoft’s Satya Nadella joined via video.) It was absolutely a tour de force for Lenovo to remind its partners of the company’s influence as the world’s undisputed #1 PC maker. While Lenovo didn’t announce any new consumer products, it did show off many concepts and prototypes. It also announced its foray into automotive electronics in partnership with NVIDIA and Qualcomm.
Amazon overhauled its entire Kindle lineup with new and improved models and the first-ever color Kindle, which it claims will operate in full color with zero impact on battery life. I am glad to see a color Kindle because it improves the reading experience for graphic novels. There are also a bunch of updated models of Kindle Paperwhite, Kindle Scribe and Kindle with new colors and faster page loading. These new Kindles have phased out the previous generation, including the last model with physical buttons.
Quantum Computing Inc. has won its fifth project from NASA. The company is developing quantum remote sensing for space-based lidar imaging. By using QCI’s technology, NASA will lower the cost of lidar missions. This is an important step for QCI that allows it to provide an innovative quantum solution using remote sensing for climate change investigations. Two active NASA climate change projects are (1) ICESat-2, which uses lidar to measure thickness changes in polar ice sheets and sea ice, and (2) GEDI, a test project on the International Space Station that measures forests around the world.
BlackBerry recently held its investor day at the New York Stock Exchange. The company has made management changes and divided its cybersecurity and IoT businesses into what it calls “virtually autonomous business units”—an unconventional move. However, the strategy is yielding significant operational cost savings, as well as newfound visibility for optimizing investment into the more profitable solutions within both portfolios. Time will tell if BlackBerry can improve shareholder value. However, its QNX IoT platform continues to be a bright spot, especially in automotive, as evidenced by more than 100 design wins over the last 18 months, coupled with support commitments from MediaTek, NVIDIA, NXP, Qualcomm, and other silicon providers.
At Lenovo’s Tech World 2024 event, not only did AMD and Intel announce their joint effort to create the x86 Ecosystem Advisory Group (described in the introduction to this weekly update), but they appeared together in photos with our CEO Patrick Moorhead after recording an episode of Moorhead’s podcast. Both chip CEOs spoke highly of the partnership. The advisory group includes a long list of very influential companies, and I believe it serves as a hedge against the growth of Arm in both client and server. Regardless, nobody could have imagined the day when Intel and AMD would really collaborate outside industry standards groups.
Intel CEO Pat Gelsinger also came on stage at Lenovo Tech World to show the world one of the first Panther Lake chips. Panther Lake, which is expected to ship at the end of 2025, is the first Intel product to leverage the company’s 18A process node and feature all of its latest CPU, GPU, and NPU cores. Many people are quite pleased with the just-launched Lunar Lake processor, which shares many design elements with Panther Lake.
T-Mobile’s recent partnership with McLaren Racing is a strategic move aimed at connecting with business decision-makers, who make up 54% of the U.S. Formula 1 fanbase. This collaboration goes beyond branding on McLaren’s race cars and garage headsets; it’s about leveraging a shared passion for technology and performance to showcase T-Mobile’s 5G business solutions.
T-Mobile CMO Mo Katibeh highlights the partnership’s focus on data-driven decision-making and innovation, mirroring the real-time data analysis that’s crucial to both F1 racing and modern business operations. By aligning with McLaren, T-Mobile aims to tell a compelling story that resonates with business leaders and positions it as a critical player in the future of 5G connectivity. This partnership should serve as a platform for showcasing how T-Mobile’s advanced 5G network can enhance business operations and drive innovation.
The partnership is just one example of the growing trend of technology companies investing in F1 sponsorships. I look forward to discussing T-Mobile and other prominent partnerships, such as Google’s Pixel collaboration with McLaren, with Anshel Sag and Robert Kramer on an upcoming Game Time Tech Pod. We’ll investigate how these technologies impact the sport and the vendors’ bottom line.
Globant has been given the #6 spot on Fortune’s “Change the World” list for its work on social and environmental issues. The company supports programs that bring cleaner cookstoves to Peru and help farmers in India switch to green energy. That’s making a real difference in those communities while shrinking carbon footprints. I have followed Globant’s sustainability journey and am pleased to see its efforts acknowledged on a global platform. This recognition underscores the positive impact that technology companies can have when they prioritize social and environmental responsibility alongside business growth.
The FCC has passed a series of new rules, one of which says that all hearing aids must be Bluetooth-compatible in the future. The FCC also says that all smartphones must be compatible with hearing aids for accessibility reasons. Manufacturers will have a couple of years to comply with these new rules, which I think are a step in the right direction, especially now that Apple is bringing hearing aid support to its AirPods Pro 2.
The State Fair of Texas is another example of an event that needs to embrace modern connectivity improvements, including private 5G networking. During the most recent Texas-Oklahoma football game in Dallas, concession ticket kiosks were inoperable and wireless point-of-sale terminals used around the Cotton Bowl facility malfunctioned. All of this led to a less-than-desirable experience for football fans and Fair attendees, and the venue lost significant revenue as a result. Certainly, there are challenges for wireless network propagation at the site given the age, construction, and lack of fiber backhaul at the Fair Park and Cotton Bowl venues. However, my personal experience highlights an opportunity for management to consider a private 5G network deployment to not only delight attendees but also maximize revenue potential. The cost of deploying improved connectivity infrastructure would be significant, but an innovative solution such as T-Mobile’s recently announced 5G on Demand offering could be a cost-effective consideration.
Six Five On the Road at AdobeMAX: Adobe Introduces GenStudio for Performance Marketing
https://moorinsightsstrategy.com/the-six-five/six-five-on-the-road-at-adobemax-adobe-introduces-genstudio-for-performance-marketing/
Mon, 21 Oct 2024 14:53:58 +0000
https://moorinsightsstrategy.com/?post_type=six_five&p=43625
Vikram Viswanathan, Head of Product Marketing & GTM at Adobe, joins Melody Brue to share his insights on Adobe's GenStudio. This new advancement aims to redefine performance marketing through innovative tools and strategies.
Adobe is making strides for performance marketers. Six Five On the Road makes a stop at AdobeMAX, where host Melody Brue is joined by Adobe’s Vikram Viswanathan, Head of Product Marketing & GTM, for a conversation on Adobe’s introduction of GenStudio, a new product aimed at revolutionizing performance marketing.
Their discussion covers:
The inception and vision behind GenStudio
How GenStudio stands out in the market of performance marketing tools
The specific benefits and features of GenStudio for marketers
Vikram’s insights on the future of digital marketing and Adobe’s role in it
Live responses to audience questions about GenStudio and performance marketing
Disclaimer: Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Melody Brue: Hi, welcome to a very special edition of Six Five On the Road. I am Melody Brue with Moor Insights & Strategy. I am here at Adobe MAX. Adobe MAX is the creative conference. And I’m here with Vik … You tell me how to pronounce your name because I’m afraid I’m going to ruin it.
Vikram Viswanathan: No. Vik Viswanathan.
Melody Brue: Vik Viswanathan. It’s such a beautiful name but I didn’t want to ruin it for you.
Vikram Viswanathan: That’s okay.
Melody Brue: You have a really big special announcement here. You announced GenStudio for Performance Marketing. Tell me about that. And also, let’s talk about the problem that it solves for marketers and creative teams.
Vikram Viswanathan: No, absolutely. Thanks, Melody. We are super excited about the launch we had yesterday at MAX. And GenStudio for Performance Marketing is an application that’s part of the Adobe GenStudio solution. For those of you who’ve been part of Adobe’s journey on content supply chain, we’ve been at the forefront of trying to solve content supply chain problems for our customers. GenStudio for Performance Marketing is the first app which really goes end to end from a workflow standpoint trying to solve it. What it really tries to get at is creating content for personalization. You’ve been a CMO; you know that is a problem which is just getting bigger and bigger. You have more channels to put your content on, you want to personalize to more audiences. And all of that’s creating a massive load on the creative teams.
And the creative teams’ budgets aren’t getting bigger, so this demand is causing them to not be able to meet what marketers want today. What ends up happening is marketers are either not able to use the right content they want to personalize for their audiences or they just pick a few segments they go after. But I think with the advent of generative AI we have the opportunity to do things differently. And GenStudio for Performance Marketing is really Adobe’s way of creating an end-to-end workflow that brings both creatives and marketers together to create content and content variations.
Melody Brue: You talk about creatives and marketers, and it sort of allows marketers to be more creative. Because I think there’s a little bit of a misunderstanding in some ways that marketers aren’t creative or aren’t as creative as what we would consider the real creatives. So it allows non-creatives, or not the traditional creatives, to be more creative, right?
Vikram Viswanathan: Yeah. That’s an interesting one, right? In the scenario I talked about creating a ton of variations, what ends up happening is the creatives spend so much time on making edits and resizing for different formats and different channels and … In fact, we did a survey of our creatives, and we found our creative customers spent up to 40% of their time just doing edits and resizing. That’s not why they got into the creative field. At Adobe MAX there’s so much excitement about the creative inspiration. That gets lost, right? This gives time back to the creative because we’re using AI to do some of this stuff which took so much time from them. And we are allowing the marketers to leverage the AI part but also be creative within guardrails so that we can create on-brand content. Because for GenStudio for Performance Marketing, at the foundation we want everything generated to be absolutely on-brand, high performing, right, because this is for performance marketers. We want it to be personalized for all the different audiences you’re trying to meet. And we want it to be compliant for your industries. Every industry has a set of compliance needs before content gets pushed out into marketing.
Melody Brue: So you talked about those guardrails. And that is, I think, from a compliance standpoint but also from a creative standpoint. Controls and guardrails, let’s talk a little bit more about that and why.
Vikram Viswanathan: Yeah, absolutely. I think that’s very important to our enterprise customers, right? AI gives the power for everyone to be able to imagine and create things. That’s an awesome thing. And at MAX we saw all the innovations that … How people can do stuff. But you want to make sure when you take that power beyond just your studio team which used to be … Which is very aware, and thoughtful, and conscious of your brand and you start letting people play with it, you want to have the controls and the guardrails in place so that your brand doesn’t get tarnished. And especially for our large customers, brand is so important. It’s such a big part of the ethos of the company. I mean, you can see at MAX, right, everything we have here is on-brand. There’s a theme, we are in Miami. It’s a very carefully constructed brand.
And so what we do in GenStudio for Performance Marketing is we provide those guardrails in multiple ways. First, all of the brand guidelines, which our customers have, are loaded up in your instance, the customer’s instance. If you have a document of all your brand guidelines, GenStudio for Performance Marketing can read that and understand what it means to be on-brand. Second thing we do is … When the marketer is generating the content, generating the variations, we read those brand guidelines. And the model is actually coming up with results based on those guidelines. So it’s guided to provide stuff on-brand. Once the content is generated we use the same guidelines and run a check against that to see, did you meet these guidelines. And we give a score for each piece of content. You saw the demo on that. Which is actually great. It’s great feedback because marketers sometimes are … They take a piece of content and they’re reusing it. They know it’s the creative content from studio but they’re like “Did I mess it up? Did I make changes? And is the copy really going to stand?”
And now we give you a score to give you the confidence to say, “Hey, this is 90% meeting all the guidelines.” And if it doesn’t, we tell you where it falls apart. So that’s the second way in which we do it. And the third most important one is, you can’t publish anything until you get the right approvals in place. It’s the creative who signs off. It goes back to the creative, every piece of content before publishing, because we have review and approval set up in the product. And the marketer pushes out for a review. And once the creative looks at it, approves it, they can do it in the product, you get an email, and then you’re ready to publish. And you can activate on the channels, we talked about at the launch, as well.
Melody Brue: So you’ve had this in private beta for a couple of months now. What’s the reaction been? What’s the feedback been from those customers?
Vikram Viswanathan: I’d say two things. We’ve been in beta for a little over three months now and we’ve had about 18 customers. It’s been fantastic. We’re very thankful for our beta customers helping us design the product in a way. And I would actually throw Adobe in the mix as well. This is definitely a unique way in which we built the product. Our marketing team co-built the product with us because they told us what they wanted to see, they told us what works for them, what didn’t. And the biggest testament to that is they’re actually using this product live, right? They’re using it for their production workflows. This is a new way of doing things. The fact that you’re taking away some of the mundane, repetitive tasks out of the equation using AI to help with that. And you’re getting marketers and creatives to have a handshake and have a common language in how they operate. Who better to do that than Adobe, given our creative heritage and our Experience Cloud presence?
You need to take small steps, right? What we’ve seen from our customers is to pick the use cases which you want to start with, test it out, right, scale it up, and then start adding the other use cases. When you take Adobe, as an example, we’ve seen great results, right? Our own marketing team, when they used it, they found that they were able … For example, emails. Emails are one of the asset types which you generate within GenStudio for Performance Marketing. They were able to get a 10% bump in open rates and a 57% jump in click-through rates, which is great because … It’s not enough to just create a lot of content, right? For performance marketers especially, purely content for content’s sake is not sufficient. So we heard that from our customers too. They wanted to make sure that we have the rich content insights flow back into the creation, and that’s part of what we’re building into the product as well.
Melody Brue: I think that’s really interesting, it’s Adobe on Adobe. I’ve heard that a lot from your product teams about … Some things were actually born out of an internal need and then they said, “Well, this is something that our customers would really” … “Could really use.” But it was really developed internally. And then realizing that well, the world needs this. And I think that’s a really great way to introduce a product. Some people call that eating your own dog food or drinking your own champagne.
Vikram Viswanathan: Champagne sounds better.
Melody Brue: Champagne sounds better. But sometimes it’s dog food, right, and then you make it into champagne I guess.
Vikram Viswanathan: That’s right.
Melody Brue: But you also announced some really interesting partnerships, in a way to get this content out, like Meta and Google. Explain how that ecosystem works and why that-
Vikram Viswanathan: Absolutely. The channels we support are starting with email, display ads, banner. It’s paid media, right? And social media. Paid social media is a big one. And we want to be able to push into all those channels. So we entered into partnerships to make sure that when the content gets created you can directly activate it from the product. And so that’s a big part of those partnerships. We are very appreciative of our partners there. And I think our ecosystem goes even beyond that, right? There is an activation part to it. But also there’s a broad ecosystem of our agency and integration partners who we expect to build with and continue working on this with. Because again, we’re changing the way … How marketing gets done, and we can’t do it alone. Customers need to be thinking about their marketing processes differently. And also we need the help of our agency and system integration partners to help fill in the gaps as well, because this will require a village to change how we do marketing differently.
Melody Brue: So what’s next for GenStudio for Performance Marketing? What’s on the roadmap? What’s your vision, the big thing?
Vikram Viswanathan: No, we are very excited. Part of being at MAX is you get rich feedback from customers. The launch has been less than 48 hours, but we have had tons of good customer conversations, and those influence our product roadmap, right? We want to build stuff which customers see value in. And I think we already hear some great feedback on being able to … One of the big pieces of GenStudio for Performance Marketing is the rich content insights, which we can generate by pulling in the performance data for all the content and campaigns out there. We want to be able to use that to help you generate better. Generate more content which is high-performing. So that’s something we will be investing time in. You can also expect us to expand the partnerships in terms of our ecosystem, both in terms of channels as well as in terms of agencies and partners we work with.
And then we want to build on what we do in the product in terms of making it more compliant. And this is where some of our partners come in. A lot of industries have very specific needs for approvals. Approvals don’t just stop with one person reviewing it, they’re multistage. They also have very specific requirements for how … The MLR review or claims review in CPG. We want to build all of those for those different industry verticals in our product. We want to more deeply integrate GenStudio for Performance Marketing within Adobe solutions so that end-to-end workflow, we deliver on that promise. This is a V-one out-of-the-gate product, what you saw yesterday. And it’s a standalone product, it doesn’t require the support of any other tools to start with. But it works better in conjunction with all our other … With AEM Assets, with Workfront. You’ll see us lean into some of those integrations to make sure that we are delivering value on the GenStudio promise.
Melody Brue: Well, we can see by just this studio that we’re sitting in that when you put all the things together things become bigger and better.
Vikram Viswanathan: Yeah, absolutely.
Melody Brue: Well, thank you so much for this conversation. It’s been super informative. And it was really great to see all of the demos yesterday. And for everybody joining us, thank you so much. We are here at Adobe MAX in Miami. And if you want to see any of the things that we’ve been talking about you can go to adobe.com and check them out because there are some things that are really worth checking out. Thanks for joining us.
It’s Episode 37 of the MI&S Hot Desk Podcast! Join Robert Kramer and Melody Brue for a rundown of what’s been happening on the road over the past few weeks with Cadence Fem.AI Summit, Microsoft, Miro, Zoomtopia, AdobeMAX, IBM, Infor, LogicMonitor, Teradata and AWS.
Watch the full episode here:
Listen to the audio here:
00:00 Intro
5:28 Cadence Fem.AI Summit
7:59 Microsoft Industry AI Summit
11:55 Miro Innovation Workspace
13:46 Zoomtopia
19:09 AdobeMAX Content Credentials Web App
22:48 Infor
25:31 LogicMonitor
26:35 Teradata
28:25 AWS
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice.
TRANSCRIPT
Robert Kramer: Hello everyone. Welcome to this week’s edition of the Moor Insights & Strategy Hot Desk Podcast. We’re on Episode 37. I’m Robert Kramer, joined by Melody Brue; we’re both vice presidents and principal analysts. I handle ERP and Enterprise Data, and Mel handles Modern Workplace, HCM, CX, and a bunch of other things. On this show, we like to talk a little bit about tools, trends, trajectories, and people in marketing technologies, enterprise apps, and a lot of other things like ERP and CRM, and the glue that holds it all together is data. So this week, Mel, I’m going to introduce the topics in just a second. It’s late at night in New York; I’m on the road, and you’re just getting home from the road, but we wanted to catch up. We haven’t talked in a while, and we’re going to jump into where we’ve been, and we’re going to air tomorrow. I’m going to talk a little bit about Teradata, LogicMonitor, AWS, and Infor, and you’re going to go over Adobe, Zoom. What do we have? Cadence and Microsoft…
Melody Brue: Microsoft, Miro, Zoom. Yeah, I mean this is crazy, but when people are watching this, it’s going to be the morning or the afternoon or whatever, but it’s 10:45 at night for us. I just got in from Miami. We’re catching up now, but there’s been so much going on, and we’ve both been on the road for weeks in a row. There’s so much content that we’re publishing, but sometimes we just have to talk about it and share the things, and I feel like there’s so much that we kind of have to do this rapid fire.
Robert Kramer: Just real quick, do you sometimes see that there’s some synergy between some of the vendors, like the way that they talk about technologies or the way that they’re presenting it to us, or do you think they’re all mutually exclusive or a little both?
Melody Brue: No, I think there’s a ton of synergy and actually it’s been so interesting to see over the last couple of weeks for me going to events, some in person and some virtually, and I mean I’ll touch on this as I get to them, but even just, I just landed a couple of hours ago from AdobeMAX in Miami and last week I was in San Jose. I’m like, where was I? San Jose for Zoomtopia? But I also was kind of watching what Miro was doing with their announcements. But Miro made an announcement with Adobe integrating Adobe Express into their new platform, and then I saw it in person at Adobe. I was also with Zoom, they integrated with some other partners that I work with. So I’m with people and then I see it happening at another event. I actually had a conversation just today that was really interesting that there’s so much collaboration happening throughout enterprise software that it is actually really satisfying to me and it’s actually not just software, it’s hardware too.
And you see it with chip makers, with PCs, with everything. And a couple of years ago I wrote something in Forbes and I said, collaboration is the new competition and a bunch of trolls on Twitter called me naive and I now want to find these people and say, look, do what Pat calls a victory lap. But it really is. And you look at something like that and you see Miro, this is a collaboration kind of whiteboard platform and it makes a whole lot of sense for Adobe to integrate into Miro because if you are collaborating in that space and you don’t have to leave to edit an image or do something that actually keeps you in your zone, and this is a lot of developers work in this platform, marketers work in this platform, so you don’t have to leave. They’re also integrating with Box and with others. I mean there’s a ton of others. So you start to see all of these things that maybe you wouldn’t have thought of, but then you see it and you’re like, oh, that makes a lot of sense to me. So to answer your question, yes, I’m seeing that a lot.
Robert Kramer: Well, like you said, you build, you buy, or you partner, and then it makes it a lot more efficient and you’re bringing the features today versus later, and it’s all about the customer at the same time and it’s so competitive. Where do you want to choose your battles or how do you continue to innovate at the level that’s required to be competitive or to give the customers what they need? I mean, there’s a lot of discussions of that at the enterprise ERP level, which we can talk about as well, but I think it’s been a change like you’re saying for the better.
Melody Brue: Yeah, so let’s go through super rapid fire where I’ve been, and I’ll go back to the beginning of the month. I was at the Cadence Fem.AI Summit that happened in Palo Alto. So Cadence Design Systems, they put $20 million into this initiative to really support more women in AI. It was an incredible event. I’m actually going to be doing a Six Five video to explain more about the event. It was really incredible. That should air soon; actually, I think October is National Women in AI month, so we can celebrate this all month long. But there were a ton of really awesome speakers there and a bunch of different initiatives really aiming to close the AI gender gap, and it’s really not entirely focused on AI, but STEM. Cadence does a lot of things around this that really support women in technology. But AI initiatives are so important right now because we have to catch women and girls now. We know from research, and a bunch of different people have done it, but Cisco has recently done research saying that the people who will be most affected by AI in terms of job loss will be the frontline workers.
And I’m at a loss for words right now because I’m so tired… entry-level workers. So if we don’t train people at that entry level, which Cisco is doing with some of their academies and Cadence is doing, I actually found out this week at AdobeMAX that Adobe is training something like 3 million people to learn generative AI through their different processes and their tools. Because if we don’t start teaching, and it’s not just women, but Cadence, in this particular Fem.AI initiative, is really concentrating on women, we’ve got to get to people now. There’s this big upskilling initiative throughout a lot of different organizations, and Cadence also announced that companies like NVIDIA, Workday, NetApp, and others have joined them in this initiative. So, I said I was going to go through this rapid fire, and I’m going to rapid fire from there: I went to Boston for Microsoft’s Industry AI Summit.
This was really interesting because we got to look at AI for specific industries. So in the tracks that I went to, I concentrated really on finance, healthcare, and retail. These solutions are really built specifically for these verticals. I made a joke that I thought Copilot for Finance should be called Bro Pilot. That was my big joke; the finance bros get it. But some really, really interesting things are happening there. And with Copilot for Finance, that’s a tricky thing because there’s so much regulation around that. So they have to really train Copilot to be incredibly smart around the specific data for the company and in that Microsoft Graph, where it is very much contained within that data. Then in healthcare we got this amazing demo. It was actually quite impressive; they brought in an actual physician and did sort of a patient-physician demo where the physician didn’t take any notes.
The Copilot was taking the notes for the physician, and we saw something similar at Oracle. That really frees up the physician to pay more attention to the patient, to actually think through what’s happening. It also allows the AI to then deconstruct everything that the patient is saying, maybe pull things out that the physician may not have thought through, and make sure that everything is compliant. So there’s a lot to that, but this physician that they had there had actually been using it in her clinic. So this was not somebody who was just there to do a demo. She’s actually been using it. So it was a really cool demo to see. I think there’s still a lot that we need to see in that space for people to trust it. The demos were great. I think as we move from the Copilot to the agents, Microsoft talks about agents in a slightly different way than some of the other companies, some of the other vendors that we talk to about agents.
So there are agents that are completely autonomous and make decisions on your behalf versus agents that are carrying out automation; Microsoft is still working on that, on their language around that. But the demos I think showed a lot of promise in what they can do and a lot of investment in catering to specific industries, and not just saying, this is a catchall AI and it will do what you want it to do. It was very tailored: this is a retail solution, this is a healthcare solution, this is a finance solution. And so that was really good to see, and I’m looking forward to watching the progress on that. On to the next thing, which I was not able to go to live: Miro introduced their Innovation Workspace, which is an AI-enhanced platform for the innovation process. It’s kind of like a whiteboarding tool for fast prototyping that has smart templates. It’s like this centralized hub for projects and workflows.
It is probably, not probably, it is Miro’s biggest update, I think, in the company’s history for developing and launching new products and services. I think that this is kind of pushing Miro over the edge in terms of being competitive with competitors like Mural and Lucidspark, and some others. I think this was a big turning point for them. And I actually have written up an analysis of this, and I feel very fortunate that we now have a DevOps person on our team, Jason Andersen, who is helping me with some hands-on experience; Miro gave us some access to the platform, and so Jason is contributing his hands-on experience with the platform. So we will be publishing our thoughts on that in the next, well, I don’t know if it’ll be by the end of the week or not, but we’ll try, maybe early next week. After that, I was at Zoomtopia. There was a lot going on there,
Robert Kramer: Swag yet.
Melody Brue: Oh, did you see the swag video?
Robert Kramer: No.
Melody Brue: Oh, that’s so funny that you say that. So I’ll actually link that. So Zoom did a really cool thing… So Zoomtopia is their big user conference and everything. Sometimes Zoomtopia has been quite large; this year they actually made it a little bit smaller. It was a more kind of intimate gathering. They opened this experience center, which was really cool. So it was a little bit smaller, more intimate, but that meant that there wasn’t this big kind of showroom floor experience that you typically have, where they would have partners with booths and all the things. And typically those partners would be Logitech, Avaya, the ones you would expect. So rather than having those booths, they had a kind of swag bag with stuff from those vendors.
Robert Kramer: Oh, nice.
Melody Brue: And so my fellow analyst and good friend Craig Durr from The Collab Collective and I did an unboxing of all the swag, so I will link it in the show notes because we had a good time doing it. But Zoom, their big announcements… I mean, I think their biggest announcement was around their AI Companion. They made a big upgrade to their AI Companion 2.0. That’s just kind of what you would expect. It summarizes meetings, it generates your tasks and emails, books your meetings, all of that kind of stuff. But I think the biggest update on AI Companion is that you’ll be able to personalize it, and that’s coming, I think, in early 2025. And again, industry specific. So they will be having AI-powered features that are specific to industries like healthcare and education. Zoom is very, very strong in AI, and I don’t know that everybody knows that. If you’ve ever used their AI summaries, you probably know that they’re quite good.
If you’ve ever been on a Zoom call and somebody says, oh sorry, my dog’s barking, and you’re like, I don’t hear your dog. Their noise cancellation is superior, things like that. But I gave the analogy to the Zoom team that this is sort of a long analogy, so I’m going to make it really quick. I think where we have to kind of tell the Zoom story is their AI is actually, it’s not all about, you can’t just tell the story of the outputs or the outcomes. You have to tell the story of how you get there. And my analogy was like, take two cars. You’re both going to end up there at some point, but my car might be safer and it doesn’t leak oil or gas. And that’s kind of how I look at things like that. And I use this analogy today with Adobe as well. You have to tell that story because eventually all of the tools and all of these AI outcomes are eventually going to be the same. And so the things like how the model is trained, the security of the model, the data that goes into it, that stuff is the important stuff. It’s how the car is built and the security, the safety of the car, what it leaks, what’s put into it, how you get there is actually more important than the fact that you arrived.
So that’s the story that we need to be telling. So they have, Zoom has so much beyond meetings and they are really making a bigger attempt at telling that story. And Zoomtopia was a really big attempt at doing that. And again, talking about partnerships and connections. So with Zoom AI Companion, you can connect your own data with third party apps like Jira and Salesforce, so your AI assistant can pull all that information to help you complete those tasks across different platforms. Again, have to get that story out because otherwise people think like, oh yeah, I just hop on a Zoom meeting. People don’t realize all the other things you can do on Zoom.
But again, I was just there last week, and I will be writing a much more extensive writeup on all the things there. We’re running out of time already and I still have one more, so I’m going to get to it really quick, because I will have both a video and a full recap of AdobeMAX. So I will tell you my big highlight of AdobeMAX is… AdobeMAX is really about the creative community, and I actually published in Forbes today, and that will be linked in the show notes. Last week, Adobe announced a free web app, part of their Content Credentials work, that is really designed to address the growing threats of misinformation and concerns around unauthorized content use. So it addresses two things. One, it allows creators to embed these content credentials into their work and allows them to say, do not use my work to train your models.
It also allows users or viewers to look at things and understand: is this made with AI? Can I trust this image? Whether it’s the AP or The New York Times or whatever it is that you’re viewing, however you consume information, you can basically read this as a nutrition label and decide, do I trust this? I look at this as sort of a primer for everything that was announced at AdobeMAX, the fact that they’re embedding these content authenticity credentials into everything that they do and offering it as a free app to anybody. You don’t have to be a user of Adobe products.
It’s so important as we get deeper and deeper into AI tools. I will go way deeper into AdobeMAX. It was super great. There are so many things around Adobe Express that I love; I think it’s just allowing people who aren’t traditional creators or creatives to be a part of the creative community and a part of getting content out there, that content velocity. That’s part of the video that I did, which will be a Six Five video, on GenStudio for Performance Marketing. So many announcements, but I think that was the crux of it that I think is super important, which is how are we going to use AI moving forward, and how do we protect both creatives and consumers of AI content so you know what you’re looking at and feel that it’s safe for you?
Robert Kramer: Wonderful.
Melody Brue: I think that was longer than I wanted it to be, so you’re going to have to even go faster. So where have you been?
Robert Kramer: I’m in New York right now for IBM’s Analyst Forum, and not to give too much away, but we’re talking to the leaders and learning their strategies and their visions around AI. I’m putting my opinions in on where some of the core of this data comes from, which is ERP, and how ERP influences a lot of the trends that I’m going to talk about in a few seconds on a lot of these innovations. So I was in Las Vegas, I think about two and a half weeks ago, for Infor, and Infor is one of the leaders in ERP. I actually stayed at the new Fontainebleau in Las Vegas, which I highly recommend; it was really nice and quiet. But they’re gearing towards, like you mentioned with Microsoft, industries; they’re gearing towards industry-specific offerings for their cloud platform. Now, Infor’s story is that they’re owned by the Koch Industries investment group, which actually, I think, owns or has a stake in the Fontainebleau, by the way.
So anyway, they put together just an enormous number of ERP companies into this Infor product, but they’re gearing up to try to get companies from legacy into the cloud, which is really a major talking point because most enterprises are still on-prem. I don’t have the exact numbers, but I’d say maybe two-thirds of enterprise ERPs might be on-prem or hybrid, and then a third are in the cloud. But all this great technology we’re talking about needs some capability to access the data in the cloud. And so the core functionalities that they’re adding for this cloud platform are AI assistance and process mining. But I really like that they are going towards sustainability modules intended to support production, inventory, operations, compliance, and environmental goals. But I still think there’s a major issue with how we get these legacy companies, or the legacy software at these enterprises, off-prem and into the cloud.
And one of the things, I’ll move on to the next one after this, is that Infor has created processes for each industry to utilize. So back in the day when we were doing a lot of on-premise work, they had what we call switches, and then we did customizations. And so they were unique for each company. But if you have it by industry, you learn something from the other companies that are in that industry, and it might be a better way to do it than what you’re doing, because once you start bastardizing the system, then you lose the ability to migrate to the newer version. So they’re actually really focused on it. And I will go back to what I talked about a long time ago: data management and change management are major components here, and trusting your vendors to actually get to that next step, which is the cloud. Then I went to LogicMonitor, which is in Austin, which is a great place and the home of where you live, in our company,
Melody Brue: My hometown, although I wasn’t here,
Robert Kramer: They’re not directly associated with ERP, but they are an observability platform. They had some great customers there, like McKesson and Topgolf and AppDirect, and they’re embedding AI throughout their whole product, and it’s allowing AI agents to actually help with some of the operations, picking up anomalies and things like that. But one of the things that they are going to try to get to, because they have had a lot of growth, is how do they get to the next level? Not substantial compared to some of their competition, but they have the capabilities because they have some really good customers. So my advice to them was to focus on how they can take some of this ERP data and actually do some things with the observability of it, and it could actually tackle some of the sustainability work that is going to be required.
That’s another subject. Let’s keep on going. And I was at Teradata, which was actually in LA; they’re a data intelligence platform, and they released some really neat things. They’re transitioning to a hybrid model with AI embedded, and they have some great new partnerships. You mentioned some of the partnerships, like Miro with Adobe; they are doing ones with Google and Amazon and also NVIDIA. And I think it’s super important to have these partnerships. You can recreate things so quickly, like we talked about with the customers. They say that they’re embedding, or going to have, modules for sustainability, so I’d like to see some of those use cases. I do think the customers will in the end really need the sustainability, and that’s another discussion, but I think it’s a gigantic thing going forward. But one of the things that I did talk to them about is where the executives are looking for their priorities.
Is it AI, is it data management, is it sustainability? And most of them are looking at things like data management, and it does fall back on the ERP, because I think the ERP has to be modernized before you start taking advantage of some of these amazing innovations. So I think that’s really important. And I mentioned this to you before: I do think you can tell the innovation capabilities of a company by how their ERP implementation is right now. Are they on-site, utilizing less than 50% of their ERP? Are they completely innovated in the cloud and maximizing it where it’s a competitive advantage? I think that kind of shows the capabilities; the ERP sets the trends. From there, I went to AWS, and we actually did an amazing Game Time Tech Podcast about it, where Jason Andersen, who you mentioned, and I went to a football game with AWS. We met the team that handles the sports marketing, or sports technology, and they just extended a partnership with the Seattle Seahawks, and we heard how they’re utilizing AI and ML for drafts and statistics and things of that nature. So everybody should take a look at that. And I think I went pretty quick,
Melody Brue: Super fast. That was way faster than me. I think I’ve talked too much. Maybe I’m just too tired. I want to go back to Teradata because I feel like that’s so important, and it’s not really talked about enough, because everybody thinks AI is magic. And there is some magic to it, for sure. I mean, believe me, at Adobe I was like, wow, there are some things where I’m like, I don’t know how that’s possible, honestly, especially with image creation and everything with video. But with Teradata, I think the big thing that I see with them is you have to be data-ready for AI, and a lot of companies are not. And I think that’s where… do you see that big kind of gap, where people are like, well, the AI is going to take care of this, but their data’s not ready?
Robert Kramer: Well, that’s why I kind of go back to the ERP. I know that it’s not everything, but it’s the anchor. So you need to modernize your ERP data from the get-go to maximize it. But I think what Teradata, and some of Databricks, Cloudera, and Snowflake, are doing is they’re saying, hey, just move some of this data, or leverage that data; even when it’s not with us, you still own it, but we can take it and do intelligence on it. But then you’re doing it outside of that main system. So, just my opinion for whatever it’s worth: I think you want to minimize, you want a turnkey solution, you want to keep it as simple as possible and maximize the technology inherent in the vendor that you have. So I’ll bring up a vendor in just a second, but you want to be able to modernize your ERP.
You want to first look at change management, second look at data management, and third modernize. Then, where can the AI help out? And then we get to the AI agents. That’s just my perception from a traditional perspective. But the AI agents, obviously, are going to work automatically on mundane, maybe non-transactional tasks; for the transactional ones, I think there has to be trust and transparency in these systems. Companies like Oracle and Microsoft, and also even IBM, kind of own the data from the beginning to the end. They have all these applications that talk to each other. So I think they’re in very good positions, but I think they’re selling off of features versus business impact, and these customers aren’t necessarily ready, because of the data, like you just said…
A lot of it is on-prem, and I think the Teradatas of the world are successful, and they’re great for helping these companies with the data management and also the analytics, conforming to the requirements that they have, and pulling this information together to execute for them. But at the same time, I think you need to be able to maximize your ERP and make sure that what you have is not just an operational ERP but a competitive-advantage ERP. When I was selling ERP 25 years ago, that’s how we sold it. This was going to change your business. This is going to show you how to buy and sell based on supply and demand and also based on cross-selling, and give you these features and this functionality to be more on the offense versus the defense. And I think that this is one way to look at the ERPs. What do you think?
Melody Brue: Yeah, I was going to say, so when you look at something like that, what would your advice be to companies now that are looking at adopting AI? How do they look at that moving forward? What would your advice be to them in terms of how to manage their data?
Robert Kramer: Well, I think that, first of all, a lot of companies are doing siloed projects, where they have one branch that’s doing this or they’re converting the data for one company within the overall structure. I think it has to be a company-wide move. And I think there has to be amazing change management. I’ve said this before: I think the companies are complex, not the vendors. So the companies have to get this change management under control, where it’s transparent. It’s an initiative that they know is going to require a lot of hard work: understand their data, understand their vendors and the innovations of those vendors, and then choose the road forward and get their modernization approach for what they can maximize by being in the cloud with this specific ERP vendor. Many of these companies don’t want to go to the cloud because of the fear factor and owning it on-prem.
And they’ve been on-prem forever. So a two-tier setup would be the next step, my advice, where they would have part of it in the cloud to take advantage of the modernization. And that’s why you’re constantly seeing these hybrid models; vendors are able to offer that so you can take advantage of the AI capabilities. So it’s critical. First you have to get the change and the data handled, and then decide your approach, hybrid or cloud. But you can’t remain on-prem; it’s just not going to work for taking advantage of the modernization. So in the end, the modernization is the key, and you have to look at what the company’s capabilities are. But I mean, I’ll leave it at this: if you came home from your trip, and I did too, and your husband had rearranged your house, putting the dining room in the living room and the living room in the dining room, would you be like, oh my god, honey, thank you for doing this, this is amazing? You’d be like, what is going on here? And then it would take you two weeks to get over it. Well, this is what these companies are going through. I know we’re kind of rambling on here, but change is a big deal. Most people don’t like it. So it has to be transparent, and the benefits have to be laid out.
Lastly, I think that the ERP vendors should really think, and many of them are doing it, about going to a consumption basis, because it brings more users into the fold, where somebody at a high level or even at a low level is more involved in using it. People on the dock, people in the plant, people on RF devices, people doing all different functions are able to look at the system and use the system, and that gives you more data at the same time. And I think that these older ERPs were only user-based, and the users were less involved, so the transparency was only focused on them. So if you do consumption and there are more people involved, plus the change management, I think it gives you a little bit more of a plethora of information and the ability to take on what I’m talking about. It’s massive, so it’s daunting. There’s no easy way to put it.
Melody Brue: Yeah, I can see that. And I think that’s great advice. What do you have coming up next? I think we’re well over time, but that’s because I went rambling on, but what’s coming up next for you?
Robert Kramer: So we have a lot of writing to do. I have all these projects, and then I’m going to be at Veeam, which is a data protection company, and I’ll be out there in Arizona. And then I’m also going to be at Amazon re:Invent. So that’s what I have coming up over the next 30 days, and I’ve decided to get some time at home and with the family. What about you?
Melody Brue: I’m headed back to Miami next week. I don’t know, apparently I live in Miami now. But yeah, I’m back in Miami for Cisco WebEx; I’m excited about that, as I’m actually talking to you on my WebEx Pro. I love, I’m in love with this device, the
Robert Kramer: The clearness of it is phenomenal.
Melody Brue: Yeah, I’m going to actually be publishing my little review of this soon. I actually didn’t really have enough time with it at home because I’ve been on the road so much. But anyway, thanks for all your great insights on all of your travels. And it’s crazy that we haven’t even run into each other on all of our travels, but I love hearing about all the great events that you’ve been at, and I feel like there’s just been a whole bunch of things happening throughout all of the different analysts at the firm. And it’s fun when we get to hop on a podcast and share all the things like we did with Jason the other day. So for everybody who’s watching, thanks for joining us. Check out our Game Time Tech podcast as well for all the things from AWS, and thanks for joining us. If you have any feedback for us, if you want to hear more about any of the things we’ve talked about or if you want to hear about any of the things we haven’t talked about, hit us up on Twitter, hit the subscribe button, check out our other podcast, and we hope we will see you again soon. Thanks for joining us.
Game Time Tech: Ready for Some (Tech-Driven) Football! NFL 2024-2025 Tech!
https://moorinsightsstrategy.com/game-time-tech/game-time-tech-ready-for-some-tech-driven-football-nfl-2024-2025-tech/
Wed, 16 Oct 2024 16:30:11 +0000
https://moorinsightsstrategy.com/?post_type=gametimetech&p=43513
On this episode of Game Time Tech, hosts Melody Brue and Robert Kramer discuss the tech on display for the 2024-2025 NFL season & more
On this episode of Game Time Tech, hosts Melody Brue and Robert Kramer are joined by Jason Andersen to discuss the tech on display for the 2024-2025 NFL season. They also cover AWS, Cisco, Sony Electronics, Cadence Design Systems, and more!
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, its contents should not be taken as investment advice.
The Game Time Tech Podcast is a Moor Insights & Strategy Six Five Media production.
Transcript:
Melody Brue: Hi, welcome to this week’s edition of the Moor Insights & Strategy Game Time Tech podcast. I am Melody Brue, and I’m joined this week by my co-host, Robert Kramer. And we have a special guest this week, Jason Andersen. We are all actually vice presidents and principal analysts covering different verticals, but we are all here to talk about sports today, specifically the NFL. And before we get started, a little disclaimer: this show is for information and entertainment purposes only. While we will discuss publicly traded companies, its contents should not be taken as investment advice. So let’s get started. Topic number one, AWS. You two were just in Seattle with AWS for an analyst event and a football game. Having a little FOMO, I have to admit. I was actually at a different event. I was at a very cool event in San Jose as … With Zoom. You guys I think maybe got the good end of the stick. Tell me about it.
Robert Kramer: I mean, I thought it was a great time. It’s always fun when you go to these events to meet people but also to see the game, and how the game’s using technology. Jason and I got to see it firsthand, which I thought was fantastic. AWS’s Global Head of Sports, Julie Souza, was actually in the booth. She talked a little bit to us about … I think it was a multi-year partnership with the Seahawks; they just did an extension with Amazon. They’re already using Amazon Web Services for a lot of the data analytics, the performance, things of that nature that help them with the drafts. And we can talk about that in just a minute. This deal is going to take into consideration using Bedrock, which is their AI platform, to automate content. I think, Jason, maybe you can talk about it a little bit more. To actually take that content and use it for broadcast, for fans, for social media in a manner that’s expedited, in different languages. We had a really good time and we got to see some … A few other technologies that we can talk about in just a second.
Jason Andersen: There’s a couple things. The first is it was a nice event from an analyst perspective because we got a preview of what’s coming down the road at re:Invent which is in December. It’s not always typical for analyst days to be so far in advance of an event so it was cool because we were able to give her some of our feedback. It’s not cool because I can’t tell you about a lot of the cool stuff we learned technologically. Go to re:Invent I guess is the message. But one of the cases they did talk about with the Seahawks, which was super interesting, was how they have to distribute so much content so quickly after a game. From a press conference to hype videos to all these different types of content.
And what they’ve recently done is injected AI into that process. They started out with a really manual process in the beginning, which sounded like an incredibly stressful job for the people who did it, and then they’ve begun, over the years, to use Amazon technologies to simplify that. The last step has been adding, as Robert mentioned, Bedrock, their AI front end, or mid-plane if you will, to large language models. It’s pretty cool. It’s a very cool use case in terms of how it’s able to take all of these disparate sources of content, format them correctly, make sure they’re consumable by whatever platform they’re pushing that content out to, but also deliver summaries, and transcripts, and translations of that stuff, which is also a big part of the process. Which are all things that gen AI is great for, so it seems to be a really good usage.
And I think it’s also cool that they’ve now layered that on top of a process. I think so many people think of artificial intelligence as blow everything up and do it from scratch. Whereas this is a case where it’s just been added and extended the value of an application which I think is pretty cool. Hopefully people take this idea in other places because I think that’s a lot of what’s going on. And application development’s going to start to show up in other … From other vendors and other platforms and AWS won’t be alone here. But, certainly, this was something that was pretty innovative.
Melody Brue: Right now I’m in Miami for Adobe MAX. And one of the big things that has been discussed for the past year-plus is that content velocity. And sports teams… we’ve heard about this from IBM with Wimbledon and, most recently, the US Open: AI is really helping with content velocity. It used to be that they could only do match summaries for the major events; now they can do match summaries for all of the singles events. So AI is really helping with that content velocity. And it’s definitely something we’re talking about a lot here at Adobe MAX as well. You guys also mentioned Amazon’s Just Walk Out technology. A little bit different in terms of how that affects, not necessarily the game, but the fans. I find that really interesting, and I actually wrote about that.
They do have one … Just Walk Out technology at Globe Life Field, which is where the MLB All-Star Game was held. There are a couple of different ways that technology is used. One is for grabbing drinks, and snacks, and stuff. And that’s really important for fans. Because when you want to get up and go grab something, you don’t want to miss any part of the game. You pay a lot of money for those tickets. And it could be any moment that you might miss that game-changing play. And so people want to get in and out. The other part that Amazon has been focusing on more recently is merchandise. You go to try to get your favorite jersey and it’s not there.
So they’re able to track the merchandise by knowing what’s come in and out … What’s left the store. So it’s not just a matter of, oh, it’s easy to check out because you can just scan your credit card and walk out; they’re also able to keep track of the inventory more accurately. They’ve got that inventory down in the bowels of the stadium and they’re able to say, “Okay, we’re out of size medium, get that up there quickly.” That also keeps the fans in their seats longer. It seems like it’s one of these novelty technologies, but it really is major to the fan experience in terms of keeping them in their seats.
Jason Andersen: Right. I think a lot of what we’re seeing now with the AI use cases, stuff around agents and so forth, is really around … You mentioned velocity, or urgency, but it’s also stuff that’s fairly robotic, right? Having inventory checks is a spreadsheet activity. Not having to have humans keep it in their heads or just running around without purpose, that’s a good thing, right? And I think it’s the same thing with video content, right? If you’re pushing something to, say, Instagram versus YouTube versus whatever, you want it to be formatted correctly to come across correctly and be on brand, quite honestly. It does have an impact on velocity, but it’s also consistency and quality that’s coming into it as well. In some cases AI can do that better than people.
Robert Kramer: And even the data, like you’re talking about, the efficiency. The drafts used to be manual; now they can put together a model, run it against all the players, and then they can capture those players, instead of just the ones that stand out the most from the Big 12, or the SEC, or the larger schools. They can kick this model against all players. If a guy’s had 12 interceptions, they can instantly see that player and then video of those 12 interceptions. And that’s fantastic.
Jason Andersen: And then even just scheduling, right? Billions of options, or billions of different parameters that come into now scheduling for games.
Robert Kramer: Oh, yeah.
Jason Andersen: Because now football isn’t on Sunday, right, it’s on … Only two or three days off a week now from football so it’s a much more difficult task to figure out not only logistics but also just the schedule, right?
Robert Kramer: A good point, yeah.
Melody Brue: That’s actually a good segue to our next topic which … Because scheduling now has gone international and there are games in … Throughout the world. Sticking with our NFL topic for today, Cisco announced a partnership with the NFL for their league operations across the … Across Brazil, the UK, and Germany. And then I think they’ve added Spain in 2025, to use Cisco’s networking and cybersecurity solutions to connect and protect the international games and the … So also the 2025 Madrid game. So, Jason, you had some pretty good thoughts on this one.
Jason Andersen: Even the point about before, with AWS and translation, right? Sports are increasingly becoming international. I mean, we have the World Cup coming here yet again in 2026, after a 30-year break. But the NFL, I think, is maybe not quite as far ahead in terms of internationalization as, say, baseball and maybe even basketball have been as far as US sports are concerned. I think it’s great the NFL is investing in it and working with great partners. And the thing to think about with it is really the infrastructure component, right?
So working with a partner like Cisco is great, because not everybody in Brazil has a high-speed internet connection to their house or a 75-inch TV in their living room, right? You're going to see consumption of sports outside of the US in so many more formats and across so many different types of networks, including plain old internet. So cybersecurity and extending the reach of secure content delivery is going to be a big, big deal in those countries. In the US it's still mostly through streaming apps and so forth, or nfl.com; it just may not be that way in other countries. As these sports expand, especially digitally, they're having to figure out not only how they get the content out there but also how they keep it locked down when there are so many different formats now.
Robert Kramer: They’re putting in like you said, the cybersecurity and observability. There has to be a lot of presence of this. This is a big deal, as they go international and global, to have the security tighter. I think the Olympics was a good way to take a look at what they did because they did a pretty good job. And what’s needed for what was faltered. Anyway, this is a good point.
Jason Andersen: Sure.
Melody Brue: All right. We ready to move on to the next?
Jason Andersen: Yeah.
Melody Brue: Sony and their partnership with the NFL. This is sort of a combination, I think: it's a consumer brand, but they're also using consumer products in the partnership. And then they have this Hawk-Eye technology, which I think probably everybody is familiar with in some way but maybe doesn't realize is the thing that calls, for example, where the ball lands relative to the line in tennis. One of the interesting things about this partnership between Sony and the NFL is the communication with the sideline headsets, which will actually start in 2025. When you see players coming off the bus onto the field, or even on the field, they're always wearing those headsets. The camera operators, even the cameras used in the broadcast: so much of this is powered by Sony. So they're making this partnership official, and they're becoming the official headphones of the NFL. This really expands Sony's sports technology, through Hawk-Eye and then the on-field and sideline technology.
I think the Hawk-Eye technology could spark a little debate with the pure sports fans who want the game to just be played. If you introduce technology that's going to call every single strike exactly as it is, or every ball, or every offsides, whatever sport we're talking about, it changes the game. We've had discussions with people who think these athletes train their whole lives to play at this level and should get the benefit of the right call. I get that argument. At the same time, there's some fun in it; it's part of the game that you get a bad call, that you get to challenge the call. That's part of the reason those challenges exist in sports, in my opinion. I have a son who's a pitcher. Believe me, it drives me absolutely crazy when I see a strike called a ball, especially when it turns into a walk. But it's part of the game. People love to get mad at the ump, you know what I mean? I appreciate the Hawk-Eye technology, especially when it's used for a challenge, but I wouldn't appreciate a sporting event that is entirely reffed by AI. How about you guys?
Robert Kramer: I mean, do you want to use it to disrupt the game or help the game? Does it interfere with the nostalgia or does it help? For line management, say whether it's a first down by an inch or not, I think it works in that scenario. For baseball, it does hurt the nostalgia with balls and strikes. In some cases it does help with extra angles, right? It gives you additional camera views that you can't otherwise see, and maybe that's needed. There could be a situation where you throw a flag and get to use it; with the red flag in football you get two or three per game, and maybe it's used at that moment. But in the soccer World Cup... no, it was the European Championship, there were a ton of offsides calls. I'm not sure if the technology was used at that point, but it was like a toenail over the line being called offsides. At some point you have to let them play, like you're saying. And I think it disrupts the game. But Jason, what do you think?
Jason Andersen: What’s really slowing the game down is the whole appeals process, right? If a referee makes a call or a machine makes a call, what’s slowing down the game is the fact that somebody’s raising their hand and saying, “Hey, this is the wrong call,” and it just slows down the entire thing. The question is that where are you going to put that type of knowledge, right? Hawkeye seems to be more focused on the replay booth than the … Then on the ground. But all those things change. And there are evidence in other sports that it can happen where you could have real-time, right … Real time types of judgments made. Baseball comes up a lot with balls, and strikes, and so forth. I think there’s a couple things to think about in addition to it. I lean a little bit more towards in favor of it. But again, in favor of it in so much that it can reduce the number of appeals which is what makes … For me, the fact that every time a touchdown is scored they have to call it up to New York doesn’t seem to make sense. It just slows the game down. I think some of the sports we have on TV now are too long as it is. So I guess if you can speed the game up somehow that might be good.
But the other thing I think about is how technology has now come into certain sports, particularly racing, whether it's cars or bikes or whatever. The science and the analytics coming out of it are really making it harder to enjoy, because people aren't taking the kinds of risks they used to, right? These guys have the analytics in their heads so much that they know they're not going to pass the car in front of them unless certain conditions exist. And nobody's really sticking their neck out. Sometimes that's a bit of a drag, right? It's become a little too robotic. On the one hand, I can see it for some forms of officiating. But sometimes I really wish the players had less data, or maybe the coaches had less data when telling the players what to do, and we just let the players play, to your point.
Melody Brue: Yeah, I agree that there are certain things that should be challenged and can be challenged, but if you start to challenge too much, then everything gets second-guessed.
Jason Andersen: I think I told you this, Robert, but I was telling a couple of the people from AWS that there's a snippet going around online of the opening credits to Monday Night Football from the '70s. It shows this absolute bloodbath of a highlight reel, and then the captions say that every single play in those credits would now be a penalty in the NFL. So 40-something years later, all the stuff that was a highlight reel is now a penalty, right? And I can appreciate the safety improvements and all these other things. The fact that, quite frankly, these athletes are so much more athletic and move so much faster is why they had to put those rules in place. I have a lot of appreciation for, say, my generation or my father's generation, who constantly complain that the games are different. Well, they are, right? The good news is they're safer. The bad news is that maybe some of the excitement, or maybe even the bloodlust, is gone, so that's a bit of a bummer, right?
Robert Kramer: Well, maybe there need to be boundaries with this technology so it's just not...
Jason Andersen: And again, I don’t know what the guardrails would be, yeah.
Robert Kramer: It’s getting out of hand I think. Mel, you talked about the balls and strikes with your son’s game. How far do you take it?
Jason Andersen: Right. Look at gymnastics or skating, what would you do there, right? I mean, those are sports that are 100% judged by human judgment, not balls and strikes or crossing a finish line. What are you going to do, put a bunch of cameras on the table? This athlete comes from Romania, that one comes from somewhere else... are you going to put different LLMs in there now, machine learning models to judge the sport? Like I said, I think there are going to have to be some sort of guardrails. Those are two sports especially where I don't think they're going to want that.
Melody Brue: Yeah, we talked about that during the Olympics. The break dancing in the Olympics, the judging was just so...
Jason Andersen: Right.
Melody Brue: Odd, because it's a very subjective thing to judge. You talked about racing. The technology that goes into these cars now, that lets drivers make those judgment calls about whether to pass or not pass, they've actually started to limit what racing teams can put in the cars because it gives them such an advantage. I wonder if we'll start to see a bit of a correction on that. In football and some of these contact sports, it was a safety thing; they had to change some of the rules because of all the head injuries. But if we're over-engineering it to the point where we might as well be playing a video game, at what point is it no longer a sport?
Jason Andersen: Right. That’s a big part of it too, you’re right. I mean, the cars one is funny because some of it is safety but a lot of it is just aero, right? It’s just aerodynamics.
Melody Brue: You keep creating these perfect transitions, because you talked about aerodynamics and F1, and Cadence actually works on F1 aerodynamics, and that's our next topic, except it's not about F1, although they are very heavily involved in F1. I spent some time with Cadence in Palo Alto the week before last, around the last time I saw you in Boston, Jason, and talked a little bit with them about their sustainability effort: a four-year partnership they have formed with the 49ers to address sustainable operations in the stadium. I think this is really interesting because it incorporates a lot of the technology, and it's a really great way to show their commitment to the community and the environment, but also the types of things that Cadence does for its actual customers. It's all around sustainable data center design and operations, visualizing data center design and performance, and encouraging that proactive, software-based approach.
So the resource utilization planning, all of those things, they're doing for stadiums. Robert, I think you've talked about this a little as well: the sustainability element throughout stadiums and how important that is. The carbon footprint of a stadium is actually quite massive. I saw something really cool when I was at the MLB headquarters in New York. They have a room with screens up, and one of the screens shows all of their servers across every single ballpark, with the temperature of each server, so they're not over-cooling the servers in the ballparks. They're actually monitoring them and cooling as needed. Obviously, a ballpark in Texas is going to need a different sort of cooling than a ballpark in Minnesota at different times of the year. So those types of sustainability efforts are so important in these stadiums. I mean, think about everything that goes into that, in...
Robert Kramer: Oh, yeah.
Melody Brue: Waste, and even ticketing. I mean, think about the transition to mobile tickets and the paper waste that eliminates. I know there are a lot of people who are really disappointed about not having paper tickets anymore, because they like the nostalgia of saving their tickets. But the paper waste and things like that... there's something really important about thinking about everything you can do within the stadium to reduce that, and looking at the places where we can...
Robert Kramer: I think sustainability is a big topic, specifically going into 2025, for all types of applications and enterprises, whatever they do. Specifically with these stadiums, there needs to be a practice for the end-to-end carbon footprint. It's the sourcing, like you said, of the products they're bringing in; it's the operational processes; it's the transportation to and from; it's the waste management, affecting all the departments, suppliers, employees, and customers. Whether companies or facilities are ready or not, they're going to need to be ready, because I think it's just a gigantic topic going forward. It's a great partnership they're putting in place here.
Melody Brue: They have some other really cool partnerships in place too, for community efforts and everything, which I love to see, but that's not really what we're talking about today. I think we could do a whole other episode on the NFL and its community partnerships. Let me tell you a little something about myself. A long, long time ago... well, it wasn't that long ago, maybe 10 years ago, I was given an honor by the NFL as a Community Quarterback for work I was doing in the community in San Diego, fostering youth sports in underprivileged communities. It's something I'm very proud of.
Jason Andersen: Cool. Congratulations. Very cool.
Melody Brue: That just came up when we were talking about community stuff. I wasn’t planning on talking about that but it’s just something that was cool to receive.
Jason Andersen: Yeah, very cool. That’s great.
Melody Brue: Well, I think that wraps it up. This was a great conversation, I think, focusing on one sport; there are probably 50 other things we could talk about. I am so interested in how they isolate sound; I talked a little bit about that with MLB. I feel like we should maybe make this a little bit of a series and pick four or five things each time, because we have a lot more football over the next few months. And we actually have a lot more baseball too, even though the Padres are out, so boo for that.
Robert Kramer: It was a good series though.
Melody Brue: It was.
Robert Kramer: I think they did well.
Melody Brue: Well, thank you, guys, for joining, and thanks for all of your insights. Thank you, everybody, for watching. If there's anything you'd like us to talk about on a future episode, please hit us up on Twitter. Please hit the subscribe button and join us each time as we talk about all the different technologies in sports. Thanks again for joining, and we will see you next time.
MI&S Weekly Analyst Insights — Week Ending October 11, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-11-2024/
Mon, 14 Oct 2024 13:00:18 +0000
MI&S Weekly Analyst Insights — Week Ending October 11, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our analyst insights roundup, collecting some of the key insights our analysts have developed based on the past week’s news.
Anshel Sag, our principal analyst for mobile devices and personal computing—and a terrific photographer—made this glorious panoramic photo of Petco Park in San Diego using Google’s new Pixel 9 phone. (You can read his review here.) Unfortunately, his beloved Padres—also the hometown team of our VP and principal analyst Melody Brue—lost their National League Division Series to the Dodgers.
It’s another busy week for our team!
This week, the team is attending various tech events nationwide. Melody is at AdobeMAX in Miami, while Matt, Paul, and Anshel are in Bellevue, Washington for Lenovo’s Global Analyst Summit & Tech World. New York is home to two significant events: Will is at Blackberry’s Analyst Day, and Matt, Robert, and Jason are participating in IBM’s Analyst Day.
On Thursday, October 17, Melody will join the RingCentral team on the webinar “Revealing the AI Communications Strategies That Work” where she’ll share her vision for the future of AI in UC. It’s free to attend!
Last week was very productive, with team members covering multiple events. Robert visited Los Angeles for Teradata, and Melody attended Zoomtopia in San Jose and SAP TechEd virtually. Bill was in Austin for Embedded World NA. Will traveled to Las Vegas for MWC Americas and the T-Mobile for Business Unconventional Awards. Patrick, Anshel and Matt took part in AMD’s Advancing AI Event in San Francisco, while Jason and Robert were in Seattle for the AWS GenAI Summit.
Looking ahead to next week, the team continues its tech event travels. Patrick and Will are set to attend Qualcomm’s Snapdragon Summit in Maui, Melody returns to Florida for WebexOne in Ft Lauderdale, and Matt will attend the RISC-V Summit virtually.
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Adobe Express, AI networking, AMD, Astera Labs, AI, Marriott, cybersecurity, NVIDIA, Samsung, the 5G Americas Summit, and T-Mobile.
The other day I had a great talk with Diya Wynn from Amazon Web Services. Wynn has been a key evangelist for setting up guardrails for generative AI. AWS's own Amazon Bedrock Guardrails is a very interesting service that enables responsible AI spanning multiple LLMs in an enterprise. However, Wynn recently took on an expanded role in AWS's advocacy for responsible AI, in which she is helping educate both federal and state governments in shaping good AI policies. What stuck out from the conversation is that AI has some unique properties when it comes to government policy. The first is that the pace of innovation in GenAI has been faster than that of many new technologies, which is tough for governments to handle because they tend to move much more slowly. The second is the set of concerns associated with possible future AI outcomes, including job losses or civil upheaval. The best part was a discussion about the role government plays in innovation and the potential for providing the right infrastructure (such as an updated power grid) so that AI can continue to grow. Technologists don't always appreciate this type of collaboration, but I think it's great that AWS is taking this on.
A hot topic last week was pricing models for AI agents. There are many different approaches out there. For instance, with its Agentforce offering, Salesforce will charge a fee every time an agent runs. Others will use a capacity-based model or a per-user subscription. While the merits of each of these can be debated, a more critical nuance is whether vendors will be able to execute these strategies from a systems or relationship-management perspective. It's going to be a challenge for everyone moving forward.
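To make the tradeoff concrete, here is a toy break-even comparison. The per-run fee, subscription price, and volumes below are illustrative assumptions, not any vendor's published rates.

```python
# A toy break-even comparison between per-run and per-user agent pricing.
# All prices and volumes are assumed for illustration.

PER_RUN_FEE = 2.00          # dollars per agent run (assumed)
PER_USER_SUBSCRIPTION = 50  # dollars per user per month (assumed)

def monthly_cost_per_run(runs_per_month: int) -> float:
    return PER_RUN_FEE * runs_per_month

def monthly_cost_per_user(users: int) -> float:
    return PER_USER_SUBSCRIPTION * users

# A 100-person team whose agents handle 3,000 runs a month:
runs, users = 3_000, 100
print(monthly_cost_per_run(runs))   # 6000.0 with per-run pricing
print(monthly_cost_per_user(users)) # 5000 with per-user pricing
# Below 2,500 runs/month, per-run pricing wins for this team; above it,
# the flat subscription does. Usage volatility decides the better model.
```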
Evaluating large language models is important for determining their capabilities and effectiveness. Traditionally, though, this evaluation has relied on human judgment or expensive manual annotations. A group of academics has now published a research paper that addresses these challenges with a process called TICK (for "Targeted Instruct-evaluation with Checklists"). TICK is an automated and interpretable evaluation protocol that uses LLMs to generate instruction-specific checklists, breaking complex instructions down into yes/no questions and making the evaluation process more structured and objective.
The checklist format provides a clear and understandable breakdown of the evaluation criteria. TICK has been shown to significantly increase the agreement between LLM judgments and human preferences. It also streamlines the evaluation process by automating checklist generation. Having a structured checklist format reduces subjectivity, improves consistency in evaluations, and provides insights into the LLM’s reasoning and understanding of instructions.
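To show the shape of the protocol, here is a minimal sketch of a TICK-style loop. The `complete()` helper is a hypothetical stand-in for whatever LLM API you use, and the prompts are simplified; the paper's actual prompts and scoring details differ.

```python
# A minimal sketch of checklist-based evaluation in the spirit of TICK.
# `complete()` is a placeholder for any LLM completion call (assumption).

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; wire up to your API of choice."""
    raise NotImplementedError

def generate_checklist(instruction: str) -> list[str]:
    # Ask the LLM to decompose the instruction into yes/no questions.
    prompt = (
        "Break the following instruction into a numbered list of yes/no "
        f"questions that a correct response must satisfy:\n\n{instruction}"
    )
    lines = complete(prompt).splitlines()
    return [l.split(".", 1)[1].strip() for l in lines if "." in l]

def evaluate(instruction: str, response: str) -> float:
    checklist = generate_checklist(instruction)
    passed = 0
    for question in checklist:
        verdict = complete(
            f"Instruction: {instruction}\nResponse: {response}\n"
            f"Question: {question}\nAnswer strictly YES or NO."
        )
        passed += verdict.strip().upper().startswith("YES")
    # Score is the fraction of checklist items satisfied.
    return passed / max(len(checklist), 1)
```

The checklist itself doubles as the explanation: each failed item tells you which part of the instruction the response missed.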
Tesla’s We Robot event finally showed to the world where Tesla is going with its autonomous vehicles and its robotics Optimus platform. Based on the market’s reception on Friday, it seems that people are not convinced of Tesla’s timelines or the viability of its autonomous vehicles, especially since the vehicles will be two-seaters—compared to Waymo’s five-seaters. Additionally, Waymo is already delivering 100,000 rides per month and continues to scale up every month at an even higher pace. I believe that Tesla’s offering is too little too late, and that a two-seater is not a great fit for many applications. That said, pricing will be important. Additionally, Tesla neglected to mention that most of the Optimus robot demos it showed people were not powered by AI, but instead teleoperated by pilots offsite. Robotics has a long way to go, but it’s quite disingenuous of Tesla to present its robots that way.
AWS and Salesforce have teamed up to offer a new contact center solution that integrates Salesforce Contact Center with Amazon Connect. This partnership aims to make it easier for businesses to implement and manage their contact center operations, with a focus on faster deployment, reduced complexity, and improved AI capabilities. Essentially, it combines the strengths of Salesforce’s CRM with Amazon’s cloud-based contact center technology. This move also reflects a broader trend of closer integration between CCaaS (contact center as a service) and CRM platforms, driven by customer demand for more unified and efficient solutions.
At its Advancing AI 2024 Event, AMD officially launched the 5th Generation EPYC processor, codenamed “Turin.” Turin launches at a time when AMD has seen its share of the server CPU market increase to 34% and its chief rival, Intel, looking to find its footing. 5th Gen EPYC will launch in two variants. One of them, labeled 5c, targets scale-out and cloud workloads with up to 192 cores; the other, labeled 5, addresses traditional scale-up workloads with up to 128 cores. As expected, the chip will ship with a richness of capabilities: 12 channels of memory, 128 lanes of PCIe, enhanced security, and up to 5GHz clock speed. Also as expected, the OEM community was lined up to talk about their partnerships with AMD.
Having gone through the initial launch of EPYC back in 2017, I think it's incredible to see how the tides have turned. In 2017, Opteron (EPYC's predecessor) had less than 2% market share. It had secured only a handful of OEM platforms, and enterprise customers wouldn't even take a meeting with the company. Fast-forward seven years, and at 34% market share, AMD's datacenter business contributes half of the company's revenue. More than 950 cloud instances are powered by EPYC, and more than 350 OEM platforms are built on the processor. And EPYC is just starting to penetrate the enterprise, as most of its success so far has come in the hyperscale space.
The wind is at AMD’s back. Congratulations to the team—from the design engineers to the marketeers.
In addition to EPYC, AMD also launched its Instinct MI325X GPU, targeting the AI training and inference space as well as the HPC market. Along with the MI325X comes ROCm 6.2—the company’s software stack that enables customers and ecosystem partners to build on top of the Instinct GPU. Like 5th Gen EPYC, the MI325X ships with lots of memory (256GB HBM3E), lots of memory throughput (6TB/s), and incredible performance. So much so that the company is able to demonstrate inference advantages over the market leader, NVIDIA. Additionally, the company is showing near parity on the training front.
Even though we always view benchmark and performance claims with a tinge of cynicism, the fact that AMD is able to demonstrate leadership in some applications is a big deal.
I think that what the company is doing with ROCm is perhaps the biggest enabler for Instinct MI325X’s success. With ROCm 6.2, the company has not only simplified the process of developing software for AMD GPUs, but also greatly increased performance. In fact, when comparing against ROCm 6.0, the company is claiming a 1.8x improvement in training performance and a 2.4x improvement in inference.
At the event, AMD brought out partners such as Oracle and Meta to demonstrate the growth of Instinct in the market. It is clear this GPU is making both performance and market share gains against NVIDIA.
At the Commvault SHIFT event in London, the company made several announcements, including the launch of Cloud Rewind, a cyber resilience solution built on technology from its acquisition of Appranix. This feature gives organizations enhanced, automated recovery capabilities, allowing them to quickly rebuild cloud applications after an attack. Commvault also introduced enhanced solutions for Amazon Web Services users, offering direct support for AWS environments to improve the protection of Amazon S3 data, as well as protection for Google Workspace, including Gmail, Google Drive, and shared drives. Additionally, Commvault’s partnership with Pure Storage adds an extra layer of security for enterprises using Pure’s storage solutions, while the company’s recent acquisition of Clumio further strengthens its capabilities in AWS environments. For more details, check out my latest Forbes article, co-authored with Patrick Moorhead, CEO and chief analyst of Moor Insights & Strategy: Commvault Enhances Cyber Resilience With Cloud-First Focus.
Marriott Hotels suffered three significant data breaches between 2014 and 2020, affecting over 344 million customers, partially due to its acquisition of Starwood Hotels & Resorts. The company has since settled with the Federal Trade Commission and nearly all U.S. states. However, some cybersecurity experts are raising concerns over the terms of these settlements. Check out the linked article, which includes my thoughts on the impact and broader implications of the Marriott breaches.
At SAP TechEd 2024, SAP announced updates to its AI capabilities, focusing on its generative AI copilot, Joule. Joule will now include AI agents that can collaborate to automate complex tasks such as dispute resolution and financial accounting. This move towards increased automation aligns with the broader trend of AI impacting entry-level jobs; McKinsey estimates that 12 million jobs may be affected by 2030. While SAP emphasizes increased efficiency and employee focus on less repetitive tasks, the potential for job displacement due to AI, even in white-collar roles, should be considered. SAP is also introducing a Knowledge Graph solution to link data with business context, aiming to improve decision-making and AI development. These changes and new AI features for developers in SAP Build show SAP’s ongoing efforts in business AI.
In addition, SAP has already achieved its goal of upskilling 2 million people worldwide by 2025. This milestone suggests a commitment to addressing the digital skills gap and preparing the workforce for a future where AI plays a more significant role in various jobs, potentially mitigating some of the displacement caused by AI-driven automation.
Adobe has introduced a new tool to increase transparency and trust in digital content. The Content Authenticity web app, scheduled for public beta release in Q1 2025, allows creators to attach Adobe’s Content Credentials to their work, providing verifiable information about the content’s origin and edit history. With this initiative, Adobe seeks to address concerns surrounding misinformation and unauthorized content use, particularly in the context of rising AI-generated content and deepfakes. The app also offers creators greater control over how their work is used, including the ability to specify whether it can be used for AI model training. Additionally, Adobe is releasing a Content Authenticity extension for Chrome (available in beta now) to enable users to view these credentials easily. While this tool’s full impact and uptake remain to be seen, the tool represents a significant step towards fostering a more accountable and transparent digital media landscape. Currently, creators can utilize Content Credentials within existing Adobe Creative Cloud applications.
Smartsheet has updated its work management platform with a focus on improving user experience and adding new features such as “collections” for secure file sharing and a “file library” to simplify collaboration. The platform has a new look, with better data visualization tools and an improved table view for working together in real time. These changes align with Smartsheet’s focus on growing subscription revenue and expanding its customer base. By making the platform more user-friendly and efficient, the company should attract new users and encourage existing ones to upgrade or renew their subscriptions to access advanced features.
Oracle announced new AI features for its Fusion Cloud Service and Field Service, emphasizing a shift in service organizations. Jeffrey Wartgow, VP of product management for the Oracle CX Service, stated that these AI tools will transform, not replace, service teams. “Workers will curate knowledge, optimize automation, and address AI failures,” Wartgow explained, highlighting the need for human intervention in complex situations. This marks a shift towards proactive service design, demanding more strategic and analytical service teams.
Oracle also affirmed its commitment to accessible AI, including these advancements in existing service licenses. “We want service costs to go down,” Wartgow said. These new capabilities empower organizations to balance automation with a human touch, which should provide efficient customer service.
Last week I attended Teradata’s Possible 2024 event in Los Angeles as well as the AWS Analyst Summit in Seattle. Part of the focus was on managing the challenges that AI and data present across different industries. AI is projected to contribute $15.7 trillion to the global economy by 2030. At the same time, 65% of executives prioritize sustainability, emphasizing the need to align AI’s growth with environmental goals. Effective data management is huge, as 80% of businesses report revenue increases from real-time analytics. While many vendors claim to offer sustainability solutions, the question remains whether these solutions address the full scope of customer needs for end-to-end carbon footprint transformation. This involves the entire production cycle—from sourcing raw materials to operational processes, transportation, and waste management—affecting all departments, suppliers, partners, employees, and customers. Additionally, companies must navigate the external factors of regulations and public reputation. I’ll be providing further analysis on sustainability’s impact on industries in my areas of specialty.
The AWS Analyst Summit was a great preparation for the upcoming AWS re:Invent conference. There was an informative discussion on Amazon Q, AI, data, ERP, SCM, and industries (specifically automotive). More to come on this in December when re:Invent rolls around.
Cloudera has announced its AI Inference service, powered by NVIDIA NIM microservices as part of the NVIDIA AI Enterprise platform. This service enables enterprises to efficiently deploy and manage large-scale AI models for both on-prem and cloud workloads to deliver on the potential of GenAI from pilot phases to production. Key features include auto-scaling, high availability, real-time performance monitoring, and integration with CI/CD pipelines via open APIs. The service also ensures strong enterprise security with access control and auditing and supports controlled updates through A/B testing and canary rollouts, providing a scalable and secure AI deployment solution.
Qualcomm recently announced its Networking Pro A7 Elite platform, which infuses GenAI and Edge AI with Wi-Fi 7. Users stand to benefit from performance improvements as well as personalized application and service delivery. What stands out for me is the ability to use the Edge AI feature to support privacy controls on infrastructure, potentially enhancing security outcomes by complementing endpoint protection.
Xbox Cloud Gaming will let users stream their own games starting in November. This means that users will be able to stream games beyond the Xbox Game Pass library, making the service even more useful to gamers who might have quite a broad library of titles. I believe this is a sensible continuation of Microsoft's expansion of capabilities for its Xbox gaming services. It also comes right on the heels of a court ruling in an Epic Games case that forces Google to stop requiring Google Play billing for apps in the Play Store starting on November 1.
EWNA — I attended the inaugural Embedded World North America conference in Austin last week. Embedded World is now international, with 2024 conferences in Nuremberg, Shanghai, and Austin. With about 3,500 attendees and 180 exhibitors, the inaugural EWNA offshoot was much smaller than its parent Nuremberg conference (32,000 attendees). Still, I was impressed with the coverage and quality of EWNA presentations, exhibitors, and attendees. In engineering terms, the conference’s signal-to-noise ratio was excellent. The second EWNA conference is slated for next year in Anaheim, California, and I plan to be there.
Silicon Labs CEO Matt Johnson and CTO Daniel Cooley delivered the opening keynote at EWNA. I agree with Johnson’s list of four developments that determine IoT’s potential: (1) robust platforms, (2) business models with significant ROI, (3) connectivity (with Matter and Sidewalk as examples), and (4) symbiosis between AI and IoT. This analysis set the stage for the introduction of the company’s Series 3 SoCs. Series 1 optimized embedded processing, Series 2 added connectivity, and Series 3 is a complete IoT platform built for inferencing, with post-quantum security and extensible memory and storage. Cooley gave us one of the best quotes from EWNA: “You can’t scale IoT on bare metal.” To show that Silicon Labs is all-in on platform-based IoT, he held up a sample of a new 22nm Series 3 chip. Embedded product companies that use off-the-shelf RTOSes (and OSes) pre-integrated with silicon platforms can concentrate on writing application code and minimize (or eliminate) the cost, time, security risks, and technical debt of creating custom system software. The economic benefits of this strategy outweigh the additional hardware cost for all but the most cost-constrained, power-limited, or air-gapped products. The company published technical details about Series 3, and I’ll provide insights in future posts and papers.
Qualcomm hosted an Embedded World NA event to introduce “The Age of Industrial Intelligence.” Nakul Duggal, general manager of the company’s automotive, industrial, and cloud business, walked the audience through the company’s industrial IoT strategy in detail—architecture, technologies, connectivity, processors, and AI platforms (Qualcomm IQ series). I was impressed with the company’s sharp focus on key vertical industries. IoT is a large set of horizontal technologies that are customized and sold into vertical markets. Most of the ingredient technologies are mature, but not the customization step. Customization is responsible for most of the cost and complexity of IoT deployments. To address this shortcoming, Mr. Duggal introduced the “chassis” concept—a set of use cases, products, enabling technologies, development tools, and system software unique to each vertical industry. The catchphrase “Industrial chassis for every vertical” means that each chassis supports customer-specific adaptation and differentiation, much like a car chassis supports multiple bodies. This approach reduces the need for extensive industry-specific and customer-specific development, and Qualcomm’s impressive list of “scaling partners” confirms the attractiveness of this approach. I’ll have much more to say about this in a future article.
Qualcomm and STMicroelectronics announced a strategic collaboration agreement that combines STM’s microcontrollers with Qualcomm’s wireless connectivity solutions. STM plans to start with a modular approach, integrating Qualcomm Wi-Fi/Bluetooth/Thread combo SoCs with various STM32 microcontrollers. While STM’s existing portfolio offers Thread and Bluetooth combinations, Qualcomm integrates all three into a single solution with coexistence logic. The first wave of collaborative products hits the market early next year, and STM aims to extend the roadmap “over time” to include cellular connectivity for industrial IoT applications. The combined products fill STM’s connectivity gaps and add mature microcontroller options to Qualcomm’s portfolio.
NXP recently hosted a Smart Home Innovation Lab tour on the company’s Austin campus. NXP has long recognized the importance of multi-vendor interoperability; it sponsored Thread and Matter from the start, and is now funding the hard work required to break down the deployment and usability barriers that impede growth in smart home technology.
Google has launched NotebookLM, an experimental AI tool that converts documents into engaging podcasts, offering a new way to consume information. The AI technology summarizes documents and generates discussions hosted by AI voices, making even complex texts such as legal briefs and academic papers more accessible for those who prefer auditory learning or have limited time. However, users should be aware of potential inaccuracies and biases in AI-generated summaries. Inaccuracies can range from subtle misinterpretations of the original text to outright hallucinations of information, particularly with nonfiction content. Beyond addressing these concerns, it seems like a fun tool, and I’m looking forward to trying it out.
Many in the tech industry, including at Google itself, seem to think that the breakup of the company is coming due to actions by regulators in the U.S. and Europe. This breakup would force Chrome and Android to be set apart from the company’s search business to avoid anticompetitive behavior where the company may prefer its own services above others. While it remains to be seen how this would work, I have been getting a sense that the company is already compartmentalizing certain apps and services in a way that would prepare it for such a split. Although Google would become a smaller company if this did come to pass, I also think it would potentially allow the company to focus on other businesses and give it a chance for more growth.
To demonstrate the superior speed and circuit quality of its Qiskit software stack for quantum computing, IBM recently conducted extensive tests against leading quantum software development kits. Qiskit was the overall obvious winner; it was faster, successfully completed more tests than any other SDK, and created circuits with fewer two-qubit gates. More specifically, Qiskit was 13x faster and 24% more efficient than TKET, which was the second-best-performing SDK. Even better, IBM is releasing a benchmarking open source suite called Benchpress that will allow users to perform their own performance evaluations and gain important insights about how other SDKs perform relative to Qiskit. (For more on IBM’s work with Qiskit, you can check out my recent article in Forbes.)
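As a small taste of the kind of measurement Benchpress automates, the snippet below uses Qiskit's standard `transpile()` API to compile a circuit and count its two-qubit gates; the circuit and basis-gate set are arbitrary choices for illustration.

```python
# Transpile a circuit with Qiskit and count two-qubit gates, the metric
# cited above. Circuit and basis gates are illustrative choices.

from qiskit import QuantumCircuit, transpile

# Build a simple 5-qubit GHZ-style circuit.
qc = QuantumCircuit(5)
qc.h(0)
for i in range(4):
    qc.cx(i, i + 1)

# Compile to a fixed basis at the highest optimization level.
compiled = transpile(qc, basis_gates=["rz", "sx", "x", "cx"],
                     optimization_level=3)

# Fewer two-qubit (cx) gates generally means less noise on real hardware.
print(compiled.count_ops().get("cx", 0))
```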
Quantinuum researchers have developed a method for gradient computation of quantum algorithms implemented on linear optical quantum computing platforms. Photonic quantum computers use photons to perform calculations, and it is difficult to calculate the mathematical gradient values needed to find the best way to improve the performance of these computers. Methods normally used for calculating gradients in gate-based quantum computers don’t work with photonic computers because of the special properties of light being used. Quantinuum researchers used a photonic parameter-shift rule to overcome this limitation and provide gradient computation for linear optical quantum processors.
The new method is efficient because the amount of work required is directly proportional to the number of photons being used. It also works well with VQAs (variational quantum algorithms), which can be optimized using gradients. The researchers tested the new method on quantum-chemistry and generative-modeling tasks and determined that it performed better than other gradient-based and gradient-free methods. Although Quantinuum's primary interest is trapped-ion quantum computers, it could be interested in using photonics for transmitting quantum information over long distances, since quantum computers can act as powerful nodes in a quantum network.
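The paper's photonic parameter-shift rule has its own shift structure, but the intuition carries over from the standard gate-based rule, sketched below for a toy one-parameter circuit whose expectation value is cos(theta), so the analytic answer is easy to verify.

```python
# The standard gate-based parameter-shift rule, for intuition only; the
# photonic variant described above differs in its shift structure.

import numpy as np

def expectation(theta: float) -> float:
    # Stand-in for running a parameterized circuit and measuring <Z>.
    return np.cos(theta)

def parameter_shift_grad(f, theta: float, shift: float = np.pi / 2) -> float:
    # Exact gradient from two circuit evaluations, not a finite difference.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.3
print(parameter_shift_grad(expectation, theta))  # ~ -sin(0.3) = -0.2955
```

Because each gradient component costs a fixed number of extra circuit runs, the total work scales linearly with the number of parameters, which mirrors the linear-in-photons cost the researchers report.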
Intel is leaning into its extensive research efforts, silicon depth, and strong ecosystem and partnerships to deliver silicon-level secure AI at scale. In doing so, the company is providing enterprises with the ability to extend protection for datacenters and clients from cloud to network edge with both hardware and software. The success of Intel’s efforts can be measured by significant improvements in security controls, as well as the higher resilience of new AI PCs and datacenter applications. I recently published a Moor Insights & Strategy research paper that goes into more depth on Intel’s secure AI efforts.
MediaTek’s new Dimensity 9400 has adopted the latest Arm v9.2 CPU cores as well as GPU IP from Arm. The new chip also follows the Dimensity 9300 in abandoning the “little” cores and going with all-“big”-core designs. This does have a small impact on battery life, but because the big cores have become so power-efficient, the difference is negligible and results in better CPU benchmark performance, with MediaTek clocking up to 3.63 GHz on TSMC’s N3E process node. The result is a staggering increase over the Dimensity 9300 of up to 35% in single-core performance and 28% in multicore performance. The GPU is also expected to be up to 41% faster, while also boosting ray tracing performance by 40%. Although it isn’t built on Arm IP, the NPU is also improved, with 35% better energy consumption. Overall, the Dimensity 9400 looks to be yet another competitive flagship offering from MediaTek, and I expect we’ll see designs from Chinese OEMs using this chip very soon.
AMD launched the new Ryzen AI Pro 300 series at its Advancing AI event, after it sort of already launched it with HP last month without fully unveiling it. At the event, AMD also announced a new design with Lenovo for the storied ThinkPad line, which is a huge win for AMD to improve the enterprise credibility of the Ryzen Pro line. AMD also says that it has more than 100 design wins with the Ryzen AI Pro line through the end of 2025, which could potentially take a chunk of market share from Intel if units move in volume next year, aligned with the end of Windows 10 support.
Amazon Web Services has announced that it will remain the Seattle Seahawks’ official cloud provider, as well as its partner for machine learning, artificial intelligence, and generative AI. As the world of sports continues to embrace tech innovation, the Seahawks can take advantage of AWS’s breadth and depth of technologies. The Seahawks will use AWS’s Bedrock AI-powered system to automate content distribution by transcribing, summarizing, and distributing press conferences to millions of fans across online, social, and mobile channels in English, German, and Spanish.
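As a rough sketch of what such a pipeline could look like, here is a hedged example using boto3's Bedrock runtime client. The model choice, prompt, and transcript are assumptions; the Seahawks' actual implementation details aren't public in this announcement.

```python
# A hedged sketch of a Bedrock call that summarizes and translates a
# press-conference transcript. Model ID and prompt are assumptions.

import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

transcript = "Coach: We executed well in the second half..."  # stand-in text

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{
        "role": "user",
        "content": "Summarize this press conference for fans, then "
                   f"translate the summary into German and Spanish:\n{transcript}",
    }],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model choice
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```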
T-Mobile recently announced a 5G on Demand solution that is designed to make it easier to deploy cellular infrastructure for portable use cases. The applications are limitless, including pop-up retail, special events, and more. The company claims that the private cellular network platform can be deployed in under 48 hours, and it is expected to be commercially available by the end of the year. T-Mobile has already leveraged the core components of 5G on Demand to support recent PGA men’s and women’s events, and at MWC Las Vegas 2024, T-Mobile for Business awarded CBS and Sony a first-place prize at its Unconventional Awards event to recognize the accomplishment.
MI&S Weekly Analyst Insights — Week Ending October 4, 2024
https://moorinsightsstrategy.com/mis-weekly-analyst-insights-week-ending-october-4-2024/
Mon, 07 Oct 2024 18:52:06 +0000
MI&S Weekly Analyst Insights — Week Ending October 4, 2024. A wrap up of what our team published during the last week.
Welcome to this edition of our analyst insights roundup, collecting some of the key insights our analysts have developed based on the past week’s news.
This is a quantum computer chip from Quantinuum, one of Microsoft’s partners in its Azure Quantum project. This collaborative effort brings together quantum, AI, and high-performance computing to accelerate breakthroughs in quantum computing as well as other areas such as chemistry and materials science. Our own Paul Smith-Goodson has been covering this area for years, from household names such as Microsoft and IBM to startups like Quantinuum and Atom Computing.
As usual, our team is busy this week! Robert is in Los Angeles at Teradata. Melody is attending SAP’s TechEd event virtually and will be in San Jose for Zoomtopia. Bill is in Austin for Embedded World NA. Will is attending the MWC Americas and serving as a judge for the T-Mobile for Business Unconventional Awards event in Las Vegas. Patrick and Matt are attending AMD’s Advancing AI Event in San Francisco, and Jason and Robert will be at the AWS GenAI Summit in Seattle.
Last week, Robert attended the Infor Annual Summit in Las Vegas and LogicMonitor’s event in Austin. Melody was at the Cadence Fem.AI Summit in Menlo Park, California, and Microsoft’s Industry Analyst Event in Burlington, Massachusetts.
Next week, Melody will be at AdobeMAX in Miami. Matt, Paul, and Anshel will be attending Lenovo’s Global Analyst Summit & Tech World in Bellevue, Washington. Will is headed to New York for Blackberry’s Analyst Day, while Matt, Robert, and Jason will be in NYC for IBM’s Analyst Summit. Stay tuned for updates from these events!
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Accenture, Nvidia, China’s AI breakthrough, Meta, Microsoft, Pure Storage, Vast, and the WordPress and WP Engine lawsuit.
Last week I got to spend some time with John Capobianco from Selector AI. Selector is a company that is developing a number of AI-based network monitoring and management tools. In particular, the Selector team has been creating AI agents and embeddings. Notably, they can show you how a network ops person can use conversational AI to fix network problems from Slack. I was very impressed, since what's being done is very job-contextual and easy to understand. If you are managing networks, you should check it out. But if you don't manage networks and want to see how someone builds and hacks on agents, you really need to see Capobianco's YouTube channel. What's great is that the videos do a better job of showing how agents actually work than the more polished vendor versions you might see at a show or a demo pod.
Also last week I published a piece on CodeSignal and its developer benchmark. One of the items that stuck out to me was how OpenAI's new Strawberry model has similar performance across the larger and mini sizes. It was an outlier versus the competition, whose smaller models did not perform as well. After digging into Strawberry a bit, I learned that the model is being positioned as a deeper reasoning model. This does mean it moves slower at times, but it's also "thinking" more. The underlying action driving the reasoning is that the model performs chain-of-thought prompts on itself as it works through a task, prompting itself to look for other answers. It's an interesting departure from what we've all been seeing in the model space. Model size used to be a determining factor in response accuracy, but if the model can reason with itself, what will be the response speed? This is something to keep an eye on, because smaller models are gaining momentum thanks to their lower costs.
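A minimal sketch of that kind of self-prompting loop is below. The `complete()` helper is a placeholder for any chat-completion API, and this illustrates the general chain-of-thought pattern, not OpenAI's actual implementation, which isn't public.

```python
# A sketch of a self-critiquing chain-of-thought loop: the model drafts,
# critiques its own draft, and refines. `complete()` is a placeholder
# for any LLM API call (assumption), not OpenAI's internals.

def complete(prompt: str) -> str:
    raise NotImplementedError  # wire up to your LLM API of choice

def reason(task: str, rounds: int = 3) -> str:
    answer = complete(f"Task: {task}\nThink step by step, then answer.")
    for _ in range(rounds):
        critique = complete(
            f"Task: {task}\nDraft answer: {answer}\n"
            "List any errors or gaps in the draft."
        )
        answer = complete(
            f"Task: {task}\nDraft: {answer}\nCritique: {critique}\n"
            "Produce an improved final answer."
        )
    # More LLM calls per task: slower responses, often higher accuracy.
    return answer
```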
A recent study is centered on an AI model called Future You, which may reduce anxiety by helping people feel better about how they might look and talk at a future age. A chatbot allows subjects to have realistic conversations with a future version of themselves. Researchers concluded that when test subjects interacted with a Future You version of themselves, it reduced their anxiety about getting older.
While there are positive aspects of the Future You, the researchers also have some cautions:
It is possible that the AI Future You won’t represent the real person and may alter the real, present-day person’s behavior.
Some personalities may become overly dependent on AI for decision-making, causing them to ignore their own judgment and intuition.
The scientists believe further research is needed to study the possibility of these negative issues and ensure the promotion of ethical AI development.
Many readers here will know that OpenAI’s long-range goal is to develop AGI. Although it has already made amazing progress with ChatGPT, the company continues to create models with more capabilities, such as its recent preview o1 model with benchmarked increased reasoning. Developing even larger models with increased capabilities requires huge amounts of funding. Toward that end, OpenAI just secured a staggering $6.6 billion in funding, which puts the company’s valuation at around $157 billion. Look for OpenAI to build more powerful models over the next 12 to 18 months. In addition, that staggering amount of funding will no doubt set the stage for more AI companies to bring in extraordinary funding rounds of their own.
Waymo is adding the Hyundai Ioniq 5 to its fleet of self-driving vehicles. This means that Waymo is seeing continuing demand and also likely wants a more modern EV platform to work with in the form of Hyundai/Kia’s E-GMP platform. The new Ioniq 5 also brings NACS charging, self-closing doors, and 800-volt charging, all of which are desirable features in an EV that could also make running a self-driving fleet even easier. NACS charging also means that Hyundai’s cars could theoretically take advantage of Tesla’s supercharging network without any adapters—potentially expanding Waymo’s network potential.
VAST Data announced InsightEngine, a solution aimed at delivering real-time retrieval augmented generation (RAG) in collaboration with NVIDIA NIM. InsightEngine builds on the company’s previously released Data Platform, which is designed to streamline the AI pipeline. By delivering a disaggregated, scalable architecture with a global namespace, the Data Platform removes data tiering and enables fast access. InsightEngine embeds vectorized data in the Data Platform’s scalable DataBase every time new data is inserted, which ensures that RAG happens in real time and the data is current.
Why is this important? For functions like support chatbots and other customer-facing interactions, is this “real-time” RAG—down to milliseconds—as important? Probably not. However, in the agentic era of AI, where application-specific agents are working together for more critical functions, this real-time nature is an absolute must. And VAST is unique in delivering this capability in conjunction with NVIDIA.
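For illustration, here is a vendor-agnostic sketch of the embed-on-insert pattern; VAST's actual APIs are not shown, and the toy embedding function is purely a stand-in for a real model.

```python
# Embed-on-insert: vectors are computed at write time, so retrieval never
# serves stale embeddings. Generic illustration; not VAST's API.

from dataclasses import dataclass, field

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model.
    return [float(sum(map(ord, text)) % 97), float(len(text))]

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

@dataclass
class VectorStore:
    rows: list = field(default_factory=list)

    def insert(self, doc_id: str, text: str) -> None:
        # Embed synchronously on every insert, not in a nightly batch job.
        self.rows.append((doc_id, text, embed(text)))

    def query(self, question: str, k: int = 3) -> list[str]:
        qv = embed(question)
        ranked = sorted(self.rows, key=lambda r: -dot(qv, r[2]))
        return [text for _, text, _ in ranked[:k]]  # fresh context for RAG
```

The contrast with batch pipelines is the point: an agent querying this store a millisecond after an insert already sees the new document.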
Equally important is VAST’s announcement of its Cosmos community. Cosmos is where AI practitioners can connect with peers, VAST, and industry experts to help plan and drive AI projects. If Cosmos realizes its potential, it could be a big win for customers—and for VAST.
MongoDB is the fifth most popular database distribution on the market, and by far the most popular NoSQL distribution. It is used by some of the largest organizations on the planet, and the company just released MongoDB 8.0. Yet many still view it as not ready for mission-critical duty. Is this a fair argument? Or is it just the traditional players sowing doubt to protect their market positions?
The challenges center on scale and reliability, which are at the heart of the sharding capabilities built into MongoDB 8.0. The document database architecture and its loose schema with no normalization typically do not serve an organization's needs the way a transactional database does. These databases are flexible and very good for mobile use cases, less so for entrenched OLTP use cases. While MongoDB might argue this and point to a customer or two, I find it difficult to see, say, an Oracle customer migrating away, especially as Oracle has opened up its database to support document, graph, and key-value models.
Maybe MongoDB challenges the OLTP giants at some point—but the market isn’t yet ready.
Barcelona-based liquid cooling vendor Submer has secured $55 million in funding as the hype—and genuine need—for alternative cooling methods has exploded in the AI era. In fact, studies by the International Energy Agency and other organizations show that datacenter energy consumption will more than double between now and 2030. However, for organizations looking to employ liquid cooling in their datacenters, the path is not so simple. There are multiple ways to cool infrastructure with varying degrees of efficiency.
Submer delivers a single-phase immersion cooling solution to the market. With this method, infrastructure is fully immersed in tanks filled with dielectric fluid that moves over the surface of the equipment with the aid of pumps.
Power usage effectiveness (PUE) for single-phase immersion cooling averages roughly 1.1 (1.0 is optimal). For reference, air cooling delivers a PUE of roughly 1.5. While that PUE number is attractive, immersion cooling is disruptive: from deployment to IT operations, it forces major changes for facilities teams, datacenter architects, and IT organizations.
Direct liquid cooling (DLC), otherwise known as direct-to-chip (D2C), is far less disruptive for datacenter operators to deploy. It doesn’t require the specialized plumbing, reinforced floors, special tanks, or equipment needed to lower infrastructure into tanks and lift it back out. The flipside is that DLC’s PUE isn’t quite as good as immersion cooling’s, averaging between 1.15 and 1.2 depending on the subtype.
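A quick back-of-the-envelope comparison makes the PUE gap tangible. PUE is total facility power divided by IT equipment power; the 1 MW IT load below is purely illustrative.

```python
# PUE = total facility power / IT equipment power (1.0 is ideal).
# Figures use the averages cited above, applied to a hypothetical
# 1 MW IT load to show what the gap means in practice.
IT_LOAD_KW = 1000  # illustrative 1 MW of IT equipment

for method, pue in [("air cooling", 1.5),
                    ("direct liquid cooling", 1.175),  # midpoint of 1.15-1.2
                    ("single-phase immersion", 1.1)]:
    total_kw = IT_LOAD_KW * pue
    overhead_kw = total_kw - IT_LOAD_KW  # power spent on cooling, delivery, etc.
    print(f"{method:>24}: {total_kw:6.0f} kW total, {overhead_kw:4.0f} kW overhead")
```

At this scale, moving from air to immersion saves roughly 400 kW of non-IT power, which is why operators keep looking despite the deployment pain.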
What’s the message from all of this? Despite the difficulty in deploying immersion cooling (and the smaller market opportunity), Submer secured $55 million. The cooling market is real, and datacenter operators and architects are still looking for the right solution—and vendor.
At its Dreamforce 2024 conference, Salesforce introduced Agentforce, a platform for AI agents designed to automate business tasks. Salesforce CEO Marc Benioff emphasized, “This is about humans and robots driving customer success together.” The success of AI depends not just on data but on having the right data, making effective data management critical. Agentforce, built on Salesforce’s Data Cloud, integrates data from internal and external sources, including ERP and SCM systems, to improve workflows—while also presenting unique benefits and challenges for businesses. Read more in my latest Forbes article.
The LogicMonitor Analyst Conference 2024 took place last week. It was an intimate gathering of analysts and customers, offering a closer look at the company’s strategies, innovations, and market directions. It felt more personal, with candid discussions covering key topics such as hybrid cloud monitoring/observability, platform vision, DevOps, AI and ML, and security. Customer success stories from McKesson, TopGolf, and AppDirect really brought these concepts to life, showing how LogicMonitor’s solutions make an impact. I also had some valuable face time with the executive team. LogicMonitor operates at the infrastructure level, providing monitoring and observability for IT environments. While not directly associated with ERP systems, LogicMonitor’s technology plays a complementary role by monitoring the infrastructure that ERP systems rely on. In other words, it ensures the uptime and performance of the underlying systems that support ERP platforms. There is potential for future integration with ERP environments.
Cisco recently announced that it is planning to wind down its support for LoRaWAN by the end of 2029. The news comes on the heels of the company’s restructuring plans and is likely an effort to direct more resources to shore up the recent decline in its networking business. IoT is a tricky segment to monetize, and the growing momentum for 5G RedCap—given its reduced power and ability to support industrial sensors—may also factor into Cisco’s decision to end its investment in and eventual support for the LoRaWAN standard.
I attended Infor’s Velocity Summit in Las Vegas last week, where the company introduced several updates and features to its industry-specific CloudSuite platform. The updates focused on refining core functionalities and adding tools such as AI-powered assistants and process mining. The company also reviewed with me the details of its soon-to-be-released sustainability modules, intended to support production, inventory, operational, compliance, and environmental goals. Infor also emphasized the importance of helping clients understand the business impact of adopting these new technologies.
Many ERP customers face the challenge of running legacy systems while wanting to transform to the vendor’s modern cloud-based version. This transition takes time and requires careful planning, updated processes, and the right team. Effective change management is essential to help employees adapt. Trusting your vendor and improving data quality is also key. Without clean data and a good partnership with the vendor, it’s hard to fully benefit from the new features offered by modern ERP systems.
Acumatica has released its Acumatica Cloud ERP 2024 R2 update with 350 new features based on feedback from over 26,000 users. The update includes a new user interface, AI integration, automation features, and industry-specific improvements that apply to the construction, distribution, and manufacturing industries as well as general business.
In a conversation I had with Acumatica’s chief product officer, Ali Jani, he said, “We prioritize understanding customer problems and align those requirements with our product strategy. We have built a vibrant customer community through communication and collaboration so that customers can engage with us and vote on features. Many of our product managers visit customers on-site to learn more about their needs.”
Change is challenging, but with transparency and trust, it can be managed. In my discussion with Acumatica, I emphasized how these elements are critical for customers to adapt to new or updated systems.
Oracle aims to transform Imperial College London with its Oracle Cloud ERP and Oracle Cloud HCM. By shifting from a legacy on-premises system to Oracle’s solutions, Imperial hopes to eliminate manual tasks, reduce costs, and improve employees’ overall experience. This change is necessary for Imperial College and other organizations in similar situations. Though the transformation may be challenging, modernizing these systems is crucial for maximizing an ERP solution and improving overall operations.
There is a lot of talk about the university researchers that used Meta’s Ray-Ban smart glasses to dox people in real time in public spaces. This is, first and foremost, well outside of Meta’s ToS—clearly a way to hack the glasses to enable a use case that isn’t authorized. That said, these privacy issues will continue to arise as wearable cameras on smart glasses become more prominent. In this context, we as a society should have more discussions about how and where they are used.
Hasbro is working with Epic Games to bring classic board games to Fortnite. The first game launching is Clue, which should be one of the most fun board games to play as a 3-D character. This is an extension of what Epic Games discussed earlier in the week during Epic Games Fest in Seattle, where it talked about unifying the Unreal Engine and Fortnite development environments to make it easier to ship games on both platforms. This is also how I believe Epic Games plans to build up its Launch Everywhere with Epic program, in which developers get lower royalties (from 5% down to 3.5%) for launching on Epic alongside all other platforms.
Every week brings a wave of new AI agent announcements, and Workday is the latest to join the trend. The company says that its new AI agents are designed to revolutionize HR and finance departments across various industries. These agents aim to automate routine tasks, such as generating onboarding materials and drafting financial reports, to free professionals for more strategic work. Workday also reports that AI can provide valuable insights to improve decision-making, like predicting employee attrition or identifying potential budget issues by analyzing data. The company claims that the agents can enhance employee experiences by personalizing communications, answering questions, and offering career guidance.
Workday believes that industries with complex HR and finance needs, including healthcare, financial services, and education, are poised to benefit significantly. According to Workday, with a focus on streamlining processes and improving efficiency, these AI agents can potentially transform how HR and finance departments operate. HR is a likely place for companies to start testing agents. Although there is some risk with compliance issues, HR workflows are typically very defined around a clear set of rules, policies, and procedures, with access only given to approved roles—a good fit for how AI agents work.
Cisco LoRaWAN EOL — On October 1, Cisco announced an abrupt exit from the LoRaWAN space. Sales end on January 1, 2025, and maintenance stops in 2026. The company offers no product migration path for any LoRaWAN products, including gateways. Fortunately, Cisco customers can easily find alternate suppliers, and replacement products are not expensive. I advise our clients not to read too much into this announcement. It’s most likely a cost-saving move as Cisco doubles down on faster-growing markets. Although LoRaWAN faces more competition from 5G RedCap, Bluetooth Class 1, LEO satellite constellations, and low-power mesh networks (e.g., Thread), the technology is still expanding in low-bandwidth use cases where low cost and long range are deciding factors.
RPi, Sony AI camera — I’m impressed with Raspberry Pi’s new $70 AI camera. It uses Sony’s IMX500 intelligent vision sensor with on-board inferencing. The camera connects to any RPi board with a standard flat cable and uses the well-known libcamera vision stack. Sony’s AI tools can convert TensorFlow or PyTorch models to run on the camera.
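For the curious, below is a minimal capture sketch using the real picamera2 library (the Python layer over the libcamera stack mentioned above). With the AI Camera, the IMX500 runs the converted model on the sensor itself and delivers its outputs alongside each frame as metadata; the exact metadata keys depend on the loaded network, so the inspection here is exploratory rather than a fixed API.

```python
# Minimal picamera2 capture on a Raspberry Pi with the AI Camera attached.
# The on-sensor inference results arrive as per-frame metadata; which keys
# carry them depends on the network loaded onto the IMX500.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

metadata = picam2.capture_metadata()  # per-frame metadata dictionary
print(sorted(metadata.keys()))        # look for the inference output here
picam2.capture_file("frame.jpg")      # the image itself, via the normal path
picam2.stop()
```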
Honeywell — At the company’s user group meeting last week, Jason Urso, CTO of Industrial Automation, described the confluence of process digitization and AI as “digital cognition.” More sensors (10x, he reckons) coupled with AI, more processing power, and 5G connectivity let customers see what they could not see before. It’s a compelling vision that aligns with my observation that AI is IoT’s killer app.
It appears that Apple’s first iPhone with its own 5G modem will indeed be the iPhone SE 4, which should sport the same processor as the iPhone 16 but with an all-new Apple Silicon 5G modem. While details on the modem’s specs are fairly limited, it is very unlikely to match the Snapdragon X71 currently in the iPhone 16; it will probably support fewer bands and, like its iPhone SE predecessors, will very likely lack mmWave support. Having the new 5G modem launch on Apple’s cheapest and lowest-stakes product is a good move for the company and gives it a much lower risk profile for testing out the new 5G chip. If this launch is successful, I expect that over the next year or two we’ll see Apple phase out Qualcomm’s chips in favor of its own.
October is International Women In AI month, and I was fortunate to attend Cadence Design Systems’ Fem.AI event in Palo Alto last week. As we talk about bringing more women into the field of AI, Cadence is speaking loud and clear through a $20 million investment and leading the Cadence Fem.AI initiative. There was an incredible lineup of women and allies in AI who spoke at the conference about the challenges and opportunities for gender parity in STEM and AI degrees, what can happen if students are supported and mentored in their AI journeys, intentionality, responsible AI, venture funding for women in AI, and more. I will publish a complete analysis of the event and initiative shortly. I will also have Nicole Johnson, president of the Cadence Giving Foundation, on a Six Five podcast in the coming weeks to dig into the program and what’s next, including some great new founding partner companies that have joined Cadence in supporting women in AI. Stay tuned for a great discussion!
Microsoft has announced significant improvements to Copilot’s capabilities and a redesign to make it more user-friendly and better-looking. Microsoft also talked about Copilot+ improvements to Windows 11 PCs that have compatible hardware, including the launch of Recall, which I believe is one of Microsoft’s most compelling AI features. While Copilot is getting many improvements, Microsoft is also making lots of adjustments to Windows 11 itself, essentially rebuilding the operating system while removing some apps and updating and enhancing others.
Europe is one of the key global players in quantum computing. IBM recently announced the establishment of a quantum datacenter in Ehningen, Germany, to provide easier access to cutting-edge quantum computing resources for the ecosystem of more than 80 European organizations using quantum computing and almost 1,000 Europeans with IBM Quantum learning badges. This center will provide companies, researchers, and governments the capability to run their workloads on utility-scale Eagle QPUs, which are planned to be upgraded to 156-qubit Heron processors later this year. It is important to note that client user workflow data (circuit inputs and outputs) will stay in the EU for regional services. The new datacenter is part of IBM’s long-term worldwide plan for quantum.
As IT infrastructure vendors investigate nuclear power to feed hungry next-generation AI applications, the trend potentially opens a new cyberthreat surface. Recently, the U.K. nuclear site Sellafield was fined nearly half a million dollars for inadequate cybersecurity controls, and penalties for other sites could follow. Nuclear energy is a promising power alternative for datacenters given its clean energy footprint, but the obvious danger of disrupted operations will require stringent protection and possibly new cybersecurity tools.
WNS, a global business process management provider, and Uniqus Consultech, a consulting firm specializing in accounting, ESG, and technology, have partnered to offer clients a comprehensive suite of sustainability and technical accounting services. This collaboration leverages WNS’s expertise in finance and accounting, including AI capabilities, and combines it with Uniqus’s areas of specialized knowledge. The partnership aims to address the growing demand for integrated sustainability reporting and complex accounting solutions. This includes services ranging from ESG compliance and decarbonization strategies to technical accounting advisory and financial system integration. The joint offering is designed to provide clients with a one-stop solution for streamlining data management, optimizing decision-making, and achieving sustainability and accounting goals. The companies report that the alliance has already yielded successful outcomes, such as assisting a biopharma company with post-acquisition integration of financial and accounting systems across multiple countries.
Oura finally announced the new Oura Ring 4—after two years—and it’s a little underwhelming. Yes, the company has introduced slimmer sensors, increased accuracy, and more sizes. But after trying the Samsung Galaxy Ring, I believe that Oura should have targeted a slimmer ring. Samsung’s ring is noticeably thinner and lighter than the Oura Ring 3, and based on the images of the Oura Ring 4, there doesn’t seem to be much of an improvement on thickness other than for the sensors, which never bothered me.
Ericsson announced the integration of Cradlepoint into its overall private 5G network portfolio on September 16 with the creation of a new business unit dubbed Ericsson Enterprise Wireless Solutions. It is a smart move, one intended to provide a broad set of services spanning neutral host; wireless WAN for fixed locations, IoT, and vehicles; and cellular-optimized zero trust, SASE, and SD-WAN. The consolidation should also improve Ericsson’s route to market, leveraging Cradlepoint’s established channel sales footprint and access to enterprise customers.
Ep 36: Hot Desk Podcast: Hurricane Helene Disaster Response, T-Mobile First Responder Solutions, Verizon Relief Grants, Red Cross Donations
https://moorinsightsstrategy.com/hot-desk-pod/ep-36-hot-desk-podcast-hurricane-helene-disaster-response-t-mobile-first-responder-solutions-verizon-relief-grants-red-cross-donations/
Mon, 07 Oct 2024 13:00:53 +0000
https://moorinsightsstrategy.com/?post_type=hot_desk_pod&p=43119
Melody Brue and Telco/5G SME guests Anshel Sag and Will Townsend discuss Hurricane Helene disaster response, Verizon, T-Mobile & more
It’s Episode 36 of the Hot Desk Podcast. Melody Brue has Telco/5G SME guests Anshel Sag and Will Townsend to discuss Hurricane Helene disaster response, T-Mobile’s first responder solutions, Verizon’s relief grants, and how to help in Helene recovery. Come along!
Watch the full episode here:
Listen to the audio here:
00:36 Hurricane Helene Disaster Relief
12:32 T-Mobile First Responders Solutions – Can it Complement AT&T FirstNet?
17:52 Verizon Disaster Relief Grants
21:33 Red Cross Donation
Disclaimer: This show is for information and entertainment purposes only. While we discuss publicly traded companies on this show, its contents should not be taken as investment advice.
TRANSCRIPT
Melody Brue: Hi, welcome to this week’s edition of the Moor Insights & Strategy Hot Desk podcast. I’m Melody Brue, and I am joined today by a couple of subject matter experts: Anshel Sag and Will Townsend. Hi guys, how are you doing?
Will Townsend: Hey, Mel.
Melody Brue: Glad to have you. So this is really kind of a special treat. Usually on this show we talk about all the tools, trends, and trajectories in people and marketing technology, enterprise apps, ERP, CRM, all those things. But there’s a lot of it, right?
Will Townsend: A long list.
Melody Brue: But today I really wanted you guys to help shed some light on what’s been happening in terms of Hurricane Helene and the disaster recovery. I recall when the Maui fires were happening, you guys were really on top of what the carriers were doing, how they were keeping people connected, some of the relief efforts. And I’ve noticed some things. I’ve also seen friends and family who weren’t able to reach their loved ones for days, posting on Facebook like, “Please, if you’ve seen these people…” And I continue to see the death toll rising. I mean, it’s just really tragic. But there are solutions, and, I hate to say this, but from a business standpoint there are opportunities for companies to offer their services and really make an impact on both their customers and their goodwill. These are really important things; when we talk about stadiums, for example, they have to stay connected in case of a disaster. Well, now we really have a disaster, and these companies are showing up and doing various things. So I’d love to hear from you guys. I don’t know which one of you wants to take this, but I have seen quite a bit from T-Mobile and their response. Let’s talk about what’s going on there and also the importance of the need.
Anshel Sag: I’ll chime in with some of the non-carrier-specific things I’ve seen, then I’ll let Will fill in, and if I have additional thoughts, I’ll add more. On the non-carrier-specific side, I’ve seen a lot of iPhone users using the new satellite communications feature to communicate with friends and family when cell networks were down; that was the first phase of what I saw. I saw a lot of people using Starlink to get internet access even when there wasn’t power. And then satellite has become a big supporter for both individuals and the carriers to bring data and network access where power lines are severed, fiber lines are severed, all these things. So, setting the stage: cellular is great, but satellite in these situations becomes a lifeline. We’ve seen a lot of talk about Starlink, maybe a little over-talk, as if it could have solved every problem under the rural sun. But the reality is that in these situations, satellite technologies, whether it’s Starlink, Globalstar, or any other satellite provider offered today, are all options. Even Google Pixel now has satellite communications on the Pixel 9 series. So if you’re a Pixel 9 user, or an iPhone 16 or 15 user, those are all going to have satellite communications when cell services are down. And as a slight note, when Verizon’s network services were down this week, I had cell communications on my iPhone, so I wasn’t totally out of service there. But I’d love to let Will fill in on the carrier side.
Will Townsend: Yeah, first and foremost, it’s just tragic to see what’s happened. Helene was a Category 4. I have a home in the Florida Keys, and I’m always very nervous when it’s hurricane season. Again, my heart goes out to those individuals that have been affected. I mean, Asheville, North Carolina, has been wiped off the face of the earth. It’s a beautiful place, and it’s just sad. Let me start with the satellite piece that Anshel brought up, and then I’ll go specifically into some of the efforts that the mobile network operators are employing to restore service. The good news is that, as of today, the majority of service has been restored. Number one, it’s clear to me that awareness of satellite communications in low Earth orbit is very high now. Starlink: Elon Musk was on X talking about turning on service and not charging customers as a stopgap. I agree with Anshel; Starlink has a lot of limitations from a bandwidth perspective and a cost perspective for serving as a cost-effective redundancy. But I think that’s great. I’ve also seen a lot of activity on social media with people talking about AST SpaceMobile, and Anshel and I have had time with AT&T and AST, and I did a podcast last week with Chris Sambar and Abel Avellan from AST SpaceMobile. A lot of folks on social media are saying, “Hey, let’s get AST out there. Shouldn’t AST functionality be integrated into a notebook?” It’s still very, very new. But satellite can become a very critical element in helping bridge the digital divide. We’re going to go into a little more detail there, but specifically, I was very impressed with all the mobile network operators and their efforts.
T-Mobile, as an example, brought in satellite cells on light trucks and very small aperture terminals to restore connectivity very quickly; Verizon provided charging centers; and AT&T leveraged a huge drone to function as a radio access network tower. It’s impressive to see all of the equipment that’s marshaled to do this. But you also have to remember there’s got to be a concerted coordination effort as well; you don’t want people bumping into one another. And these are times when even the mobile network operators, T-Mobile, AT&T, and Verizon, which are so competitive, come together and work together. I think it’s a fantastic demonstration of the relief effort.
Melody Brue: Yeah, I agree. I think that’s great. One thing that came up in both things you talked about, the device side and the network side, was bridging the digital divide, and the cost. Anshel, a lot of the phones you talked about are higher-end phones, but a lot of the areas that were impacted are rural areas where people are already strapped financially, so this is even more devastating for them. What are some of the efforts these companies are making, besides giving people better phones, because that’s just not an option, to help connect people who don’t have the means for a Starlink?
Anshel Sag: Yeah, so a good example of that is actually T-Mobile. They set up Wi-Fi and device charging stations, so if you don’t have cell service or satellite service, you can go to a place where there is Wi-Fi that they’ve set up. Obviously, that’s not the end-all, be-all solution. I actually believe, and this is something Will and I have talked about for quite some time, that as satellite communications becomes more prevalent at the high end, it will become table stakes, necessary and mandatory. I believe European and American regulators will mandate satellite service as a safety feature for all smartphones. I don’t know when that will happen, but I believe it will be mandatory within the next five to 10 years. And as a result, we’ll have a lot more people getting emergency services help where they need it, whether it’s a catastrophe like Helene or somebody in the middle of nowhere with no cell service. One of the big things T-Mobile actually did recently with Starlink was an emergency message test across basically the entire country, showing how, if there is a wildfire in your area and you happen to be in that zip code, they can fire off a wildfire warning to people who might otherwise have absolutely no idea there’s a wildfire, and potentially save lives.
So you can only get those text message notifications if you have cell service, but if you have satellite, you can still get those warnings pretty much anywhere you have line of sight to the sky. So I think with time, this capability will move down the chain. Satellite communications needs to become more prevalent; it can’t just be a premium feature, to your point. But I think that might take a few more years to become real. I believe 2026 or 2027 will likely be when it becomes mainstream, and I think a lot of people in the industry have that view. Other than that, I think there are a lot of opportunities for first responders to use satellite as well, and potentially to help families connect with their loved ones that way. But right now, you’re right, it is definitely a gap. My sister lives in South Carolina, so I’ve been tracking a lot of the flooding pretty closely. She told me that one of her favorite towns in North Carolina was completely wiped off the map. It doesn’t exist anymore, so it’s just horrible. I’m really thankful that where she lives in Greenville, I believe, has power now. It’s difficult in the South because it gets pretty hot, and AC is super important, especially with her and her two cats.
Will Townsend: But hey, Anshel, just to follow up on your point, the reason why it’s going to take a while to find its footing is that we’re still in the very early stages. AST SpaceMobile just launched its first five commercial satellites, and there are going to be a number of waves of satellites launched to provide the necessary coverage and support for LEO. Starlink is still in its testing phase with direct-to-cell service to unmodified cell phones. So I agree with Anshel: it’s going to take a few years to get there. Once we get there, it’s also going to be interesting to see how the mobile network operators will monetize this. Certainly in times of natural disaster, there should be some sort of suspension of charges, and we’ve seen mobile network operators suspend billing; that’s the right thing to do. But there is quite a bit of investment required to integrate NTN into these terrestrial networks. So it’s going to be interesting to see how the AT&Ts, the T-Mobiles, and the Verizons of the world, and also Vodafone, because Vodafone is now a big investor in AST SpaceMobile along with AT&T, monetize those investments.
Anshel Sag: And I’ll just add to your point, adding to my point, that the chipsets in the phones will soon have satellite connectivity integrated at the chip level. So it won’t be a separate chip like it is today, and it will follow the NTN network standard as opposed to proprietary stuff. That’s probably coming in the next year or two. Device makers won’t really have to do much in terms of spend on chips and software; it’s just integrated into the operating system. Google supports it, Apple supports it. So we’re getting there. It’ll be a couple of years, but it’ll be one of those things that’s part of the feature list, not even a special thing anymore.
Melody Brue: Yeah. So we talked a lot about how this affects consumers, and probably businesses as well, but one of the really important things I’ve seen, and this was announced before the hurricane, is how some of the network operators are prioritizing first responders’ communication. T-Mobile, with their T-Priority network slice for first responders: that one hits home for me. I have a brother who’s a law enforcement officer, in Florida actually, and I’ve seen him go through many of these events. I also have school-aged children, and I hate to say it, but those are the things that freak me out: if there’s a major disaster, people are jamming the networks, and the first responders need to have their communication prioritized. So talk about what T-Mobile and others are doing on that front.
Will Townsend: I’ll take that, Mel, and then let Anshel comment. So he and I were at the T-Mobile Capital Markets Day,
Melody Brue: Capital Markets Day, yeah.
Will Townsend: A few weeks ago, and this is when T-Priority was rolled out. It’s interesting because T-Mobile has the most ubiquitous standalone network, so they can actually start network slicing. And this isn’t the first slice they’ve announced; earlier in the year, they announced a security slice that complements their SIM-based SASE offering. So this is another example of how T-Mobile is leaning into its standalone leadership to slice a network and deliver deterministic connectivity. And it’s interesting because there’s also AT&T FirstNet, which Anshel and I have talked about on many occasions; that is a separate, dedicated network for first responders. So the first question I had when T-Mobile announced this was: okay, is this redundant, or is it going to complement what AT&T FirstNet has accomplished? And I think it’s a good thing. I think it can complement FirstNet. FirstNet is a dedicated network for first responders, but in situations where local law enforcement, fire, and EMS may not be on FirstNet (many are), the T-Mobile T-Priority slice can really serve a great purpose and, again, deliver that deterministic support. And you think about drones now; I’ll let Anshel take this one because he’s a drone fanatic. A lot of drones are now used to fly into these areas initially to survey the damage and determine where to bring mobile satellite infrastructure. Anshel, you want to talk about that a little bit?
Anshel Sag: Yeah. So first of all, AT&T does this on the commercial network regularly. Whenever there’s an outage, they verify with a drone, then they send out a work order, someone works on it, and then they send a drone to verify. But for this specific application, I believe AT&T also sent out drones for damage assessments in inaccessible areas. Think about all the roads that are washed out right now; you can’t send a truck out there. So they’re sending out drones to verify which cells need work and which ones are operational, and then they can start to plan how they’re going to recover once they do have access. This was a conversation I had with AT&T’s COO or CTO six years ago, if not longer. So they’ve really been working on this for a long time, and they’re the experts, in my opinion, when it comes to utilizing drones for this application. I wouldn’t be surprised if Verizon and T-Mobile are doing the same thing, but I have a feeling that AT&T’s is probably the most mature use of drones.
Will Townsend: Yeah, I would agree.
Melody Brue: It’s amazing how far things have come. I mean, I’ve been in and around the wireless industry for 25 years now, about as long as you’ve been in the business, since you started your tech career at, what, four? The youngster, as a toddler-
Anshel Sag: The bald one.
Melody Brue: But we saw the first introductions of the ruggedized phones, and then it was being able to-
Anshel Sag: Get the flip-out antenna for satellite.
Melody Brue: And so it’s amazing to see the advancements and how much it actually affects the public when something like this happens. And I know, Will, you had said or written something about how a lot of these companies are really transforming from consumer-oriented businesses to addressing public sector mobility service needs. I think that’s a really important thing, obviously, with the disaster stuff. How else are they helping beyond that? I know Verizon is doing some grants, and T-Mobile is offering some; like you said, some are suspending billing, things like that. These are important things right now, because some people have lost everything. Let’s talk a little bit about what those efforts have been.
Will Townsend: So Verizon in particular, and this was announced a few days ago, is earmarking a significant amount of money, $325,000, in a combined number of grants for the relief. And this includes the Red Cross; at the end of our podcast, I think we’re going to share that link and encourage our viewers and listeners to donate. It’s pretty impressive: $100,000 to the United Way of North Carolina, $50,000 to the Central Carolina Community Foundation, $25,000 to Feeding Tampa Bay, $50,000 to Volunteer Florida. This was a widespread event; it affected Florida, Georgia, Tennessee, the Carolinas. So it’s super impressive to see Verizon in particular really step up with these grants, because the devastation is going to be enormous. It’s already been declared a complete national disaster, and it’s going to take multiple billions of dollars to repair infrastructure and fully restore service, not only utilities but mobile network operations as well. It’s really encouraging to see companies like Verizon putting wood behind the arrow and donating money. T-Mobile has a long history of doing this; they’ve been very focused most recently on rural America, bringing grant opportunities to smaller municipalities that don’t have the resources to invest in infrastructure, because mobile infrastructure is often a chicken-and-egg problem: you need subscribers to justify the cost of building cell towers, deploying core infrastructure, and having switch locations and that sort of thing. So more is needed, but I really applaud the effort from Verizon and what they’re doing with the grants.
Melody Brue: Is there anybody else that you know of doing this? I know a lot of these companies often do these things on a regular basis, but not tied to a particular disaster or major event like this. You’re actually judging the Unconventional Awards in a week or so.
Will Townsend: I am. So thanks for that gratuitous plug. Mel.
Melody Brue: We’re losing Anshel; he’s leaving us. Thanks so much, Anshel, for joining us. I know you’ve got another thing. We’ll talk about this.
Will Townsend: Yeah, so this will be the third year that I judge the Unconventional Awards. This is something that T-Mobile for Business has been focused on, and it’s really about recognizing companies in the public sector, Mel, to your point, as well as enterprises that take a very different and innovative approach to mobility and, at the end of the day, deliver delightful experiences for customers and for the users of these services.
Melody Brue: I think that’s really interesting. I mean, there are a lot of companies that have foundations and give grants, but the interesting thing about the Unconventional Awards is that they allow T-Mobile to highlight its customers and how they’re using the solutions, so there’s a double outcome there, I think. So you talked about the Red Cross and how people can help. You’ve obviously looked into that; here’s the link, and it’ll also be in the show notes. What do you see them doing? Obviously this is an important thing for you personally, as well as in what you cover, and why this is needed. But-
Will Townsend: It’s all about the humanitarian effort. Some people have completely lost everything. I’ve seen videos of cars and trucks washing down streets in Asheville, homes that have been completely carried away. These people are going to have to find a way to start over, and again, my heart goes out to folks that have suffered so greatly in this tragedy. But what’s been really amazing, Mel, is to see people in these different areas coming together to help one another. I have a good friend that lives in Tampa, but he’s out there helping people remove drywall; he’s a handyman, so he’s providing tips and tricks on what folks can do to get things into some semblance of organization. And the Red Cross has been doing this for many, many decades, providing a meal, water, and comfort to families that have been displaced. It’s encouraging. And by the way, all the mobile network operators are making contributions to the Red Cross, not just Verizon. I’m planning to donate as well this weekend, and I would encourage all of our viewers and listeners to do the same.
Melody Brue: So there are multiple ways to give. The URL is up on the screen now; there’s a lot there, but it’s a very simple Google search as well. There’s also a text-to-donate option: you just text REDCROSS to 90999, and a mobile-friendly donation form pops up. So it makes it really easy for people to give. Well, thank you so much, Will, for coming on, for explaining some of these things, and also for your generous heart and for making people aware that people are in need and that it would be really great to do what we can. Anshel’s gone, but thanks also to Anshel for his amazing expertise; he’s always so good with the specifics of devices and how they work. I think this is something I’d like to do again: in certain situations, ask the experts. I know a little, but you guys know a lot. So thanks so much for joining us, and for everybody who’s watching, please do what you can, even if it’s just sending love and prayers and hope and happiness to people everywhere. Thanks for joining us, and we will see you next time.
Zoho Analytics Advances Self-Service BI With AI-Powered Tools
https://moorinsightsstrategy.com/zoho-analytics-advances-self-service-bi-with-ai-powered-tools/
Tue, 01 Oct 2024 23:06:55 +0000
https://moorinsightsstrategy.com/?p=43085
Zoho has invested in automation, low-code and AI to create a more powerful and user-friendly version of its Zoho Analytics data analysis suite for a range of user types.
Zoho Analytics’ latest release marks a significant milestone in its journey from a reporting tool to a comprehensive, AI-powered business intelligence platform. The update introduces more than 100 enhancements, emphasizing AI and machine learning capabilities. With this release, Zoho aims to make complex data analysis accessible and insightful for users across various roles and skill levels.
Key additions include generative AI capabilities for streamlined BI use, a data science and machine learning studio, and improved data management features. These updates position Zoho Analytics as a strong competitor in this space, offering advanced functionality at a competitive price point relative to rivals such as Tableau and Qlik.
AI-Powered Insights For All
Zoho Analytics’ updates include the platform’s AI assistant, Zia, which is now more deeply integrated into workflows. Zia offers assistance based on natural language queries, provides context, and enables users to trigger actions or develop custom models. This could further democratize data analysis, allowing a wider range of users to explore and gain insights from prepared datasets.
Zia Insights now delivers diagnostic analytics contextually, aiming to help users understand not only what’s happening but also why it’s happening. This shift towards “decision intelligence” seeks to empower users to make more informed, strategic choices. Additionally, with support for multiple languages and integration into popular collaboration platforms such as Microsoft Teams, Zoho is trying to ensure that insights are readily available regardless of what other tools a user may already be using.
The addition of the Auto Analysis feature further streamlines tasks by automating metrics calculations, report creation, and dashboard generation, potentially shortening time-to-value. Moreover, the integration with OpenAI, enhanced by retrieval-augmented generation, should enable Zoho users to leverage both public and proprietary data to generate more accurate and contextually relevant insights.
Bridging The Gap: Analytics And Machine Learning
The Data Science and Machine Learning Studio is another key addition to Zoho Analytics. Designed to integrate machine learning into analytics workflows seamlessly, it should empower both technical users such as data scientists and non-technical users like business analysts to build and deploy custom machine learning models without extensive coding. This integration should enable organizations to go beyond descriptive analytics, using the power of predictive modeling and forecasting to uncover deeper, forward-looking insights.
Features like AutoML (a no-code assistant for model building) and Code Studio (a Python coding environment for more advanced users) ensure that individuals with varying skill sets can contribute to developing and utilizing machine learning models. This again broadens access to analytical tools and ought to foster organizational collaboration across functional roles.
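To illustrate the Code Studio side of this, here is the kind of Python a more advanced user might write by hand. Zoho’s actual Code Studio environment and dataset bindings are its own; this sketch assumes a generic scikit-learn workflow, and the file and column names are made up.

```python
# Illustrative hand-built model of the sort a Code Studio user might
# create, shown with scikit-learn. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("campaign_data.csv")            # hypothetical exported dataset
X = df[["visits", "email_opens", "tenure_days"]]  # illustrative features
y = df["converted"]                               # illustrative binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

AutoML’s pitch is that it produces this kind of artifact without the user writing any of the above.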
Empowering Multiple Organizational Personas With Data
Zoho Analytics’ latest updates deliver targeted benefits to various organizational roles. Data analysts gain a more robust toolkit in the Data Science and Machine Learning Studio, where they can build custom models and automate tasks so they can focus on more strategic work. Business analysts can leverage Zia’s natural-language processing to interact with data conversationally. IT architects should gain greater control over data integration and governance, while the platform’s extensibility should enable more seamless integration with other BI tools. Executives should gain a centralized view of key insights from multiple BI applications through the BI Fabric.
These updates can be particularly beneficial for marketing teams. Enhanced data management and AI-powered insights should allow marketers to consolidate data from various sources, track campaign performance and identify high-performing segments. By understanding the “why” behind trends, marketers can make data-driven decisions and optimize campaigns for better results.
Has Zoho’s Investment In BI Paid Off?
Zoho Analytics’ transformation from its origins as Zoho Reports in 2009 reflects Zoho’s long-term vision for business intelligence. The platform leverages years of investment in automation, low-code development and AI. The result looks to be a powerful, intelligent and user-friendly solution that should simplify complex data analysis for more types of users and businesses of all sizes.
These improvements will likely help Zoho Analytics compete more effectively against advanced analytics tools by empowering citizen data scientists to leverage data effectively without needing extensive technical knowledge. Zoho has positioned itself well in the market by offering advanced AI and machine learning capabilities while still maintaining its strength in self-service BI.
Welcome to this week’s edition of the Moor Insights & Strategy analyst insights roundup. Conference season is heating up, which means we are crisscrossing the country to see and hear the latest from Microsoft, SAP, Teradata, and more, on top of our usual briefings and advisory sessions. If you’re wondering where we’ll be, check out the event listing toward the bottom of this update — and please don’t hesitate to reach out if you’d like to book a meeting, or just to arrange a face-to-face hello.
This week, we’re going to start you off with AST SpaceMobile’s satellite telecom technology before we take our usual tour of the many industry segments we cover. Enjoy!
Last week, Will Townsend (with content partner and podcast editor Anshel Sag behind the scenes) hosted Chris Sambar, president of Network for AT&T, and Abel Avellan, CEO at AST SpaceMobile, for a standalone “G2 on 5G” podcast: AT&T and AST SpaceMobile’s Vision to Bridge the Digital Divide.
Get a front-row view into the vision for a world where broadband connectivity is accessible to everyone, everywhere, through a revolutionary network of large satellites. Discover how AST SpaceMobile is pushing the boundaries of space-based connectivity, aiming to bridge the digital divide and bring high-speed internet access to even the most remote corners of the globe.
Don’t miss this opportunity to hear from true visionaries in the field!
This week, Robert will be at the Infor Annual Summit in Las Vegas, and LogicMonitor’s event in Austin. Melody will be attending the Fem.AI Summit in Menlo Park, and Microsoft’s Industry Analyst Event in Burlington, Massachusetts.
Last week, Patrick and Anshel attended HP Imagine in Palo Alto while Melody attended virtually. Anshel also traveled to San Jose for Meta Connect, and Melody attended Verint Engage in Orlando and SAP CX Live virtually.
Our team will be busy next week! Robert will be in Los Angeles at Teradata. Melody will be attending SAP’s TechEd event virtually and in San Jose for Zoomtopia. Bill will be in Austin for Embedded World NA. Will is attending the MWC Americas and serving as a judge for the T-Mobile for Business Unconventional Awards event in Las Vegas. Matt will be in San Francisco for AMD’s event, and Jason and Robert are attending the AWS GenAI Summit in Seattle. Stay tuned for updates from all of those exciting events!
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, Box, HP, Hybrid Cloud Infrastructure, IBM, Infoblox, Intel, Meta, and Pure Storage.
This week I published a primer on AI agents. This is already an area of intense activity for many of our clients, and I believe it is ushering in a new generation of AI capabilities in the enterprise. As opposed to simply training an AI model, an agent constrains the model to follow a specific set of rules or processes. While that may sound limiting, it’s actually the opposite, because it allows an enterprise to dictate aspects of how it wants AI to behave and execute a process. That’s important because a business can then figure out and measure the business impact that AI will provide. ROI is still the key to all technology decisions in the business world, and agents may be the key to building an ROI-centric narrative.
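To show what “constraining” a model can look like in practice, here is a toy Python sketch. It is not any vendor’s actual agent framework; llm_choose_tool() stands in for a real model call, and every name is illustrative. The point is the whitelist: the model can only trigger actions the enterprise has approved, which makes the process auditable and measurable.

```python
# Toy illustration of a rule-constrained agent. The model (stubbed out
# here) may only select from an approved set of tools; anything else
# is rejected before it can act.
ALLOWED_TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "escalate":     lambda order_id: f"Ticket opened for order {order_id}",
}

def llm_choose_tool(request: str) -> str:
    # Stand-in for a real model invocation that returns a tool name.
    return "lookup_order" if "status" in request else "escalate"

def handle(request: str, order_id: str):
    tool = llm_choose_tool(request)
    if tool not in ALLOWED_TOOLS:  # the constraint: no off-script actions
        raise ValueError(f"Tool {tool!r} is not permitted by this agent's rules")
    return ALLOWED_TOOLS[tool](order_id)  # an auditable, measurable step

print(handle("What is the status of my order?", "1042"))
```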
One of my favorite things about IBM is how committed it has been to open source. That commitment is demonstrated by mountains of contributions before and after acquiring Red Hat. Eclipse, Tomcat, and Redshift are all examples of how IBM has contributed to open source in a non-commercial way for the benefit of the entire industry. This week I took a look at AI Fairness 360, which IBM recently committed to the Linux Foundation’s LF AI projects. It’s an open source toolkit designed to help place better guardrails on bias and hate speech. It’s intriguing for three key reasons. First, it’s open source so anyone can contribute and use it, which provides a common shareable platform for this important aspect of AI. Second, it goes beyond words by using over 70 fairness metrics to understand if there is bias in underlying machine learning processes like credit scoring or fraud detection. And third, unlike a lot of AI tech, it’s not a black box, which means that its workings are open to public scrutiny. This should be a welcome aspect for privacy advocates. While many companies—such as AWS with its Bedrock Guardrails service—are also doing work in this area, the notion of a common cross-industry capability is quite interesting.
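AI Fairness 360 is pip-installable, and a tiny example conveys how its metrics work in code. The loan-approval table below is fabricated purely for illustration; the library calls themselves are from the real aif360 package.

```python
# A small taste of AI Fairness 360 (pip install aif360): compute two of
# its fairness metrics on a toy approval table. Data is made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":      [0, 0, 0, 1, 1, 1],  # 0 = unprivileged group, 1 = privileged
    "approved": [0, 0, 1, 1, 1, 0],  # the binary outcome being audited
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])
print("Disparate impact:             ", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data the disparate impact is 0.5 (the unprivileged group is approved at half the privileged rate), exactly the kind of signal the toolkit surfaces in processes like credit scoring.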
Recently, CodeSignal released one of the most interesting AI developer benchmark studies that I have seen. Like many great studies, it not only informs the reader but also prompts more questions for further research. CodeSignal sells a skills framework that many enterprises use to evaluate developers during the hiring process. The company now has more than 500,000 test results, so it has a very good feel for a wide range of developers and their relative skills. Now CodeSignal has let a bunch of different LLMs take the test to see what happened. I have a piece on this coming out next week, but the two big takeaways are that (1) AI is keeping up pretty well with humans and (2) the selection of LLM has a big impact on the results. Stay tuned for more on this one.
Meta AI announced the Llama 3.2 model with new 1B and 3B model sizes, which will be absolutely crucial for wearables and other consumer products that want to leverage LLMs but don’t have the memory or processing footprint to run 70B- or 90B-parameter models. The company also announced an 11B multi-modal version of Llama 3.2, which Qualcomm says it already has running on its latest smartphone SoC. For the 1B and 3B sizes, Meta has already qualified the models with Arm, Qualcomm, and MediaTek.
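As a quick illustration of how accessible the small models are, here is a hedged sketch using Hugging Face transformers. It assumes you have accepted Meta’s license for the gated model on Hugging Face and authenticated locally (e.g., via huggingface-cli login); the prompt is just an example.

```python
# Minimal text generation with the 1B instruct model. Requires the
# transformers and accelerate packages, plus access to the gated
# meta-llama repository on Hugging Face.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="meta-llama/Llama-3.2-1B-Instruct",
                     device_map="auto")
out = generator("Summarize in one line: small models enable on-device AI.",
                max_new_tokens=40)
print(out[0]["generated_text"])
```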
It may seem too early to replace a CEO with an AI model, but three Harvard professors ran an experiment along those lines that involved 344 students and executives versus GPT-4o, a new LLM from OpenAI. In the simulation of the U.S. automotive industry, the people made strategic decisions that spanned several simulated years. The objectives were to maximize the company’s market cap and to remain employed.
GPT-4o performed well on most metrics and responded efficiently to the market by designing products. However, the model didn’t respond well to “black swan” events: unpredictable shocks like market collapses, along with situations requiring human intuition and foresight, got the AI CEOs dismissed by the virtual board faster than the top human players. The experiment nevertheless showed that AI is a strategic resource, although the researchers concluded that accountability remains a human requirement.
Considering the overall results, our executives are probably safe for another decade.
Intel held what may be the most unsurprising launch event ever with the release of Xeon 6P (for performance) and Gaudi 3. I say this jokingly, as these products have both been talked about and covered for some time. This launch was and is critical because the company desperately needs to re-establish itself in the datacenter. The question is, did Intel succeed?
Xeon 6P is a performance beast. While I take any company-produced benchmarking with a grain of salt, the spirit behind the advantages Intel demonstrated against its rival AMD hit the spot. From IPC to performance per watt to raw performance, Xeon 6P is a significant leap forward. This chiplet design includes components at 3nm (compute) and 7nm (I/O) to deliver a 1.9x performance-per-watt improvement over its previous generation. Further, the company did a good job of demonstrating Xeon 6P performance across the datacenter, from traditional virtualized workloads to HPC and analytics to AI. One more thing Xeon 6P does is to match AMD’s gaudy specs. Cores, memory, I/O—it’s all there and at or near parity with EPYC. This takes away one of the biggest sticks the EPYC marketing team has been using to beat Xeon.
Gaudi 3 is what we expected it to be. That means a good enterprise inference platform that delivers strong performance-per-dollar value. Gaudi is an ASIC, not a GPU, but it is an ASIC with a strong software toolchain and ecosystem that will grow over time. And when Gaudi gives way to Falcon Shores (Intel’s GPU), that software ecosystem will move with it, putting the company in a better position on the AI training front. However, Gaudi 3 will not compare with NVIDIA or AMD until that time comes.
Pure Storage announced a number of updates to its portfolio as it kicked off its Accelerate London event. These included Real-time Enterprise File (with zero-move tiering), a new entry-level storage server (FlashBlade//S100), universal credits, and a VM assessment tool. I have a few thoughts on these.
I like how the company bundles its updates and releases them like a grouping of cloud services. It’s not just clever—it conditions customers to consume services like the cloud.
The company clearly still uses “simplicity” as a mantra and design principle. These updates really focus on abstracting complexity across three vectors: product, operations, and finances.
Zero-move tiering in particular is a great feature to incorporate, as it flips tiering from storage class to compute and network resource allocation.
Copilot for File continues with this “remove complexity” theme by enabling natural language management of the storage environment. This means that smart people in IT organizations can focus on doing smart things and not focus on specific syntax and semantics.
The VM assessment tool is another understated gem, as it allows IT orgs to rationalize their virtualization deployments and fully explore the what-if scenarios that every virtualization administrator is exploring.
Universal credits allow customers to spread their Pure Storage spend across services without leaving any budget on the table.
While other storage companies want to run away from their primary function to focus messaging and product on AI and nothing else, Pure continues to focus on solving the enterprise storage challenges that virtually every organization has. And when the AI craze has given way to the next big inflection point in tech, companies like Pure will still be relevant.
Cohesity plans to create a new data-security powerhouse through its business combination with Veritas. The global data protection and management sector is undergoing rapid change, driven by rising cyber threats, stricter regulations, and the increasing use of cloud services. In my new research paper, I explore how Cohesity has put itself at the forefront of this transformation. Its platform leverages AI and machine learning to detect threats, classify data, and protect critical workflows while utilizing RAG AI through its Gaia insights assistant.
Talking about the business combination, Cohesity’s president and CEO, Sanjay Poonen, noted, “This deal combines Cohesity’s speed and innovation with Veritas’ global presence and installed base.” The combined entity will serve more than 13,000 customers, including more than 85 of the Fortune 100, with projected revenues of around $2 billion for the 2025 fiscal year.
Data security ecosystems have been a key focus this year, with vendors in the space making strategic moves to enhance their technology and operations. Commvault has been active in this regard, acquiring Appranix, which offers technology for recovering cloud resources. Building on that move, Commvault has recently announced the acquisition of Clumio, strengthening its capabilities in cloud-based cyber resilience, particularly for AWS customers.
Clumio specializes in protecting AWS cloud data, including services such as Amazon S3, and will help Commvault improve its data protection and recovery offerings. The acquisition is expected to close in October 2024. The data backup and recovery market, valued at $12.9 billion in 2023, is expected to grow at a 10.9% annual rate.
I have followed the development of Box Hubs closely, and wrote about it when Box first announced the product, so I was glad to see Box Hubs become generally available recently. Hubs aims to address a common challenge enterprises face: organizing and publishing critical information so it’s easy to find and accessible to the right people inside and outside the company. The Box Hubs press release includes my thoughts on modern businesses’ challenges in managing and utilizing their growing volumes of data and content—and how AI-powered solutions like Box Hubs should improve content accessibility and value.
AWS unveiled the first-ever generative AI-inspired trophy at the Formula 1 AWS Grand Prix Du Canada. Engineers and creatives designed the trophy using the Amazon Bedrock managed service and Amazon Titan models, marking a pioneering instance of harnessing generative AI for trophy design. Inspired by the airflow dynamics of an F1 car, the design features a unique, wing-like shape that went through hundreds of iterations using GenAI. After the design was in place, a traditional silversmith in the U.K. crafted the silver trophy.
The associated PartyRock Sweepstakes, which invites participants to create their own trophy designs using a custom generative AI app, further highlights the innovative spirit of this endeavor. PartyRock is a broader initiative by AWS that seeks to democratize access to generative AI, enabling individuals and businesses to leverage its capabilities. The winner of this sweepstakes will receive a VIP trip to a 2025 F1 race. This initiative aims to showcase the transformative potential of generative AI in creative fields and actively engages the audience, inviting them to experience the possibilities of the technology in a fun and rewarding context.
Salesforce is acquiring Zoomin to enhance its Data Cloud by integrating new features and functionalities. Zoomin is known for organizing and delivering unstructured data across multiple platforms. By incorporating Zoomin, Salesforce looks to increase the use of unstructured enterprise data, which is often underutilized, to enhance the intelligence of its AI agents.
This acquisition has the potential to give businesses using Salesforce a deeper understanding of their enterprise data, leading to smarter interactions and better business outcomes, including improved customer experiences. The acquisition is expected to be finalized in the fourth quarter of Salesforce’s fiscal year 2025.
Infoblox recently announced its Universal DDI Product Suite; on the surface, it looks like it could deliver significant management simplification for hybrid multi-cloud services. It offers an orchestration capability that allows IT operators to streamline historically disparate DNS, DHCP, and IP address management processes across public cloud providers and on-premises deployments. It also has the potential to eliminate manual errors, lower operational cost, improve network availability, and reduce exposure to security risks through three new services.
Infoblox appears to be the first company to bring this level of consolidation to market, and it could provide the company with revenue upside in the near term as a first mover.
Microsoft announced that it is phasing out Microsoft Dynamics GP to make way for its successor, Microsoft Dynamics 365 Business Central. I appreciate how Microsoft has set clear, reasonable timelines for this transition. It’s also reassuring that many GP business partners are already well-versed in Business Central, which should ease the migration process for customers.
Dynamics GP product support ends on September 30, 2029, and security updates end on April 30, 2031. For most SMBs, Dynamics 365 Business Central is the logical next step because it offers a modern, cloud-based solution that enhances GP’s capabilities with advanced AI and seamless integration across the Microsoft ecosystem. For enterprises with more complex requirements, Microsoft also offers an alternative in Microsoft Dynamics 365 Finance and Supply Chain Management. This option offers an extensive ERP/SCM platform capable of handling more intricate needs, ensuring that businesses of all sizes can find the right fit as they move forward.
Meta announced the Quest 3S, which returns Meta to the $299 price point but now allows the company to unify its low-cost offering with its high-end offering (Quest 3). While the Quest 3S doesn’t have the same optics or design as the Quest 3, it does have many of the same capabilities at a lower cost and using the same processor. This makes things much easier for developers when building for Horizon OS, Meta’s software platform for its headsets. Meta also announced that it would be opening up its passthrough cameras with API access—a much-requested capability for mixed-reality headsets.
Meta also announced Orion, its AR glasses prototype. Last week I had the pleasure of trying out these glasses, which redefine the augmented reality category with an incredible form factor and a wide horizontal field of view of 60 degrees. I had the opportunity to demo many apps on Meta Orion, including the use of the EMG wearable for neural inputs combined with eye-tracking inside the glasses and hand-tracking. Meta has successfully combined in Orion many of the breakthroughs it has achieved through its research and trial and error. While Orion is not yet a consumer product—and still has some shortcomings in resolution and its chunky form factor—it has finally shown the industry and the world the level of functionality that’s coming soon. I suspect this product will reinvigorate the AR space.
Edge Impulse CEO Zach Shelby opened the company’s Imagine conference last week with a keynote covering the future of edge AI. He addressed the three big challenges holding companies back from shipping AI- and ML-based edge applications at scale: (1) generating industry-specific data, (2) optimizing AI and ML production workloads, and (3) deploying at very large scale (millions, not dozens or hundreds). While LLMs capture the headlines, it is domain-specific machine-learning techniques that are quietly revolutionizing edge application deployments. Shelby (and Gartner, by the way) predict that the majority of edge computing deployments (not devices) will use ML techniques by 2026, and I think that number is low.
Dave Kranzler, general manager of AWS IoT, joined Shelby on stage to emphasize the importance of edge intelligence and explain the cyclical nature of edge inference and cloud training. Edge ML provides detailed real-world data for training and updating domain-specific cloud models. Enhanced models improve edge inference, generate more high-quality data, and the cycle repeats. At the end of the talk, Kranzler and Shelby announced that Edge Impulse is now available in the AWS Marketplace.
The Google TV Streamer is now available, and here’s my first take. At $100, it competes with the Apple TV 4K ($149 with Ethernet) and the $100 Roku Ultra. Compared with the Chromecast device it replaces, the new streamer offers a big step up in performance and capabilities—but for twice the price. In particular, it’s a Matter controller and a Thread border router (hence my interest in the box). I’m testing the Matter features now, but it’s too early to offer an analysis. So far, I haven’t encountered any big surprises. Installation is easy, the streamer supports all my subscribed apps, the UI is snappy and less cluttered than most other streamers, and the 4K video quality is comparable to its competitors. Also, it appears to convert surround sound formats to the ones your AV system supports, similar to the Apple TV.
I’m impressed with the streamer, even though the Apple TV 4K (still my favorite) has better usability, slightly faster performance, and TV apps with fewer bugs. Although the box is sleek and attractive, Google didn’t get the memo that the default color for AV equipment is black. Fortunately, I have a can of black spray paint in the garage. One more thing: I hear persistent rumors of a “pro” version of the streamer, but I can’t confirm them yet. Stay tuned!
The second annual HP Work Relationship Index (WRI), a global study examining how people feel about their work, reveals that, despite a slight improvement, most knowledge workers still don’t have a healthy relationship with their jobs. The survey, which involved over 15,000 individuals across various industries and countries, suggests that AI and personalized work experiences may offer solutions to improve this situation. The WRI findings offer valuable insights into the evolving needs and expectations of the workforce. In my upcoming write-up from last week’s HP Imagine conference, I will delve deeper into some critical points. You can also read my colleague Anshel Sag’s initial thoughts on the event in the “Personal Computing” section of this MI&S Weekly.
Also, at its Imagine event, HP announced the acquisition of Vyopta, an analytics and monitoring provider for unified communications and collaboration networks. It represents a strategic move aimed at enhancing HP’s Workforce Experience Platform. This acquisition has the potential to provide HP’s customers with a more comprehensive understanding of their collaboration ecosystem, thereby facilitating data-driven decision-making to optimize employee experiences and productivity. By incorporating Vyopta’s features, HP could offer enhanced fleet management, comprehensive insights into device and application usage, and AI-powered recommendations. Integrating Vyopta’s extensive dataset may further differentiate HP’s Workforce Experience Platform, contributing to its ability to provide intelligent and productive workplace solutions.
Microsoft is establishing a dedicated Security Skilling Academy to invest in its employees’ ability to stay ahead of evolving threats and prioritize security in their roles, regardless of their technical background. This emphasis on continuous learning acknowledges the rapidly changing landscape of cybersecurity and equips employees to make security-conscious decisions. I also appreciate the company tying senior leadership compensation to security performance. These initiatives demonstrate Microsoft’s investment in its employees’ security ownership and cultivating a workforce that is well-informed, empowered, and accountable for maintaining a secure environment.
HP announced a bunch of new AI PCs, including the HP OmniBook Ultra Flip, which uses Intel’s latest Lunar Lake chipset and is a convertible version of the AMD-based OmniBook Ultra that HP announced during Imagine AI a little over a month ago. It also announced the EliteBook X, which slots in right below the EliteBook Ultra I reviewed as part of my Copilot+ PC roundup. The new model features the AMD Ryzen Pro processor, giving it a 55 TOPS NPU and helping to fill out HP’s consumer and enterprise notebook offerings. HP is demonstrating its ability to handle silicon diversity while keeping its new lineup coherent.
In addition to new PCs, HP also announced a new printer, the Envy 6100/6500, which launched alongside the company’s new Print AI feature. I believe that HP is innovating with this new Print AI feature and I think it will significantly improve the printing experience with its ability to understand what output the user is looking for even if the formatting is completely wrong. My biggest problem with it is that I believe HP still has to overcome printer driver issues and should prioritize the reliability of those drivers over enabling new AI features.
HP also announced a new software feature for its commercial clients called Z by HP Boost, which helps data scientists and other knowledge workers access otherwise idle GPUs from other workstations or from laptops that might not have discrete GPUs. While HP currently supports only up to four GPUs per workstation, I believe that the full potential of Z by HP Boost will be realized when many systems can be utilized together. This should be a very strong complement to the other AI services HP offers as part of its AI Studio.
Google Quantum AI scientists have created a new type of quantum memory that can reduce error rates in quantum computers. The research uses a surface code to correct errors, increasing the number of physical qubits that make up a logical qubit from 72 to 105. Adding even more qubits should suppress errors further, which might eventually yield error rates low enough for a practical quantum computer. The researchers also discovered that logical qubits in their system remained coherent longer than the physical qubits, creating the potential for quantum memory. This research brings us closer to a quantum computer that could outperform classical supercomputers.
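To make the error-correction scaling concrete, here is a minimal Python sketch. It uses the textbook surface-code relations rather than Google’s published data: a distance-d patch needs about 2d^2 - 1 physical qubits, and each increase of 2 in code distance divides the logical error rate by a suppression factor Lambda. The base error rate and Lambda values below are illustrative assumptions, not Google’s measured numbers.

# Illustrative sketch (not Google's code): how surface-code distance trades
# physical qubits for exponentially suppressed logical error rates.
LAMBDA = 2.0       # assumed error-suppression factor per distance step
BASE_ERROR = 3e-3  # assumed logical error rate at distance 3

def physical_qubits(d: int) -> int:
    # Data qubits (d^2) plus measure qubits (d^2 - 1) in a distance-d patch
    return 2 * d * d - 1

def logical_error(d: int) -> float:
    # Each +2 in code distance suppresses the logical error rate by LAMBDA
    steps = (d - 3) // 2
    return BASE_ERROR / (LAMBDA ** steps)

for d in (3, 5, 7, 9):
    print(f"d={d}: {physical_qubits(d):4d} physical qubits, "
          f"~{logical_error(d):.1e} logical error per cycle")

Under these assumptions, going from distance 3 to distance 9 costs roughly nine times the physical qubits but buys about an eightfold reduction in logical error, which is the trade the Google team is exploiting.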
The governor of Illinois, JB Pritzker, announced that the state has made another quantum investment, this time in EeroQ Corporation, which is based in Chicago. EeroQ will be investing $1.1 million in its headquarters located in the Humboldt Park area. The State of Illinois will provide tax credits to support these efforts. EeroQ is developing a quantum computer based on electrons on liquid helium. This new technology is yet to be proven. Illinois previously struck a deal with PsiQuantum, which uses a quantum technology based on photonics. The state is counting on advanced companies like EeroQ to create more jobs that will grow the Illinois economy.
I continue to be impressed with what Microsoft is doing with its Secure Future Initiative. Prioritizing security over new features and functions and building accountability and measurement into the product development process is not a trivial undertaking. What the company is doing goes far beyond current CISA Secure by Design pledges (which are great first steps) and is a model for others to follow.
Microsoft’s initial SFI progress report shared recently demonstrates an incredible level of transparency for the company. The addition of a new cybersecurity governance council and security skilling for all employees has the potential to level the playing field against bad actors and put defenders in the cyber defense driver’s seat.
The Carolina Hurricanes’ home arena has been renamed “Lenovo Center” thanks to a 10-year naming rights agreement with Lenovo. This expanded collaboration, building on an existing relationship since 2010 when Lenovo served as the team’s helmet decal sponsor, underscores the growing role of technology in shaping the modern sports fan experience. This partnership extends beyond a simple name change; as the hockey team’s official technology partner, Lenovo will integrate its technology throughout the arena to enhance the fan experience across the 150 events the arena hosts annually, including major concerts, comedy tours, and family shows that cumulatively draw in about 1.5 million guests each year. Fans can anticipate upgraded digital signage, interactive displays providing real-time stats and replays, and potentially even immersive experiences like augmented reality incorporated into the gameday experience. The facility will also integrate Lenovo technology to help streamline arena operations, which should improve ticketing, concessions, and overall venue management.
If you’ve been watching a lot of baseball before the postseason like my family has, you may have noticed an increase in Google ads. Google Cloud AI is working to enhance the baseball fan experience by using advanced analytics and real-time data processing. Baseball produces an astonishing 15 million data points per game, which helps teams strategize and gives fans many different real-time statistics to better understand the game. The technology provides in-depth data analysis, including how weather affects player performance. It also offers personalized content on the MLB Film Room and Gameday 3D sites, and improves broadcasts with real-time insights during games. My colleague Robert Kramer and I are following along to see how these technologies are deployed leading up to the World Series; check out future installments of our Game Time Tech podcast for more.
PUMA Group is partnering with Google Cloud to enhance its digital shopping experience. Using Google Cloud’s Imagen 2 on Vertex AI, PUMA creates personalized product images based on customer locations, with the aim of improving engagement and accelerating digital campaign launches. PUMA plans to further explore Google Cloud’s AI tools to continue improving personalization and customer experience.
“Google Cloud is helping companies in every industry improve the customer experience with GenAI-powered agents, and our partnership with Puma is an excellent example of this. The creative agent Puma has built with our leading Imagen technology is taking personalization to a new level—and driving real business results,” says Thomas Kurian, CEO of Google Cloud.
This highlights how retailers can use AI to sharpen their understanding of consumer behaviors, allowing them to adjust their products to fit what customers really want, while also making sure they have the right stock available when needed.
T-Mobile recently held its first capital markets day event since the pandemic. The operator has accomplished a lot over the past three years, growing its 5G fixed wireless access business by nearly 3x since launch and announcing strategic partnerships with OpenAI and NVIDIA to improve its customer services and mobile network operations.
The company has also transformed itself from a consumer-oriented business to one that addresses enterprise and public sector mobility service needs with first-responder and security services anchored to slices of its public network. T-Mobile appears to be taking full advantage of its complete 5G Standalone network and continues to use it as a competitive differentiator.
We’re live for Episode 35 of the MI&S Hot Desk Podcast. Join Robert Kramer and Melody Brue for a rundown of what’s up this week in collaboration, CX, ERP & SCM, and enterprise apps. In this episode we’re talking Verint Engage, Microsoft Dynamics, T-Mobile, Cohesity, Salesforce, Commvault, Box, and a little sports tech.
Watch the full episode here:
Listen to the audio here:
3:41 Verint Engage
11:48 Microsoft Dynamics
16:51 Cohesity
18:12 Commvault
23:12 Salesforce
30:39 Box Hubs Goes GA
Disclaimer: This show is for information and entertainment purposes only. While we may discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.
The Moor Insights & Strategy team hopes you had a great weekend!
This week, Patrick, Anshel, and Melody (virtually) will be in Palo Alto at HP Imagine, Anshel will be in San Jose at Meta Connect, and Melody will be at Verint Engage in Orlando.
Last week, Anshel attended the Snap Partner Summit in Santa Monica and Patrick, Jason, Melody, and Robert attended Salesforce Dreamforce in San Francisco (and virtually).
If you missed Will Townsend’s webinar with Zayo, “What’s Next for Your Network’s Foundation?”, it is now available on demand!
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, IBM, Intel, and Nokia.
Patrick was on Yahoo! Finance with the Morning Brief team to talk about Intel’s AI chipmaking partnerships, and joined CNBC to discuss recent reports that Qualcomm approached Intel about a takeover.
Microsoft, BlackRock, Global Infrastructure Partners, and MGX have partnered to raise $100 billion to build AI infrastructure; the group will invest in datacenters and energy infrastructure to support demand for AI computing power, primarily in the United States. The group’s initial objective is to raise $30 billion, with long-term expectations of expanding it to $100 billion with additional debt financing. The partnership’s main focus will be on datacenters and the power supply needed to run giant AI applications.
A group of Chinese researchers published a paper exploring memory in large language models. The scientists believe that LLMs have a unique type of memory akin to Schrödinger’s cat: it can only be observed when a question is asked. They used the universal approximation theorem (UAT) to explain how LLMs can dynamically fit inputs to outputs, making a model appear to remember information.
Experiments were run on LLMs by training them on poems, then testing the LLM’s ability to recall the poems based on very little information. It surprised me that the LLMs could remember entire poems based only on titles and authors, even though LLMs don’t store information in a traditional memory structure. The scientists wrapped up the experiment by comparing LLM memory to human cognition. They highlighted similarities and differences and emphasized the potential of the dynamic fitting capability for creativity and innovation.
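For readers curious what such a recall probe looks like in practice, here is a minimal sketch; it is not the paper’s code. It assumes a causal language model that has already been fine-tuned on a poem corpus (the gpt2 checkpoint below is only a stand-in) and prompts it with nothing but a title and author to see what comes back.

# Minimal recall probe, assuming a causal LM fine-tuned on poems.
# The gpt2 checkpoint is a stand-in, not the researchers' model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def recall_poem(title: str, author: str, max_new_tokens: int = 80) -> str:
    # Prompt with only title and author, then see what the model "remembers"
    prompt = f"{title}\nby {author}\n\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # greedy decoding probes stored associations
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(recall_poem("The Road Not Taken", "Robert Frost"))

If the UAT-style dynamic-fitting explanation is right, the continuation is being reconstructed on the fly from the prompt rather than retrieved from any explicit memory store.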
Salesforce hosted its Dreamforce event last week, and the big story was Agentforce—its portfolio of tools and capabilities that enable business users to create highly productive AI agents. Agents are an exciting technology in that they combine AI models with deterministic programming to let software drive a business process with minimal human intervention. Salesforce articulated how existing technologies (MuleSoft, Prompt Builder) and newer ones (Data Cloud), along with no-code tools, enable users to easily build agents. While this is very promising, I will caution that, like many developer toolsets embedded in application platforms, Agentforce will still need more work when it comes to enterprise or external deployment. We will need to look closely at how testing and maintenance will function in the new world of agents—and figure out what the right business model is.
A few days prior to Dreamforce, ServiceNow announced its latest AI capabilities in its Xanadu release. In addition to a raft of new features similar to what we are now seeing from Agentforce, ServiceNow is releasing a new database to improve performance and scaling, plus a host of new features in its Integration Hub. This aligns well with the thoughts on ServiceNow’s AI aspirations that Melody Brue, Robert Kramer, and I published in June.
IBM is continuing its strategy to cultivate a broad and deep IT automation portfolio. This week it announced its intention to acquire Kubecost, aligning with its Apptio acquisition of 2023. IBM is betting big on FinOps, and Kubecost delivers insights that improve the efficiency and cost profile of Kubernetes infrastructure.
Salesforce has launched Agentforce, an AI-powered suite designed to automate various tasks across an enterprise. Agentforce utilizes autonomous agents to improve efficiency in sales, service, marketing, and commerce. Salesforce emphasizes the platform’s ease of use, accuracy, and ability to deliver immediate results. The AI agents within Agentforce can perform tasks such as drafting e-mails, scheduling meetings, and offering recommendations based on customer data. Salesforce provides pre-built agents such as Service, Sales Development Representative, Sales Coach, and Campaign, while also allowing users to configure their own custom agents. The overarching goals of Agentforce are to empower sales teams, elevate customer experiences, optimize marketing campaigns, and streamline commerce operations through AI-driven automation. Robert Kramer and I talked about Agentforce on the latest episode of the Hot Desk Podcast, and I’ll have more to say in an upcoming analysis article.
The semiconductor space is about as hot as I’ve ever seen it, and it’s only getting hotter. We should be seeing new server CPUs from the two x86 giants hitting the market soon, and of course the AI accelerator market seems to have new, well-funded startups jumping into the game every day. Finally, Arm has driven a new dynamic through its penetration into the cloud (CSP) market that I believe will move downmarket to tier-2 cloud providers and eventually the enterprise.
The CPU is not commoditized. However, the server market is overserved from a scalar compute perspective. Core counts are ridiculously high and the integer performance of chips is beyond what traditional datacenter workloads require. Yes, more is good. And yes, faster is better. But for the enterprise IT organization, we have seen this “cores war” and billboard-style specification comparisons giving way to real value markers such as performance per watt (sustainability, datacenter capacity) and performance per dollar (ROI, TCO).
CSPs have very specific requirements around performance and power which translate into the very specific core counts and performance levels that the CPU makers tout. These are often not CPUs that will be found on price sheets. Furthermore, CSPs require a multi-vendor market. By having more than one supplier, prices are more competitive and different services can be offered.
CPU vendors need to focus marketing spend on real differentiation if they hope to play and win in the enterprise. That differentiation can be virtual machine density or it could be from application acceleration or something else—but the discussion needs to move beyond core counts and memory.
Finally, it is critical to understand that IT consumers have little faith in published benchmarks from vendors, be they CPU vendors or server vendors. When comparisons are made between your latest technology and a technology that is a generation (or often two) behind, buyers see through it. And when a company publishes a benchmark showing two, three, or four orders of magnitude better performance than the competition, its audience realizes the result is synthetic. Let’s move beyond the “benchmarketing” era and into some truth in advertising, so to speak.
Interesting numbers from last quarter’s financial reporting indicate softness in storage for some of the major OEMs, despite incredible revenue increases for server sales. Why is this? First, it’s worth picking at those numbers a little more closely. While server revenue was up dramatically across the board, those increases are attributed to AI sales. Non-AI business continues to be flat for most.
What we’ve seen in the market over the last few quarters is storage companies such as Pure and NetApp growing their business as companies like Dell and HPE have seen a flat market. I believe we can thank AI, even if indirectly, for this growth. The focus on AI has led to a focus on data, and this has led organizations to re-examine their storage environments and move toward storage solutions from companies that solely focus on storage and data management. Want more proof? Look at the incredible growth of companies like VAST and Weka—companies that don’t even put an emphasis on the storage element of their solutions.
Lenovo has been the outlier and has seen strong growth. This is due in part to its relatively small customer base and its footprint in the hyperscalers. While I don’t have specific insights, I suspect its enterprise storage business is in line with what we’ve seen from HPE and Dell.
I am certain the OEMs will regain their footing in the storage market. But I don’t believe it will happen until each company examines the way companies like Pure position their products and message to the market.
Veeam Software has acquired AI-powered startup Alcion, which focuses on cyber resilience for Microsoft 365. Alcion’s co-founder, Niraj Tolia, who previously played a key role in Veeam’s Kubernetes data resilience solution Veeam Kasten, has been appointed as Veeam’s new CTO. Tolia will lead the company’s product strategy for Veeam’s new Data Cloud, integrating Alcion’s AI and security features to enhance data resilience. This acquisition is part of Veeam’s broader expansion, which also includes a recent partnership with Lenovo to provide the TruScale Backup Service.
A recent Adobe study highlights the escalating concerns of U.S. consumers regarding misinformation in the lead-up to the 2024 presidential election. The findings reveal that most respondents are worried about the impact of misinformation on the election and have become less trusting of online content.
The study also found a growing demand for transparency in how digital content is created and edited, with a large majority (93%) of consumers emphasizing the importance of understanding content origins and modifications. This demand is particularly strong for election-related content: 95% of respondents said they wanted to see attribution details attached to such information. A significant portion of respondents (48%) have reduced their social media usage due to the prevalence of misinformation, with 89% believing social media platforms should take more decisive action. Most (74%) feel the U.S. government’s efforts to combat online misinformation are inadequate.
Adobe has done a nice job of calling attention to the need for more transparency in digital content—particularly notable for a company whose tools are designed to manipulate images (though not in a harmful way). These types of studies are a good way to educate people about the rise in misinformation while promoting the Adobe-led Content Authenticity Initiative.
ServiceNow has introduced AI Agents for automation and intelligent problem-solving to change customer and employee experiences. ServiceNow’s vision for AI Agents is not entirely unique. It is to leverage increasingly powerful AI models to create agents capable of independently identifying and resolving problems. These agents are built to operate within predefined company parameters and with human oversight, ensuring a mix of autonomy and control. It is the human oversight part that I think sets ServiceNow apart from competitors in these early days of AI agent announcements.
Ultimately, ServiceNow envisions a future where humans act as supervisors, guiding teams of AI agents that proactively manage workflows across departments. This represents a significant shift in the human-AI relationship, with AI agents taking on—not taking over—a more active and collaborative role in driving business productivity and transformation. ServiceNow’s initial focus is on customer service management and IT service management.
Introduced last week, Salesforce’s Agentforce is a suite of AI-powered agents designed to enhance business functions. Let’s review a few benefits and challenges of integrating Agentforce with ERP and SCM systems. First, the benefits:
Automation — AI agents can handle repetitive ERP and SCM tasks such as order processing, inventory management, customer service, procurement processes, etc.
Data Integration — Tools such as Salesforce’s MuleSoft allow data to flow between systems, although this can also pose challenges (see the sketch after these lists).
Scalability — Agentforce supports increased ERP and SCM workloads without the need for additional human resources.
Predictive Analytics — Salesforce can enable AI-driven insights drawn from enterprise data that resides in ERP and SCM systems to improve decision-making.
Here are some of the challenges:
Integration Complexity — Integrating Agentforce with ERP and SCM systems often requires IT expertise; trust in the integration process is critical to avoid operational disruptions.
Security — Ensuring the protection of sensitive ERP and SCM data when using AI agents deserves significant attention.
Trust — Users must trust that the data handled by AI agents in ERP and SCM systems is used properly. Errors could impact key functions in the enterprise systems.
Transparency — It’s important to understand how AI agents make decisions. Transparent AI processes can build trust by helping users understand how decision-making happens in areas such as supply chain optimization and demand forecasting.
More to come on all of this in my upcoming article digging into the details of Agentforce’s impacts on enterprise systems.
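To make the integration and trust points above concrete, here is a deliberately simple, hypothetical sketch of an agent “tool” that consults an ERP system before acting. The endpoint, field names, and guardrail logic are invented for illustration; they are not Salesforce’s or any ERP vendor’s actual API.

# Hypothetical sketch of an AI-agent tool that reads from an ERP system
# before acting. Endpoint and field names are invented for illustration;
# a real Agentforce deployment would route this through MuleSoft or similar.
import requests

ERP_BASE_URL = "https://erp.example.com/api/v1"  # hypothetical endpoint

def check_inventory(sku: str) -> int:
    # Return on-hand quantity for a SKU from the (hypothetical) ERP API
    resp = requests.get(f"{ERP_BASE_URL}/inventory/{sku}", timeout=10)
    resp.raise_for_status()  # fail loudly; silent errors erode trust in agents
    return resp.json()["quantity_on_hand"]

def agent_confirm_order(sku: str, qty: int) -> str:
    # Guardrailed action: only confirm if the ERP data supports it
    on_hand = check_inventory(sku)
    if on_hand < qty:
        # Transparency: surface the reason instead of acting autonomously
        return f"Escalate to a human: only {on_hand} of {sku} in stock."
    return f"Order confirmed: {qty} x {sku} ({on_hand} on hand)."

print(agent_confirm_order("SKU-1234", 10))

The design point is the escalation path: an agent that surfaces its reasoning when the data does not support an action is far easier to trust with ERP and SCM workflows than one that acts silently.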
Cisco recently announced a second round of layoffs for the year, affecting 5,600 team members, or 7% of its overall workforce. It was a widely anticipated move, given the softness in Cisco’s networking business and an uncertain economy heading into a U.S. presidential election. I expect that the company will use the cost savings to reinvigorate demand for all its infrastructure. This applies especially to cybersecurity, as the integration of Splunk continues to strengthen Cisco’s offering to the market.
Globant is acquiring Blankfactor, a U.S.-based IT consulting firm specializing in payments, banking, and capital markets. This acquisition should strengthen Globant’s financial services offerings, particularly in card issuing, merchant acquiring, and securities finance. Blankfactor’s expertise in consulting-led product engineering, cloud technologies, and AI solutions should complement Globant’s capabilities and help it better serve clients in the rapidly evolving financial services industry.
Amazon has added PayPal as a Buy with Prime checkout option. This builds on last week’s news of PayPal’s expanded partnership with Shopify. This is noteworthy because it expands PayPal’s reach in the e-commerce space, given that it is currently not a payment option on Amazon’s main platform. This strategic move strengthens PayPal’s position in the market and offers more choices for online shoppers. Under the leadership of new CEO Alex Chriss, PayPal seems to be making some strides in creating products and services that compete with newer rivals such as Stripe for payments and Apple for mobile wallets.
HTC has announced a new VR headset, the VIVE Focus Vision. HTC appears to be building this headset for both wireless and wired streaming from a desktop PC—while also enabling it to function as a fully standalone headset. It features mixed reality passthrough for AR-like experiences thanks to two RGB front-facing cameras. It seems that HTC wants this headset to become the standard for PC VR applications; it’s equipped with a 120-degree FoV, 90 Hz LCD panels, and a Qualcomm Snapdragon XR2 chipset. It also has ample RAM (12GB) and storage (128GB, with up to 2TB of expandable storage via MicroSD), as well as swappable batteries. At $999, it will have a hard time competing with Meta’s Quest 3, but given its eye-tracking capabilities and arguably better ergonomics, there is a bit of premium capability. That said, I believe the market fit for this headset is fairly small at the price.
Snap, Inc., parent company of Snapchat, announced a new pair of AR glasses, the fifth generation of its Spectacles family. This is the second generation of Spectacles to have dual see-through waveguide displays. These new Spectacles are powered by a Qualcomm Snapdragon processor running a new Snap OS operating system designed to work with Snap’s developer tools and be compatible with the Snapchat app. While the horizontal field of view is only 46 degrees, the vertical FoV is much taller and lends itself well to porting Snapchat AR lenses. While the glasses themselves appear quite bulky, I do believe that Snap is taking the right approach to AR by embracing AI and natural interfaces like hand tracking and voice. Other than their appearance and limited FoV, these glasses are still very much targeted towards developers at $99 a month for 12 months.
Matter at the tipping point — At CES 2023, I predicted that Matter, the smart home standard from the Connectivity Standards Alliance, would hit its tipping point in 2025. I figured the CSA and its member companies would iron out the first wave of Matter and Thread bugs during the first year—improving usability, adding more device types, and paving the way for the second wave of commercial products. As it played out, first-year deployments revealed new (but not unexpected) concerns about usability and deployment. This year, Matter and its members addressed the second-order problems, and the Thread Group released Thread 1.4 with essential Matter-related enhancements. I covered Thread 1.4 in these pages in my September 6 weekly update.
Meanwhile, Apple, Amazon, Google, and Samsung have turbocharged Matter’s market acceptance by incorporating Thread and Matter into high-volume consumer products. Google, Apple, and Amazon smart speakers and hubs have Thread and Matter support built-in, so millions of consumers already have the technologies in-house, even though they might not be aware of it. Also, if you have an iPhone 15 Pro or newer with iOS 18, your phone can directly connect with Thread-based Matter devices via its built-in Thread radio—no hub required.
Better usability, increased ecosystem support, and direct device connections combine to reduce initial adoption barriers and improve user experiences. So, as CES 2025 approaches, CE manufacturers are rolling out waves of new products, such as Eve’s recently released wall-mounted, Thread-connected light switch. Using that switch with a Thread-enabled iPhone is the closest thing to a one-click Matter installation I’ve seen. It does look like 2025 will be the tipping point where Matter’s market share accelerates on its way to becoming the leading smart home ecosystem for new products by the end of 2026.
Ikea and Samsung collaborate on Matter support — Last week, Ikea added Matter support to DIRIGERA smart home hubs via a software update. Ikea’s smart home product line, introduced in 2012, includes lighting, remote switches, air purifiers, motorized blinds, and Wi-Fi speakers. Ikea has always used the well-established Zigbee protocol for these products. In 2022, the company launched DIRIGERA for smartphone integration.
Ikea is on the CSA board of directors and a strong Matter supporter, so I wasn’t surprised that the new hub was “Matter-ready” from the start. In this case, Matter-ready meant that a future software update could add “bridging” support, which the Matter specification defines in detail. Bridges translate Zigbee protocol to and from Matter protocol, allowing Matter ecosystems such as Amazon, Apple, Google, and Samsung to control the Ikea non-Matter (Zigbee) devices. Ikea followed through, making good on the promise of a Matter upgrade.
Also, last week, Samsung announced native SmartThings support for DIRIGERA and its Matter bridge. Of course, customers may choose a different smart home system (Apple, Google, Amazon, or other), but I assume Samsung has thoroughly tested SmartThings with Ikea’s bridging. Hear that alarm bell? It’s Ikea and Samsung with a wake-up call for non-Matter smart home suppliers to offer Matter bridges as soon as possible. Proprietary hubs are rapidly becoming obsolete.
T-Mobile launched a new network slice called T-Priority, which is specifically designed to be prioritized above all other users on its network. This service depends on the company’s 5G Standalone network, which it will be upgrading to 5G Advanced by the end of this year. Additionally, this gives it a service to compete with AT&T’s FirstNet, which has been the standard for most first responders. I believe this service will be complementary in many ways and will potentially serve as a backup in some applications as well as a primary line for new 5G applications thanks to its prioritization and larger bandwidth resources. I believe that T-Priority could be very powerful when many emergency services are sharing the same limited FirstNet spectrum and could benefit from added capacity on demand.
The new iPhone 16 might not be selling as well as Apple had anticipated, but the reality is that plenty of consumers are aware that many Apple Intelligence features, including the much-hyped new Siri, won’t be available until next year. Based on comments from T-Mobile’s CEO, it makes sense that there might be a slower start for iPhone sales in Q4, with an eventual ramp-up once the AI features become broadly available. I also believe that this could explain why so many carriers have offered such sweetheart deals on the new iPhone 16 Pro series. This is partially because of the slow rollout of Apple Intelligence, but I believe it’s also because the base iPhone 16 is the closest to the Pro in terms of specs that it has ever been.
8×8 announced the availability of its Video Elevation feature for 8×8 Contact Center. This capability enables contact center agents to initiate one-way video interactions with customers so agents can help quickly resolve issues that may otherwise require a service call or a lengthy discussion. I really like this “show me what you see” functionality, which I have used in different scenarios—most recently when diagnosing a router issue with AT&T. The solution should ensure that the caller and agent are discussing the same thing when trying to resolve an issue. As AI agents start proliferating in the contact center, it might be some time before they can diagnose issues using multimodal recognition.
Zoom has expanded its contact center offerings with three new tools to streamline agent workflows and boost efficiency. Zoom Virtual Agent uses conversational AI to handle routine customer inquiries, freeing human agents for more complex tasks. Agent Assist leverages generative AI to provide real-time support and guidance during customer interactions. The Quality Management tool offers automated transcription and scoring of interactions to facilitate performance evaluation and coaching. These features have become more or less table stakes in contact centers. The significance in this case is that they show Zoom’s strategic focus on leveraging AI to optimize contact center operations and enhance both agent and customer experiences. In addition, Zoom’s AI is quite good. This is suggested by the number of awards and accolades it has received, but also from my own experience. For example, Zoom’s AI noise cancelation is so good that I’ve had people in Zoom meetings apologize for their barking dogs or other background noise that I couldn’t even hear. And more than once when I’ve had to miss or join a meeting late, I’ve been able to accurately and quickly get up to speed thanks to Zoom’s AI meeting summary.
Qedma is one of IBM’s application partners in the initial release of the Qiskit Functions Catalog. Its QESEM (Quantum Error Suppression and Error Mitigation) product is designed to suppress noise created by decoherence and calibration errors in QPUs. That means users can accurately run quantum algorithms on noisy QPUs. According to Qedma, QESEM achieves better results than algorithms that are run without error mitigation.
The QESEM workflow begins by compiling quantum circuits into operations compatible with the QPU. It uses both native and additional operations calibrated by Qedma. Following that, Qedma characterizes errors in the newly compiled circuits. Based on error data, the circuits are reconfigured for optimal QPU execution and then run on the QPU. Lastly, classical postprocessing refines the results and provides estimations with error bars for measured observables.
QESEM provides unbiased output with errors that are primarily statistical and reducible by increasing QPU time. It offers scalability across different qubit numbers without a proportional increase in required QPU time. It also supports several state-of-the-art QPUs, including superconducting qubits and trapped ions. Even though those two cover the biggest part of the quantum market, I expect this will be expanded to other modalities over time.
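The claim that QESEM’s residual errors are primarily statistical, and hence reducible with more QPU time, is worth making concrete. Here is a small illustrative simulation, not Qedma’s code: an unbiased estimator’s error bar shrinks as one over the square root of the shot count. The observable and its expectation value below are assumptions for the demo.

# Illustrative simulation (not Qedma's API): statistical error bars shrink
# as 1/sqrt(shots), so more QPU time buys tighter unbiased estimates.
import math, random

TRUE_EXPECTATION = 0.3  # assumed <Z> value of some measured observable

def estimate(shots: int) -> tuple[float, float]:
    # Monte Carlo stand-in for repeated, error-mitigated measurements
    p_minus = (1 - TRUE_EXPECTATION) / 2  # probability of a -1 outcome
    minuses = sum(random.random() < p_minus for _ in range(shots))
    mean = 1 - 2 * minuses / shots
    stderr = 2 * math.sqrt(p_minus * (1 - p_minus) / shots)
    return mean, stderr

for shots in (100, 10_000, 1_000_000):
    mean, err = estimate(shots)
    print(f"{shots:>9} shots: <Z> = {mean:+.3f} +/- {err:.3f}")

A hundredfold increase in shots tightens the error bar tenfold, which is exactly the trade a user makes when buying more QPU time to refine an estimate.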
Ivanti is the latest cybersecurity company to disclose a vulnerability that has been exploited in a cyberattack. The endpoint protection provider recently revealed a critical security flaw impacting its cloud service appliance that allows remote access to restricted functionality. Ivanti reports that a limited number of customers have been affected, but regardless of the blast radius it points to broader concerns about the company’s software development process. The timing is not ideal, given the scrutiny over CrowdStrike’s flubbed endpoint protection update. Consequently, Ivanti would be wise to provide additional details and deeper transparency about what it is doing to prevent future vulnerabilities.
I do not believe that most of the talk about Qualcomm acquiring Intel is credible. While I do believe that Qualcomm could potentially absorb or acquire Mobileye, even that would be questionable considering the current FTC climate. Realistically, there’s no way that Intel would sell its PC division, Wi-Fi business, or any of its other businesses—other than potentially its networking business—to Qualcomm. Intel’s PC business is keeping the company afloat right now, and selling it would be corporate suicide. I don’t know which divisions Qualcomm has expressed interest in, but this rumor has been bubbling up for weeks. Frankly, I believe that Qualcomm’s greatest interest in Intel is in supporting its foundry business to enable it to be a more competitive player to challenge TSMC.
The National Football League and Amazon Web Services have renewed their technology partnership, which began in 2017. A key development is the introduction of a new AI-powered Next Gen Stat that changes how tackles in football are analyzed. The Tackle Probability machine-learning model predicts the likelihood of a defender successfully making a tackle during a play, helping to identify the most reliable tacklers and the most elusive ball carriers. The Next Gen Stats platform, supported by AWS, collects over 500 million data points each season, providing detailed statistics and different viewing options for fans. This collaboration also includes tools like the Digital Athlete for injury prevention and the Big Data Bowl, which encourages the use of data insights to improve the experience for fans and players. Check out the details.
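To illustrate the kind of model a stat like Tackle Probability implies, here is a toy sketch. The features, synthetic data, and logistic-regression choice are my assumptions for illustration; the actual AWS/NFL model is trained on live player-tracking data at much larger scale.

# Toy tackle-probability model; features and data are invented for
# illustration and do not reflect the actual Next Gen Stats model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical per-play features: defender distance to the ball carrier
# (yards), closing speed (yards/sec), and pursuit angle (degrees).
X = np.column_stack([
    rng.uniform(0, 10, n),
    rng.uniform(-2, 8, n),
    rng.uniform(0, 180, n),
])
# Synthetic labels: tackles are likelier when close, fast, and head-on
logits = 2.0 - 0.6 * X[:, 0] + 0.4 * X[:, 1] - 0.01 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
# Probability for a defender 2 yards away, closing at 5 yd/s, 20 degrees off
print(model.predict_proba([[2.0, 5.0, 20.0]])[0, 1])

Aggregated over a season, per-play probabilities like this are what let the league rank the most reliable tacklers and the most elusive ball carriers.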
SAP provides an ERP-centered approach to carbon management that uses AI to maintain data quality and simplify reporting. Sustainability data helps enterprises track and manage carbon footprints across operations, share sustainability data with partners, and integrate carbon accounting into financial decisions. Though ERP systems can be complex, they are vital for meeting today’s environmental and regulatory demands. By making use of these features, businesses can ensure compliance, improve efficiency, reduce costs, and make informed decisions aligned with sustainability goals.
Look for an upcoming research piece exploring how SAP demonstrates the impact of ERP on sustainability.
AST SpaceMobile’s launch of five commercial low earth orbit satellites on September 12 was a watershed event in supporting direct-to-unmodified-smartphone satellite connectivity. AT&T has been working with the company behind the scenes for nearly four years, and the operator’s financial investment signals confidence in the viability of satellite communications to bridge terrestrial mobile network coverage gaps. Moor Insights & Strategy will be publishing a podcast soon highlighting a conversation with Chris Sambar, president of AT&T Network, and Abel Avellan, CEO of AST SpaceMobile, discussing the launch and its broader implications.
Adobe Launches AI-Driven B2B Marketing Solution
https://moorinsightsstrategy.com/adobe-launches-ai-driven-b2b-marketing-solution/
Fri, 20 Sep 2024 18:24:51 +0000
https://moorinsightsstrategy.com/?p=42630
Quite apart from its huge business in desktop creativity apps, Adobe is expanding its ability to help B2B marketers manage how they interact with prospects.
Adobe recently announced the general availability of its Adobe Journey Optimizer B2B Edition. With this launch, Adobe aims to use generative AI to give businesses a deeper understanding of the different types of customers they attract. By enabling more personalized experiences, this has the potential to improve B2B sales processes and conversion rates.
AJO B2B Edition provides a unified view of each customer across various channels. It was built on the Adobe Experience Platform, part of Adobe’s fast-growing Digital Experience segment, which is on track to become a billion-dollar business in its own right alongside Adobe’s much bigger business in creative apps (Photoshop, Illustrator and so on).
Navigating The Complexities Of B2B Sales
B2B sales, particularly in the enterprise sector, involve navigating intricate sales cycles and a complex web of decision makers. While IT departments once held sway over technology purchases, today’s buying landscape is far more nuanced. Decision making has become decentralized, with various departments including marketing, HR and line-of-business units influencing purchases. For instance, in a software sale, the finance team might scrutinize the ROI of the application while HR evaluates its potential impact on employees and IT assesses its technical integration with existing systems.
This distribution of responsibilities makes identifying key stakeholders challenging, and often extends the time required to close deals. GenAI can help by analyzing data to pinpoint decision makers, key roles and company priorities, enabling a sales team to customize its approach and navigate sales cycles more efficiently.
A New Approach To B2B Marketing
AJO B2B Edition should help marketers connect with their potential customers by providing key insights that facilitate a smarter, more tailored approach to each account. This is primarily done by consolidating individuals involved in a company’s purchase decisions into cohesive units. This shift from tracking individual leads to visualizing the whole group responsible for making purchase decisions and their connected relationships provides a more comprehensive perspective on the target company. Essentially, it promises to help enterprise sales and marketing teams move away from lead-based marketing to a smarter, more tailored version of the account-based marketing that has become more prevalent in recent years. GenAI helps by supplementing any gaps in the buying group information. It intelligently suggests potentially missing roles within the group, such as technical influencers or budget approvers, and can recommend specific individuals who might fit these roles.
Once buying groups are thoroughly defined, the AI can generate tailored marketing plans for each group member. GenAI then automates personalized content creation across channels, ensuring that relevant messages reach the right individuals at the right time. Integrating other Adobe Experience Cloud applications such as Marketo Engage and Adobe Real-Time Customer Data Platform aims to give marketers a deeper view of specific customer signals based on their actions. This enables marketers to adjust and optimize journeys based on evolving prospect behavior. Sales teams can receive real-time alerts about engaged buying groups, complete with AI-generated insights and recommended next actions. Brands can also leverage AI-generated assets from Adobe Firefly or Adobe Experience Manager to create customized content.
Amit Ahuja, senior vice president for Experience Cloud platform and products at Adobe, stressed that B2B marketers must leverage generative AI to meet rising customer expectations and drive efficiency. As he noted, “In the B2B world, relationships and insights are key. We are supercharging this. It’s about enabling deeper connections and smarter strategies at a scale that was not feasible before.”
The Rise Of AI In Custom Content Creation
Adobe’s strategic focus on AI-powered solutions highlights the growing importance of AI in enterprise business applications. The company introduced its Firefly family of GenAI models in March 2023 and has since made a significant push into AI across its product suite. This includes the integration of the Firefly AI engine directly into its Creative Cloud applications and the launch of an AI assistant on the Adobe Experience Platform. Additionally, Adobe has launched Firefly and Adobe Express Premium as standalone apps, further broadening the reach of its AI-powered tools.
Adobe’s ambitious AI development and product rollout, particularly in the enterprise and B2B space, sets it apart from competitors such as Canva, which has also made aggressive moves in AI recently. While Canva may pose a threat to Adobe for light users and smaller businesses, Adobe’s wide range of products across marketing, content creation and data management gives it a significant advantage in the enterprise market. Furthermore, it signals the company’s confidence in its ability to mitigate potential legal risks associated with GenAI, such as copyright infringement, for its enterprise customers.
What Adobe’s B2B Approach Says About Enterprise AI
Adobe is a good example of a diverse tech company embedding AI in very different ways across its range of products. The distinct AI use cases show the diversity of GenAI when it’s applied intelligently to solve challenges, whether the problem is “Finish this illustration faster and better” or “We need to do a better job of marketing to our biggest B2B prospects.”
With this new offering, it is clear that Adobe also recognizes the importance of collaboration between sales and marketing teams for its customers. The platform should facilitate better alignment and communication across revenue-driving teams by providing shared insights and streamlined workflows.
If it works as well as Adobe says, AJO B2B Edition has the potential to reshape B2B marketing and deliver significant value for businesses. Of course, Adobe will need to continue innovating to meet the evolving needs of B2B marketers and keep pace with the nonstop advancements in AI technology. For now, however, AJO B2B Edition looks like a compelling way to address the complexities of B2B marketing.
We’re LIVE for Ep 34 of the MI&S Hot Desk Podcast! Join Robert Kramer & Melody Brue for a rundown of what’s up this week in collaboration, CX, ERP & SCM, and a little sports tech. We’re talking lots of great stuff: Oracle, Salesforce, Zoho Analytics, ERP trends with IBM & Oracle, the US Open, plus a preview of Mel’s new Cisco Webex Desk Pro.
Watch the full episode here:
Listen to the audio here:
00:52 Cisco Desk Pro Video Conferencing Device
04:40 Oracle CloudWorld
13:45 NetSuite SuiteWorld
18:31 Salesforce – Dreamforce
32:05 Zoho Analytics
36:12 ERP – IBM and Oracle – Robert Kramer’s ERP Research
Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.
Game Time Tech: IBM and AI at the 2024 US Open with Noah Syken – Six Five Media Webcast
https://moorinsightsstrategy.com/game-time-tech/game-time-tech-ibm-and-ai-at-the-2024-us-open-with-noah-syken-six-five-media-webcast/
Wed, 18 Sep 2024 17:35:38 +0000
https://moorinsightsstrategy.com/?post_type=gametimetech&p=42565
Noah Syken from IBM shares his expert insights on how IBM has enhanced the viewer experience for millions of fans worldwide at the 2024 US Open with the power of generative AI on their website and app.
On this episode of the Six Five – Game Time Tech, hosts Melody Brue and Robert Kramer are joined by IBM’s VP of Sports and Entertainment Partnerships, Noah Syken, to discuss IBM’s role in the tech on display at the 2024 US Open Tennis Championships. This year debuts the power of generative AI to add spoken commentary to match highlights, produce match previews, and write hundreds of match summaries called “Match Reports.”
Their discussion covers:
IBM’s 30+ years of innovation in their partnership with the USTA to revolutionize the digital experience of the 2024 US Open Tennis Championships
IBM’s collaboration expanding the tournament’s online presence, engaging millions of fans worldwide via USOpen.org and the US Open app
Expert insights on how tech and generative AI enhance the tournament experience
Disclaimer: The Six Five Webcast – Game Time Tech is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Robert Kramer: Welcome to this week’s edition of Moor Insights & Strategy Game Time Tech. What’s very interesting is we’re actually on episode nine now, Mel. So Melody Brue and Robert Kramer, both VPs and Principal Analysts here. And we’re at a special day today. We’re going to be talking about the US Open, and Sports Technology, with Noah Syken. And he’s the VP of Sports & Entertainment Partnerships at IBM. Mel, what do you think? We were just there. How was it?
Melody Brue: It was so great. It was really fantastic. And I feel very fortunate and very spoiled that I was at Wimbledon this year, and then the US Open. And different experiences, but both really cool. And I feel like it was, got to experience, I think, one, both events, but also, we both had a similar experience of being able to be in the IBM suite, and have this really great experience with Tracy Austin there and all of that. But then also, just being able to walk around the grounds, experience what everybody experiences when they’re there. Use the app to find our way around. Get the scores, find out. I think one of my actual favorite memories was when we were just walking around, and we heard all that cheering on court 12. And being able to just pull up the app, and see who’s playing on court 12, and see that they were going into overtime.
Or it’s not overtime, what, a tiebreaker. Wrong sport. Going into a tiebreaker and walking over to see that. So having the app on site is definitely really useful for those things. But then, when I went home, I continued to use the app to find who’s playing, what the standings were, where I could watch it, where it was playing, and all of that. I don’t know, I thought it was really great. What was your experience?
Robert Kramer: The app is one of those things where, as we evolve through the years and technology’s involved, it gives us so much more information. So like you said, it gave us some key information to go to that court 12. But what I really liked about it is, it gave us the schedules, it gave us our tickets, which is pretty cool how it was actually integrated with Ticketmaster. And then you could actually download your tickets. You had your tickets, you had the schedule, you had the match plays, you had the summaries, the stats, who’s going to win.
And then when I got home I was using it. It gave us a lot of information about players I didn’t know about, because, obviously, there’s a ton of players, tons of matches. And I think it’s a really unique app to have. And it was special. When you were at Wimbledon, which is really interesting, because you take the nostalgia of Wimbledon, and you have the interesting fan dynamic at US Open, and then you have the technology. Did the technology make the experience better at both? And then if it did, which I’m sure it did, which one did you feel was more interesting to you, and exciting?
Melody Brue: So I used the US Open app a lot more than I used the Wimbledon app. And I think it’s because of how complex it is with, there’s the transportation. Same thing with the tickets at Wimbledon, it was on the app. But it’s such a long tournament, and the number of matches, and the number of match, what do they call them? The match reports. Even just the transportation getting there, then the different places to eat. There’s just a lot more to it than there was at Wimbledon, where you can just walk around and see everything. It’s all right, a little bit more condensed. But one of the things I thought with these match reports, that with using generative AI to do that, is it really gives more personalized insights on a more granular level. Because previously, you have these reporters, basically, who can only do matches for, or only do these reports for a certain number of matches.
So if you have a player on, let’s say, court 12, you may not be able to get a report for that match, but that might be the player that you’re there to see. So with AI being able to generate these reports, they’re not just completely generating the report, they’re basically giving the basics. And then these reporters are going in and they’re filling in the rest of all of the details, and they’re adding in what they saw. And so I think it’s a very interesting use case of the potential of AI to augment productivity and output, but it still has that human involvement. And I think that really, that does remain really critical for ensuring that accuracy and quality. So I think that part was interesting to me. But in terms, to answer your question about the app, I think the Wimbledon app was really beautiful. The US Open app, there was a lot more to it. But like the Masters app, the Wimbledon app was just so… The user experience was quite lovely.
Robert Kramer: Interesting. But one of the things, remember you mentioned Tracy Austin, and Tracy was giving us a presentation before, and she was talking about the different players. And Watsonx aggregated a lot of data about the two players. And it talked about what one of the players does, and the way the person hits it. And we were like, that’s interesting. And then we actually went and watched the match, and it was exactly that.
Melody Brue: Yeah, yeah.
Robert Kramer: So AI is playing a role in grabbing a lot of this, like you were talking about, it takes all this different information. It’s not so black and white. If it was, they wouldn’t need AI. It would be the human being able to discern all that information and decide what they want to use from a predictability perspective. But this is way more complex. And it’s needed. And they’ve trained these models for years to be able to aggregate all this information to make these predictions, because it’s fantastic. And Noah, who will be joining us in just a few minutes, will elaborate on how that actually happens.
Melody Brue: All right, well let’s welcome Noah. The US Open just wrapped up. We were there. It was great. A girlfriend of mine said that the Flushing Meadows is just a vibe. And I was like, “Huh? What does that mean?” Now I know what it means. There were definitely some vibes there. But for IBM, for you, how did it go?
Noah Syken: It went great. It’s almost a three-week event now. And fans love the US Open. They love tennis. I think they reached over a million visitors this year, a record. And we saw more people come out than ever. And frankly, we had more demand for folks to come out than ever. People continue to consume content in all ways around the US Open, and there were some great match-ups with Americans making it to the finals. So I think it was a fantastic Open all around.
Melody Brue: That’s awesome. Yeah, we saw there were a lot of celebrities, a lot of former tennis players. You guys had both Tracy Austin and Andre Agassi in your suite. What do you think it means to those players to see this kind of change in technology? It’s such a massive shift from when they were playing, to now. How does it change for both the fans and the players when there’s just this major shift?
Noah Syken: Sure. From a fan perspective, certainly it gives fans more access to information and content than they’ve ever had before. They’re able to follow their favorite players. If you’re from a particular country, as an example, you have more ability to follow a player from your country than you’ve ever had before, more flexibility to follow those types of players. So from a fan perspective, there’s just a greater ability to engage. And from a player perspective, I don’t know if anybody noticed, but the coaches’ boxes now have tablets in them with all kinds of video and data access right at their fingertips. And so we see and hear from those coaches, that this is the kind of information that they’re increasingly using.
Now, different players actually embrace data and information in different consumption patterns. But across the board, when you’re able to slice and dice the data, slice and dice the video, I think the coaches and players are saying, “How do I tap into that more? How do I learn more from that information?” And we’re actually seeing and having some conversations about how AI can play an increased role in getting that information to the right coach or player as quickly as possible.
Robert Kramer: Yeah, Mel and I were actually there as she alluded to. And it was interesting, I utilized the app a ton. Like you just said, it gave us the ability to take a look at some of these players, because there were upsets. And some of these players were from foreign countries we weren’t aware of. It really gave us a lot of information. But you go back to the technology, and the IBM technologies that’s being used, specifically like you mentioned, AI and the Granite models. And how are they trained for this sport? And how far back did it go to make that happen?
Noah Syken: Yeah, so I think we consumed about 7 million data points to create some new value for the USTA this year. And the key is really being able to understand the terminology, the jargon of the domain. And in this case, it’s particularly tennis, right? Passing shots, aces. The players, actually, some of the names are kind of challenging sometimes. In golf, it may be the particular jargon of the Masters, the front nine, the back nine, patrons instead of fans. And so tuning the models to the task is really, really important. And Granite, the Granite model in particular, is very domain specific.
So we tuned it to this particular domain. At the US Open, we created match reports, using Watsonx, using the Granite model. So every single match at the US Open, we enabled the USTA staff with a head start of storytelling. So every match we would produce a match report. And I think over the course of just the first day, I think, last year they produced, I think 24 match reports. This year they were able to produce 60 match reports in half the time. So they’re able to actually create more content more quickly, tuned to the task with these models, and using the language that tennis fans come to expect.
Robert Kramer: Was that from the code assistant that was able to do that?
Noah Syken: Well, the code assistant was actually in a different capability to actually help build the digital platforms themselves. So the models were tuned to the task of creating the content, where the code assistant was building the foundation of the platform from the ground up.
Robert Kramer: Gotcha.
Melody Brue: One of the examples that we saw of that was the word Ashe. So Arthur Ashe Stadium, and how the model can then learn to distinguish the word ash from Arthur Ashe, or if you’re talking about ash. Give us an example of how that works for business, where you have to distinguish the use of a specific word, whether it’s being used for retail, or supply chain, or whatever. I guess in reality, one of the purposes of doing this, right, is to showcase your technology. So these technologies, you’re not just using them in sports, you’re using them in multiple industries. Can you give us a feel for these types of technologies that you’re using in a different industry?
Noah Syken: Sure. So the notion of natural language is really important. And being able to create natural language interactions with customers is going to be important. So natural language in tennis may be a match report, but in a retail environment, you want to be able to engage with your clients or your customers who may have a challenge returning a product, or want product specifications, or clarifications in a digital platform. In a banking transaction, you want to be able to understand, are they interested in the interest rate or are they interested in a different product? So digital assistants, in terms of engaging with customers or fans, are pretty interchangeable. What the domain is might be a little bit different, but that ability to actually engage in real time, and get an understanding of who your customer is, and at scale, I think is probably the common thread that runs through both.
Melody Brue: That’s great. Well, thank you so much, we really appreciate your time. What other things, before we let you go, what other things is IBM working on that we should know about?
Noah Syken: Yeah, so at the US Open, we were highly focused on tennis. But at the same time, we were launching our fantasy football product. So we collaborate with ESPN. They have over, I think, 15 million people who play fantasy football every year on their platform. It’s the most popular fantasy football platform out there. And we help them make better choices using artificial intelligence every day of the week. And we engage with those fans for four months straight on those digital platforms.
Again, IBM Watsonx using the Granite model consumes all the writing and information from media, from pundits, from reporters, from the guys at practice this week who saw the player that might be injured. And we bring that all together, and make some predictions about who might boom this week, who might bust this week. And so as those 15 million fantasy football players are setting their fantasy football lineups for the week, they have all of this information, a volume of information they could never consume themselves. But in that app, it’s right at their fingertips in a very digestible way that can help them make those decisions. So we’re really excited about that. The US Open wraps up, but we’re right into fantasy football season, again, using the same Watsonx platform with Granite models on top.
Robert Kramer: Any tips for the fantasy players? I lost my first week and I need to pick up some players. So-
Noah Syken: Well, as a Giants fan, I might not pick up too many Giants.
Robert Kramer: Good call. Okay, well thank you so much.
Melody Brue: Well, that was great. What a nice guest to have.
Robert Kramer: Wow. They are using the technology in so many great places with the sports, and it’s going to go into some other entertainment areas. I’m excited to see where it can be used. And especially in fantasy football. I was with some friends and they were using the app as well, and they love it, so it’s really fun.
Melody Brue: Yeah, I think I’ve only done fantasy football twice, because it was just happening in the office. But now I’m going to have to check it out. Well, that was really great. I’m glad that we got to have some time with Noah. And thank you, everyone, for joining us. This is Game Time Tech. I’m Melody Brue. This is Robert Kramer. And if you like what you saw, please hit that subscribe button. Feel free to follow us on Twitter. Give us some comments. Let us know who else you’d like to see on this Game Time Tech, or what kind of technology you want to hear about. And thanks for joining us and we’ll see you next time.
The Moor Insights & Strategy team hopes you had a great weekend!
Last week, Patrick, Melody, Matt, and Robert were in Las Vegas for Oracle CloudWorld and NetSuite SuiteWorld. Will Townsend was in London for Connected Britain, where he also moderated a panel on tech innovations in mobile networks. Jason Andersen was in Austin for JFrog swampUP 24. Robert Kramer joined Infor’s weekly “What’s Up Water Cooler” podcast to discuss the latest innovations in modern ERP systems. Check it out on YouTube.
This week, Anshel is attending the Snap Partner Summit in Santa Monica; Patrick will be at Salesforce Dreamforce in San Francisco, and Jason, Melody, and Robert will attend virtually.
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Apple, Automotive GPU IP, Border Gateway Protocol (BGP), Canva, Cyber Resilience, Google, Oracle, ZeroPoint, and Zoho Analytics.
Adobe has previewed its Firefly Video Model, an AI-powered tool that can streamline workflows and add a lot to an editor’s creativity. The existing Firefly models are image-based. The Firefly Video Model has many useful features, such as filling timeline gaps with generated B-roll footage guided by text prompts, camera controls, and reference images. It can create variations of existing concepts or brainstorm on demand to generate new elements and provide new ideas. Firefly can also create atmospheric effects, 2-D and 3-D animations, and other visual enhancements. It can remove unwanted objects, smooth transitions, and more, allowing editors to focus on creative storytelling and collaboration. All things considered, this gives editors a powerful video toolkit with many creative advantages. It’s a big step forward for video editing.
I’ve been speculating about OpenAI’s stealth project code-named Strawberry, which is believed to have superior reasoning power. Well, OpenAI may have just released Strawberry in the form of its latest model, o1, which appears to be a groundbreaking language model that demonstrates improved reasoning. o1 excels in several complex tasks ranging from math to code challenges. It even beats human experts at certain problem-solving tasks. It also has an impressive ability to explain its thought process and how it arrives at its conclusions, as well as the ability to learn and improve over time.
That said, o1 also has some challenges. For example, training with large datasets may be a problem. However, the model is still under development, so we can wait to see if that improves. Despite any training problems, its reasoning ability already appears to be a plus that will set new model standards. I’m looking forward to seeing this model in its fully developed form.
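For readers who want to experiment, here is a minimal sketch of how a developer might query the model through OpenAI's official Python SDK. The prompt is an invented example, and note that at launch o1-preview accepts neither system messages nor a temperature setting:

# Minimal sketch: sending a reasoning-heavy prompt to o1-preview.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",  # o1-preview does not support system messages or temperature
    messages=[
        {
            "role": "user",
            "content": (
                "A train leaves at 3:40 pm traveling 80 km/h; a second train leaves "
                "the same station at 4:10 pm at 100 km/h. When does the second train "
                "catch the first? Explain your reasoning."
            ),
        }
    ],
)

print(response.choices[0].message.content)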
JFrog held its annual swampUP event, where it made a number of interesting announcements. Long known for its Artifactory code-storage solution, JFrog has been steadily adding new capabilities and acquiring technology. The focus on security has been an especially high priority, and this week JFrog announced a runtime protection service enabling end-to-end artifact security for both source and binary code. This was coupled with the formal announcement of its MLOps solution, which is based on its acquisition of Qwak this summer. My research on how JFrog has elevated itself from a DevOps point tool to a full-blown platform will be available soon.
In addition to its own innovations, JFrog also announced a partnership with GitHub that shows some long-term potential. One of the first big steps was the integration of Artifactory with GitHub Copilot. While there have been many announcements of this type over the past few months, this one stands out. Given that Artifactory provides a rigorous and secure registry for all development artifacts, this integration makes it easy for developers to have an AI assistant specifically configured around their companies’ standards. For instance, a developer will get assistance based only upon specifically curated artifacts, versions, and standards established by existing rules and policies. This out-of-the-box integration is something that other tools either cannot do yet or that would require a lengthy integration process.
Over the next few months, we will see a new arms race, with a range of vendors showing off their AI agents. This started last week at Oracle CloudWorld and NetSuite SuiteWorld. It makes sense for application platforms to begin introducing agents—and dev tools to build agents. Agents will be the next big thing now that LLMs and bots (which are actually the simplest agents) are becoming common. This will be a major focus of my research over the rest of 2024. Over time, agents will become as ubiquitous as apps on your phone, all built to manage multi-step activities using AI. Consider a development assistant, for example: an agent will be able to write the code, develop a test plan, execute the tests, give feedback, and recommend where to deploy. All you will need to do is approve the tasks, and the code will be deployed.
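To make that pattern concrete, here is a deliberately simplified sketch of the plan-act-review loop at the heart of most agent designs; every function name below is a hypothetical placeholder standing in for LLM and tool calls, not any vendor's actual API:

# Simplified sketch of an agent's plan-act-review loop. plan_steps, execute_step,
# and review_result are hypothetical placeholders for LLM and tool invocations.
from typing import Callable

def plan_steps(goal: str) -> list[str]:
    # A real agent would ask an LLM to decompose the goal into ordered steps.
    return ["write code", "develop test plan", "run tests", "recommend deployment"]

def execute_step(step: str) -> str:
    # A real agent would dispatch to a tool: a code generator, a test runner, etc.
    return f"completed: {step}"

def review_result(result: str) -> bool:
    # A real agent would have an LLM (or a human) judge whether the step succeeded.
    return result.startswith("completed")

def run_agent(goal: str, approve: Callable[[str], bool]) -> None:
    for step in plan_steps(goal):
        if not approve(step):  # the human stays in the loop at each step
            print(f"skipped (not approved): {step}")
            continue
        result = execute_step(step)
        print(result if review_result(result) else f"step failed, needs retry: {step}")

# Example run: approve every step automatically.
run_agent("add input validation to the signup form", approve=lambda step: True)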
Last week Amazon announced that the SageMaker platform will include added support for Amazon Elastic Kubernetes Service (EKS) with its HyperPod managed MLOps solution. This allows IT and DevOps teams to use the familiar Kubernetes interface to more easily manage HyperPod clusters. This solution also ties in with Amazon CloudWatch to enable production monitoring of the clusters. This is a great step in enabling IT ops and DevOps teams to “de-silo” AI and machine learning workloads.
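The appeal for ops teams is that a HyperPod cluster surfaced through EKS looks like any other set of Kubernetes nodes. A minimal sketch with the standard Kubernetes Python client (assuming a kubeconfig already pointed at the cluster; the instance-type label shown is the standard Kubernetes well-known label, not an AWS-specific key):

# Minimal sketch: inspecting an EKS-attached cluster with the standard
# Kubernetes Python client, the same way an ops team would inspect any cluster.
# Assumes `pip install kubernetes` and a kubeconfig pointed at the cluster.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # node.kubernetes.io/instance-type is a well-known Kubernetes label; on an
    # AWS-backed node it reports the underlying instance type.
    print(node.metadata.name, labels.get("node.kubernetes.io/instance-type"))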
Salesforce’s new Agentforce platform aims to change how businesses operate by introducing autonomous AI agents that work alongside employees to handle tasks in various departments such as service, sales, marketing, and commerce. These AI agents can analyze data, make decisions, and take independent action, freeing human workers to focus on more strategic and complex tasks. This could lead to increased efficiency and the ability to scale operations on demand. Early adopters such as Wiley have reported significant increases in case resolution using Agentforce, highlighting the potential of this new technology to increase customer satisfaction as well.
While Agentforce promises efficiency and scalability, its implementation could lead to job displacement and over-reliance on AI, potentially impacting employee and customer relationships. Concerns around data privacy, bias, and unexpected errors also need addressing. Additionally, the cost and complexity of implementing such technology may pose challenges, especially for smaller businesses.
However, Salesforce is emphasizing the importance of its Data Cloud in powering the accuracy and capabilities of its new AI solutions. By unifying apps, data, and AI agents on a single platform, Salesforce aims to reinforce its position as a leader dedicated to customer-centric solutions in the evolving CRM landscape.
Oracle announced a strategic partnership with AWS through which Oracle Cloud Infrastructure (OCI) will reside in AWS datacenters, with Exadata infrastructure housing Oracle Autonomous Database and other services. This service, Oracle Database@AWS, is aimed at enabling customers to deploy their Oracle database environments natively in AWS for easier, more performant integration of enterprise data with services such as Bedrock, SageMaker, and other analytics tools. No ETL. No complicated data pipeline management.
How will this work? Customers will go through their AWS console to select and deploy Oracle Database@AWS, using either AWS or Oracle credits to activate. Database@AWS will spin up as a service for use. Level 1 (L1) support will be provided by AWS; anything beyond that will be a collaborative effort between the two companies.
AWS marks the last of the big cloud providers to embrace this native multicloud model that Oracle pioneered. And while this partnership may seem surprising on its surface, it actually makes perfect sense. Virtually every Oracle customer uses AWS, and virtually every large AWS customer uses Oracle. Many of these customers want to integrate their rich enterprise data with the AI and analytics tools that are available in the cloud—and AWS is that cloud of choice. Rather than make life complicated, or force customers into a choice that would be suboptimal regardless of which cloud they chose, OCI and AWS have found a way to address these needs.
I mark this as a big win for both companies. Oracle has effectively mainstreamed this concept of native multicloud – or cloud within a cloud. And AWS has sent a big signal to the market about its customer-first approach.
Lenovo has made a slew of announcements aimed at enterprise IT organizations struggling with enabling and supporting the AI environment. GPU-as-a-service (GPUaaS), AIOps, and deeper insights into liquid cooling are three launches that should drive efficiencies across the financial, operations, and sustainability vectors. Here are the three offerings in a nutshell:
With GPUaaS, the company has delivered a new solution in its TruScale lineup that allows organizations to deploy and meter the use of NVIDIA GPUs across AI and HPC workloads. This includes built-in consumption metering that can be used to charge back to internal customers.
XClarity One gets a significant upgrade as the company leans more heavily into AIOps, delivering greater levels of automation to IT operations.
Lenovo’s Neptune liquid cooling becomes easier to deploy and utilize with Lenovo advisory services designed to help customers better understand how to most efficiently use liquid cooling. This is especially important as AI becomes more present in the enterprise datacenter.
I like how Lenovo is driving differentiation across the areas where enterprise IT and datacenter operators struggle.
How does this GPUaaS work? It’s designed to make life easier for organizations with multiple business units. This is like when I was working in government IT in Florida, where we had 39 different agencies—and a number of entities within each agency—that were funded separately. GPUaaS would enable me to apportion GPU resources across the state—dedicated GPU resources along with dedicated bill-back. This is not simply differentiated; it’s differentiated and it delivers value to organizations that are trying to better utilize or leverage the very large investments they are making in performant computing platforms.
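The underlying chargeback arithmetic is simple metering, as a toy illustration shows. The agencies and the hourly rate below are invented for this example; real numbers would come from the provider's consumption telemetry rather than a hard-coded table:

# Toy illustration of GPUaaS bill-back. The agencies and the rate are invented;
# in practice, metered GPU-hours would come from the provider's telemetry.
RATE_PER_GPU_HOUR = 2.40  # hypothetical internal bill-back rate, in dollars

usage_gpu_hours = {
    "Agency A": 1_250,
    "Agency B": 310,
    "Agency C": 4_800,
}

for agency, hours in usage_gpu_hours.items():
    print(f"{agency}: {hours:,} GPU-hours -> ${hours * RATE_PER_GPU_HOUR:,.2f}")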
What’s in it for Lenovo? I see two things. First, this is a differentiated service that can deliver real value into the enterprise—a market segment where Lenovo has been trying to establish itself for some time. If Lenovo can gain traction with a service such as GPUaaS, it can perhaps find opportunities downstream, in the more general-purpose compute clusters and farms.
Second, I think this may be an opportunity for Lenovo to establish itself with a higher-margin AI add-on business in consulting services. Something we’ve seen across the quarterly earnings of the OEMs is that the AI infrastructure market has contributed big top-line gains, but not a lot of margin. Consulting add-ons could help Lenovo change that.
Canva is raising the price of its Teams subscription by up to 300%, citing the addition of AI-powered features as justification. This move has been met with mixed reactions from users, with some questioning the value proposition while others find the new features to be worth the increased cost. Canva seems to have overlooked a more flexible AI pricing strategy, forcing customers to pay for bundled features instead of choosing the AI tools they actually need.
Canva’s decision to bundle AI features into a significantly higher-priced plan risks alienating its core user base of smaller teams and individuals that may not require the full suite of AI capabilities. By potentially forcing customers to pay for features they don’t need, Canva could drive them to seek more affordable and customizable alternatives. In a rapidly evolving AI landscape where costs are decreasing, offering AI features as add-ons or adopting a usage-based pricing model could be a more sustainable and customer-centric strategy.
Zoho Analytics has unveiled over 100 new enhancements, including an upgraded AI assistant and a machine learning studio. The focus is on democratizing data analysis to empower users across all roles to extract actionable insights. This comprehensive upgrade positions Zoho Analytics as a powerful and user-friendly BI solution at a competitive price. You can read my full analysis in this Forbes contribution.
Zoho has released version 6.0 of its AI-powered Analytics platform, bringing new AI and machine learning features. The update offers more options for teams to collaborate, analyze data, predict trends, automate tasks, and connect data for better decision-making. AI-driven automation simplifies metrics, reporting, and dashboards, while AutoML allows users to create custom models. The platform is also more flexible and extensible, integrating smoothly with tools like Power BI and Tableau.
Oracle CloudWorld is one of the best tech events of the year, especially when you pair it with the company’s NetSuite SuiteWorld conference. Let’s recap these back-to-back events from last week.
Oracle came out strong with a slew of announcements, emphasizing how AI is now deeply embedded across its offerings—from Oracle Cloud Infrastructure (OCI) to Oracle Fusion Cloud Applications. Hot topics included partnerships with IBM, Amazon Web Services, Google, and Microsoft Azure to provide customers with more-unified experiences.
Larry Ellison remains as sharp and visionary as ever. (I hope I look that great when I’m 80.) He focused on eliminating passwords and on Zero Trust Packet Routing (ZPR) to simplify the complexities of network security.
I had the opportunity to meet with the Oracle Cloud ERP and Oracle Cloud SCM teams (#mywheelhouse) to discuss some of their impressive updates. For example, an RFID-powered solution now ensures Oracle Cloud SCM healthcare customers get the right supplies to the right places at the right times—driving better patient care experiences.
ERP remains the backbone of enterprise operations, and by capitalizing on its modern data management and AI-driven solutions, Oracle can push its customers to fully leverage these innovations. I’ll dive deeper into this topic in an upcoming article, including how the new Oracle Intelligent Data Lake, powered by OCI, helps ERP customers integrate and analyze structured and unstructured data in an all-in-one solution.
NetSuite announced some notable AI-powered enhancements: a new procurement solution, a Salesforce connector, improved project management, upgrades to the user experience, fresh training resources, and an integrated benefits offering. To be fair, some of these features should have already been standard. I’ve covered this in more detail in my latest piece on NetSuite.
As always, real-world customer stories breathe life into Oracle’s narrative. Organizations including Clayton Homes, the CIA, BNP Paribas, MGM Resorts International, Cloudflare, DHL, Uber, WideLabs, and Guardian Life took the stage to explain how Oracle’s solutions are transforming industries.
In other news, IBM and Oracle announced that IBM Consulting will support Oracle customers in gaining more value from generative AI while managing its growing challenges. “Our clients are eager to extend generative AI initiatives but they’re also concerned about rising compute costs, lack of in-house AI skills, AI assistant sprawl, and management oversight,” said Corinne Koppel, Global Oracle Practice Leader, IBM Consulting.
Broadcom is another company to suffer from the AI bubble-burst phenomenon. Despite its strong earnings, the company’s stock took a dip based on investor concerns about future earnings. The fears may be unwarranted, since Broadcom is well diversified beyond GenAI and its silicon is used pervasively across many enterprise networking infrastructure providers.
SAP has completed its acquisition of WalkMe, a platform that enhances user experiences with features such as in-app walkthroughs and step-by-step guides. It simplifies complex software tasks, increases productivity, shortens training time, and improves software usability by offering real-time assistance and automation of routine processes. WalkMe supports employee onboarding and can be integrated with enterprise systems such as CRM, ERP, and HR. The acquisition is set to complement SAP’s Joule AI, RISE with SAP, and GROW with SAP programs by further enhancing user engagement and simplifying digital adoption.
Curious about how NetSuite transformed from an early SaaS innovator to a major player in the ERP landscape, especially following its acquisition by Oracle? Dive into my latest Research Note as I explore NetSuite’s remarkable journey, its differentiators, and the strategic advantages that have positioned NetSuite as a go-to ERP solution for small to medium-sized businesses across multiple industries.
IBM has announced its plan to acquire Accelalpha, a global Oracle services provider specializing in implementing, integrating, and managing Oracle Cloud applications. Accelalpha serves clients around the world, focusing on industries such as distribution, heavy industry, and financial services. The acquisition will enhance IBM’s consulting expertise, particularly in ERP, SCM, logistics, finance, EPM, and customer transformation services. The deal is expected to close in Q4 2024, pending regulatory approval. Financial terms remain undisclosed.
Acquisitions remain a key strategy for many companies in the ERP space, particularly to enhance resources and consulting expertise. A recent example is Capgemini’s announcement of its acquisition of Syniti to expand its data management capabilities and strengthen its expertise in SAP projects.
Mastercard is set to acquire Recorded Future, a cybersecurity company, for $2.65 billion to strengthen its fight against fraud and cyber threats. The acquisition, expected to close in early 2025, builds on an existing partnership between the two companies. Mastercard cites the growing threat of cybercrime, which is projected to cost trillions globally this year, as a driving force behind this strategic move.
PayPal is partnering with Shopify to handle a portion of Shopify Payments in the U.S. PayPal will become an additional processor for credit and debit card transactions. This will create a consolidated view for merchants by integrating PayPal wallet transactions with Shopify Payments. The deal expands a global strategic partnership between the two companies. It shows how PayPal is increasingly being selected as a preferred platform by major commerce brands, technology companies, and payment processors.
At Oracle CloudWorld in Las Vegas, Oracle announced a new open skills architecture within Oracle Dynamic Skills that helps organizations develop, curate, and execute an enterprise-wide skills-based talent strategy. Oracle Dynamic Skills is part of Oracle Fusion Cloud HCM. This new architecture should help HR leaders leverage AI to better understand and leverage the skills of their employees, identify skills gaps, expand access to talent and nurture it, and make smarter workforce decisions.
Oracle Dynamic Skills aims to simplify the process of managing skills data, regardless of an organization’s current capability in this area. The platform helps customers align employee skills with business goals to optimize the workforce and stay ahead in a rapidly changing job market. Using this product’s AI-powered capabilities, HR leaders can create a comprehensive skills inventory, enrich the data with external sources, analyze skills gaps and trends, leverage third-party skills data and labor market analytics, and effectively manage a skills library.
Skills have emerged as a critical metric for assessing an organization’s potential. Adopting a skills-based talent strategy can offer valuable workforce insights, facilitate access to a broader range of talent, and ultimately boost overall company performance. Oracle’s offering comes at a good time because talent needs are rapidly changing—yet many organizations struggle to initiate their skills journey.
The new iPhone 16 was announced last week, and pre-orders began on Friday. The biggest takeaway from the launch is that the iPhone 16 lineup is Apple’s most complete in ages, with the base model iPhone and the Pro series having very comparable chips and likely the same AI performance. The trade-in offers also seem very aggressive, which I believe is because Apple wants as many users to have Apple Intelligence-capable devices as possible to entice more developers to develop for it. The biggest problem that Apple Intelligence has—other than its low install base—is that many of its features aren’t available at launch. Apple is broadly saying “Fall” as the window for some of its features, but the biggest ones, for example the new version of Siri, won’t be available until 2025. My advice for anyone looking to buy a new iPhone is that this is probably the best version of the base-level iPhone in many years, including near-parity with the Pro on most features. Still, it might not be a significant upgrade for anyone who already has an iPhone 15 Pro, especially since those customers will also be getting Apple Intelligence.
During the event launching the new iPhone, Apple announced a series of new wearables, including a thinner, larger-screen Apple Watch and a bunch of updated and new AirPods. While the update to the AirPods Max brought nothing more than new colors and USB-C connectivity, Apple did announce new versions of its base-model AirPods—the AirPods 4—in two versions, as well as major updates to the AirPods Pro 2. The biggest updates to the latter, in my opinion, are the hearing test and the ability to use the AirPods Pro 2 as hearing aids. Apple just got FDA clearance for this last week, right after the event. At $249, these might be the cheapest hearing aids on the market. While I don’t believe they will work well for someone with severe hearing loss who always needs hearing aids, I do believe they might work well for people who have impaired hearing and might need temporary assistance, especially when talking to friends and family over the phone. At $249, these might also bring hearing aids within reach for people who could never afford them otherwise—as long as they have an iPhone, which might be the most limiting factor.
Google has started shipping the new Pixel Watch 3, including the new 45mm size. (It previously offered only 41mm.) I have been using it for a few days now, and the battery life is fantastic. Thanks to the included Fitbit software, it is truly the only other watch with fitness and health capabilities comparable to the Apple Watch’s. I genuinely appreciate the design and integration with the Pixel 9 Pro Fold. I am also reviewing the accessories and really appreciate the improvements to them, although I do wish the metal mesh wristband were compatible with the 45mm model. More broadly, I wish Google had better third-party support for its watches, but the reality is that the majority of the market belongs to the Apple Watch, while on the Android side most of the market is dominated by the Samsung Galaxy Watch.
Ericsson has formed a joint venture with 12 of the world’s leading network operators to create a company that manages and sells 5G network APIs. This joint venture will be 50% owned by Ericsson and 50% owned by the operators, which should create cohesion that simply has not existed before in the market. This announcement includes all three big carriers in the U.S. and will likely drive 5G Standalone applications and the monetization of 5G unlike anything that has been possible before. The really important thing here is that this new venture will be able to sell services to ISVs and other customers across multiple carriers at global scale. This move is truly unprecedented in scale and cohesiveness.
T-Mobile sent a test alert via satellite using its new partnership with SpaceX, covering more than 500,000 square miles. This approach should ensure that people receive emergency alerts even in the most secluded areas where cell service might not reach. For example, this could be especially important for people in National Parks and other remote areas during fire season who would otherwise not know that a wildfire is headed their way until it’s too late. This capability won’t be limited to T-Mobile users, either; it could potentially be used nationwide by all operators, and even enabled by the federal government for emergency preparedness and response. This could truly be a feature that saves lives.
AST SpaceMobile announced the successful launch of its first five satellites aboard a SpaceX rocket, which finally sets the company on a course to initiate service with its partners AT&T and Verizon. AST SpaceMobile’s BlueBird satellites are much larger than traditional low-earth orbit satellites, but they can serve larger areas and more users simultaneously and with higher speeds. The company’s approach will also serve a long list of other carriers around the world, but AT&T and Verizon will be among the most prominent in the U.S. I believe that AST SpaceMobile will compete with existing satellite operators, including SpaceX’s Starlink.
United Airlines announced that in 2025 its entire fleet of planes will offer free high-speed Wi-Fi connectivity thanks to a new partnership with SpaceX’s Starlink. While United has not given details on the speeds that users can expect from this new upgrade, Starlink connections can range anywhere from 40 Mbps to 220 Mbps for the entire plane. For passengers, United’s announcement is significant because it means upgrades to Wi-Fi coverage and speeds for more than 1,000 planes. That said, this is a significant undertaking that will likely put new stresses on SpaceX’s Starlink satellite network. It will be interesting to see how SpaceX responds to this new stress and how users’ Wi-Fi speeds are affected.
Quantinuum has released a new roadmap that projects it will create a universal, fault-tolerant quantum computer by 2030. The roadmap forecasts that Quantinuum will achieve this major milestone using a fifth-generation quantum computer, Apollo, that will be able to execute millions of gates. If all goes as planned, Apollo will achieve quantum advantage using many high-fidelity logical qubits by scaling the company’s QCCD architecture.
Quantinuum has collaborated with Microsoft on several recent breakthroughs. It demonstrated 12 logical qubits on its System Model H2 quantum computer, plus a chemistry simulation using a combination of logical qubits, AI, and HPC. Microsoft’s Azure Quantum Elements has also integrated Quantinuum’s InQuanto software into the product offering. The Quantinuum roadmap anticipates that the company will continue building increasing numbers of reliable logical qubits and leveraging partnerships with industry leaders like Microsoft.
The U.S. federal government recently published guidance related to Border Gateway Protocol (BGP) internet routing security. BGP is instrumental in determining the optimal routes for information transmitted over the internet across public and private networks. The concern is that BGP predates the launch of the public internet, and cyberthreats have grown vastly more sophisticated since its design. A more modern protocol, Open Shortest Path First (OSPF), could address BGP’s shortcomings and provide a more secure routing methodology.
IBM and ESPN are continuing their collaboration for the eighth year with the ESPN Fantasy app for fantasy football. As with IBM’s work on the US Open, Wimbledon, and The Masters, the watsonx data and AI platform is supporting over 12 million fantasy football users with advanced AI-driven tools.
“Millions of people participate in fantasy football on the ESPN Fantasy platform each year, and they are constantly looking for the best information available to compete in and win their leagues,” said Noah Syken, VP of sports and entertainment partnerships for IBM. “This year’s enhancements in the ESPN Fantasy platform put watsonx-powered insights directly in their hands, giving them access to personalized, data-driven insights that help deliver on these expectations.”
New tools such as “Top Contributing Factors” in Waiver and Trade Grades offer personalized player grades and AI-generated insights based on complex data, along with detailed measures of player performance and expert articles.
Qualcomm’s front-of-the-jersey Snapdragon sponsorship with Manchester United goes far beyond a logo on a shirt. It brings technology into the mix, enhancing fan experiences with better connectivity, data-driven insights, and more interactive features at the stadium. As Qualcomm CMO Don McGuire put it, “Our Manchester United partnership is how we come together with one of the most revered sports franchises in the world and how we build to scale for the Snapdragon brand—from awareness all the way through to affinity and advocacy.”
Read my Research Note to discover how Qualcomm and Manchester United are elevating sponsorship to new heights, and be sure to catch regular insights from Melody Brue, Anshel Sag, and me on the Game Time Tech podcast (linked below) as we explore how technology is shaping the future of sports.
Amid many announcements at Oracle CloudWorld, the new Oracle Fusion Cloud Sustainability application probably hasn’t gotten the attention it deserves. This new tool aims to streamline sustainability data management and reporting, enabling organizations to make more informed decisions and accelerate progress on their environmental targets.
Oracle Fusion Cloud Sustainability integrates data from various Oracle Cloud applications to allow automated tracking of sustainability-related activities, contextualized data analysis within existing business processes, and simplified reporting through pre-built dashboards. The solution also provides rigorous audit trails, emission factor mapping, and third-party integrations, giving businesses a broad toolkit to measure and improve their environmental performance. Notably, Oracle offers this new capability to existing customers at no additional cost, which suggests its commitment to supporting sustainability efforts.
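At its core, emission-factor mapping is just activity data multiplied by a factor to yield CO2-equivalent output. A toy illustration follows; both the factors and the activity quantities are placeholders, not Oracle's data:

# Toy illustration of emission-factor mapping: activity data (kWh of electricity,
# liters of diesel) times an emission factor yields kilograms of CO2-equivalent.
# Both the factors and the activity quantities below are placeholders.
EMISSION_FACTORS_KG_CO2E = {
    "electricity_kwh": 0.4,  # placeholder grid-average factor, kg CO2e per kWh
    "diesel_liter": 2.7,     # placeholder factor, kg CO2e per liter
}

activity_data = {"electricity_kwh": 120_000, "diesel_liter": 3_500}

total = sum(qty * EMISSION_FACTORS_KG_CO2E[k] for k, qty in activity_data.items())
print(f"Estimated footprint: {total:,.0f} kg CO2e")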
Ericsson is attempting to accelerate mobile network programmability with the recent announcement of a joint venture with several mobile network operators. It is a logical move given that the company has written off its entire acquisition of Vonage and needs to chart a new course. From my perspective, Ericsson was ahead of the telecommunications industry with its API strategy, but with Nokia entering the category with its Network as Code platform just one year ago and the GSMA now providing guidelines, mobile network programmability may finally find its rhythm.
Box Acquires Alphamoon To Boost Intelligent Document Processing
https://moorinsightsstrategy.com/box-acquires-alphamoon-to-boost-intelligent-document-processing/
Wed, 11 Sep 2024 19:30:20 +0000
https://moorinsightsstrategy.com/?p=42274
With the acquisition of Alphamoon, Box wants to improve customers' ability to get more business value from unstructured data in their files
With its acquisition of Alphamoon, Box wants to improve its customers’ ability to get more business value from the unstructured data in their files. (Image: Eakrin Rasadonyindee via Dreamstime.com)
Box recently announced its acquisition of Alphamoon, a Poland-based company focused on intelligent document handling. This key acquisition is expected to enhance Box’s content management platform by introducing automated features for extracting metadata from complex documents through optical character recognition, advanced language processing and user-friendly, no-code interfaces.
This acquisition complements Box’s earlier purchase of Crooze, a no-code workflow automation platform, and furthers the company’s vision for its Intelligent Content Management platform. Combining these technologies should enable Box to move beyond storing and managing content to processing and automating the workflows around content intelligently.
A New (Alpha)Moon Rising
Alphamoon’s technology has the potential to significantly improve how businesses handle documents. It uses advanced LLMs to automate the extraction of metadata from unstructured files so that this content can be harnessed for business purposes. This should enable Box customers to streamline workflows and enhance decision making by rendering critical information readily available and searchable. For example, sales teams might quickly identify high-value opportunities based on contract terms, or compliance officers could proactively detect potential risks hidden in policy documents.
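As a rough sketch of what LLM-driven metadata extraction looks like in general (the field schema, prompt, and model choice here are illustrative stand-ins, not Alphamoon's or Box's actual pipeline):

# Rough sketch of LLM-based metadata extraction from an unstructured document.
# The field schema and the use of OpenAI's API are illustrative stand-ins only;
# Alphamoon's production stack is not public.
import json
from openai import OpenAI

client = OpenAI()

def extract_contract_metadata(document_text: str) -> dict:
    # Ask the model to return a fixed set of fields as JSON.
    prompt = (
        "Extract the following fields from this contract and return them as JSON: "
        "party_names, effective_date, renewal_date, total_value.\n\n" + document_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(extract_contract_metadata("This Agreement, effective March 1, 2024, ..."))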
Box sees a future in which this type of automation increases efficiency and productivity by reducing the need for manual processes, allowing employees to focus on more strategic tasks, and by creating straightforward access to information locked in static documents that might otherwise never be used at all.
The addition of Alphamoon also enables Box to integrate with external applications such as Salesforce to automate processes and facilitate data flow across the organization. Alphamoon’s technology can be tailored to specific industries, such as healthcare and finance. For example, it can extract clinical trial data from research papers in healthcare or simplify the review of intricate financial documents.
Box Keeps Crooze-ing Along
The Alphamoon deal builds on Box’s acquisition of Crooze in January 2024. Crooze was a long-time partner of Box whose no-code workflow automation was built on the Box platform. Its products are aimed at simplifying contract management, document handling and metadata organization, allowing users to build tools that streamline workflows and make content more actionable.
Box vice president of product strategy Rand Wacker sees these two acquisitions as working hand-in-glove. “Two things you need for a truly automated content management system are the ability to extract metadata from the content in an intelligent way, and then the ability to automate workflows around that content in a very simple way as well,” he said. “These two acquisitions couldn’t be more tailored to meet those two needs.”
The challenge of extracting metadata from unstructured data in documents is not trivial. Unstructured data, for example the text in PDFs, is inherently designed for human readability, not machine processing. Intelligent document processing (IDP) like Alphamoon’s goes beyond simple capture and processing; it transforms unstructured files into a format that AI can readily understand and use.
With its vast repository of enterprise data, Box is well-positioned to leverage this technology. In the short term, IDP addresses a crucial need for Box. But in the long run, it paves the way for Box to evolve into a platform that prioritizes and seamlessly integrates AI into its content management capabilities.
A More Intelligent Future For Content Management
By leveraging Alphamoon’s OCR technology and incorporating third-party AI models, Box advances its ambitions for its Intelligent Content Management platform. By doing this, Box aims to enable businesses to do more with their data—not simply storing and sharing files, but getting the most out of content for greater business value.
As businesses grapple with the challenges of managing and deriving value from their ever-growing volumes of content, solutions like Box’s are becoming increasingly critical. With its recent acquisitions, Box is seeking not just to keep pace with businesses’ evolving needs, but to reshape the future of content management.
The Moor Insights & Strategy team hopes you had a great weekend!
Last week, Anshel Sag was at IFA Berlin, where his insights were featured during the Qualcomm press conference. This week, Patrick, Melody, Matt, and Robert will be in Las Vegas for Oracle CloudWorld and NetSuite SuiteWorld. Will Townsend will be in London for Connected Britain, where he’ll be moderating a panel. Jason Andersen will be in Austin for JFrog swampUP 24.
Over the last week, our analysts have been quoted multiple times in top-tier international publications with our thoughts on Intel, NVIDIA, Zoom, and charging for AI.
YouTube recently announced new AI detection tools intended to protect creators from unauthorized use of their likenesses. There has been concern about the ease with which AI can misuse someone’s face, voice, or other attributes. The new YouTube tools can detect when AI-generated content has copied a creator’s appearance or voice without permission.
The new policy, backed up by YouTube’s tools and commitment to protecting IP and personal rights, is appropriate because fake images, fake porn, and other videos are easily created, and almost anyone can do it with readily available AI tools. I believe the new detection methods will allow creators to more easily police their own digital properties to protect their reputations and brands from damage.
The bulk of—and most impressive part of—our exposure to AI began only a couple of years ago with ChatGPT. In the short time between then and now, AI has evolved rapidly, even though our understanding of AI’s inner workings hasn’t matched its functional evolution. According to a new research paper by Google DeepMind researchers, machine psychology provides a fresh way to understand how AI models work.
Traditionally, AI’s core functionality and power are based on complex neural network designs. Machine psychology doesn’t trace the inner step-by-step path through the model; instead, it examines how an AI model reacts to inputs and what its outputs reveal, focusing on the “behavior” of AI as it responds to commands and questions. Terms like “learning” or “reasoning” have roots in human psychology, and applying them to AI can be confusing and misleading. It is like calling AI intelligent even though it doesn’t have human-like understanding or consciousness yet.
Machine psychology is important because it helps us recognize and understand AI’s sophisticated behaviors and abilities beyond simple data processing. It will require long-term research to understand AI behavior over time, predict future developments, and ensure that AI remains safe and aligned with human objectives. Machine psychology is a significant and necessary step toward better understanding AI. If you are interested in learning more, click here for the paper by Google DeepMind.
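As an illustration of the behavioral approach, here is a minimal sketch of a black-box consistency probe. The `ask` callable is a hypothetical stand-in for any chat-model API; it is not something from the DeepMind paper.

```python
# Sketch of a machine-psychology-style probe: treat the model as a black box
# and measure behavioral consistency instead of inspecting its internals.
# `ask` is a hypothetical stand-in for any chat-model call.
from typing import Callable

def consistency_probe(ask: Callable[[str], str], question: str, n: int = 5) -> float:
    """Fraction of repeated answers that match the most common answer."""
    answers = [ask(question).strip().lower() for _ in range(n)]
    most_common = max(set(answers), key=answers.count)
    return answers.count(most_common) / n

# Example with a trivial deterministic "model":
print(consistency_probe(lambda q: "paris", "Capital of France?"))  # 1.0
```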
Last week I got the chance to tune into Dell’s AIOps strategy and products. For context, we are now about one year out from Dell’s acquisition of Moogsoft. I was impressed with how Dell is pragmatically tackling the challenges of increasingly complex IT ops structures. Instead of trying to be all things to all people, Dell focuses its efforts on its own infrastructure via its Infrastructure Observability platform, where it clearly can add the most value. To put it another way, instead of trying to do everything itself, Dell is adding a different sort of value to customers’ operations requirements via a clever partnering and integration approach. For application observability, Dell has chosen to tightly integrate with IBM’s Instana platform. For cross-platform integration and alerting, Dell Incident Management is the new name for Moogsoft. It may be just enough flexibility without sacrificing Dell’s own AI management solution that is optimized for its hardware.
Speaking of observability, this will be a hot topic for different companies over the next few months, with many new products and announcements in the pipeline. While I cannot speak to anything specific yet, enterprises need to understand a couple of things. Crucially, the scope of observability is changing in both breadth and depth. Observability tools are leveraging better analytics and AI tools to provide new, deeper, and more connected views of the environment. A good example of this is IBM Concert, IBM’s application- and service-centric viewpoint. On the breadth front, we are seeing a wider range of observability tools across the IT landscape. A good example of this is VMware Cloud Foundation 9, which was announced last week. This flood of new products and capabilities will require more in-depth reviews to ensure that enterprises (a) avoid paying for things they do not need and (b) confirm the tooling aligns with increasingly complex environments.
Anecdotally, usage of AI code assistants seems to be trending upward. As I cover this space and have great enthusiasm for AI as a developer aid, I’ve seen a big uptick in both LinkedIn posts and Reddit entries on this topic. Some, but not all, were positive. Given the newness of the technology and how people are migrating up the AI learning curve (including me; see this post from last week), mixed results are not surprising. But it is notable, and a trend I will continue to watch.
While tuning into VMware Explore a couple of weeks ago, I saw the initial signs of a transformation effort for the company. And while it has been hard to hear about colleagues and customers who have been impacted by Broadcom’s acquisition, it does appear that VMware is working hard to regain its footing. I see parallels between the steps VMware is taking and those of other companies that have successfully transformed, or that, like SAP, are currently undergoing a transformation. It prompted me to sit down and consider what it takes for big tech companies to weather disruptive market events. You can read my thoughts in this new post on our site.
HPE released its quarterly earnings, and the numbers were impressive. Overall, revenue came in at $7.7 billion, up 10% year over year. GreenLake ARR grew at a 39% YoY clip, with over 3,000 new customers in the quarter (for nearly 37,000 GreenLake customers total). And server revenue came in at $4.3 billion—a 35% YoY increase. As we saw with other server vendors, HPE’s business is recognizing considerably more revenue due to the AI boom (the company’s AI business was roughly $1.3 billion—a 39% sequential increase). It is clear that the focus on driving adoption of HPE technologies and services through AI is paying off.
As with other OEMs, we are also seeing that the AI game is considerably lower-margin. While these AI servers are selling at a higher ASP, the margins appear to be going to the chipmakers who are providing AI acceleration. Strategically, it’s important for HPE (and all OEMs) to win as much business in this market as they can—despite the lower margins. Establishing itself as the AI solution of choice while this market is still nascent will lead to more, margin-rich business as inferencing begins to dominate the AI landscape. This will impact the entire HPE portfolio. (Keep an eye on the intelligent edge market over time.)
One area of concern is the company’s 7% shrink in its hybrid cloud business. This business includes server, storage, the recently announced private cloud, resiliency, and GreenLake Flex. While the company didn’t provide a breakout of contributions, I suspect storage is contributing to this decline. Despite HPE talking about numbers trending in the right direction, its storage business has been relatively flat to negative over the past few quarters (as has its largest competitor’s—Dell also reported soft storage numbers).
Here’s what I think is going on: the high end of the storage market is moving to AI and high-performance-specific storage vendors (VAST, Weka, DDN, etc.). Also, I believe companies including Pure Storage (up 10% YoY) and NetApp are taking their fair share of the commodity AI-storage market. Likewise, I believe these storage vendors are taking a share of mid-range enterprise storage through deployment-driven purchases. Lenovo has also done well in this “commodity storage” market.
In other earnings news, Broadcom reported mixed results. Its revenue for the quarter came in at $13.07 billion, with $7.25 billion attributed to semiconductors and $5.8 billion attributed to software. Fueling these numbers were AI-related silicon and the contribution of VMware to the software portfolio.
On the software front, VMware’s number is a little more impressive considering that, post-acquisition, Broadcom sold off two considerable contributors (the Horizon end-user computing division and the Carbon Black security unit). Countering these strong numbers was the rest of Broadcom’s software portfolio, which saw a considerably smaller 4% YoY growth. This is to be expected, as the other contributors include what was Symantec and mainframe software solutions (previously CA). While I see VMware’s contribution as a big win for Broadcom, I believe CEO Hock Tan has the right perspective on this. At the recent VMware Explore event, he said the real measure of success with VMware will not be in short-term licensing deals and revenue, but the consumption of its new VCF 9 private cloud platform. (I published a detailed analysis of VCF 9 on Forbes.) While licensing revenue is transactional, consumption of the full capabilities of VCF 9 is sticky—meaning very long-term.
On the silicon front, Broadcom suggested that AI acceleration was carrying the business, while non-AI-related silicon had “bottomed out.” Further, the company expects to see the non-AI-related business rebound and accelerate through Q4. What Broadcom is suggesting is what I’m seeing across the industry: AI is fueling the tech industry at the moment, while non-AI-related business is more sluggish.
What to make of the U.S. government going after NVIDIA for antitrust violations? This is a tough one to sort through. Does NVIDIA have a monopoly? Yup. Is this monopoly due to anti-competitive behaviors? This is where it gets murky. NVIDIA’s CUDA software platform makes it difficult for non-NVIDIA silicon providers to be competitive. But CUDA is also almost 20 years old, and it only became popular because NVIDIA silicon was so much better than the competition that developers chose to use it. If AMD had been as successful in designing and building silicon after its ATI acquisition, CUDA would not be a lockout architecture today. In fact, we see what happens when competitors do create competitive silicon: AMD’s most recent quarter saw it far exceed expectations with the MI300, and the company raised its forecast.
I was with AMD when the company sued Intel for anti-competitive behavior. It was a legitimate gripe. OEMs were being compensated to limit Opteron (the AMD server CPU) in terms of portfolio, positioning, and go-to-market. Not only did Intel compensate AMD, it also paid heavy fines around the globe.
(Let me be clear that I don’t mean to suggest that AMD is tied in any way to the actions of the U.S. government. I mention them simply because they are now NVIDIA’s closest competitor and because of my experiences during the Intel antitrust activities.)
Is NVIDIA deploying similar tactics? Or has it simply designed better GPUs over the years and is now benefiting from that success? I don’t know the answer to that question. But if NVIDIA hasn’t done anything wrong, the U.S. government is actively stifling innovation and the success that comes along with that innovative spirit. NVIDIA made a lot of bets far ahead of the market, and those bets have paid off.
The analyst community lost a wonderful soul with the recent passing of Brian Gong of Pure Storage (and formerly Cisco). Analyst relations folks are, by nature, social creatures. The best ones are funny, warm, and genuinely interested in us as analysts and people. Even by this measure, Brian was a cut above others. He will be sorely missed for his warmth, wit, and genuinely gentle spirit. To the good folks at Pure Storage—we wish you well during this difficult time.
Observe, Inc. provides an observability platform that unifies telemetry data from distributed applications, enabling faster, more cost-effective troubleshooting. Integrating with over 250 data sources and cloud services like AWS and Kubernetes, the platform is built on Snowflake and uses a usage-based pricing model focused on data storage and querying. Observe aims to modernize monitoring by replacing traditional log analytics and infrastructure tools.
I mention this because Observe has a new release for its Observe Agent, taking a further step in the observability game by adopting OpenTelemetry as the standard for data collection. Given how packed this market is with seasoned players, it’ll be interesting to see what sets Observe apart from the crowd. Patrick Moorhead and I recently connected with the team at Observe; watch for more details about the tech behind the platform and how Observe plans to carve out its niche.
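For readers unfamiliar with what adopting OpenTelemetry means in practice, here is a minimal emitting-side sketch using the standard opentelemetry-sdk Python packages. Any OTLP-capable collector, presumably including the new Observe Agent, can ingest spans produced this way; the console exporter below just keeps the example self-contained.

```python
# Minimal OpenTelemetry instrumentation: an application emits spans in the
# standard format, and any OTel-compatible agent or backend can collect them.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name
with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.id", "12345")    # telemetry attached to the span
```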
Smartsheet is reportedly in talks to be acquired by private equity firms, including Vista Equity Partners and Blackstone. The reports of the acquisition talks resulted in a nearly 10% rise in Smartsheet’s shares. In its Q2 2025 earnings, Smartsheet reported that revenue increased by 17% YoY to $276.4 million. Smartsheet currently serves 85% of the Fortune 500 with its cloud-based enterprise solutions for project management and collaboration. Smartsheet management declined to comment on the buyout talks, but Reuters reported that the company hired an investment bank in June to explore interest from PE firms. Overall, M&A activity has slowed, creating pent-up demand in the private equity universe, according to a recent report from PwC.
Meta has joined the steering committee of the Coalition for Content Provenance and Authenticity. The C2PA, as a standards body, focuses on establishing ways to verify the origin and history of digital content, an increasingly vital task in the face of rising misinformation and the proliferation of AI-generated media.
There’s a certain irony in Meta promoting digital authenticity. Meta’s platforms, especially Instagram, have been criticized for fostering environments where users often present idealized (and heavily filtered) versions of their lives, contributing to feelings of inadequacy and inauthenticity, among other things. Meta’s business model relies heavily on collecting user data and targeted advertising, practices that can feel intrusive and manipulative, further eroding trust and authenticity. So, it can seem contradictory for Meta to now champion digital authenticity when its platforms have arguably played a role in creating the opposite.
Whether its efforts are perceived as genuine remains to be seen. In a digital world where distinguishing real from synthetic media is increasingly challenging, Meta’s active participation in the C2PA alongside other industry leaders represents a crucial step towards establishing a more transparent and trustworthy online environment. This move could influence how different platforms handle content verification, potentially shaping the future of how we consume and interact with information online. Ultimately, Meta’s deeper involvement with the C2PA is a promising indicator of a proactive approach to addressing the complexities and challenges brought about by the rise of AI and the spread of misinformation.
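C2PA’s actual mechanism is a signed manifest with certificate chains, but the underlying idea can be sketched with a keyed signature: if content changes after signing, verification fails. The key and byte strings below are invented for illustration only.

```python
# Toy analogue of provenance verification. Real C2PA uses signed manifests and
# X.509 certificate chains; this sketch only shows the core idea that a keyed
# signature over the content exposes later tampering.
import hashlib, hmac

SIGNING_KEY = b"publisher-secret"  # hypothetical key, not a real credential

def sign(content: bytes) -> str:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

original = b"photo-bytes"
tag = sign(original)
print(verify(original, tag))          # True: provenance intact
print(verify(b"edited-bytes", tag))   # False: content changed after signing
```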
Data and AI were central themes at this year’s Amazon Web Services (AWS) summit in New York. Dr. Matt Wood, AWS VP for AI products, noted, “Customers are able to apply generative AI to understand and leverage existing data in new and exciting ways.” At the event, AWS introduced new features to its three-layer GAI stack, enhancing AI infrastructure, models, and applications. These expansions aim to make AI and data analytics more accessible for large enterprises, small businesses, and startups. The summit also highlighted how Nasdaq utilizes AI, as well as the broader impact of AWS’s generative AI offerings across different industries. My latest Forbes article provides insights on AWS’s recent summit.
HPE recently announced its Q3 earnings, and it was a tale of two product portfolios, despite double-digit top-line revenue growth. To no one’s surprise, server revenue was up 35% based on strong AI systems demand, while networking revenue was down 23%. I expect that the company’s quarterly performance is a result of customer prioritization of computing infrastructure, but networking could rebound in the future with the imminent close of the Juniper Networks acquisition.
Capgemini is acquiring Syniti to broaden its data management expertise and strengthen its SAP project capabilities. With a team of more than 1,200 specialists, Syniti brings a wealth of experience in data transformation and management across industries including life sciences, aerospace, manufacturing, retail, and automotive.
This acquisition positions Capgemini to better support RISE with SAP implementations, especially in data migration to SAP S/4HANA. Both companies recognize that successful digital transformation hinges on clean, reliable data. By integrating Syniti’s expertise, Capgemini could offer clients a smoother path for data migrations and governance, as well as more efficient use of their data during ERP transitions.
Next week marks an exciting time as Oracle rolls out two of its big annual events. I will be at Oracle’s CloudWorld and NetSuite’s SuiteWorld conferences starting September 8, diving into the latest innovations in AI, automation, and more. AI and machine learning have an increasing influence on ERP systems, especially those related to demand forecasting, supply chain management, quality control, shipping, preventive maintenance, data intelligence, and process automation. I’m looking forward to seeing what’s new from Oracle and NetSuite in these areas. If you have any questions or want to set up a meeting, feel free to reach out.
Zoho announced the launch of Zoho Payments, a unified payment solution that allows businesses to accept payments via various methods (cards, UPI, net banking) directly within their business applications. Zoho Payments offers businesses flexibility in receiving payments from customers. Businesses can tailor the options—such as invoice e-mails, payment links, dedicated payment pages, and a secure client portal—so customers can choose how they want to pay. Early access customers in the U.S. can receive payments in 135 currencies, and the solution ties to all of Zoho’s finance ecosystem, including Zoho Books, Inventory, Billing, Invoice, and Checkout. This solution, which is now also available for early access in India, promises to provide faster payouts and streamlined dispute management.
The long-awaited and much-anticipated Thread 1.4 update is now available. This new version is significant because Thread is the low-power device mesh network used by Matter, and it’s already present in most homes as a standard feature of smart speakers and hubs.
Users frequently encounter five big problems when adding new devices and border routers to existing Matter/Thread networks, and this new version addresses all of them.
Different border router brands (smart speaker, hubs) can now share credentials and join existing Thread networks instead of creating new networks.
Users can connect multiple border routers over Wi-Fi and Ethernet to cover large buildings and campuses.
Users can install Thread devices without physical access to the device or its QR code.
Thread now supports network diagnostics that simplify troubleshooting.
Thread devices can directly communicate with cloud-based services.
Thread Group announced these enhancements at CES in January. Completing these complex new features in seven months is impressive progress for a standards body. For instance, multiple product ecosystems sharing the same Thread network required hyperscalers to collaborate on secure credential sharing. Those discussions must have been interesting. The good news is that 1.4 primarily affects border routers, while individual Matter/Thread devices are backward-compatible. Hence, the update should not delay the availability of new Matter devices. And many existing border router products are software-upgradeable to 1.4, so I expect a slew of new Matter products at CES in January. Please refer to Thread’s 1.4 features white paper for technical details.
Quantum Brilliance (QB) and Oak Ridge National Laboratory (ORNL) announced a collaborative effort to integrate QB’s room-temperature, diamond-based quantum computing technology with ORNL’s high-performance computing systems. Founded in Australia in 2019 and funded in part by the Australian Capital Territory Government, Quantum Brilliance specializes in room-temperature diamond quantum accelerators and wants to make quantum technology accessible enough to be integrated into everyday devices and advanced computing systems.
The collaborative objective is to explore the effectiveness of parallel and hybrid quantum computing. Parallel quantum computing uses multiple quantum processors working together, while hybrid computing combines quantum and classical processors. The hope is that these combined computational capabilities will solve problems beyond the reach of classical computing alone.
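A minimal toy of the hybrid pattern: a classical optimizer repeatedly calls a quantum subroutine. Here the “quantum” step is simulated with numpy; in a real deployment that call would be dispatched to a QPU such as QB’s accelerators.

```python
# Toy illustration of a hybrid quantum-classical loop. The one-qubit
# expectation value is simulated classically; a real system would run this
# step on quantum hardware and keep the optimizer on a classical processor.
import numpy as np

def quantum_expectation(theta: float) -> float:
    """Simulated <Z> for a qubit rotated by RY(theta): equals cos(theta)."""
    return np.cos(theta)

theta, lr = 2.0, 0.1
for _ in range(100):                      # classical outer loop
    grad = (quantum_expectation(theta + 1e-4) -
            quantum_expectation(theta - 1e-4)) / 2e-4
    theta -= lr * grad                    # descend toward minimum "energy"
print(round(quantum_expectation(theta), 3))  # approaches -1.0 at theta = pi
```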
We are getting closer to a powerful supercomputer that will integrate AI, HPC, and quantum technologies.
Zscaler is the latest company to suffer a stock value decline despite posting solid financial results for its most recent quarter. Sales were up 30%, billings up 27%, and deferred revenue up 32%. However, pressure on profitability and a softer revenue outlook triggered a 17% stock value decline last week. It proves that Wall Street will be satisfied only when expectations for both current and future financial performance are met.
SportAI recently closed a $1.8 million seed-funding round, allowing it to continue developing its technology and expand its reach. SportAI uses artificial intelligence to improve sports performance. Its platform provides sports-technique coaching, commentary, and analysis using machine learning, computer vision, and biometric technology. It caters to coaches, training facilities, broadcasters, sports equipment brands, and retailers. SportAI’s platform works with various video sources, eliminating the need for specific hardware and manual tagging—and thereby making video analysis more scalable and technically accessible. This also makes it more commercially accessible to more people, because previous versions of this type of performance analysis were costly and, therefore, mainly limited to professional athletes or larger companies.
AI is transforming both the business world and sports, with UEFA’s use of AI in the Champions League draw as a prime example. As the competition moved to a more complex 36-team league format, the traditional manual draw system, where teams were pulled from bowls, became impractical. The complexity of factors, such as preventing teams from the same country from meeting too often, made it too challenging for manual handling. AI software now manages these details, improving accuracy and reducing the risk of human error.
Concerns arose after a technical mishap in the draw for the 2021–22 tournament forced a redo. With AI now in place, UEFA has improved its ability to manage the process smoothly, but fans who worry about transparency and the potential for manipulation remain skeptical.
This shift in sports mirrors the broader impact AI is having across industries. AI is already transforming areas like customer service and data analysis in business, making operations more efficient. In sports, AI is enhancing efficiency and changing how complex logistics are managed. While scrutiny around transparency and misuse is present in both fields, the benefits of AI in reducing errors are evident. However, humans must always stay involved in these processes to ensure appropriate oversight.
Honeywell and Cisco are collaborating on an AI-powered solution that adjusts building systems based on real-time occupancy data to reduce energy consumption. The joint effort uses Cisco Spaces to collect occupancy and environmental data and Honeywell Forge Sustainability+ for Buildings to improve energy efficiency. Room temperatures, lighting, and ventilation are adjusted based on occupancy, leading to automated building operations, optimized energy use, improved employee comfort, and reduced greenhouse gas emissions.
This collaboration supports Honeywell’s aim to reduce buildings’ environmental footprint and aligns with its focus on automation and energy transition. Cisco also supports the initiative with its Country Digital Acceleration program, a worldwide effort involving governments and businesses to create equitable and safe societies using responsible and cutting-edge technology. Building owners are prioritizing energy management because of hybrid working policies and lower occupancy. These factors and other challenges are putting pressure on owners to operate buildings efficiently and minimize resource waste.
AT&T recently struck a new deal with Nokia; on the surface, many interpret it as an olive branch, given the operator’s alignment with Ericsson for open RAN infrastructure last year. Nokia has a long history of success in fiber optics, and the latest announcement will provide AT&T with platforms that will upgrade and expand its massive fiber network over a five-year period. Nokia’s Lightspan platform is extremely flexible and can provide symmetrical speeds at 10G, 25G, 50G, or 100G. It is potentially a lucrative opportunity for Nokia, one that the company desperately needs to keep its financial performance on stable footing.
RESEARCH NOTE: Qualcomm Scores with Snapdragon’s Manchester United Sponsorship
https://moorinsightsstrategy.com/research-notes/qualcomm-scores-with-snapdragons-manchester-united-sponsorship/
Wed, 04 Sep 2024 17:05:21 +0000
Qualcomm CEO Cristiano Amon and CMO Don McGuire present the Snapdragon Cup to Manchester United – Photo by Melody Brue
Qualcomm’s multifaceted partnership with Manchester United aims to transcend the traditional boundaries of sports sponsorships. Far beyond simply slapping Qualcomm’s Snapdragon logo on the front of Man United player jerseys, the chipmaker and the iconic football club are following a calculated joint strategy aimed at amplifying brand value for both organizations and driving business growth for Qualcomm. Although Qualcomm has a long history with sports sponsorships, its link with Man United is a testament to an evolving sports marketing philosophy, where the spotlight is on creating a genuinely symbiotic relationship with the sports team.
Manchester United, a club with a global fanbase of more than 1.1 billion people, provides Qualcomm with a huge platform to elevate the Snapdragon brand. By aligning with such a successful club with global reach, Qualcomm is shrewdly tapping into a rich vein of consumer emotion, potentially forging connections with fans that resonate far beyond the football pitch.
This approach is reminiscent of Intel’s iconic “Intel Inside” campaign, which successfully transformed a B2B component into a consumer-recognizable mark of quality and performance. Intel’s branding efforts helped consumers associate its microprocessors with superior computing experiences, and now Qualcomm aims to make Snapdragon synonymous with premium mobile technology across smartphones, laptops, and other connected devices.
The partnership’s early success is evident in the metrics. Qualcomm reports over a billion brand impressions, record kit sales, and widespread social media engagement since the partnership launched in July. However, some of the value lies in the intangible—the “joy” that Qualcomm’s CMO Don McGuire believes the partnership evokes, echoing the Snapdragon brand’s ethos. He says this emotional connection lays the foundation for long-term brand loyalty and consumer preference.
Qualcomm’s strategic use of data further solidifies its approach. By tracking fan engagement metrics, from shirt sales to social media sentiment, Qualcomm gains valuable insights into its target audience. This data-driven lens allows for personalized marketing campaigns to enhance brand resonance and drive sales.
Qualcomm Explores Old Trafford Naming Rights—And Tech Renovation
Qualcomm’s potential acquisition of naming rights for Man United’s home ground, Old Trafford, could be a literal game-changer in terms of fan experience. It would expose Snapdragon to billions of Manchester United and Premier League fans by associating it with one of the world’s most iconic sports venues.
The collaboration between the two companies extends beyond branding, venturing into technological innovation. Qualcomm’s vision for a technologically advanced Old Trafford, powered by Snapdragon processors and other Qualcomm products, offers a glimpse into the future of sports and entertainment—a future where many flagship sports facilities are already headed. The company could create immersive experiences in the stadium that could change how fans interact with the game. At the same time, these technology renovations could also serve as a tangible demonstration of Snapdragon’s capabilities. The improvements could take the form of anything from augmented reality replays to personalized stats on mobile devices and interactive stadium features.
That’s also a slippery slope, however. The company would need to create meaningful digital experiences while maintaining the historical charm of the current facility. Although discussions are ongoing for the naming rights, Qualcomm is wise to play out this initial sponsorship for a while to assess the more tangible ROI before making such a significant additional investment.
The Copilot + PC logo adorns the back of the Manchester United game jerseys. – Photo by Qualcomm
Snapdragon Highlights Industry Wins from the Pitch to the PC
Qualcomm’s collaboration with Microsoft further underscores its strategic partnership approach. Snapdragon X Series processors are currently the only ones powering the first generation of Windows-based, AI-enabled Copilot+ PCs from all the big makers including HP, Lenovo and Dell. The tight integration between Qualcomm and Microsoft inside these computers is echoed by their cooperation to put the Copilot+ PC logo on the back of Manchester United jerseys.
Last month, the team displayed the new kit at the Community Shield game that pitted Manchester United against crosstown rivals (and defending Premier League champions) Manchester City. Although Man United lost in a penalty shootout, the Copilot+ PC logo on the jersey succeeded in showcasing the strong partnership between Qualcomm and Microsoft.
Whether we’re talking about Intel Inside in the 1990s or Snapdragon today, influencing consumer preference for B2B products incorporated into consumer devices presents a special challenge. Qualcomm must ensure its branding efforts create a strong enough pull to encourage consumers to actively seek out Snapdragon-powered devices when making purchase decisions. McGuire acknowledges this challenge. “Our goal is to cultivate such a strong affinity for the Snapdragon brand that it influences consumer behavior at the point of purchase,” he said. “We want consumers to not only recognize the Snapdragon name but actively demand it when choosing their next smartphone, laptop, or other connected device.”
That appeal should only expand over time, if Qualcomm has its way. CEO Cristiano Amon announced during the company’s Q3 2024 earnings call that Snapdragon X-series PCs will be available at a price point as low as $700 next year, thereby broadening the accessibility of these high-performance Arm-compatible processors.
There is still more work to be done. In a recent post on X (formerly Twitter), Moor Insights & Strategy CEO and chief analyst Patrick Moorhead pointed out that Qualcomm’s success in the PC market hinges on a multifaceted approach in which ramping up marketing and sales efforts toward the enterprise sector is crucial. The company needs to expand its product portfolio to cater to various price points, including the desktop segment. Continuous collaboration with Arm and Microsoft is necessary to ensure seamless software optimization across a wide range of applications.
Qualcomm Keeps Expanding Its Ambitions
By addressing these challenges and striking the right balance between the consumer and B2B markets, Qualcomm has the opportunity to solidify its position as a significant player in the PC industry while fostering the increased competition and innovation among processor vendors that OEMs tell MI&S they crave.
While the full impact of the Man United partnership—and Qualcomm’s ability to compete across multiple price points and form factors—remains to be seen, Qualcomm’s strategic branding vision and calculated moves have positioned it well for success. Qualcomm is building a deliberate narrative around the Snapdragon brand, demonstrating company leaders’ belief that a strong brand isn’t just a marketing asset, but a key driver of long-term success and profitability.
Canva Steps Up To Challenge Adobe With AI Acquisition And Partnerships
https://moorinsightsstrategy.com/canva-steps-up-to-challenge-adobe-with-ai-acquisition-and-partnerships/
Wed, 04 Sep 2024 16:06:01 +0000
Canva’s acquisition of Leonardo.Ai and partnership with Getty Images boost its position against Adobe, but it must go further to truly compete against the design giant.
Leonardo.Ai brings high-quality AI-generated images to the Canva platform. – Image by Leonardo.Ai
Canva, the popular online design platform, has recently made important moves that demonstrate its growing ambitions in the AI-powered design space. The acquisition of Leonardo.Ai, an Australian AI content production platform, and Canva’s partnership with Getty Images highlight the company’s focus on enhancing its AI capabilities. These strategic moves position Canva as a significant player in the design industry—and pose a potential challenge to Adobe’s dominance.
Canva’s AI Power Play: Leonardo.Ai Acquisition
Canva’s acquisition of Leonardo.Ai marks a big step forward in its AI journey. Leonardo.Ai’s generative AI models are known for their ability to create high-quality images and art from straightforward and guided text prompts. Adding Leonardo.Ai models should significantly enhance Canva’s existing AI tools, such as Magic Studio. While I don’t think anyone could argue that Leonardo is as advanced as Adobe’s Firefly models, this move does signal Canva’s intent to accelerate the development of new AI-driven features and provide its users with more advanced design capabilities.
The acquisition brings to Canva not only Leonardo.Ai’s technology but also its team of 120 researchers, engineers and designers. This influx of talent should bolster Canva’s ability to innovate and develop new AI products while continuing to invest in research and development.
Canva’s Content Boost: Getty Images Partnership
This new partnership integrates Getty Images’ vast library of high-quality stock photos directly into the Canva platform, providing Canva’s users with access to millions of premium images for their design projects. This move could help Canva cater to a broader audience, including professional designers and businesses seeking premium visuals.
This partnership gives Canva an advantage that differentiates it from Adobe’s offerings. The Getty Images library includes more than 350 million images, which compares to 248 million for the Adobe Stock library (including photos, illustrations and vectors). This could attract users who prioritize quality and seek an alternative to Adobe Stock.
Getty Images’ recent launch of its own generative AI tool raises the possibility—as yet unspoken by the companies involved—that the collaboration could further integrate AI capabilities within Canva’s platform.
Competing With Adobe: Differentiation And Market Expansion
While Adobe remains dominant in the design industry, Canva’s recent moves could disrupt the competitive landscape. Both Canva and Adobe Express have established themselves as popular graphic design tools, catering to a wide range of users, from solopreneurs and small businesses to creative professionals. Their browser-based and mobile app accessibility and intuitive interfaces have made them go-to solutions for those without extensive design experience.
For its sheer quantity of templates, Canva has the edge. Its platform boasts an extensive library of 250,000 free templates, with more than 400,000 templates available on its premium plan—significantly surpassing the number of templates offered by Adobe. This selection provides Canva users with a wide range of options to kickstart their design projects and support various creative needs.
However, Adobe Express distinguishes itself with its cross-platform compatibility and a dedicated “Quick Actions” section that streamlines everyday tasks such as background removal, image resizing and format conversions. Adobe Express’s focus on efficiency and productivity serves users who value speed and convenience in their design workflow.
By catering to the needs of professional users and businesses—including via the Leonardo.Ai and Getty Images deals—Canva has made a big push to expand its market reach beyond individual users and small teams. This begins to position Canva as a viable competitor to Adobe not just for Adobe Express but in the enterprise market as well, potentially opening up new revenue streams and growth opportunities for Canva.
The Road Ahead: A Multifaceted Challenge
Canva’s journey towards becoming a leader in AI-powered design is one I’ll be watching closely. However, it is crucial to recognize that Adobe’s strength lies not only in its design tools, but also in its comprehensive suite of solutions catering to various aspects of enterprise marketing, content creation and data management. To truly challenge Adobe’s dominance, Canva would have to expand its offerings beyond design and venture into other areas where Adobe currently enjoys strong advantages. Fancy design tools, many templates and quick, streamlined actions for users are just a few steps in this journey. If it’s going to take on Adobe, Canva must build a robust ecosystem that addresses the entire content supply chain for businesses and creative professionals.
Nonetheless, Canva’s focus on accessibility, ease of use and AI-powered design could provide users with a compelling alternative to Adobe’s offerings. The competition between these two companies will likely intensify in the coming years, benefiting users with more choices and better tools.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with Adobe.
The Moor Insights & Strategy team hopes you had a nice Labor Day weekend!
Last week, Patrick Moorhead, Will Townsend, and Matt Kimball attended VMware Explore 2024. Matt also attended the GlobalFoundries Analyst event. Robert Kramer was in New York for the IBM SAP Analyst and Advisory Services Day, and Robert and Melody Brue were at the US Open with IBM.
This week, Anshel Sag is at IFA Berlin. Next week, Patrick, Melody, Matt, and Robert will be in Las Vegas for Oracle CloudWorld, and Will Townsend will be in London for Connected Britain, where he’s also moderating a panel. Jason Andersen will be in Austin for JFrog swampUP 24.
Over the last week, our analysts have been quoted numerous times in international publications with our thoughts on NVIDIA earnings, IBM, Crowdstrike, Amazon, and AI-powered smart glasses.
I’ve speculated about OpenAI’s Strawberry release several times over the past few weeks, here and elsewhere. Most of my coverage is based on research papers and factual material, so it’s fun to occasionally go off the factual rails and speculate. Most recently, another story appeared in The Information reporting that Strawberry was demonstrated to the U.S. government. The old information is that Strawberry will have much greater reasoning power than what’s available today. The newer information is that OpenAI is working on a new LLM called Orion, and that Strawberry will be used to train and enhance Orion. Whenever it is released, I believe it will move us into a new era of AI, one of super-reasoning. I’m looking forward to seeing something truly unique.
iAsk.AI is a relatively new entity to me, but I think I’ve found a new go-to AI model. iAsk.AI is a cutting-edge AI search engine developed by AI Search Inc., a company established last year. Its founders previously created CamFind, a visual search engine, and JRank, a tool for searching within a single complex website. iAsk.AI uses that technology to deliver instant responses to user queries.
iAsk Pro was the first model to achieve “Expert AGI” performance, scoring 93.89% on the MMLU benchmark and 85.85% on the new MMLU Pro test. The MMLU (Massive Multitask Language Understanding) benchmark evaluates the performance of AI models across a wide range of subjects, including science, mathematics, history, and more. It comprises over 12,000 questions from academic exams and textbooks, testing an AI’s understanding and reasoning abilities in diverse domains. The new MMLU Pro test is an updated and more difficult version of this benchmark. Achieving an 85.85% score on MMLU Pro indicates that the AI model performs exceptionally well and surpasses the accuracy of many human experts in these subjects. It outperformed the previous best model, GPT-4, by a large margin of 12 percentage points.
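For reference, an MMLU-style score is just accuracy over a large bank of multiple-choice questions. A minimal sketch follows, with invented placeholder items rather than real benchmark questions.

```python
# Minimal sketch of how an MMLU-style score is computed: accuracy over
# multiple-choice questions. The items below are placeholders, not real
# benchmark content.
def mmlu_accuracy(model_answer, questions) -> float:
    correct = sum(model_answer(q["prompt"]) == q["answer"] for q in questions)
    return 100.0 * correct / len(questions)

sample = [
    {"prompt": "2 + 2 = ? (A) 3 (B) 4", "answer": "B"},
    {"prompt": "H2O is? (A) water (B) salt", "answer": "A"},
]
print(mmlu_accuracy(lambda p: "B" if "2 + 2" in p else "A", sample))  # 100.0
```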
VMware Explore was a big event for the IT automation crowd this week, and the big news was the effort to simplify what had previously been a complex lattice of products. VMware Cloud Foundation 9 is now a solid and more unified starting point for customers to build their own clouds. Much of the coverage has been on the unification of compute, storage, and networking features, but there are also new DevOps services that are notable. The consolidation of multiple ops services and stakeholders is a trend that I discussed in this article published just a few days ago. A few years ago, we all would have been surprised to see VMware include a native Kubernetes stack within its foundation offering, but it’s there in VCF 9. Between these base DevOps services and a rich set of add-on capabilities, it will be very interesting to see how Broadcom will continue to build bridges to developer ecosystems.
This week I also had the pleasure of speaking with two startups that are specifically working on how AI can improve the productivity of development teams. This is a big step forward from the personal productivity benefits associated with AI assistants embedded in an IDE. Network Perspective is a Polish firm that collects and aggregates team data from collaboration and productivity apps to figure out how development teams can work better together, with the goal of making more time for the deep work that developers need to get into the flow. Network Perspective is already helping customers and is showing a new use for AI. The second startup is still in stealth mode, but the idea is to create a new type of IDE that allows developers to work on the same sets of code at the same time, with an AI assistant facilitating the process. Think of it like a virtual hackathon. These two new approaches show that there is more to AI than chatbots. I’ll be keeping an eye on these startups as they navigate the market.
Investors are worried about how Salesforce’s use of AI agents could affect productivity and the need for customer license seats. During a Q&A session for its Q2 2025 earnings call, CEO Marc Benioff said that there is significant interest in AI agents, with approximately 200 million currently in trials. Salesforce is considering a new consumption-based pricing model, which could involve charging $2 per AI agent conversation. The company is confident in its AI strategy, especially with the upcoming launch of the Einstein 1 Agentforce Platform. Salesforce’s goal is to have one billion AI agents in use by the end of fiscal year 2026.
VMware Explore 2024 came and went, and the big news out of this event was VCF 9. This launch is the beginning of the company’s strategy coming into focus as it looks to effectively deliver the public cloud on-prem. Billed as a private cloud solution, VCF 9 is, to me, the realization of what enterprise IT craves—a wholly crafted cloud stack and operating model that also allows an IT executive to deliver the environment and agility that their developers and data scientists require while simplifying the way their IT staff deploys, provisions, and manages infrastructure.
As I said, I think VCF 9 is what IT craves, but I’m not sure that IT realizes they crave this. This is due largely to perception—especially the perception issue around the term “private cloud.” This is a phrase that is tied to older technologies that never quite met the expectations of enterprise IT organizations and eventually gave way to hybrid cloud technologies. I would have greatly preferred the company find a different way to position VCF other than “private cloud.” Or at least to refer to it as a later version—an evolution of the older private cloud.
One thing I didn’t hear addressed at the event is how VCF supports hybrid cloud. To tell enterprise IT organizations that they simply need to repatriate all of their apps and data from the public cloud back to on-prem is not realistic. And the company has not really demonstrated how it will resolve this.
That said, I do like how the company has laid out a vision for VCF, along with a set of tools and services to enable this transformation. I also like how the company has laid down a strong opinion on what the future datacenter looks like. Now it just needs to execute against this vision.
How about that Nutanix quarter?! The company showed a strong beat on expectations and its guidance was even stronger. Nutanix has executed a strong strategy—a masterclass in leveraging market disruption (in this case caused by VMware turbulence). How did it do so? By activating OEM and channel partnerships, from both a technical and go-to-market perspective.
Pure Storage delivered a strong second quarter, outpacing the market in terms of growth. Despite this, the company saw its stock take a significant drop as its guidance for the rest of the year fell short of expectations. While I understand that the Street is forward-looking, it is disappointing to see a company punished while delivering stellar results and forecasting growth for the future. Regardless, it is good to see Pure establishing a stronger presence in both AI and hyperscalers, markets that represent the largest growth vectors in the storage market.
Are these Dell Technologies quarterly numbers real??? The company’s Infrastructure Solutions Group (ISG) saw a 38% year-over-year increase in revenue (to $11.6 billion) and a 22% YoY increase in operating income (to $1.2 billion). Both traditional and AI server sales saw strong growth and significant pipelines. Interestingly, the company’s storage business struggled, shrinking 5% YoY in terms of revenue. This is a continuing trend for the company that has seen its storage business on a continual decline, despite what it says is increased demand for core storage.
What is going on? I believe that AI and the performance requirements associated with it have put a renewed focus on high-performance storage. And while Dell’s storage portfolio is more than adequate, many storage companies (such as Pure, VAST, and Weka) are positioning themselves as critical to feeding the AI data pipeline.
No doubt Dell will find its footing on the storage front. And its >$7 billion revenue in servers and networking looks like it will be eclipsed next quarter.
MLPerf Inference 4.1 benchmark results were published this week, and there were some interesting numbers in the release. While NVIDIA ruled (as one would expect), AMD showed some compelling results with its first submission to the benchmark. Meanwhile, Untether AI demonstrated performance-per-watt leadership with its SpeedAI 240 accelerator.
While the AI training market is a battle among a few companies—and dominated by one—AI inference is an entirely different game. Traditional GPUs and big silicon will be challenged by companies like Untether AI that have designed and developed highly performant silicon that fits into very small power envelopes to support the diversity of use cases that span the enterprise.
Keep an eye on Untether AI and other companies like it (such as Tenstorrent)—this inference game is just heating up.
IBM and the US Open made last week memorable for me. It started with meeting Tracy Austin, who offered advice on my backhand for both tennis and pickleball. Growing up, I watched her win the US Open as a teenager (twice!); it was fantastic to hear her thoughts on IBM’s technology. FYI, she said she uses a two-handed backhand in pickleball (interesting!).
IBM is clearly transforming the way we experience sports and entertainment. This can be appreciated when we realize that IBM has been collecting data for the past 30 years in partnership with the United States Tennis Association for the US Open—in parallel with its efforts at Wimbledon and The Masters. I had the chance to use the US Open mobile app firsthand when I attended a few matches. The app provided detailed stories, scores, stats, AI-driven predictions, news, schedules, and much more that added a new layer of depth to the tournament experience. What really impressed me was its integration with Ticketmaster, which lets you access your tickets for the matches right within the app.
IBM’s technology from the US Open is making its way into various industries beyond sports and entertainment. In retail, it can be used to create personalized shopping experiences. In healthcare, it can enhance patient care. The cybersecurity measures deployed at the US Open can help financial institutions protect sensitive data. AI and data analytics can be used to optimize production processes in manufacturing, and these innovations are extending to many other sectors as well.
Much of the buzz around GenAI is centered on the compute side of the infrastructure stack. However, networking is a crucial component, serving as the conduit to move data, connect large language models, and eventually extend workloads to the network edge. Although Dell Technologies posted healthy growth in AI server sales in its recent quarterly earnings, its networking strategy heavily relies on Broadcom. That might not be a bad thing, given Broadcom’s investment in extending Ethernet’s interconnect capabilities, but companies such as HPE that are doubling down on networking infrastructure beyond using merchant silicon could gain an edge in delivering a more complete GenAI solution.
Last week the SAP practice within IBM Consulting hosted me for its analyst strategic session while I was in New York for the US Open. It was a valuable experience to join the IBM SAP team and discuss the critical nature of ERP systems for global enterprises. During the session, we discussed the key elements that contribute to successful ERP transformations and why the IBM SAP team has been effective. It starts with a global network of 18,000 certified SAP professionals. Solid execution processes, including change management, are also critical. Strong data management is at the core of this success, particularly given that AI is used to support project delivery and application management across industries including manufacturing, consumer goods, retail, defense, automotive, and utilities. In the coming weeks I’ll be writing up my research detailing more specifics, including case studies, on IBM SAP.
Transportation management systems (TMS) are improving supply chains by making operations more efficient for manufacturers, distributors, e-commerce businesses, retailers, and third-party logistics providers. These systems help streamline shipping, lower costs, improve profitability, and offer better visibility into the supply chain—in short, automating complex processes to secure transportation services at the best possible price without sacrificing quality. The value of TMS comes from understanding how to use the technology effectively and managing its implementation carefully to achieve tangible business results. TMS solutions can be part of larger SCM and ERP systems or used on their own. As TMS usage increases, I’ll explore their challenges, benefits, and impacts on businesses. More details to come.
According to Paycor’s “HR in 2025” study, newer employees are particularly prone to turnover, and remote workers often express less favorable views of their leaders and experience role ambiguity. Paycor surveyed more than 7,000 HR, finance, and IT professionals for the report. Some interesting key takeaways: The ongoing talent shortage is due to several factors, including low birth rates, retirements, skills gaps, and caregiving obligations.
In addition, the employee experience needs to start at the application step. 52% of candidates have declined job offers due to a poor experience during the hiring process, according to CareerPlug. To navigate this, companies are increasingly turning to AI to optimize their recruitment process and improve the candidate experience. This spans a wide gamut of processes, from broadening the candidate pool beyond active job-seekers to automating touchpoints that keep applicants updated and feeling valued, even when they don’t end up with an offer.
In a recent analyst briefing, Ericsson provided details about “site energy orchestration,” an initiative to reduce cellular infrastructure energy costs. The company says global mobile networks comprise 1% of global energy consumption (source: GSMA report 2024), so savings could be significant. Electric utilities are rapidly moving to dynamic pricing models that reflect real-time supply and demand. Ericsson shaves loads to avoid peak prices, charges local batteries when rates are low, switches to battery power when rates are high, and sells excess power back to the grid from on-site renewables and batteries. Ericsson’s field tests produced savings of 36% when combining these features. Any home or business, not just cell sites, can reap these benefits by orchestrating electricity flow, and that’s where IoT comes in. Matter, the smart home connectivity standard from Amazon, Apple, Google, Samsung, and other companies, enables the whole-home device communication required to manage electricity usage, storage, and generation. This paper explains how autonomous energy orchestration software using Matter-connected equipment can deliver measurable financial benefits. As more homes and businesses orchestrate power usage, the effects become grid-scale, with public policy implications. Bravo Ericsson!
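The orchestration logic is easy to sketch as simple rules over a price signal. The thresholds and hourly prices below are invented for illustration and are not Ericsson’s algorithm.

```python
# Rule-of-thumb sketch of the orchestration described above: charge batteries
# when power is cheap, discharge to shave expensive peaks, otherwise run from
# the grid. All numbers are invented for illustration.
def orchestrate(prices_per_kwh, battery_kwh=10.0, capacity_kwh=20.0):
    plan = []
    for price in prices_per_kwh:                      # one decision per hour
        if price < 0.10 and battery_kwh < capacity_kwh:
            action, battery_kwh = "charge", min(capacity_kwh, battery_kwh + 2)
        elif price > 0.30 and battery_kwh > 0:
            action, battery_kwh = "discharge", max(0.0, battery_kwh - 2)
        else:
            action = "grid"                           # run directly from grid
        plan.append(action)
    return plan

print(orchestrate([0.08, 0.09, 0.22, 0.35, 0.40, 0.12]))
# ['charge', 'charge', 'grid', 'discharge', 'discharge', 'grid']
```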
I’m increasing my coverage of the Linux Foundation’s LF Edge project to include two new at-large projects that are consistent with my views on IoT middleware and device software evolution. The first is EdgeLake, sponsored by AnyLog: a distributed, virtual relational database that combines structured data from multiple sites. A standard SQL query selects results from all databases, regardless of location (see the sketch below). There are two big advantages to this approach: (1) Only actionable data travels over the network; all data stays local, at the edge, until a specific query calls for it. (2) EdgeLake is pure middleware with standard SQL interfaces and no system dependencies, so it can plug and play with any device and any back-end application. The second project is Ocre, sponsored by Atym, which uses WebAssembly (Wasm) and Zephyr to provide ultra-lightweight containers for microcontroller-based edge devices, enabling developers to focus on applications without building custom OSes, system images, and OTA update services. It’s like Docker for small devices. Both projects are worth watching.
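To make EdgeLake’s headline claim concrete, here is what “one standard SQL query, location-transparent” looks like in practice. This minimal sketch uses Python’s sqlite3 purely as a stand-in SQL engine, with an invented table and data; against EdgeLake, the same statement would fan out across the participating edge nodes:

```python
import sqlite3

# sqlite3 stands in for any standard SQL interface. Against EdgeLake, this
# same statement would be resolved across every participating edge site, and
# only the aggregated result would cross the network.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (site_id TEXT, ts TEXT, temperature REAL)")
conn.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?, ?)",
    [("plant-a", "2024-08-26", 21.5), ("plant-b", "2024-08-26", 23.1)],
)
query = "SELECT site_id, AVG(temperature) FROM sensor_readings GROUP BY site_id"
for row in conn.execute(query):
    print(row)  # ('plant-a', 21.5) and ('plant-b', 23.1)
```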
In addition to the two LF Edge projects described above, the Linux Foundation’s Joint Development Foundation is sponsoring Margo, a mechanism for orchestrating applications and workloads on edge devices. Margo’s goals are ambitious (perhaps too ambitious), but big companies are involved (Microsoft, Intel, ABB, Capgemini, Rockwell, Schneider, Siemens, and more), so the initiative has plenty of resources. Margo is also worth watching.
Apple’s upcoming iPhone launch is happening on September 9 at 10 a.m. Pacific, and I believe it will be very iPhone- and AI-heavy. We might get more updates on wearables, though I don’t think we’ll get Macs running Apple’s M4 chips at the same event.
Anandtech’s abrupt shutdown marks the end of a 27-year run for the hardware review publication, which will be missed by many and whose influence on the industry will be felt for years to come.
A panel of justices from Brazil’s Supreme Court has upheld an order that X (formerly Twitter) be banned in that country. This ruling—the latest twist in months of conflict between the Brazilian judiciary and X over allegations of disinformation—shows the challenges of running a social media platform at global scale without global uniformity of law.
Spotify is (understandably) upset that Apple has stopped the function that allows the volume buttons on iPhones to work for Spotify Connect, forcing Spotify users to resort to a workaround to control this basic feature on connected devices such as wireless speakers or smart TVs. Spotify holds that this violates the Digital Markets Act in the EU, and I believe Apple’s actions will just set off another round of lawsuits in Europe—where Apple has already run into much resistance.
Post-quantum cryptography (PQC) is creating churn in the quantum ecosystem. Juniper Networks has announced an investment in Quantum Bridge Technologies, a pioneer in the Distributed Symmetric Key Exchange (DSKE) protocol for PQC networks. Juniper plans to advance quantum-safe communications by using Quantum Bridge to expand its DSKE technology, which integrates into its infrastructure without relying on asymmetric cryptography.
Quantum Bridge’s DSKE technology is the first to offer symmetric key distribution at scale and provides security against future quantum encryption-busting attacks. Combining this technology with Juniper’s quantum-safe VPNs and crypto-agility solutions increases the security of Juniper’s networking platform. It should protect encrypted data from “harvest now, decrypt later” threats, where actors steal encrypted assets now and decrypt them when the quantum capability becomes available. This deal gives both Juniper and Quantum Bridge a strong position in quantum-safe networking.
Iranian hackers hoping to disrupt U.S. political campaigns are using DNS techniques to register and weaponize lookalike domains with the intent of stealing data through sophisticated phishing attacks. Compromised data could be used for direct cyberattacks against specific candidates, or even to steal voter data to enable casting fraudulent ballots in the future. However, DNS-specific cybersecurity tools can be used to counter these attacks. A prime candidate (ahem) for this is Infoblox, which has made DNS the cornerstone of its platform development efforts for two decades. Recent announcements about its DNS threat intelligence capabilities point to its ability to identify threats much sooner than other vendors. Whatever happens, the vigilance of both big tech companies such as Google, Microsoft, and Meta and specialized security vendors like Infoblox will be required to back up the efforts of government agencies and the campaigns themselves to keep U.S. elections free from interference.
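One of the core signals behind these DNS defenses is string similarity between newly registered domains and the domains they imitate. Here is a toy Python version of that check; the protected-domain list and threshold are hypothetical, and production tools such as Infoblox’s use far richer features than simple edit similarity:

```python
from difflib import SequenceMatcher

PROTECTED = ["examplecampaign.org"]  # hypothetical domain being protected

def is_lookalike(candidate: str, threshold: float = 0.85) -> bool:
    """Flag a newly registered domain that closely resembles a protected one."""
    return any(
        candidate != target
        and SequenceMatcher(None, candidate, target).ratio() >= threshold
        for target in PROTECTED
    )

print(is_lookalike("examp1ecampaign.org"))  # True: '1' substituted for 'l'
print(is_lookalike("unrelated-site.com"))   # False
```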
Verizon has partnered with the Atlanta Hawks and State Farm Arena to be the Atlanta venue’s official 5G wireless partner. This partnership aims to improve connectivity for fans at games and other events. Verizon will also be making technology upgrades throughout the arena to improve experiences for everything from event entry to concessions. With this partnership, Verizon also plans to create exclusive experiences for Verizon customers and connect more deeply with the Atlanta community.
Verizon is actively taking steps to reduce its environmental footprint. The company is transitioning to renewable energy, aiming to source 50% of its annual electricity usage from renewable sources by 2025, and 100% by 2030. The company also actively helps customers reduce their carbon emissions, with its solutions enabling the avoidance of over 90 million metric tons of CO2 equivalent since 2018. Water conservation is also a priority, as the company has reduced water usage by 16% between 2019 and 2022. Verizon is also working to electrify its fleet, plus it has set a goal to collect and recycle 10 million pounds of e-waste by 2026.
Deutsche Telekom recently announced its plans for deploying 5G Standalone—with an interesting twist. The operator plans to offer it as a bespoke service married with network slicing, rather than deploying it broadly to subscribers. It is an interesting strategy, likely designed to avoid confusing the German market, given the roller coaster of high and low expectations for 5G globally. One thing is for certain: the 3GPP standards body will not repeat the mistake of letting core infrastructure deployment lag the radio access network for 6G and beyond.
The Silent Evolution Of Meetings: Hybrid Work’s Impact On Engagement
https://moorinsightsstrategy.com/the-silent-evolution-of-meetings-hybrid-works-impact-on-engagement/
Thu, 29 Aug 2024 20:26:16 +0000
https://moorinsightsstrategy.com/?p=41874
Virtual meetings are as popular as ever, but a Vyopta study shows that some of the trends connected to them raise concerns about employee engagement and retention.
For the study, Vyopta analyzed over 40 million meetings involving more than 450,000 employees, focusing exclusively on remote or hybrid meetings conducted through online platforms including Microsoft Teams, Cisco Webex, and Zoom. The research revealed some startling facts. While many organizations have implemented return-to-office policies since the COVID pandemic, the volume of virtual meetings has remained steady even as in-person meetings have more than doubled. In other words, virtual meetings have become an integral part of our work culture, regardless of where people are physically located.
What We Have Here Is A Failure To Participate
However, the study also highlighted a concerning trend: the rise of no-participation rates in small-group meetings. This refers to instances where participants remain on mute for the entire meeting, essentially becoming passive observers. This is like “quiet quitting,” but for meeting participation.
In 2023, the rate of “quiet meetings”—where at least one participant is on mute for the duration of the meeting—reached 7.2%. This is a significant increase from 4.8% in 2022. According to McKinsey research, employee disengagement and attrition could cost a median-size S&P 500 company between $228 million and $355 million a year. The disengagement portion of that total varies by the size of the organization, but across the board amounts to 56% of the loss. That means lost productivity is costing many companies of this size $80 million or more per year as employees essentially disengage from crucial discussions.
Another notable shift is the decreasing frequency of camera usage in virtual meetings, which makes sense to me. There are many reasons to have the camera off when working remotely, including attending a super early meeting (read: not yet showered) with coworkers in a different time zone, having kids home from school, or working in a location with a distracting background. But common sense also tells us that if someone truly isn’t participating in a call, it’s likely because they are doing something else. This could mean they are not a critical participant in the conversation or are attending as more of an observer. However, whether merely being a “fly on the wall” or not contributing for other reasons, not actively participating could indicate disengagement.
This camera-off trend is the first such decline Vyopta has observed since the early days of the pandemic. In 2023, the camera enablement rate dropped to 25.6%, down from 30.2% in 2022. While some may view this as a minor and natural shift, the study suggests that camera usage and participation rates strongly correlate with employee retention.
Is Digital Debt Contributing to Quiet Meetings?
The constant barrage of data, emails, meetings and notifications has created a phenomenon known as “digital debt”—similar to the “technical debt” that software developers know all too well. This overwhelming influx of information surpasses our capacity to process it effectively, leading to decreased productivity and innovation. A recent Microsoft study showed that 64% of employees struggle with having the time and energy to do their job, and these individuals are 3.5 times more likely to also struggle with innovation and strategic thinking. Furthermore, 60% of leaders express concern about the lack of breakthrough ideas within their teams, highlighting the tangible impact of digital debt on business success.
Quiet meetings don’t necessarily lead to quiet quitting—they might lead to actual quitting. Employees who showed significantly lower camera enablement rates (with cameras on as little as 18% of the time) left their organizations within a year at a much higher rate than those whose cameras were on at least 32.5% of the time. This may indicate that active engagement in meetings, both verbally and visually, plays a crucial role in fostering a sense of connection and belonging among employees, ultimately impacting their decision to stay with a company.
In a Harvard Business Review article, two Vyopta executives noted that these associations don’t necessarily prove a direct cause-and-effect relationship—and shouldn’t be applied indiscriminately to individual cases. However, the observed trends are significant enough to warrant further investigation at an organizational level.
Culture Backed By Technology Drives Employee Engagement
All of these findings underscore the need for organizations to rethink their approach to meetings in the hybrid work era. Leaders need to prioritize creating engaging and inclusive experiences that encourage active participation from all attendees, regardless of their location. From speaking to leaders across many organizations, I can tell you that the technology used to conduct meetings is a huge driver in creating those experiences.
Beyond the technology, a thriving hybrid work environment is fundamentally rooted in a culture of trust. Employees want to feel empowered to work when and where it’s most productive for them, without the constant pressure to be on camera. However, Vyopta’s study shows that on-camera time benefits both workers and companies. Features such as noise cancellation and blurred backgrounds create a professional atmosphere even in less-than-ideal home environments. Additionally, touch-up features or even avatars can help alleviate self-consciousness, ensuring everyone feels comfortable and confident participating fully in virtual meetings.
I’ve written about a wide range of hardware and software that can make meetings more enjoyable for participants in and out of the office. For example, I swear by my HP Poly Voyager Free 60+ UC earbuds to knock out ambient noise so I can take a call even when I’m in the bleachers at a baseball game (two of my sons are serious players) or in an airport. And Cisco has formed partnerships with automakers such as Mercedes-Benz, Audi, and Ford to deliver in-car Webex functionality.
Regardless of the specific vendors chosen, it’s the synergy of trust and technology that creates a hybrid work environment that can maintain the benefits of flexible work. Companies should take thoughtful, concerted measures to foster productivity and well-being while ensuring that meetings remain productive, collaborative, and fulfilling for all participants. The future of work is hybrid, and there is a tremendous opportunity to shape it in a way that empowers employees and drives organizational success.
The Moor Insights & Strategy team hopes you had a nice weekend!
Last week, Robert Kramer attended the Modern Data Quality Summit 2024 virtually and Will Townsend hosted a live webinar with Nile: From Complexity to Cloud-Native: Top-10 Reasons to Start Building Your Next-Gen Enterprise Network. If you missed it, it’s now available on demand.
This week, Will Townsend is attending VMware Explore while Matt Kimball attends virtually. Matt is also attending the GlobalFoundries Analyst event. Robert Kramer is attending the IBM SAP Analyst and Advisory Services Day in New York. Robert and Melody Brue will attend the US Open with IBM in New York.
Last week, our MI&S team published 16 deliverables.
An AI “oops” — OpenAI does a lot of development with advanced AI models that may act autonomously. It has developed a Preparedness Framework designed to assess and mitigate potential risks associated with those models to ensure that anything related to autonomy is identified and managed. To address the challenges of evaluating generated code and simulating real-world development scenarios accurately, OpenAI uses SWE-bench, a benchmark that evaluates large language models’ ability to solve real-world software issues sourced from GitHub. During its testing, however, OpenAI found that some tests in SWE-bench could be too hard or even impossible to solve, which could cause a model’s capabilities to be underestimated. OpenAI is currently working with SWE-bench’s maintainers to fix these issues.
The last thing we need is an AI safety incident caused by underestimating a model’s autonomous capabilities.
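The remedy here boils down to a sanity filter: if even a task’s reference (“gold”) solution fails that task’s own tests, the task can’t measure model capability and should be excluded from scoring. Here is a hypothetical, self-contained sketch of that idea, not OpenAI’s or SWE-bench’s actual harness:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    task_id: str
    gold_patch_passes: Callable[[], bool]  # stub: does the reference fix pass its tests?

def filter_solvable(tasks: list[Task]) -> list[Task]:
    """Keep only tasks whose reference solution actually passes, so a model
    failure reflects the model rather than a broken benchmark item."""
    return [t for t in tasks if t.gold_patch_passes()]

tasks = [
    Task("django-1234", lambda: True),   # well-formed task
    Task("flaky-5678", lambda: False),   # impossible task: would skew scores downward
]
print([t.task_id for t in filter_solvable(tasks)])  # ['django-1234']
```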
This week I published a new piece on Forbes about whether developers should be worried about AI replacing their jobs. The research I performed led me to the conclusion that there is no imminent danger for devs. Yet a gap remains between what the general public thinks developers do and what developers themselves understand about the nature of the work, how it gets done, and how the role has evolved. To that end, I wrote this new article to help go beyond all of the soundbites and opinions out there.
This week has also been dominated by follow-up conversations and interactions from this new article. The topic moved from developers to the broader idea of AI augmenting humans and processes versus replacing them. I have a new piece coming up where I don my developer hat and fail spectacularly. But in the process, I learned a lot about how to change your mindset to get better results from AI, instead of getting the same results only faster and/or cheaper.
LiquidStack, the leader in immersion cooling technology, has done a good job of expanding its portfolio to be competitive in the direct-to-chip cooling space. This company, which cut its teeth in the crypto-mining space with a two-phase immersion solution, has clearly seen the trends and transitioned quite well into a company with a much broader portfolio.
Liquid cooling is the future—and not the distant future. The amount of investment dollars pouring into this space is astounding and companies like LiquidStack and JetCool are very well positioned (and funded) to play a significant role in both shaping and capturing the market. There is also a lot of promotion from server vendors around their own proprietary cooling solutions. However, as I speak with datacenter operators, it is clear they are looking for solutions that can span all systems across all racks, especially as AI and other workloads drive heterogeneity across the enterprise.
What to make of AMD’s acquisition of ZT Systems? Are you a fan? $4.9 billion is a lot of money to pay for a company when the intent is to spin off half of its operations. When looking at what AMD is actually acquiring, it’s about having a dedicated team to design AI systems. Given that about 1,000 engineers are coming over from ZT, some have framed the acquisition at a cost of about $4.9 million per systems engineer.
I’m a fan of the move—a big fan. By many estimates, the AI market is expected to grow to more than $400 billion annually in the next few years. This AI market is going to be powered by servers that are unlike what is being deployed today. These will be highly bespoke systems that tightly integrate CPUs, GPUs, I/O, networking, and storage to best move, process, train, and operationalize data. In this context, AMD is the only company whose IP portfolio can (at this moment) realistically challenge the dominance of NVIDIA.
Putting the pieces together in a bespoke platform is really difficult and time-consuming. And time-to-market is absolutely critical if AMD wants to compete beyond just spec sheets and capturing overflow business. To compete in a significant way, AMD needs the resources to design these systems faster and more completely. Further, the system design work has to integrate with and inform silicon design. THAT is what AMD has bought with ZT Systems, and it is going to pay dividends down the line.
Juniper Networks recently announced its Blueprint for AI-Native Acceleration. The company is offering training, trial offers that include software and hardware, and flexible licensing to reduce the friction for customers that are hesitant about embracing AI-infused networking. It is a novel approach, one that provides Juniper channel partners with a new set of tools that could lead to closing more network infrastructure sales opportunities.
During its August 2024 Security Patch Day, SAP released fixes for 17 vulnerabilities, six of which were particularly severe, scoring between 7 and 10 on the Common Vulnerability Scoring System (CVSS) scale. SAP urged customers to apply these patches immediately and provided workarounds for situations where immediate patching isn’t feasible.
These are the two most critical vulnerabilities:
CVE-2024-41730 — An authentication bypass flaw in SAP’s BusinessObjects intelligence platform, with a CVSS score of 9.8. This vulnerability allows unauthorized users to obtain a logon token via a REST endpoint if single sign-on is enabled, potentially compromising the system’s confidentiality, integrity, and availability.
CVE-2024-29415 — A server-side request forgery (SSRF) vulnerability in applications built with SAP Build Apps. This issue arises from improper categorization of IP addresses that was not fully addressed in a previous fix.
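The general defense against this class of SSRF flaw is strict IP categorization before any server-side fetch: resolve the hostname and reject anything that isn’t a globally routable address. Here is a minimal Python sketch of that pattern, for illustration only and not SAP’s actual patch:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_destination(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local, or other
    non-global addresses before a server-side request is made."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    # Every resolved address must be globally routable.
    return all(ipaddress.ip_address(info[4][0]).is_global for info in infos)

print(is_safe_destination("http://127.0.0.1/admin"))  # False: loopback address
```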
Hackers frequently target ERP systems because they present such big—and potentially disruptive—targets, most of all with major vendors such as SAP. There has also been a significant increase in ransomware attacks specifically targeting ERP systems since 2021. I suggest always staying current on your ERP system and taking immediate advantage of updates/patches to reduce the threat of attacks.
Contact-center software company Genesys released its 2024 Sustainability Report, which showed that it is making significant progress towards its 2030 sustainability goals. These include reducing emissions, improving its CDP and EcoVadis assessments, and opening a new LEED Gold-certified R&D center. Socially, Genesys has expanded its charitable offerings and continued to focus on diversity and inclusion in the organization. Genesys’ public commitment to sustainability is commendable and, I believe, a standout in the industry.
Gamescom this year has really embraced its role as the replacement for E3, with tons of game announcements and many hardware manufacturers taking their teasers from Computex and relaunching them at the show. One company that didn’t do that was HP, which announced a new highly customizable Omen 35L gaming PC along with some new keyboards and microphones.
There are rumors that Meta is canceling its La Jolla mixed-reality headset, a potential successor to the Quest Pro—and a device that I honestly didn’t think was necessary to begin with. Especially given that potential issues with LG’s manufacturing could already be pushing back production, I don’t think Meta has much wiggle room on timelines. I’m also not sure that we need more headsets over $1,000.
It has been really interesting to see how many titles are coming out for Xbox and PS5 at Gamescom, thanks to both Sony and Microsoft mostly abandoning console exclusives in favor of simply selling as many copies of a game as possible. Helldivers 2 showed how successful this approach could be, although Sony got a bit too greedy and tried to make people sign up for a mandatory PlayStation Network account after they had already bought the game.
“QuitToking” is a growing trend in which employees publicly resign on TikTok or other social media platforms. This clearly reflects a shift in employees’ priorities towards work-life balance and a willingness to voice their discontent. This trend presents a double whammy for companies in that they are losing valuable talent, and public resignations can damage their brand. While it can be risky for employees to air grievances publicly, by the time they’re on TikTok recording themselves walking out of the office for the last time, the damage is done for that employer. Savvy organizations will take heed of this trend—and what it says about today’s workplace atmosphere—and put themselves ahead of the curve. One way is to take advantage of solutions provided by companies such as Workvivo, Zoho, and Slack. These platforms help foster open communication and track employee engagement (among other things) with the goal of creating a more positive work environment—one that helps companies protect their employer brand and retain top talent.
NXP released a fully supported Debian 12 Linux distribution for “select” i.MX and Layerscape evaluation kits. Debian is among the most popular distributions for embedded applications because of its stability, extensive package support, and long-term updates. NXP provides complete board support packages (BSPs) and a Yocto configuration toolchain, enabling developers to start building production-ready IoT applications with little or no system development. This announcement further proves that IoT device development is rapidly transitioning from DIY custom mashups to software-defined application platforms. Product companies using a platform approach rather than full-stack embedded development have much faster product cycles, lower costs, more advanced features, better security, and higher quality.
Smart metering growth has remained steady at a CAGR of about 7% for years. In a recent report, Transforma projects that the smart metering market will increase to $40 billion by 2033 as connectivity technologies consolidate around LPWA options such as cellular mMTC, LoRaWAN, and Sigfox. Today, metering accounts for 10% of all IoT connections. Although Transforma reckons metering’s share will drop to 9% over the next decade as other IoT use cases accelerate, I predict a much sharper decline. As AI applications create insatiable demands for instrumenting enterprise and consumer physical assets, I believe the metering share will drop to less than 5% over that period. I describe how AI is transforming the industrial IoT landscape in this new Forbes article.
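As a rough sanity check on that projection (assuming 2024 as the base year, which is my assumption rather than Transforma’s), discounting the 2033 figure by nine years of 7% growth implies a current market of a bit over $20 billion:

$$ \frac{\$40\text{B}}{(1.07)^{9}} \approx \frac{\$40\text{B}}{1.84} \approx \$21.8\text{B} $$

In other words, the projection implies the smart metering market roughly doubles over the coming decade.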
The Google Pixel 9 just hit the shelves, so people can now experience the new Gemini Live voice assistant, along with the new Panorama mode (exclusive to the Pixel 9) and satellite messaging capability. I’ll have much more on all aspects of the Google Pixel launch in a writeup later this week on Forbes.
Many wearables, especially earbuds and rings, have a major repairability problem: once they break, they’re mostly not repairable and turn into throwaway tech. This was brought home to me by a recent teardown of a Samsung Galaxy Ring from iFixit and another video about the same issue with Apple’s AirPods Pro. The scope of the problem suggests there should be a lurking entrepreneurial opportunity, so if any of our readers are familiar with companies attacking this problem, I’d love to hear about them.
Google’s Android 15 Quarterly Platform Release 1 is live for Pixel devices, and one of the interesting tidbits to emerge from it is that the Pixel 6 might be getting longer support than Google had promised. The Pixel 6 was only supposed to receive system updates until October 2024, but it seems that it is now being included in the QPR, which means it might get Android 15 and stay fresher than most people expected.
Apple has split its App Store team in two and reorganized the group amid global regulatory scrutiny. The company has also begun allowing users to delete the App Store from their iPhones altogether. This comes in the wake of rulings earlier this year by the U.S. Supreme Court and the European Commission that have effectively forced Apple to modify its App Store practices. So while further changes are expected, nobody fully knows yet how it will all pan out.
RingCentral recently announced updates to its contact center solution, RingCX. These updates include native, real-time AI assistance for agents and supervisors and AI-based coaching and feedback tools. The company also reported substantial growth in its RingCX customer base in its first year. RingCentral hosted a virtual event to showcase the new AI capabilities within RingCX and their practical applications. The company had CX author Blake Morgan discuss the importance of customer-focused leadership, so the event strategically combined product announcements with thought leadership. It was a smart move to showcase product updates alongside practical advice on how to leverage them for improved customer experiences.
Zoom’s stock surged following strong Q2 2025 results, marking its best day in almost two years. The company beat revenue and earnings expectations and raised its full-year guidance. This success comes as Zoom expands its product portfolio and invests heavily in AI to reignite growth, which stalled after its pandemic boom. The company reported that lower customer churn and its growing contact-center business helped with the results. This should indicate a promising future for Zoom beyond its videoconferencing roots. The company also announced the departure of its long-time CFO, Kelly Steckelberg, a significant loss despite the positive financial news.
Webex AI Codec is now generally available in the Webex app; it aims to enhance audio quality during online meetings and calls, particularly in challenging network conditions. Audio continues to be one of the most important aspects of collaboration and is a significant factor in providing the best experience for employees and customers. The company says the technology can deliver clear audio while using minimal bandwidth—which addresses a common frustration in remote and hybrid work environments. I saw and heard a demo of the codec last year, and it works as advertised. I think this is a significant development in the industry as it reflects companies’ focus on ensuring users’ ability to communicate regardless of their location or Internet connectivity.
NVIDIA has announced another partnership with MediaTek, this time for monitor scalers that enable G-Sync Pulsar, the latest generation of NVIDIA’s adaptive refresh display technology, on more affordable monitors. Previously, G-Sync required a dedicated G-Sync module to run all the technologies NVIDIA has created for it, but now those capabilities will be integrated into the monitor’s scaler, saving on cost and complexity. This will broaden the appeal of G-Sync and bring it to an even bigger audience.
PsiQuantum has made significant progress towards its goal of developing a fault-tolerant quantum computer with a million qubits. The company recently announced a partnership to establish a quantum computing hub in Chicago, backed by $500 million in public funds. PsiQuantum is also building prototypes in the U.K., at Stanford University, and in Brisbane, Australia, where it plans to develop its first fault-tolerant photonic quantum computer by 2027.
Photons have long coherence times with minimal environmental interaction, making them ideal for quantum computing. PsiQuantum uses single-photon sources, integrated superconducting detectors and photonic chips, all produced through a high-volume manufacturing partnership with GlobalFoundries. PsiQuantum’s photonic quantum architecture employs advanced filtering and interference techniques to ensure high-quality photon qubits and help it achieve a million-qubit system capable of solving complex, previously intractable problems. You can read more about PsiQuantum in my latest Forbes article.
Research by Quantinuum has simplified quantum error correction. Qubits make errors easily, but determining where those errors occur is a complicated process that requires a lot of time-consuming checks called syndrome extractions. Now scientists from Quantinuum and the University of Maryland have found a way to correct errors using a method called single-shot quantum error correction. Quantinuum’s latest H2 quantum computer was used to test this method with a 4-D surface code that makes it easier to find and fix errors. Compared to the older 2-D surface code, the 4-D code did as well or better while using fewer resources and less time. This shows that single-shot error correction can speed up quantum machines for complex calculations. You can dig into the details in the preprint article conveying this research.
Halliburton is the latest big company to fall victim to a cyberattack. Critical infrastructure, including oil and gas services, will continue to be a target for bad actors that use denial-of-service and other attack techniques. What is especially troubling about this incident is the impact on the energy industry and the negative consequences for consumers and enterprises that rely on refinery and natural gas production. Consequently, it is imperative that a multi-layered approach to security be employed to safeguard an industry that is tied to the United States’ national security.
xMEMS has come out with a semiconductor cooling solution that has no fans and can move air without needing heat pipes or copper to transfer heat. This will hopefully give Frore Systems a run for its money and could give the entire solid-state cooling market a credibility boost.
I am looking forward to attending the US Open to see firsthand how IBM is leveraging AI to reshape tennis fans’ experience. The partnership between IBM and the USTA takes a strategic approach to brand and technology integration, with AI-powered match commentary and personalized insights that should enhance fan engagement on a granular level. What’s really impressive is IBM’s AI learning initiative through SkillsBuild, using tennis as a familiar framework. This shows the company’s dedication to fostering broader technological literacy and the potential of AI to not only revolutionize industries such as sports but also create educational opportunities for individuals across diverse backgrounds.
The Moor Insights & Strategy sports technology team (Melody Brue and Robert Kramer) will be in action at Flushing Meadows for this year’s US Open tennis tournament. IBM, which has been working with the United States Tennis Association for 30 years, combines data, AI, and the IBM watsonx platform with tennis to bring fans the future experience of sports technology.
Tune in and take advantage of the team’s live podcasts. We will be taking a closer look at the technology and talking with IBM experts, fans, players, and other vendors. For now, you can find out more from this IBM post about AI at the US Open.
Nokia and Axiom Space are partnering to embed 4G LTE communications in spacesuits for the Artemis III mission to the Moon’s south pole in 2026. This effort complements Nokia’s already planned cellular network deployment on the Moon as part of NASA’s Tipping Point program. The applications are exciting, including helmet camera HD video streaming, telemetry data transmission, and voice communications over long distances. Pairing Nokia’s cellular infrastructure with spacesuit communications is truly out of this world, and it could help unlock new findings on an uncharted part of the lunar surface.
Ep 33: Hot Desk Podcast: Talking RingCentral, Zoom, SCM Canadian Railways, ERP/SAP, WebEx, US Open
https://moorinsightsstrategy.com/hot-desk-pod/ep-33-hot-desk-podcast-talking-ring-central-zoom-scm-canadian-railways-erp-sap-webex-us-open/
Mon, 26 Aug 2024 20:50:44 +0000
https://moorinsightsstrategy.com/?post_type=hot_desk_pod&p=41712
The Hot Desk team talks RingCentral, Zoom, SCM Canadian Railways, ERP/SAP, Webex, and more on this episode of the Hot Desk Podcast.
We’re LIVE for Ep 33 of the MI&S Hot Desk Podcast! Join Robert Kramer and Melody Brue for a rundown of what’s up this week in collaboration, CX, ERP & SCM, and a little sports tech. We’re talking lots of great stuff: RingCentral RingCX AI, Zoom Q2 2025 earnings, ERP vulnerabilities, SAP, SCM/Canadian railways, the Webex AI Codec, and the US Open tennis tournament.
Watch the full episode here:
Listen to the audio here:
1:45 RingCentral Announces AI for RingCX
7:22 Zoom Q2 2025 Earnings
12:54 SCM Canada Railways Significance
16:51 ERP/SAP Vulnerabilities
24:05 Webex AI Codec
30:12 US Open
Disclaimer: This show is for information and entertainment purposes only. While we may discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.