RESEARCH NOTE: NVIDIA Fourth-Quarter and Fiscal 2024 Results

By Matt Kimball - February 26, 2024

NVIDIA set the world on fire with its latest earnings release. Its strong growth continues to be fueled by seemingly insatiable demand for its datacenter GPUs, especially for AI. Not only were the fourth-quarter and full-year results incredibly strong, but the company also guided upward yet again for the first quarter of its FY2025.

Below is a summary of the key numbers, along with some insights into NVIDIA’s historic run.

By the Numbers

All financials are in millions of dollars (except EPS):

Quarter        Revenue    Y/Y      Gross Margin    Y/Y         Diluted EPS    Y/Y
Q4 FY24        $22,103    +265%    76.0%           +12.7 pts   $5.16          +486%
Q4 FY23        $6,051     —        63.3%           —           $0.88          —

Fiscal Year    Revenue    Y/Y      Gross Margin    Y/Y         Diluted EPS    Y/Y
2024           $60,922    +126%    73.8%           +14.6 pts   $12.96         +288%
2023           $26,974    —        59.2%           —           $3.34          —
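As a quick sanity check, the Y/Y figures fall straight out of the raw numbers. Here is a minimal sketch in Python (illustrative arithmetic only, using the table values above):

    # Recompute the Y/Y changes from the raw table values above.
    # Illustrative arithmetic, not an official NVIDIA calculation.

    def yoy_growth(current, prior):
        """Year-over-year growth, expressed as a percentage."""
        return (current / prior - 1) * 100

    # Q4 FY24 vs. Q4 FY23
    print(f"Q4 revenue Y/Y: {yoy_growth(22_103, 6_051):.0f}%")   # ~265%
    print(f"Q4 EPS Y/Y:     {yoy_growth(5.16, 0.88):.0f}%")      # ~486%
    print(f"Q4 margin Y/Y:  {76.0 - 63.3:.1f} pts")              # ~12.7 pts

    # FY2024 vs. FY2023
    print(f"FY revenue Y/Y: {yoy_growth(60_922, 26_974):.0f}%")  # ~126%
    print(f"FY EPS Y/Y:     {yoy_growth(12.96, 3.34):.0f}%")     # ~288%
    print(f"FY margin Y/Y:  {73.8 - 59.2:.1f} pts")              # ~14.6 pts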

The company’s outlook for Q1 of FY2025 is as follows:

  • Revenue is expected to be $24.0 billion, plus or minus 2%.
  • Gross margins are expected to be 76.3%.
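That plus-or-minus 2% band implies a fairly tight range. A quick check (simple arithmetic on the guided midpoint):

    # Implied revenue range for the Q1 FY2025 guidance: $24.0B +/- 2%.
    midpoint = 24.0  # $B
    low, high = midpoint * 0.98, midpoint * 1.02
    print(f"Guidance range: ${low:.2f}B to ${high:.2f}B")  # $23.52B to $24.48B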

In the 24 hours following its earnings announcement last week, the company saw its market capitalization grow by $255 billion, bringing it to roughly $1.97 trillion.

Other Highlights

While gaming contributed a healthy $2.9 billion to NVIDIA’s fourth quarter, the company’s datacenter business accounted for an incredible 83% of revenue, at $18.4 billion. This contribution represents a 27% increase over the previous quarter and a staggering 409% increase over the previous year.

Across the whole of FY2024, the datacenter business contributed $47.5 billion, representing a 217% YoY increase.
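Those growth rates also let us back out the implied prior-period figures. A minimal sketch (illustrative arithmetic using the reported $18.4 billion, 27% Q/Q, and 409% Y/Y numbers above):

    # Back out the implied prior-period datacenter revenue from the cited
    # growth rates. Illustrative arithmetic only, based on the figures above.
    dc_q4_fy24 = 18.4       # $B, Q4 FY24 datacenter revenue
    total_q4_fy24 = 22.103  # $B, Q4 FY24 total revenue

    print(f"Datacenter share of Q4 revenue: {dc_q4_fy24 / total_q4_fy24:.0%}")  # ~83%
    print(f"Implied Q3 FY24 datacenter revenue: ${dc_q4_fy24 / 1.27:.1f}B")     # ~$14.5B
    print(f"Implied Q4 FY23 datacenter revenue: ${dc_q4_fy24 / 5.09:.1f}B")     # ~$3.6B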

The company’s guidance for its upcoming fiscal quarter indicates that the AI wave is still very strong. And while competitive pressures continue to grow, the company is highly confident in its sales pipeline.

In addition to the solid financial performance, NVIDIA continued to drive its partnerships in the past quarter. In November 2023, the company partnered with AWS to host NVIDIA DGX Cloud, making AWS the first cloud provider to host a supercomputer powered by the NVIDIA Grace Hopper “superchip” and built on AWS UltraCluster technology.

Also, the company delivered several tools to simplify and accelerate the deployment of enterprise AI. Among these releases was NVIDIA NeMo Retriever, a microservice that enables organizations to enhance their generative AI services with retrieval-augmented generation (RAG).
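For readers less familiar with the pattern, RAG grounds a model's answers in retrieved documents before generation. Below is a minimal, generic sketch of that flow; the embed, search, and generate helpers are hypothetical placeholders, not NeMo Retriever's actual API:

    # Generic retrieval-augmented generation (RAG) loop. The embed, search,
    # and generate callables are hypothetical placeholders standing in for an
    # embedding model, a vector index, and an LLM; this is not NeMo
    # Retriever's API, just the general pattern it implements.

    def rag_answer(question, index, embed, search, generate, k=4):
        query_vec = embed(question)               # embed the user's question
        passages = search(index, query_vec, k=k)  # retrieve top-k relevant passages
        context = "\n\n".join(passages)           # assemble grounding context
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
        return generate(prompt)                   # LLM produces a grounded answer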

In addition to NeMo Retriever, NVIDIA introduced cloud APIs for MONAI, the medical imaging AI framework. Industry-specific tools and services like these are the kind of enablement that is helping establish NVIDIA as the full-service AI provider for organizations of all sizes.

Analyst Notes

It is hard not to be impressed—perhaps even amazed—by the success NVIDIA has had in AI (and don’t forget HPC). However, let’s not be fooled into thinking this is sheer good fortune and lucky timing. In the mid-2010s, NVIDIA focused heavily on building its capabilities in accelerated compute for the datacenter. At that time, the company’s gaming business contributed the overwhelming majority of its top-line revenue.

As the company focused more on its datacenter GPU business, supported by its CUDA framework (introduced in 2006), it attracted a set of HPC and data science professionals as customers. The success of NVIDIA—and its competitors’ frustration—can be traced directly to the broad adoption and near-standardization of CUDA as the GPU programming framework. As workloads in the datacenter became more data-centric and required acceleration for deep analytics, the company was able to leverage its technology and position in the market. In time, NVIDIA and its technology were perfectly positioned when the generative AI rush hit.

While NVIDIA has been a leading-edge silicon player for decades, its “overnight” success is the result of those decades of nurturing the ISV ecosystem and developer community. It’s also the result of executing on a belief that the workloads of the future would require more compute than CPU cores alone could efficiently and effectively deliver.

That said, NVIDIA faces challenges as investments in AI training give way to investments in AI inference. The first challenge is financial. While pricing for an H100 or H200 GPU has not been officially published, it is believed to be in the $30,000 range. Meanwhile, the cost to manufacture an H100 is believed to be around $3,300. That kind of profit margin is unheard of in the semiconductor market.
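To put that in perspective, a back-of-the-envelope calculation on those (unofficial) figures:

    # Implied per-unit gross margin on an H100, using the unofficial price
    # and cost estimates cited above, not published figures.
    price, cost = 30_000, 3_300  # USD, estimates
    print(f"Implied gross margin: {(price - cost) / price:.0%}")  # ~89%

For comparison, even strong semiconductor businesses typically run corporate gross margins in the 40% to 60% range, which is what makes a figure near 89% so striking.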

Inferencing is not training. The computational resources required to power inference are a fraction of those needed to power training. Further, as inference can be distributed across the edge, low power combined with high performance becomes a requirement. Organizations will not be looking to deploy H100 and H200 GPUs to meet their inferencing needs—they will be looking for far lower costs (and lower margins). As NVIDIA’s revenues become more heavily weighted toward AI inferencing, its top-line revenue and margins will be impacted.

The second challenge NVIDIA will face is competitive. While the company already faces competition from companies such as AMD and Intel on the training front, the inference market landscape is peppered with startups offering very competitive solutions. While NVIDIA has developed a GPU that can be programmed to perform many functions, its competitors in inference are designing fixed-function silicon with architectural enhancements. Further, the competition is developing ASICs at a fraction of the power envelope and a fraction of the cost. Perhaps this is why we have seen NVIDIA’s recent announcement regarding the formation of a semi-custom AI chip business unit.

Finally, while not related to inferencing, NVIDIA may see other challenges stemming from its own success. Consider the lead time for customers to receive GPUs for training; I’ve heard of customers having to wait up to 18 months for H100s. Meanwhile, the AMD MI300 has filled that supply gap nicely, and major cloud providers are adopting and deploying it at scale. The MI300 is programmed through AMD’s ROCm framework, a competitor to CUDA. While ROCm has been known to be less efficient than CUDA, the latest release from AMD is markedly improved. This dynamic should somewhat reduce the industry’s dependence on (and the dominance of) NVIDIA.

Even with all of these challenges, I don’t expect NVIDIA’s standing in the market to decline. The company has demonstrated an uncanny ability to innovate ahead of the market and a strong discipline in executing against that vision.

Matt Kimball

Matt Kimball is a Moor Insights & Strategy senior datacenter analyst covering servers and storage. Matt’s 25-plus years of real-world experience in high tech span hardware and software, with roles as a product manager, product marketer, engineer, and enterprise IT practitioner. This experience has led to a firm conviction that the success of an offering lies, of course, in a profitable, unique, and targeted product, but most importantly in the ability to position and communicate it effectively to the target audience.