As NVIDIA (NVDA) cruised into the stratosphere of $1 trillion valuations, one of the most common remarks heard across business and social media was “Oh my, look at that sky-high valuation!”

There was first talk of NVDA trading at 100 times earnings on a price-to-earnings (P/E) basis.

Then I saw posts about a 200X PE.

Where were investors and journalists — and armchair analyst-traders — getting these ridiculous numbers?

The Rearview Mirror

Mostly from looking backwards. Because when NVIDIA reported quarterly earnings on May 24, those looking in the rearview mirror ran off the road in bewilderment.

Maybe they got scared by the “smart trucks” (large quantitative investors) accelerating behind them.

But if they had only listened to me in my umpteenth explanation of why you buy NVDA at all valuations between 10 and 20 times sales, they would have been looking forward.

For years I have said two things over and over again…

1. Advances in Gaming GPU R&D were teaching NVIDIA engineers what they needed to know to build the foundations of AI.

2. Datacenter demands for AI “massively parallel architectures” would overtake Gaming revenues in the early 2020s.

On March 20, I published a special edition of Zacks Confidential (ZC) where I outlined why NVIDIA was still the reigning “King of AI” and how the acceleration of, and access to, tools from ChatGPT had just put the stock back on the launch pad — even though we already had 100%+ gains from buying the October lows.

Here are some excerpts from that report titled ChatGPT: Time to Become an AI Conversationalist…

I’ve pounded the table about the King of AI since 2016 for several reasons that all preceded the current gee-whiz fascination with ChatGPT — including the emerging ability for software to write itself.

What I did NOT predict is that ChatGPT would catch on like wildfire this year and be the instrument that directly and rapidly teaches everyone the power of AI.

Now, we’re all talking about the “super bot” in awed, hushed tones, and wondering what jobs it can replace.

In this report, we’ll examine the capabilities, and the fears, of the chatterbox that ate the internet.

What Was So Super About NVIDIA AI Before ChatGPT?

First, we recap a basic 3-prong thesis for being awed by the power of AI, especially in the hands of NVIDIA CEO Jensen Huang and his teams of engineering wizards…

1) NVIDIA does deep research in nearly all applications of HPC (high-performance computing) and hyper-scale data mining and modeling that create the foundations for autonomous driving, protein and molecule discovery, factory design and automation, and the future of scientific discoveries in cancer, energy, agriculture, longevity, and climate.

And they are able to do this because they don’t just build the hardware GPU (graphics processing unit) semiconductors that make hyper-scale possible. They also build the software engines that integrate with them in a full stack of developer tools called CUDA, for Compute Unified Device Architecture. Along with work at Alphabet (GOOGL) and Microsoft, these are the most powerful platforms in the world that can use neural networks to create machine learning (ML) reinforcement and deep learning (DL) inference.

For these reasons, I continuously encourage investors — whether they own NVDA shares or not — to frequent the company’s newsroom page to see the constant stream of innovations, discoveries, and partnerships that make the industry of AI as powerful as any for business, science, and society.

2) Jensen Huang’s superior leadership strategy is focused not only on hiring the best engineering talent, but also on collaborating with major scientific institutions and technology companies to put the best tools in the smartest, most(ly) ethical hands. They build the most powerful supercomputers in the world for these organizations by stacking and connecting hundreds of their GPU-based DGX machines, as one example, to create “massively parallel architectures” vs. the slower serial processing of CPUs from Intel.

Plus, they know how to evangelize the platform technologies of AI by supporting developer communities where new ideas and solutions will be created at ground level that we haven’t even imagined.

3) The day that software can meaningfully write itself is nearly upon us. I’ve written often about Ray Kurzweil’s “singularity” prediction — when AGI (artificial general intelligence) becomes equivalent (or superior) to human intelligence — and he may be getting ready to pull that forecast forward again, from his original target for the year 2045.

(end of excerpt from my March 20th Zacks Confidential)

In that report, I said that investors should still buy the stock at $250, looking for at least $300 this year, even as semiconductor analyst Joe Moore at Morgan Stanley was only reluctantly raising his price target to $300.

Little did he or I know that Jensen & Co. would blow the roof off with a big raise in sales guidance on May 24. But I want to share what he said because, at the time, he was very prescient about the NVDA + GPT growth potential. He was a reluctant diner, while I was devouring the feast of possibilities: years of high double-digit compound growth ahead as AI surpasses the mobile industry revolution, with a potential TAM (total addressable market) of $15 trillion…

Network Effects and Exponential Compounding

Last Friday (3/17/23), a Morgan Stanley analyst threw in the towel on his Neutral rating for NVDA and wrote “The stock will continue to be hard to ignore in an otherwise challenging semiconductor environment.”

Analyst Joseph Moore raised his rating on NVDA shares to Overweight from Equal-Weight and bumped his price target from $255 to $304 as he conceded that he underestimated the growth resilience of the innovator’s products and admitted that the “megatrend” of generative AI will overshadow all doubts.

“The high capital intensity of these workloads, particularly on the training side, is now a major part of the calculus for the largest companies in technology, with NVIDIA having dominance in the training market that is likely to persist for several years even with a relatively fixed number of model developers, model complexity plus multiple languages should still drive 3-5x growth in training over 5 years.”

Moore also observed that the enthusiasm the firm is seeing for Large Language Models is boosting spending both in the near and long-term. In fact, NVDA just filed a securities shelf offering for $10 billion two weeks ago and, since the stock barely flinched, maybe that’s what moved the analyst to admit how dominant their position will remain with new investment. Large enterprise customers who depend on data “mining and modeling” will want what Jensen is building for many years to come.

As his final mea culpa, Moore noted that “the power of transformers has become clear as products such as ChatGPT and other AI workloads have moved up priority lists and are now seen as one of the most significant developments in technology since the development of mobile internet.”

How Forward Estimate Revisions Leave Behind the Backward Bears

In my full ZC on ChatGPT, which you can get a copy of by emailing Ultimate@Zacks.com — and tell ‘em Cooker sent you — I explain the dynamics of “GPT” (Generative Pre-Training Transformers) and I give you 4 other stock picks to capitalize on the revolution that is being televised in real-time.

But now let’s get to the goodness of what analysts have done with sales and earnings estimates since Jensen & Co. dropped their “demand-driven” bomb.

Here’s how I described the situation in TAZR Trader on May 24…

NVIDIA: It’s the TAM, Stupid!
Posted on 5/24/23

TAZR Traders

“How extremely stupid not to have thought of that.”

Thomas Henry Huxley, upon hearing of Darwin’s theory of evolution by natural selection

NVIDIA (NVDA) blew some doors off tonight — and not just for the bears.

Yes, they raised guidance significantly. But so many investors and amateur analysts are still grappling to understand the significance of this juggernaut that I have called the King of AI for 6 years…

Get Your “MPA” in Deep Learning
March, 2017

NVIDIA Gaming Drives the Deep Learning-AI Revolution
August 21, 2018

Who Cares NVIDIA Makes Great Gaming Graphics?
August 23, 2018

And there are still so many amateurs who never understood those drivers then (or even yesterday) and still try to sound like experts now — heck, they didn’t even know what “CUDA” stood for last year!

ChatGPT and NVDA shares’ meteoric rise have made them all very interested “experts” now.

And I bet they still didn’t know tonight what Ampere, or Hopper, or Lovelace are. We know because we talk about them and I’m writing a book about “13 Great Women of Science” which includes Madames Hopper and Lovelace!

Nor do they understand a DGX, or a “petaflop,” much less a “massively parallel architecture” used for deep learning training and inference.

It would be almost comical if it weren’t so sad. Next, when CRISPR stocks go meteoric, they’ll be all about them too.

Anyway, my point is to be careful about the analysis you hear over the next few days and weeks as the armchair AI experts tell you why NVDA shares should, or should not, trade at any given valuation of 100X or 200X EPS.

As you know, I’ve traded NVDA from extreme valuations of 5X sales (realllly cheap) up to 30X sales (pretttty expensive).

And as I told our internal strategy group tonight in a chat…

“P/E for NVDA doesn’t matter as they dominate a TAM (total addressable market) that is hard to calculate right now. Is it $300 billion? $500B? (Some projections are for $2 trillion by 2030 and I think that is conservative.) So it can grow into a 20X sales valuation for a while. Imagine the transition from datacenters that were CPU-centric (serial processing) and devoid of software for training & inference. GPU A100 and H100 cards set the standard now as massively parallel architectures in DGX machines (8 cards with 50+ billion transistors). The one they built for Tesla that does 1.8 exaflops stacks hundreds of DGX boxes (basically a 2-foot cube). Every corp will want one now. That’s tens of thousands of $68,000 machines. CUDA has been the development platform they didn’t know they needed for design, engineering, data crunching and simulation.”

What do we mean when we talk about doing a billion or a trillion calculations per second?

Just like a million million equals a trillion, so too a billion billion equals a quintillion.

Coming to terms: FLOPS stands for “floating point operations per second.” An exaflop is a measure of performance for a supercomputer that can perform at least 10 to the 18th power, or one quintillion, floating point operations per second.
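These prefixes are easy to sanity-check with a few lines of Python. This is a rough sketch: the 5-petaFLOPS-per-box figure and the 1.8-exaFLOPS Tesla system come from the text above, and a real cluster loses some throughput to interconnect overhead.

```python
# SI-style prefixes for FLOPS (floating point operations per second).
TERA = 10**12   # a trillion (a million million)
PETA = 10**15   # a quadrillion
EXA  = 10**18   # a quintillion

# A billion billion really is a quintillion:
assert EXA == 10**9 * 10**9

# Roughly how many 5-petaFLOPS DGX boxes would a 1.8-exaFLOPS
# system like Tesla's need, ignoring interconnect overhead?
boxes = 1.8 * EXA / (5 * PETA)
print(int(boxes))  # 360 -- i.e., "hundreds of DGX boxes"
```

At 1 petaFLOPS per box instead of 5, the same math gives 1,800 boxes, so “hundreds” is the right order of magnitude either way.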

In mid 2022, CEO Jensen Huang guesstimated a TAM for NVIDIA of $300 billion in hardware and $300 billion in software. This is the defining projection for the CUDA hard+soft stack I’ve been preaching about for 6 years.

I think Jensen has been so conservative because it would blow the minds of even professional analysts to think about what is possible.

Imagine NVDA growing to just 3X a $600 billion TAM where it would still be the dominant player…

That would be a $1.5 trillion valuation minimum. See you soon above $1T.

(end of May 24 TAZR commentary excerpt)

Architect of the AI Supercomputer

I used to talk 5 years ago about how much NVIDIA’s revenue could grow as they sold DGX boxes for $68,000. That’s the “mini” supercomputer in a 2-cubic-foot space with 400 to 500 billion transistors for machine learning (ML) applications, depending on which generation of GPU card is used, from Volta to Ampere to Hopper. A100 system-on-a-chip GPUs cost about $10,000 each.

A single DGX box is capable of between 1 and 5 petaFLOPS of computing power. A petaFLOP is one quadrillion (a billion million) floating point operations per second, so you can imagine how many boxes you need to build a supercomputer like the one at Tesla.

Here’s a good description from software consulting firm Run:ai of what a user gets for their investment in a single DGX box…

“Beyond the powerful hardware they provide, DGX systems come out of the box with an optimized operating system and a complete pre-integrated environment for running deep learning projects. They provide a containerized software architecture that lets data scientists easily deploy the deep learning frameworks and management tools they need with minimal setup or configuration.”

To build a corporate, university, or scientific R&D lab supercomputer, you need between 100 and 1,000 of these dynamic cubes, “stacked” together to form “massively parallel architectures.”

Now a DGX box costs anywhere from $100,000 to $150,000. Not to mention the value-enriching capabilities of the full CUDA hardware+software stack.

And every major enterprise wants 100 to 1,000 of them now that ChatGPT took center stage. They either need to capitalize on the “chatterbox that ate the internet,” or exceed it.

THIS is the math you need to understand to grasp how and why Wall Street analysts underestimated NVIDIA sales potential, even after ChatGPT.
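That math can be sketched in a few lines of Python. The per-box prices and cluster sizes are the ranges quoted above; the 1,000-enterprise count at the end is purely my illustrative assumption, not a forecast.

```python
# Back-of-envelope revenue math using the ranges quoted in the text.
box_price_low, box_price_high = 100_000, 150_000   # dollars per DGX box
boxes_low, boxes_high = 100, 1_000                 # boxes per buildout

per_customer_low = boxes_low * box_price_low       # smallest lab, cheapest box
per_customer_high = boxes_high * box_price_high    # biggest buildout, top price
print(per_customer_low, per_customer_high)         # 10000000 150000000

# Hypothetically, if 1,000 enterprises built even the smallest cluster:
print(f"${1_000 * per_customer_low / 1e9:.0f}B")   # $10B
```

Even the low end of that range, multiplied across enterprises, universities, and government labs, dwarfs where consensus revenue estimates sat in early 2023.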

And they are just catching up with these potential topline moves…

Fiscal year 2024 (ends January ‘24): $42.64 billion = 58% growth

Fiscal year 2025 (ends January ‘25): $54.70 billion = 28% growth

These are dramatic turnarounds from where revenue projections were only a few months ago when I wrote my ChatGPT report.

Back in mid-March, FY24 sales were cast at around $30 billion and FY25 was around $37 billion. But you were already starting to see the optimism being expressed in those 12-24 months forward estimates as ChatGPT caught like wildfire and NVIDIA was inking deals left and right with partners like Microsoft.
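Those growth rates are easy to verify from the estimates themselves. Note the FY23 base of roughly $27 billion below is backed out of the FY24 estimate and its 58% growth rate; it is not quoted in the text.

```python
# Quick check of the consensus growth math quoted above.
fy24, fy25 = 42.64, 54.70          # revenue estimates, $ billions
fy23 = fy24 / 1.58                 # implied FY23 base from 58% growth
print(round(fy23, 1))              # 27.0
print(round(fy25 / fy24 - 1, 2))   # 0.28 -> the 28% FY25 growth quoted above
```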

In fact, the day I published that ChatGPT report on March 20 was the start of the greatest computing event of the year, the NVIDIA GPU Tech Conference, or GTC. Jensen & Co. are always full of surprises that wow the tech crowd and this was no disappointment.

NVIDIA launched DGX Cloud, where users can rent access for $37,000 per month. And they unveiled DGX GH200, a large-memory AI supercomputer that uses NVIDIA NVLink to combine up to 256 NVIDIA GH200 Grace Hopper Superchips into a single data-center sized GPU. Google Cloud, Meta (META) and Microsoft (MSFT) are already the first customers.

This was one of the primary drivers in the dramatic leap in sales estimates for this year and next.

What Price-to-Sales Valuation Do We Have Now?

On current-year estimates we have $1 trillion/$42 billion = 23.8 times.

On next-year estimates of $55 billion, we have 18X.

To me, that’s a BUY based on everything I’ve presented.

Regarding misplaced P/E ratios, when NVDA EPS FY25 estimates were at only $6 a month ago (before their quarterly report), the stock was trading at 50X EPS.

Then when shares launched above $400 after earnings on May 24, all the valuation hand-wringers started screaming about 100X and even 200X earnings. But, again, they were looking backwards at trailing EPS. In the worst possible view of earnings, some are looking at fully-diluted TTM (trailing 12 months) EPS of $2.19. In the best possible TTM view, they are using around $3.35 non-GAAP.

Right now, the FY24 EPS consensus is for $7.64 and FY25 is over $10, putting the forward PE somewhere around 50 times.
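The multiples above reduce to three divisions. Here they are in Python, using the rounded figures from the text (market cap of roughly $1 trillion, shares around $400 post-earnings):

```python
# Valuation multiples from the figures quoted in the text.
market_cap = 1_000                # $ billions, roughly $1 trillion
sales_fy24, sales_fy25 = 42, 55   # $ billions, rounded as in the text
price, eps_fy24 = 400, 7.64       # per share, post-earnings

print(round(market_cap / sales_fy24, 1))  # 23.8  (times current-year sales)
print(round(market_cap / sales_fy25, 1))  # 18.2  (times next-year sales)
print(round(price / eps_fy24, 1))         # 52.4  (forward P/E, "around 50")
```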

For a conservative view of these numbers, let’s hear from analysts at Stifel Nicolaus where they have a $370 price target and Hold rating on shares…

“We continue to view NVDA as amongst the best-positioned companies to benefit from accelerated AI-focused spending. With shares trading well above the company’s 5-year average multiple on a forward earnings basis at 54.2x our F2024 EPS estimate and 50.4x our F2025 estimate, versus average multiples of 42x and 35x, respectively, we believe strong near-term momentum is well understood.”

I’m not saying NVDA is cheap. But the world has finally woken up to its AI potential and you may never get the chance to buy it at 10X sales again. Maybe 15X sales during a correction, or after an earnings miss. So don’t blink if you see a $750 billion market cap. Just buy.

— Kevin Cook


Source: Zacks