More Inferencing
Wall Street’s sell-off resumed on Tuesday after two days of gains, dragging major indices lower.
The S&P 500 slid 1.1%, edging closer to correction territory, while the Dow dropped 260 points and the Nasdaq 100 tumbled 1.6% under pressure from tech weakness.
Alphabet shares fell 2.3% following news that Google will acquire cloud security firm Wiz for $32 billion.
Other tech giants, including Nvidia and Palantir, also posted losses of 3.4% and 4%, respectively.
Investors are on edge ahead of the Federal Reserve’s policy decision on Wednesday, with markets broadly anticipating rates to remain unchanged.
Jensen Huang delivered his keynote at Nvidia's annual GTC developers conference.
Before we get into the key takeaways, let's revisit what’s changed since the launch of the Chinese DeepSeek model earlier this year.
The "breakthrough" of the DeepSeek model was its ability to enhance existing foundation (large language) models, and to do so at low cost.
This innovation slashed the barriers to AI model development. That means broader AI adoption and more AI consumption. With that, more AI models mean more inferencing. More inferencing means more data creation (by the models), which leads to ... more inferencing.
This is all a formula for more and more demand for computing capacity – and that further cements Nvidia's leadership role in the technology revolution.
Add to that, Jensen showed how the entire AI ecosystem is coming together, and Nvidia is involved in every part of it – from the hardware that powers AI, to the software that makes it smarter and smarter, and the systems that will bring it to life in products like autonomous robots.
That said, at this stage, AI advancement is being limited by computing capacity. We've seen it in Nvidia's quarterly earnings.
Remember, the growth (in new revenue dollars) in Nvidia's data centre revenue has been on a rhythm of about $4 billion a quarter since the second half of 2023. Taiwan Semiconductor, Nvidia's manufacturing partner, is clearly maxed out.
And yet, Jensen said he expects the global data centre buildout to reach a trillion dollars by 2030. We can only assume new advanced chip-making capacity would have to come online for that to be fulfilled.
The next "limitation" on the speed of AI advancement, as Jensen noted: energy.
The future of AI isn't just about making more chips. Eventually, the ultimate limit will be how much power you can deliver to these systems. Faster inferencing means higher performance, which translates into more revenue.
And revenue will be determined by your access to energy.