On Markets & Investing

Still waiting for the AI bubble to pop?

21 July 2024
"You don’t have to be an expert on every company, or even many. You only have to be able to evaluate companies within your circle of competence. The size of that circle is not very important; knowing its boundaries, however, is vital."
- Warren Buffett
This comment by Warren Buffett is one of the guiding principles of the Hidden Value Gems newsletter.

It sounds simple but it took me many years and more than a single loss to fully appreciate it.

There are at least three reasons to stay cautious about today’s AI boom.

Firstly, the history of technology shows that it is very hard to spot the ultimate winners as pioneers are often replaced by later rivals.
Secondly, focusing on supply is usually more important than focusing on demand. Rising demand (e.g. for chips, compute power, data storage and transfer) is obvious to many market participants. Eventually, it attracts more capital, which usually leads to falling profits. Consumers are often the biggest winners, not the companies. This was the case with previous tech revolutions in airlines, automobiles, PCs, telecoms, etc.

Finally, attention-grabbing headlines about AI, coupled with high growth expectations, usually generate enthusiasm that eventually turns into euphoria as the current leaders report ever stronger results, which investors interpret as further proof of their thesis. Valuation multiples continue to rise, making it ever harder for investors to stay on the sidelines and keep underperforming. A combination of high expectations and high valuation multiples is the surest recipe for a bubble to pop.

So, why then talk about AI, an area that attracts some of the smartest scientists, engineers, venture capitalists and the leading global tech platforms?

Reason 1. Despite the challenges in identifying the ultimate winners, I believe AI is a long-term trend with more changes and breakthroughs coming. If anything, it is never too late to learn more and try to be prepared for future changes.

Reason 2. The previous Internet revolution gave birth to new business models based on network effects. This in turn defied traditional laws of economics, with returns on capital rising as the leading platform companies gained scale. Today, these tech leaders have exceptionally strong competitive advantages despite operating in a highly competitive space.

Reason 3. There have been many opportunities to buy today’s tech leaders at a fraction of their current prices, many years after they first entered the stage. Amazon and Netflix saw their share prices collapse by 90% and 80%, respectively, at least once. Ten years after reaching its 2000 high, Microsoft stock was still over 70% lower, valued below 10x P/E. You could still buy it at 8-9x P/E in 2011-12, when the market was concerned about the rise of smartphones and open-source substitutes for MS Office.

With this in mind, we have decided to host a member webinar on AI, focusing on its parallels with past innovations, key players, bottlenecks, and potential investment opportunities.

We had the pleasure of having two panelists who are both deeply involved in AI and are also private investors.
Grigory Sapunov: posts on Medium, LinkedIn profile

Yunjian Jiang: posts on Medium, LinkedIn profile

Below is an edited summary of the webinar. A link to the full replay is available in the members’ area.

What should we make of the current AI boom?

Question 1: Can you relate the current AI breakthrough to past tech revolutions? Are today’s changes a real revolution, or is AI’s future impact overhyped?

Grigory Sapunov:

This new wave in AI is likely to be much more significant than other AI waves. Its impact could even eclipse the previous Internet revolution.

We are still in Day 1 and have not really tapped even a fraction of AI’s potential. The impact will be far more dramatic in the years to come.

One critical difference with the previous technological revolutions is that AI can improve itself and get better over time; this sets it apart from the Internet, electricity, etc.


Yunjian Jiang:

The Internet gave birth to business models based on network effects; it was about connecting more computers and mobile devices. Today’s AI revolution is more similar to what happened in the 1970s, which was about pure computation.

There are three ingredients to the current AI revolution:

1. Compute
2. Data
3. A new network architecture.

The PC revolution of the 1970s enabled the automation of many manual processes, such as spreadsheets and word processing.

In the next few years, probably within two to five, a lot of new business models will emerge, mostly in automation, especially related to text: for example, technical support (e.g. call centres), paralegal work (processing huge volumes of documents), medicine, and personal assistants.



Question 2: How critical is Nvidia’s role in this revolution? Are its chips really so much better, and can anyone ever compete with them?

Yunjian Jiang:

There are two stages in developing AI: the first is training, and the second is inference. Training requires billions of data points; it can take a month or so, and it can cost billions of dollars to train a model these days. The more data you feed and the longer you train the model, the better the results you get. So far, we have not reached a saturation point. That means people are going to build larger and larger cloud data centres and then train with larger and larger amounts of data. But at some point, this is going to saturate, right?

So, NVIDIA is clearly a monopoly today. No other CPU or GPU can compete with NVIDIA in training. In the second area, inference, the likes of Apple, Amazon and Google are trying to build it into their products. Inference does not require as many GPUs. I don't think there's going to be one simple model that dominates the world. There are going to be millions of different types of models, each trained for different applications, and these models will be embedded into different types of computing environments serving different use cases. So, from this point of view, I see many different types of architectures and computing emerging for inference applications.


Grigory Sapunov:

NVIDIA is definitely a great company. It's number one right now in terms of what hardware to use for training and, in many cases, for running inference on your networks in the cloud or on servers.

But the point is that there are different technologies on the table. The first is the good old CPU, which everyone has in their laptops and even in smartphones. These are just ordinary processors that can run almost anything, any logic. They are very universal.

And that's the problem: they cannot efficiently run the specialised compute that machine learning requires. GPUs were originally designed for graphics in video games, but at some point people realised that almost the same GPUs can be used efficiently for the mathematics behind machine learning, mostly matrix multiplications.

A GPU is something like a big processor comprised of many small processors that can run a lot of small tasks, like matrix multiplications, in parallel.
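To illustrate why matrix multiplication is the workload in question, here is a minimal sketch (in NumPy, with made-up layer sizes) of the arithmetic a neural network layer performs; it is not tied to any specific chip:

```python
import numpy as np

# A single dense neural-network layer is essentially one matrix multiplication:
# each of the 512 output values is a weighted sum over 1,024 inputs.
batch = np.random.rand(32, 1024)      # 32 input examples, 1,024 features each
weights = np.random.rand(1024, 512)   # layer weights: 1,024 inputs -> 512 outputs

activations = batch @ weights         # result: 32 x 512

# Each of the 32 * 512 = 16,384 outputs is an independent dot product,
# which is exactly the kind of work that many small parallel cores
# (a GPU) can divide among themselves.
print(activations.shape)              # (32, 512)
```

Training repeats this operation billions of times over much larger matrices, which is why hardware built around parallel matrix arithmetic dominates the field.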

NVIDIA’s chips are good, but they are not specifically designed for machine learning. And, of course, you can design a special chip that is a perfect fit for machine learning and nothing else. There are many chips like that.

Google has its own processor, called a TPU (Tensor Processing Unit). It’s available only in Google Cloud and is already, I think, in its fifth generation. Google trains its models, like Gemini, on its own TPUs. Other players, like Graphcore, Cerebras, Groq and many other companies, produce highly specialised chips for deep learning and LLMs. They are typically more energy efficient than GPUs.

AMD is a very interesting company. They have been expanding their processor line-up. They bought another GPU company, and right now AMD has both CPUs and GPUs, and their GPUs are technically very good. What has kept them behind NVIDIA historically was the lack of their own software infrastructure, but they are catching up now.

Another important sign is that several of the world’s large supercomputers were built with AMD rather than NVIDIA chips, and the number one supercomputer in the TOP500 ranking, Frontier, runs on AMD chips, not NVIDIA.

There are, I think, two potential competitors to NVIDIA: one is AMD, and the other is a very diverse group of specialised chips, typically called ASICs or TPUs. In this category you have Google and Amazon, plus many Chinese companies like Tencent, Alibaba and Baidu. You also have Graphcore, Cerebras and many others already in this group.

It is not easy to beat NVIDIA because it is not just a chip company. They are developing a whole ecosystem: they build data centres, develop a lot of software and many other solutions at different levels. It's hard to compete with such a giant. But in small niche areas, there is definitely competition already. And right now, if you train a model in the cloud, you can already choose between Google TPUs and NVIDIA GPUs; Google Cloud may be a cheaper option for some.



Question 3: If we look at the whole supply chain, where do you see the bottlenecks, what companies are the critical elements in this chain?

Grigory Sapunov:

I can definitely see a few bottlenecks; let me highlight three major ones. NVIDIA does not produce its chips on its own. They use companies like TSMC. TSMC is the largest player; there are others, but no one anywhere close to them. And if there are any problems with Taiwan, we will see a shortage of chips like the one we saw a few years ago during Covid.

If NVIDIA needs, say, 100 times more chips right now, no one has the capacity to produce that. This is definitely a bottleneck.

ASML is also a very interesting company and is also a bottleneck, because it is the only company, based in the Netherlands, that produces the lithography machines used to make chips. They build these very expensive machines for TSMC and other companies that manufacture the chips themselves. And I do not see any large competitor to ASML right now.

Then we have energy companies. Right now, there is an arms race between countries like America and China and between different companies. Everyone wants to build a larger cluster, train a larger model, and so on. And while energy is not a limiting factor right now, there are many studies showing that in the fairly near future, maybe 10 years, we will see a shortage of energy, because compute could require, say, half of the energy the US consumes today. That's definitely a limit.

If we extrapolate the current trends, energy becomes a bottleneck soon. If anyone builds a huge data centre that requires maybe a gigawatt of energy, a power plant has to be built somewhere nearby; otherwise, you just cannot use it. And as I see it, energy companies like Vistra, Constellation and others saw their shares rise sharply this year because some major players have arranged contracts with them.

Things may change in the future as the energy efficiency can improve, for example, through the use of superconductors, but for now energy is the third bottleneck.


Yunjian Jiang:

So, regarding the supply chain: there is surely going to be a boom, or hyper-growth, among all the companies involved in supplying this AI technology stack, including manufacturing and the software vendors providing services to chip-design companies.

Synopsys and Cadence are two companies that dominate the semiconductor design software space.

But again, I would be very cautious in this type of short-term environment. I will give a couple of analogies to the Internet boom of the late 1990s. There was a lot of investment in building out the Internet infrastructure, and at that time there were two fibre-optics companies whose stock prices went up tenfold or so: one was Ciena, the other JDS Uniphase. Even after more than 20 years, their stocks have not recovered to their peak levels.

Another analogy is from the PC sector. In the late 1970s and early 1980s, a lot of PC companies adopted the IBM standard, and they flourished for a short period of time. None of them exists anymore. In the end, it was the Windows operating system and the Intel CPU architecture that became the only two players to capture the majority of the profit of the entire PC industry. All the other companies were basically commoditised.

So, I think something similar is going to happen in AI. A few companies are going to become monopolies, earning most of the profit, and the rest of the supply chain is going to be very much commoditised: after a short-term boom, they will go back to normal growth.

So, I think over the long term you still want to look for a sustainable competitive advantage. That advantage lies in the data, because data is the fuel for AI training, right? The quality of the model can keep improving, but it will only ever be as good as the data you have. So, to the degree that you have access to, or ownership of, data that no one else has, that's a significant advantage in AI.

For example, medical data: right now it is pretty much unexplored, but in the future it is going to be very, very valuable. Or personal data: Google holds vast amounts of personal search data, and Apple, to a similar degree, holds vast amounts of your personal data. This is going to be hugely valuable.

I imagine that in the future AI is going to know more about you than you know yourself. But all of this comes from the data. And if you look at other data platforms, proprietary data is owned by individual companies, and I think these are the companies that will generate a lot of value in the future. For example, financial data: a lot of it is public, but to the degree that you have private transaction data, that will be very useful. Health data I have just mentioned. There is also transportation data, like Uber- or Tesla-type data for autonomous vehicles, and satellite imagery, like Google Earth-type data that monitors activities across the globe. These types of data are going to be very useful in the future.

So my key point is that the supply chain is going to experience a short-term boom but eventually stabilise, and large language models are likely to be commoditised.

And there are going to be a lot of different types of models emerging for different use cases, while the real advantage, I think, lies in data.



Question 4: What are some examples of stocks that you own that are potential beneficiaries from AI trends?

Grigory Sapunov:

I bought ASML; it is in a unique position. I believe AMD could catch up with NVIDIA, so I bought it too. I am cautious about NVIDIA; I sold it, probably too early, but I am still following it, and it may become attractive again. I bought Vistra. I also think both Google and Microsoft are well placed and should benefit from AI developments.


Yunjian Jiang:

I don’t own AI stocks directly. I hold Google and Amazon, two major players with some exposure to AI.