The more things change the more they stay the same (Anon)
Sometimes the whole world can change in a weekend. It certainly seems that way today as the most recent DeepSeek models have apparently upended the whole ‘AI industrial complex’. Here is the front page of the Wall Street Journal as I write this:
My inbox has been full of DeepSeek ‘takes’, ranging from the thoroughly considered to the dangerously hot. I really need DeepSeek to make sense of them all.
Irrational Analysis has the most fun (if perhaps not for the author) post:
I’m going to avoid commenting on DeepSeek in detail except to note (via Ben Thompson) that part of the reason for DeepSeek’s success is that:
DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language.
As we saw in Demystifying GPU Compute Architectures, PTX is an intermediate language that is translated into the assembly language of Nvidia GPUs. This is more than enough to warm the heart of a low-level programming enthusiast.
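For a flavour of what 'dropping down to PTX' looks like in practice, here is a minimal, purely illustrative sketch of my own (it is not DeepSeek's code; the kernel name, values and launch configuration are invented): a CUDA kernel that embeds a single PTX add.s32 instruction via inline assembly instead of leaving the addition to the compiler. Real uses go much further, hand-managing registers, memory traffic and warp-level instructions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: the asm statement emits one PTX instruction (add.s32)
// directly, bypassing whatever the CUDA C++ compiler would have generated.
__global__ void add_one(int *data) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int v = data[idx];
    int one = 1;
    int result;
    // "=r" binds a 32-bit register output; "r" binds 32-bit register inputs.
    asm("add.s32 %0, %1, %2;" : "=r"(result) : "r"(v), "r"(one));
    data[idx] = result;
}

int main() {
    int h[4] = {10, 20, 30, 40};
    int *d;
    cudaMalloc((void **)&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    add_one<<<1, 4>>>(d);                              // one block of four threads
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("%d %d %d %d\n", h[0], h[1], h[2], h[3]);   // expected: 11 21 31 41
    cudaFree(d);
    return 0;
}
```

Compiling this with nvcc and asking it to emit the intermediate representation (nvcc -ptx) shows the instruction passing straight through to the PTX output, which is the level DeepSeek's engineers were reportedly working at.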
Back to this week’s events. In times like this I try to turn to wise counsel, and who better than the late Gordon Moore?
When Gordon Moore was invited to give a talk on VLSI Industry Trends at Stanford in 1990 it was more than twenty years since he and Robert Noyce had founded Intel. It’s a terrific and engaging presentation full of Moore’s insights and humor. Despite the poor video quality it’s well worth watching in full.
Another three decades have now passed since that talk, so what has changed? A lot, but at the same time surprisingly little. We’ll come to that in a moment but first some context.
Gordon Moore needs little introduction, either as the author of Moore’s Law or as the co-founder and then CEO and Chairman of Intel. For more than four decades he was at the center of the development of semiconductor technology at Shockley, Fairchild and Intel.
When he gave this talk, ‘VLSI Industry Trends’, he was Chairman of Intel, and in the recent past:
Intel had left the DRAM market (1986) and had just launched the 80486 CPU (1989) - with chief architect Pat Gelsinger!
Morris Chang had just founded TSMC (1987)
Arm would be founded later that year (Nov 1990)
Ronald Reagan had imposed 100% tariffs on some electronics imports from Japan in order to protect the US semiconductor industry (1987)
The talk was followed by a Q&A. So what themes were preoccupying Moore and the audience?
Competition from Asia (particularly from Japan and Korea, though Taiwan also gets a mention; the People’s Republic of China isn’t mentioned at all).
The rising costs of semiconductor manufacturing equipment.
The role of the US government in supporting the semiconductor industry.
Would the US re-enter the DRAM market?
Finding uses for all the computing power that Intel wanted to deliver.
Will computers make us stupid? (!!!)
Neural network chips.
Do most of these themes seem familiar?
The full talk can be watched by clicking on the screen capture below (YouTube embedding has been disabled for this video, so the link will take you to YouTube).
Premium subscribers can also download a transcript of this talk at the end of this post.
So what happened next?
Moore need not have worried. Intel was about to enter a period of dominance. Jon Y at the Asianometry YouTube channel captures this so well in his recent video ‘Intel At The Peak’ (and thank you for the kind shout-out, Jon!)
Quoting Jon:
By 1998, the company did $28 billion in revenue and $6.95 billion in net income.
That year, Intel was the third most profitable company on the Fortune 500. Only Exxon Mobil and General Electric did better.
The 1990s made Intel the world's richest and most powerful semiconductor company.
These two pages from Intel’s 1997 Report and Accounts really capture the mood at Intel in the late 1990s.
Why did this happen? Let’s go back to the first of Moore’s worries in 1990, which he calls ‘Driving the Machine’:
One of them I'll call driving the machine. This business and its rapid growth continue to depend on the market expansion in order to be able to afford the investment to keep the technology moving. You really have to look. Have you used your million transistors yet this year?
This is the question it comes down to. Where do we find applications for 10 to the 16th transistors a year or 10 to the 17th transistors a year? How can we use this much electronic function?
Moore worried about this as he saw Moore’s Law as, in large part, an economic law. We discussed this in more detail in our post ‘Moore on Moore’.
His best answer in the Stanford talk was speech recognition, but the most important line is at the end of this section (my emphasis).
I think we can wave our hands and see some things that could produce the kind of requirements we're after there. The real-time speech recognition is one of my favorites so I can talk to my PC instead of having to fumble over the keyboard. That can use more bits and mips than about anything I can think of. But those are the kinds of things you have to have. In order to use these huge numbers of electronic functions, you have to come up with an application that you can multiply by the population of the world, or by the number of households in the world, or something like that. It can't be something that sits one per major university and have any real impact on keeping the industry driving through these very high-volume applications. And I will admit I am having trouble seeing the applications that are going to consume the vast amount of electronics we have to make. But that shouldn't bother you.
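To get a feel for the scale Moore is describing, here is a rough back-of-the-envelope of my own (the assumptions, a circa-1990 leading-edge chip of around a million transistors and a world population of roughly five billion, are mine, not Moore’s):

$$\frac{10^{16}\ \text{transistors per year}}{10^{6}\ \text{transistors per chip}} \approx 10^{10}\ \text{chips per year} \approx 2\ \text{chips per person per year}$$

Hence the insistence on applications you can ‘multiply by the population of the world’.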
What happened next? In part, the Internet. From Intel’s 1998 Report:
In January 2025, almost thirty-five years later, we are suddenly worrying once again about how we are going to use all the compute that Moore’s successors at Nvidia, TSMC and many others (though, to a lesser extent, at Intel) are going to deliver.
Two more things from the talk before we go. One of the audience’s worries was about computers making us stupid.
[Audience] Modern microelectronics has made life a whole lot easier in a lot of ways. Calculators do our arithmetic for us, word processors double check our spelling, cash registers compute change, and so forth. Is there a downside to that?
Are we all turning stupid as a result? [LAUGHTER] I've forgotten how to multiply.
A lot of clerks I've met have forgotten how to compute change. Spelling checkers are making people forget how to spell.
[Moore] Is that something you worry about? I don't worry about it at all. I have the other problem. It takes me four times as long to balance my checkbook with the computer as it used to take me by hand. [LAUGHTER] But I think learning how to do long division was an important skill when I was in school, but I don't see why everybody ought to have to learn how to do that now.
If you can get it out of a computer, that's fine as long as the basic principles are understood by somebody.
Computers have done many things for us and to us, but I think we can probably agree that they haven’t made us (completely) stupid.
Finally, one of the audience asked about neural network chips. Moore’s response (my emphasis) was:
It's a weighting function for neural networks. We think neural networks are a very interesting alternative way to do computing, particularly things like pattern recognition where regular computers don't do well.
And we want to at least keep our finger on that and participate. We have the right technology to make a particular product there.
We got a small research group that's following up where that whole field is going. We hope someday we can really pick some products out of it.
Right now, it's not at all clear how big it's going to be or exactly how to get a hold of it.
How Intel missed neural networks and AI is a fascinating topic, but perhaps one for another day.
After all that, there really is no other way to end but with two words from another great technology oracle, Douglas Adams:
It goes without saying that none of this constitutes investment advice.
I’ve created a cleaned-up transcript of Moore’s talk that premium subscribers can download below.