Discussion about this post

Bradford Morgan White:

Before the iPad Pro in 2015, I felt that ARM wouldn't be able to seriously compete with x86 in any reasonable amount of time. Most ARM CPUs that I'd seen up to that point were ... struggling to compete with a Pentium III in felt performance. Of course, that hadn't been true for some time, but the nature of mobile devices meant that I couldn't really see the performance ARM chips could produce in any real way (except in handling the modern web). Then the iPad Pro started getting some serious creative applications ported to it, and all of that changed.

I then bought a non-Pro iPad, and I helped my son create a video for his theatre class with it. The edit/render was far faster than any comparable job on my PC; outrageously so. Dedicated logic for a task can make a very noticeable improvement in any machine.

The M1's 8-wide decode is a large advantage in terms of perceived compute power. Likewise, the massive out-of-order buffer helps quite a bit. Add to that the fact that ARM's fixed-length instructions decode simply, with most common ones executing in a single cycle, and you get even more performance. Still, even with those design wins, x86 is competitive with the M1, and x86 can still beat it in raw compute power. Where Apple gained the bulk of its advantage is the SoC as a whole. Having so many dedicated ICs sharing the same package cuts latency, improves throughput, and offsets the overhead of the complex, variable-length instruction decode that CISC chips carry.
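To make the out-of-order point concrete, here's a rough sketch (a generic microbenchmark, nothing Apple-specific): summing an array with one accumulator forms a serial dependency chain, while splitting it into independent accumulators exposes work that a wide decoder and a deep reorder buffer can actually run in parallel.

```c
#include <stddef.h>

/* One accumulator: every add depends on the previous one, so the core
 * mostly waits on the dependency chain no matter how wide it is. */
double sum_serial(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four accumulators: four independent chains that an out-of-order core
 * can keep in flight simultaneously. */
double sum_wide(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* remainder */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

(The two versions can differ in the last floating-point bits because the additions are reassociated; the point is only to show where the extra execution width goes.)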

In the end, I think that x86 may last a very long time indeed, but it will need to adopt some of the M1's design cues of insanely high SoC integration. I personally feel that the future will hold many incompatible SoCs for which code will need to be optimized, or for which specialized translation layers like Rosetta 2 will be needed. Apple's M1 is not a pure ARM play, and x86 need not be a pure play either (it already isn't, given on-die GPUs).
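As an aside on Rosetta 2: macOS documents a sysctl key that a process can query to learn whether it's running translated. A minimal check (the sysctl.proc_translated key is Apple-documented; the surrounding error handling is my own sketch):

```c
#include <stdio.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void) {
    int translated = 0;
    size_t size = sizeof(translated);

    /* sysctl.proc_translated is 1 under Rosetta 2, 0 when native on
     * Apple silicon, and absent (ENOENT) on Intel Macs. */
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1) {
        if (errno == ENOENT)
            printf("native (no translation layer present)\n");
        else
            perror("sysctlbyname");
    } else {
        printf(translated ? "running under Rosetta 2\n" : "native arm64\n");
    }
    return 0;
}
```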

Given the recent changes to ARM's licensing model, I actually think that RISC-V will see more investment, more optimization, and wider adoption, and may ultimately win the race for ISA dominance. It still won't matter too much, though. The platform is no longer merely a CPU but many special-purpose ICs plus a CPU smashed into a single package.

James Wang:

Great article! It's funny, I was always personally most bullish on the server market. It's always made a lot of sense, since the consumer PC ecosystem is just too scattered. With Linux becoming the de facto standard for servers, I thought the ARM transition HAD to be coming soon, especially since it's not THAT hard to recompile certain parts of the stack to work on ARM.

And then, lo and behold, software lock-in strikes again. SOME part was always just a little buggy, unsupported, or otherwise broken on ARM, and even my own company stayed on x86 instances. Then Apple went all-in on its own silicon with the M1, a bunch of toolchains got updated in a hurry, and even I needed to get things to dual-compile.
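(For anyone curious what "dual-compile" actually touches: it's usually the handful of places where the code assumes an architecture. A generic sketch, not my company's code, using the standard GCC/Clang target macros:)

```c
#include <stdio.h>

/* Each architecture gets its own fast path; everything else falls back
 * to portable scalar code. Until the __aarch64__ branch exists, the
 * project is effectively x86-only. */
#if defined(__x86_64__)
static const char *simd_backend(void) { return "x86-64: SSE/AVX intrinsics"; }
#elif defined(__aarch64__)
static const char *simd_backend(void) { return "arm64: NEON intrinsics"; }
#else
static const char *simd_backend(void) { return "portable scalar fallback"; }
#endif

int main(void) {
    printf("selected backend: %s\n", simd_backend());
    return 0;
}
```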

It hasn't really been worth it for a lot of companies, mine included, to move off of x86, but there's no real barrier anymore; it's just a bit of annoyance. Those with more scale, where the power and cost savings matter more, have probably already moved.

AMD has its own bid to become Nvidia underway, but Intel unfortunately seems to have married itself to the fate of x86, which looks more like a slow death at this point.
