It’s 1979, and Intel has a problem. Its 16-bit 8086 microprocessor is losing out to its competitors. To quote from the Oral History:
Dave House: … Motorola was selling the 68000 and Zilog was selling the Z8000, and we were commonly coming in third with the 8086 when it came to design win choices.
Why is it losing?
Dane Elliot: So any engineer who understood minicomputer architecture was going to appreciate what Motorola and Zilog had to offer!
Dave House: Exactly.
Rich Bader: The software guys!
The 8086 is designed as a stopgap. Intel has focused its attention on the iAPX 432, a 32-bit processor with a whole array of advanced features. But the iAPX 432 won't be ready for several years.
The 8086 project started in 1976 and the chip was first available to buy in 1978, but the design had all sorts of limitations. Top of the list was its segmented memory architecture: every address was formed from a 16-bit segment and a 16-bit offset, so reaching the chip's full 1 MB memory space, or even more than 64 KB at a time, added significant complexity.
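To make that concrete, here is the 8086's real-mode address arithmetic sketched in C. The segment is shifted left four bits and added to the offset, giving a 20-bit physical address:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode addressing: a 16-bit segment shifted left 4 bits
   plus a 16-bit offset yields a 20-bit (1 MB) physical address.
   Any single segment can reach only 64 KB at a time. */
static uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* Many segment:offset pairs alias the same physical address. */
    printf("%05" PRIX32 "\n", physical_address(0x1234, 0x0010)); /* 12350 */
    printf("%05" PRIX32 "\n", physical_address(0x1235, 0x0000)); /* 12350 */
    return 0;
}
```

Crossing a 64 KB boundary meant reloading segment registers, and juggling those registers is where much of the complexity crept in.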
So Crush was born: the not-very-antitrust-friendly name for Intel's programme to fight off competition from Motorola and Zilog. It focused on the 8086 and its companion chips as a complete systems solution rather than on the 8086's architecture or software. Crush soon gained momentum, and by 1980 the 8086 had racked up over 2,300 design wins.
And the 8086 had a secret weapon. Assembly code for Intel's popular 8-bit 8080 CPU could be automatically translated into 8086-compatible code, so software writers could take their existing applications and quickly adapt them to run on the 8086.
Then IBM chose the 8088, a cheaper version of the 8086 with an 8-bit external bus, to create the IBM PC.
And the rest is history.
Fast-forward to today and the stopgap 8086 has been enhanced, extended, and sped up. Intel even had a little help from arch-rival AMD in creating a 64-bit version of the architecture. And along the way, the architecture came to utterly dominate first desktop and then datacenter computing.
But now we have credible rivals. Both Arm and RISC-V provide not only viable designs but also attractive alternative business models in the shape of either licensed or open-source architectures.
Apple has switched to Arm on the Mac. Microsoft makes Windows for Arm. Arm instances are available on Amazon, Google, Microsoft, and Oracle clouds. These firms are using Arm because it provides real advantages for many users, notably in cost and power efficiency.
But consider how long architectures last. You can still buy 6502- and Z80-compatible CPUs today, fully forty years after those architectures were at their peak. And those designs never had the volumes or the ubiquity of x86.
And crucially, x86 now has over forty years’ worth of software written to run on the architecture. Some users will continue to need x86 backwards compatibility just to keep their businesses running.
I think it’s safe to say that we will still be able to buy x86-compatible machines in at least sixty years’ time. Maybe very much longer.
But the translation tools that took 8080 assembly code and turned it into 8086 code have their counterparts in the 2020s. Apple’s Rosetta 2 does ahead-of-time translation of 64-bit x86 binary code into Arm code. Microsoft has similar tools to enable 64-bit x86 Windows applications to run on Windows for Arm. And these tools put the ability to move from x86 to Arm in the hands of users rather than developers. We will surely see similar tools that do the same for RISC-V. The chains of backwards compatibility have been broken.
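As an aside, a translated process can even ask whether Rosetta 2 is running it: Apple documents a sysctl flag, sysctl.proc_translated, for exactly this purpose. A minimal C sketch, macOS only:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 if this process is running under Rosetta 2 translation,
   0 if it is running natively, and -1 on any other error. */
static int process_is_translated(void) {
    int ret = 0;
    size_t size = sizeof(ret);
    if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1) {
        /* The sysctl does not exist on systems without Rosetta 2. */
        return errno == ENOENT ? 0 : -1;
    }
    return ret;
}

int main(void) {
    printf("translated: %d\n", process_is_translated());
    return 0;
}
```

Compiled for x86-64 and run on an Arm Mac, it prints 1; compiled natively for Arm, it prints 0.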
So how long will x86 endure? As the leading architecture on desktops, laptops, and servers? I’m going to venture at least a decade but not two. As a practical desktop architecture? I’m going to guess at least thirty years. And x86 code, thanks to translation tools, will very likely be around for even longer after that.
What do you think? Please let me know in the comments.
Footnotes
#1
For more on Crush see this video of the oral history panel from 2013.
#2
Why hasn't Arm made its own x86 to 64-bit Arm translation tool? It has the most to gain from the switch away from x86. Such a tool is probably superfluous now, but a few years ago it could have been a useful way to encourage users to switch. I’m genuinely puzzled.
#3
Shortly after drafting this I became aware of Box86 and its 64-bit counterpart Box64, which supports running 64-bit x86 binaries on 64-bit Arm systems. And of course QEMU allows x86 virtual machines to be emulated (slowly) on 64-bit Arm systems.
Do let me know if you know of any others.
Photo Credits
Pentium Pro
https://commons.wikimedia.org/wiki/File:Ppro512K.jpg
Rosetta Stone
© Hans Hillewaert Licensed under CC BY-SA 4.0
Comments

Before the iPad Pro in 2015, I felt that ARM wouldn't be able to seriously compete with x86 in any reasonable amount of time. Most ARM CPUs that I'd seen up to that point were ... struggling to compete with a Pentium 3 in terms of felt performance. Of course, this hadn't been true for some time, but the nature of mobile devices meant that I couldn't really see the performance that ARM chips could produce in any real way (except for handling the modern web). The iPad Pro started getting some serious creative applications ported to it, and all of that changed.
I then bought a non-pro iPad, and I helped my son create a video for his theatre class with it. It was far faster than any edit/render job on my PC; outrageously faster. Dedicated logic for a task can make a very noticeable improvement in any machine.
The M1's 8 wide decode is a large advantage in terms of perceived compute power. Likewise, the massive out-of-order buffer helps quite a bit. Then, if we add on that RISC instructions are 1 clock cycle, you get even more performance. Still, even with those design wins, x86 is competitive with M1, and x86 can still beat it (in raw compute power). Where Apple gained the bulk of their advantage is the SoC as a whole. Having so many dedicated ICs sharing the same package cuts latency, improves throughput, and mitigates the lack of efficient complex instruction decode that CISC chips have.
In the end, I think that x86 may last a very long time indeed, but it will need to adopt some of the M1 SoC's design cues of insanely high integration. I personally feel that the future will hold many incompatible SoCs for which code will need to be optimized, or for which specialized translating compilers like Rosetta 2 will be needed. Apple's M1 is not a pure ARM play, and x86 need not be a pure play either (it actually already isn't, given the on-die GPU).
Given recent changes to ARM's licensing model, I actually think that RISC-V will see more investment, more optimization, and wider adoption, and may ultimately win the race for ISA dominance. It still won't matter too much, though. The platform is no longer merely a CPU but instead many special-purpose ICs plus a CPU smashed into a single package.
Thanks for the article! I'm curious where the number 60 years came from in the line "I think it’s safe to say that we will still be able to buy x86-compatible machines in at least sixty years’ time." (If I'm understanding correctly, it’s only been around 44 years, and I would have naively guessed that computer hardware will change a lot more in the next 40 years compared to the previous 40.)