Compatibility and Fragmentation in the AI Era
New architectures pose new challenges but AI may help to provide solutions and break down barriers
If you go back far enough in time, every new computer design had a new instruction set.
Which, of course, was a pain!

When IBM introduced the 709 mainframe in 1958 as the successor to the popular 704 (a grand total of 123 systems sold!), the new computer’s architecture was incompatible with its predecessor. Fortunately, by then high-level languages - in particular FORTRAN, created by John Backus and his IBM team in 1957 - were available to let the same software run on machines with different architectures. For example, CERN was one important 709 installation that adopted FORTRAN early:
With Mercury [a computer from the UK’s Ferranti] and the 709 operating together, CERN had its first experience of compatibility problems. This was a continuing source of difficulty as various different computers came into operation at CERN. Many of CERN’s programs were also used on a range of computers in its 13 member states.
A high-level computer-programming language called “FORTRAN” (short for “FORmula TRANSlating”) made its CERN debut with the 709, and this language quickly became the only programming language in general use at the laboratory. A new generation of programs, written in FORTRAN to exploit the greater speed of the IBM 709, were brought in to analyse measurements from bubble-chamber photographs.
IBM also provided an optional emulation system - the first of its kind - so that the 709 could run 704 machine code.
The 709 was laughably simple and slow when compared to today’s machines:
The 709 had 32,768 words of 36-bit magnetic-core memory and could execute 42,000 add or subtract instructions per second. It could multiply two 36-bit integers at a rate of 5000 per second.
There was a good reason for new computers having new architectures: the technology was advancing so quickly that it made sense to abandon old designs in order to cast off their limitations.
High-level languages like FORTRAN helped deal with this architectural ‘tower of Babel’. Soon, though, the number of languages (and of different implementations of those languages) grew, and with it the cost of supporting those languages across architectures:
Every time a new architecture or high-level language was developed, a number of new or revised compilers were needed: for every new language, a compiler for each architecture, and for every new architecture, a compiler for each language.
This led to what became known as the ‘n × m problem’. With n architectures and m languages, n × m compilers were needed to implement all the languages on all the architectures. Writing a new compiler was a significant undertaking.
Users wanted a better approach.
This led to the idea of what was then called a ‘Universal Computer Oriented Language’, or ‘UNCOL’ - what would now be called an ‘Intermediate Representation’ (IR) or a ‘virtual instruction set’ for a ‘virtual machine’. Compile each language once into the shared IR, then translate the IR once for each architecture, and n × m compilers collapse into m front ends plus n back ends.
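The arithmetic behind the appeal of an UNCOL-style IR is easy to sketch. Here is a minimal illustration (the language and architecture names are just examples, not any real toolchain) of how a shared IR replaces n × m full compilers with m front ends plus n back ends:

```python
# Illustrative sketch (toy names, no real toolchain): with a shared IR,
# each language needs one front end (language -> IR) and each architecture
# one back end (IR -> machine code), so m + n components do the work of
# n * m full compilers.

architectures = ["IBM 709", "Ferranti Mercury", "IBM S/360"]  # n = 3
languages = ["FORTRAN", "COBOL", "ALGOL"]                     # m = 3

# Without an IR: one full compiler per (language, architecture) pair.
full_compilers = [(lang, arch) for lang in languages for arch in architectures]
print(len(full_compilers))               # n * m = 9

# With an IR: one front end per language, one back end per architecture.
front_ends = [(lang, "IR") for lang in languages]
back_ends = [("IR", arch) for arch in architectures]
print(len(front_ends) + len(back_ends))  # m + n = 6
```

The saving grows quickly: with 10 languages and 10 architectures the gap is 100 components versus 20, which is why the idea keeps being reinvented.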
Meanwhile, in 1964, IBM cut through much of this complexity with its System/360 (S/360): an architecture that would be implemented across machines with a range of sizes and price points and, crucially, where future designs would provide full ‘binary compatibility’ with existing programs.
The S/360 was a huge success and came to underpin IBM’s growth and dominance of business computing for another two decades.
This snapshot of history may end more than 60 years ago, but the solutions to the challenges of compatibility remain broadly the same today:
Major architectures promise compatibility across designs spanning a range of sizes and prices, and backwards compatibility with historic designs.
Architecture-independent ‘Intermediate Representations’ are widely used to underpin many high-level languages.
Almost all programs are written in high-level languages.
Problem solved? We can stop thinking too hard about new architectures and fragmentation?
Not so fast!
As David Patterson said in 2019 (before the launch of ChatGPT!):
The next decade will see a Cambrian explosion of novel computer architectures, meaning exciting times for computer architects in academia and in industry.
Exciting times, with lots of innovation, also mean more fragmentation.
Success in dealing with the impact of this fragmentation will help to shape how the computer and semiconductor industries develop and who wins and loses.
And, as with much else, ‘AI’ is likely to upend many longstanding assumptions in other ways too. What follows is a rapid tour through some recent and possible changes.
We’ll start with some other developments before looking at how AI might change things.


