23 Comments
May 19 · Liked by Babbage

Nice article.

A related anecdote: in the early 1990s I did some marketing consultancy for a company that attempted to commercialise a CPU in this vein: the Linn Rekursiv.

Linn was most famous for its ultra-high-end record players. They believed utterly in vertical integration: the record players were made in their factory, using custom tools, controlled by a unique ERP/MRP software system they had written, all coded in Smalltalk (or rather a unique, in-house Smalltalk variant).

But on a VAX it ran s-l-o-w-l-y, so obviously they designed their own ASIC/CPU chipset. Obviously...

And having designed it and got the few they needed for their own in-house use, they then decided to sell it externally, which is where I came in.

It was very clever: unified memory (RAM & HDD were a single address space), everything was an "object", tagged memory, hardware garbage collection.
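
To make the "tagged memory" idea concrete, here is a minimal software sketch of what a tag on every word buys you: the machine can tell a small integer from an object reference without consulting the object itself. The tag layout below is invented for illustration and is not the Rekursiv's actual format.

```python
# Toy tagged-word scheme: low 2 bits are the tag, the rest is the payload.
# Tag values and widths are made up for this sketch.
TAG_BITS = 2
TAG_INT = 0b01   # small integer
TAG_REF = 0b10   # object reference

def tag_int(n):
    """Pack a small integer into a tagged word."""
    return (n << TAG_BITS) | TAG_INT

def tag_ref(addr):
    """Pack an object address into a tagged word."""
    return (addr << TAG_BITS) | TAG_REF

def is_int(word):
    """Type check without touching the object: just inspect the tag."""
    return word & 0b11 == TAG_INT

def untag(word):
    """Recover the payload (integer value or address)."""
    return word >> TAG_BITS
```

A hardware implementation does the same check in parallel with the arithmetic, trapping to a slow path only when the tag is not what was expected.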

It was also impossible to use: incredibly strange, almost undocumented, no faster than conventional CPUs by the time it was released, full of bugs, and sold with a "user-hostile" commercial attitude ;)

Not surprisingly, it was not a success!

author

Hi Rupert! Ah yes, Linn - I was a hifi buff in the 1980s and the LP12 was the best turntable you could buy - well, at least according to the UK hifi press.

I’d forgotten the Smalltalk connection with the Rekursiv, but this snippet from Wikipedia came to mind immediately:

‘The last known copy of a Rekursiv computer ended up at the bottom of the Forth and Clyde canal in Glasgow.’

Might do a post on it. Are there any good references?

May 24 · Liked by Babbage

You have to wonder why they threw it in the canal?! Were they that pissed off!?

The Wikipedia link has a few references, including a piece by me (essentially a piece of PR) in PCW from 1990. I don't know if you can find a copy, but it did discuss the architecture, register structure, tags etc. I don't think it covered the specific ISA.

I did do quite a lot of other PR, collateral etc., but I didn't keep any copies.

I did do one article at the time - not just on Linn, as there were a couple of others.

author

It does sound like they were pretty unhappy with the machine!

Thanks for the pointer to PCW - it was March 1990 I think. I'm on the case to track down a copy. There was also a Dick Pountain piece in Byte - Nov 1990 - which was quite comprehensive.

Thanks for the sub. I definitely plan to do a full post - it's just too interesting a story to pass up. Can I send you a pre-release version in due course?

Of course. Happy to help (to the extent I remember, anyway).

I am looking to see if I still have any of the collateral.

I know that for a time I did have some, but of course these things vanish over the years...

May 25 · Liked by Babbage

Although the Linn was another attempt to run a dynamic object-oriented language efficiently, I heard it took the opposite approach from SOAR (my best memory of conversations with Mario Wolczko, who knows more about it than I). The Rekursiv was microcoded CISC (high-level instructions). [not sure if the post button will show my name; I am David Ungar]

Yes, Rekursiv was definitely the opposite extreme to Berkeley RISC.

Not just CISC, but CISCier-than-thou ;)

It really was "can we microcode an HLL directly".

I'm wondering if in a pile of paper, or perhaps that old bag of 3.5" floppies, there might be some more detail? But a quick browse hasn't found anything.

May 19 · edited May 19 · Liked by Babbage

One of the main legacies of Smalltalk and SOAR was to get architects thinking about optimizations more broadly. You can see a lot of Smalltalk trickery in dynamic language implementations from Basic to Java to Python, as well as in how speculative hardware in CPUs works. The use of optimistic patterns not just for jumps but to enable dynamic typing to work efficiently really opened up possibilities. These techniques might have been invented anyway (LISP did originate some of the best work on pointers vs. values, and on GC), but Smalltalk was early, pure, in the right place at the right time, and attracted a lot of talent.
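
The "optimistic pattern" for dynamic typing can be sketched in a few lines: a call site guesses that the receiver has the same type it saw last time, checks cheaply, and falls back to a full method lookup on a miss. This is an illustrative toy (the `CallSite` class and `slow_lookup` helper are invented for the sketch, not from any real VM):

```python
def slow_lookup(obj, selector):
    """Full method lookup - the expensive path in a real VM."""
    return getattr(type(obj), selector)

class CallSite:
    """One dynamic call site with a monomorphic inline cache."""
    def __init__(self, selector):
        self.selector = selector
        self.cached_type = None     # receiver type seen on the last call
        self.cached_method = None   # method resolved for that type

    def send(self, receiver, *args):
        # Fast path: the optimistic guess holds, skip the lookup entirely.
        if type(receiver) is self.cached_type:
            return self.cached_method(receiver, *args)
        # Miss: do the slow lookup and re-prime the cache for next time.
        self.cached_type = type(receiver)
        self.cached_method = slow_lookup(receiver, self.selector)
        return self.cached_method(receiver, *args)
```

For example, `CallSite("upper").send("abc")` resolves `str.upper` once; every later send to a string hits the fast path. The same shape - guess, cheap check, slow fallback - is what branch predictors and speculative execution do in hardware.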

author

Thanks, that's a really interesting point. It would be fascinating to try to trace the Smalltalk influence through published papers for example in each of these areas. Might be a bit time consuming though!

May 19 · Liked by Babbage

Another important paper on making Smalltalk run faster on non-microprogrammed hardware is:

L. Peter Deutsch and Allan M. Schiffman. 1984. Efficient implementation of the Smalltalk-80 system. In Proceedings of the 11th ACM SIGACT-SIGPLAN symposium on Principles of programming languages (POPL '84). Association for Computing Machinery, New York, NY, USA, 297–302. https://doi.org/10.1145/800017.800542

author

That looks very interesting, thank you. I note that they refer to the UC Berkeley research so it sounds like the two groups were sharing results and ideas.

May 25 · Liked by Babbage

Yes. Peter used to visit Berkeley. He taught me about Smalltalk Virtual Machines. He also brought the key idea of lifetime-based GC over from MIT. And his PS implementation surpassed Berkeley Smalltalk and inspired me to go the direction of self-modifying code for Self.

May 19 · Liked by Babbage

You say: "Further research on object-oriented programming was funded by DARPA at Xerox PARC leading to the appearance of the Smalltalk language in 1972." I believe all the work at PARC in the 1970s was funded solely by Xerox. I have always heard they had a token DARPA contract only to allow their connection to the ARPANET. You can check the published Smalltalk papers to see there is no acknowledgement of non-Xerox funding.

author

You're right! DARPA did fund work on Smalltalk, including at UC Berkeley, but not at Xerox PARC. Now corrected and thank you so much for pointing this out.

May 25 · Liked by Babbage

Wow! Thank you for this article, a very well-done summary of the SOAR work. I (David Ungar) helped design the architecture (very proud of our one-cycle jumps and stores), wrote the runtime system (in assembler IIRC), and wrote my dissertation on the effectiveness of each architectural feature. TLDR: Most of our clever ideas did not pay off enough. SOAR was not optimally RISCy enough.

Peter Deutsch (see the Deutsch-Schiffman paper) first did the compilation trick. Bill Bush wrote our compiler, and many others made important contributions. Of course, nothing would have happened without Dave Patterson. There's a story about how I got enthused about Smalltalk and, hearing about RISC, started talking up the idea of RISC for Smalltalk to Dave Patterson.

The SOAR project led to some techniques that are very important today:

I came up with Generation Scavenging, a simple generational collector, researched its performance, and wrote up what I believe were the first published figures on generational GC. (Dave Moon had built a more intricate generational collector for the Lisp Machine, if memory serves.)
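
The generational idea can be sketched compactly: new objects go into a small "young" area that is scavenged often, and objects that survive enough scavenges are tenured into a rarely-collected "old" area. This toy tracks objects in Python containers rather than copying memory, and the tenuring threshold is a made-up constant, so treat it purely as an illustration of the policy, not of Generation Scavenging's actual mechanics:

```python
TENURE_AFTER = 2  # survivals before promotion - an invented threshold

class Heap:
    def __init__(self):
        self.young = {}   # object -> number of scavenges survived
        self.old = set()  # tenured objects, collected rarely (not shown)

    def allocate(self, obj):
        """All new objects start in the young generation."""
        self.young[obj] = 0
        return obj

    def scavenge(self, roots):
        """Keep only young objects reachable from roots; tenure survivors."""
        survivors = {}
        for obj, count in self.young.items():
            if obj in roots:
                if count + 1 >= TENURE_AFTER:
                    self.old.add(obj)        # promoted to the old generation
                else:
                    survivors[obj] = count + 1
        self.young = survivors               # unreachable young objects die free
```

The payoff, as the comment says, is that most objects die young: each scavenge touches only the small young area, and dead objects cost nothing to reclaim.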

After Berkeley I taught at Stanford, and then worked at Sun Labs. Inspired by the SOAR work, we came up with Adaptive Optimization, Polymorphic Inline Caching, and Dynamic Deoptimization. Those techniques enabled our language, Self, to achieve good performance on stock hardware, with fix-and-continue, source-level debugging of optimized code, and fewer non-changeable primitive operations. To my knowledge, every dynamic object-oriented language implementation today that strives for performance uses some of these, including Java (HotSpot led by Self alum Lars Bak), JavaScript (IIRC a key implementation also led by Lars), and (I suspect but don't know for sure) maybe even Python.
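
A polymorphic inline cache (PIC) generalizes the one-entry call-site cache: instead of remembering a single receiver type, the call site keeps a short list of (type, method) pairs, so sites that see a handful of types stay fast. A minimal sketch, with the class name, `PIC_LIMIT` bound, and lookup helper all invented for illustration:

```python
PIC_LIMIT = 4  # small bound on cache entries; real VM limits vary

class PolymorphicCallSite:
    """One dynamic call site caching several receiver types."""
    def __init__(self, selector):
        self.selector = selector
        self.entries = []  # list of (type, method) pairs

    def send(self, receiver, *args):
        rtype = type(receiver)
        # Fast path: scan the short cache for this receiver's type.
        for cached_type, method in self.entries:
            if cached_type is rtype:
                return method(receiver, *args)
        # Miss: full lookup, then extend the cache if there is room.
        method = getattr(rtype, self.selector)
        if len(self.entries) < PIC_LIMIT:
            self.entries.append((rtype, method))
        return method(receiver, *args)
```

Beyond speed, the cached type list doubles as a free type profile of the call site, which is exactly what an adaptive optimizer wants when deciding what to inline and what guards to emit.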

Now, without the SOAR project, surely someone else would have invented these techniques. Hardware was just getting fast enough to spend cycles on adaptive optimization. Those were heady days, from 1981 through 1993. The first version of Berkeley Smalltalk (I named it for the acronym) ran on a time-shared VAX, with a 9600-baud serial line to an AED bitmapped terminal. (BTW, BS was strictly an interpreter.) The last version ran on an early Sun workstation. At Stanford, generous grants bought Suns for the group. There were huge leaps in hardware then, and the cost of cycles and memory crossed a threshold. How fortunate we were to be there then, with the chance to creatively exploit it.

author

Hello David and wow from me! Thanks so much for taking the time to add all your comments and thanks so much for your work on SOAR and, of course on Self.

I'd love to follow up on some of the threads that you've highlighted here with a further post - maybe on Self itself?

Just to follow up on one point - in your later comment - that SOAR only ever ran on a simulator. It sounds like you actually had real chips but ran out of time to create a working system with them?

May 25 · Liked by Babbage

One more: Dave Patterson was a fantastic Ph.D. advisor and project leader. I owe him much.

I have met him a few times at RISC-V events. A lovely guy.

I also know David May. They are remarkably similar ;)

May 25 · Liked by Babbage

BTW, sorry, can't stop posting :) In those days, memory was much closer to the CPU, maybe just two cycles away. And by using the high-order bit of the instruction to signal a jump/call vs anything else, and by putting the absolute jump address in the rest of the instruction, we could fetch the jump target in the very next cycle. (I don't think that jump prediction was a big thing back then.) The inline caches took care of the dynamic dispatch.
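
The encoding described above can be sketched in a few lines, assuming 32-bit instructions for illustration (the bit widths here are assumptions, not the actual SOAR format): if the top bit is set, the remaining bits are an absolute jump/call target that the fetch unit can issue on the very next cycle.

```python
JUMP_BIT = 1 << 31        # high-order bit: this instruction is a jump/call
ADDR_MASK = JUMP_BIT - 1  # low 31 bits hold the absolute target address

def encode_jump(target):
    """Build a jump instruction with an absolute target."""
    return JUMP_BIT | (target & ADDR_MASK)

def next_fetch(instruction, pc):
    """Address the fetch unit uses in the following cycle."""
    if instruction & JUMP_BIT:
        return instruction & ADDR_MASK  # target is right in the word - no prediction needed
    return pc + 1                       # anything else falls through
```

The design point is that decoding "is this a jump, and where to?" needs only a single bit test and a mask, cheap enough to overlap with fetch - which is how the one-cycle jumps mentioned earlier become possible.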

May 25 · Liked by Babbage

PS: SOAR only ever ran on a simulator, although I believe I did run the Compiler Benchmark, the most realistic and longest one we had. I don't think that the hardware was perfected before the project ended, even though we did get real silicon.

May 25 · Liked by Babbage

PPS: Yipes! I forgot some critical details: Randy Smith, co-leader of the Self group at Sun; Craig Chambers, who wrote the first Self compiler; Elgin Lee, who wrote an early memory system; Urs Hoelzle, who did Adaptive Optimization (including on-stack replacement) - btw this was renamed "HotSpot". The Sun HotSpot JVM was a partial rewrite of the Self Virtual Machine, with many improvements. Ole Agesen, John Maloney, and Mario Wolczko all deserve lots of credit for their contributions. (Forgive me if I miss anyone or anyone's specific achievements.) And thanks to Jecel Assumpcao Jr. for pointing me to this article.

May 19 · Liked by Babbage

I am pretty sure the iAPX 432 emulation was a joke... hence "always undefined".

author

Ahhhh! That's very good. Thank you!
