I wish at least the assembler syntax would be standardized for the x86 ISA, so that we don't have AT&T vs Intel syntax as well as slightly different mnemonics across various assemblers/compilers. As for instruction names, in addition to standardized short-form mnemonics, the standard could also define longer descriptive aliases for the same opcode, and/or allow the user to define their own aliases in the asm source.
Having a standard where the machine code would also optionally retain labels would be helpful too for debugging / disassembly. Or does that exist? I suppose it verges into compiler symbols; maybe such asm labels could be stored in the same section. In working with Solaris in the past, one of the most brilliant features the engineers implemented in the OS, compiler and debugger was always pushing the arguments for each function call onto the stack, even if the arguments were actually passed in registers. The debugger, mdb or kmdb, would retrieve this information for each stack frame and display it as arguments. It adds extra overhead, but it was an intentional and worthy compromise IMHO, invaluable for debugging and troubleshooting. It worked for live disassembly with their debugger as well as with core dumps, retaining the current arguments for the whole stack trace.
Hi VJ, It's really frustrating that one syntax didn't 'win' on x86. I suppose, like vi and emacs, both have their superfans!
Really interesting comment on Solaris. Presumably this was only for debugging or was the overhead small enough to deal with on production code?
It was on production code. I have to assume that as CPUs got faster, they decided the overhead of always pushing args onto the stack became negligible when weighed against the benefit of being able to troubleshoot all the machines out in the field. Just to be clear, called functions would still read args directly from registers if passed by regs, and this was done only for kernel and driver code.
Very interesting, thanks!
One could argue the best way to "read assembly" is to read its LLVM IR.
The IR is why nearly all compiler development effort has switched to LLVM in recent years, and for good reason.
For one, it's actually a language meant to be read. Still, while staying implementation-agnostic, it very closely resembles your machine code.
And even if you don't have the source code, there are "lifters" which translate your assembly code into IR, which in turn you can always compile back.
Hi Carlos, That's a really interesting challenge. Do you know how LLVM handles some of the more recent ISA extensions like AVX-512? Is it possible to debug AVX-512 code using only the LLVM IR?
Hi Babbage, sorry I took so long to reply.
LLVM does handle recent ISA extensions; in fact it does so even ahead of any compliant processor release (see https://www.phoronix.com/news/LLVM-17-Arrow-Lake-ISA-Adds).
Debugging LLVM IR, however, isn't as straightforward as one might expect. There is a tool that does the necessary symbol export, but it seems the community is still discussing its inclusion in the official main project (see https://stackoverflow.com/questions/31984503/is-there-a-debugger-for-llvm-ir).
I've been intending to take on that challenge for a while, but still haven't found the time to do so. If you do look into it, please keep me posted.
Hi Carlos, My apologies in turn! Thanks for coming back on this - it's really interesting from a number of angles. Like you I haven't had much time to look at this but will let you know when I do. Best wishes.
It’s been well over 30 years since I last programmed assembly languages. Initially in process control computers with 256k (words) memory, later Digital Equipment’s PDP-11 and VAX/VMS.
Anyway, the “obscurity of assembly language”. Mind you, for all I remember, a piece of code is as obscure as its programmer intends it to be; bad programmers write obscure code. I’d rather maintain a well-written assembly program than a C program where all language features are used apart from the comment feature (do refer to the Wikipedia page on the “International Obfuscated C Code Contest” for examples).
Another point is “why on earth would you program in assembly language”? Valid reasons are “my system doesn’t support anything else”, or “I need to cram a program into 1024 bytes memory”. Invalid reasons are “only thing I know” or “real programmers write assembly” (google “real programmers don’t use Pascal” and read the article).
Yes, I wrote assembly programs, and yes, I wrote and modified operating systems. The most useful, however, was to let a compiler generate assembly code, then inspect and optimise it. Usually there was between 10% and 25% in size to be won, often more in speed, and in process control environments that was interesting. I stopped looking at compiler-generated assembly code in the mid-1980s. By then compilers had become too smart to attempt to optimise the result any further.
Summarised “count your blessings and don’t use assembly code unless you really, really need to”.
Hi Hans, Thanks for a great comment. Do you remember what architectures the process control computers used?
Oww.. that's some time ago: late 1970s, early 1980s. There was the Honeywell 316, with slightly customised real-time operating system, but I was not really involved with those. The Honeywell 716 was a training / development system with a Disc Operating System and a real disc (!), and that one I did use quite often. Then there was a series of Honeywell 4500 systems with RTMOS (yes .. Real-Time Multiprogramming Operating System .. the multiprogramming bit was .. well .. benefit of the doubt) and a subset of the TDC2000 process control environment. Later on I was involved with DEC VAX/VMS and much later with PDP-11 RSX (yes, wrong time order .. I know).
The Honeywell 716 and 4500 systems had a Fortran IV compiler. The compiler on the 4500 systems was almost good enough, some manual optimisation would have been possible but would have raised the effort/cost of software maintenance.
Check out the assembly language for the Analog Devices SHARC DSP series. They use an algebraic notation that is fairly easy to read.
The read-mostly nature of assembly language is a fairly recent phenomenon I think, corresponding to the point where C compilers regularly did better than about 1.2 times hand-coding (let's say gcc3.3 for argument's sake (2003)). Before then it was very often the reverse: vast reams of assembly code would be written for performance-critical programs like machine control, video games and signal processing, and most of it would never be read again. One of the aspects of assembly language is that it requires hard-coding largely irrelevant decisions like register allocation, which make minor code changes difficult. It is usually much simpler to re-write whole sections than it is to try to change functionality or add a feature. This is the great win from low-level-high-level languages: the automation of register allocation and instruction scheduling means that code can now reasonably be patched and changed. IMO. YMMV.
As far as why it is shaped the way it is: that's mostly for simplicity: there is a 1-1 relationship between each line of code and a corresponding machine instruction (give or take macro-expansion, which was very common in assemblers). You run the risk with algebraic notation that it immediately becomes possible to express things that don't correspond to the functionality of any single instruction. And conversely, there are quite a lot of instructions in modern architectures that require a page of pseudo-code to explain exactly what it is that they do, which you could never reasonably express algebraically.
Hi Andrew, Thanks so much for the suggestion on the SHARC DSP series. I'll definitely have a look.
I agree with all your later points on the 1-1 relationship between lines of code and machine instructions, and that algebraic notation has some major issues. I suppose my point was that I don't think there is a magic-bullet answer, but that modern assembly could make some effort to be a bit more readable, even if a degree of pragmatism is inevitably needed.
When you really want to understand a disassembled program, rather than just check that the compiler has done a decent job of a loop optimization, then there are a couple of things that can help (depending on what you've got to play with):
If you've got the source code, then you can usually ask the compiler to emit an "interleaved source and assembler" form of listing file, where you can (or could) get hints about what the assembler is supposed to be doing from the surrounding source. Note that this is less useful today than it used to be, because today's compilers move code around a _lot_, doing ad-hoc inlining, specialization and unrolling/re-rolling so that a lot of the time there just isn't a direct correspondence between source and assembly. This is also why (per VJ's comment above) debugging is often harder: not only are arguments not stored in memory on the stack, they sometimes don't exist in the emitted code at all, having been folded into inlined leaf functions.
Having said all that: it is still a program, and the machine code means something and can be understood. Modern disassembly tools of the sort used by the malware/security-defence communities have a lot of visual and graphical aids, such as grouping straight-line code fragments and drawing call/return/jump lines between fragments. Never had cause to use one of these myself, but the screen-shots on the product web sites make them look pretty useful, if you need that kind of thing.
Regarding the SHARC: that also introduces another wrinkle in the assembly/disassembly world that is especially common in the world of DSP architectures, and that is VLIW, even if the W is not especially V, in the case of SHARC (48 bits) or 56k (24 bits): multi-columnar assembly language, because each instruction word is or can be composed of several largely independent operations, such as arithmetic (on registers) paired with parallel-operation memory loads or stores. Some cores like the TI C6000 series got as far as eight-way, and went back to writing instruction-elements vertically, even though they operated explicitly in parallel. Very much like doing crossword puzzles, I thought. Combine that with an exposed six-stage pipeline and it is very much an exercise best left to the compiler code generators...
I have to say that I never used assembly language at work. But when I was in school, boy, I loved it. So intuitive and so addictive. Your faults are yours and not anybody else's. For small programs it's marvellous. Easy to debug.
Hi William, That's great. I really enjoyed it too (most of all Z80!). Which architecture / machine did you use?
Hi, I used the Motorola 6800. That was back in the late 90's.
It was a very satisfying experience. You can write almost 300 lines, and when you execute it, it comes up with almost no problems. Evidently you have to know how to play with the registers and be careful not to overwrite your storage.
What a coincidence, I also learned Motorola 68k assembly in college, also in the 90s and also really enjoyed it!
But ironically (or maybe not so ironically), things turned out not so enjoyable later on. First with 32-bit MIPS assembly for an architecture class, which wasn't like the 68k class where you'd write programs; instead it was learning how the machine code traversed the different pipeline stages, then re-writing to avoid stalls (this was an in-order CPU), etc. And definitely not enjoyable when having to learn some x86 asm on the fly for work, in the context of system debugging, not knowing what the hell was going on, with everything else interfacing with the code being a black box...
No thank-you.
The entire point of assembly is that it is supposed to represent an exact one-to-one relation from the assembly operation-code instructions to the underlying binary machine instructions. Assuming that I know the processor's instruction set architecture (ISA), I should be able to generate the actual binary machine code by looking directly at the assembly instruction. I can look up the assembly operation code to get its numeric value and size, and calculate the rest of the instruction from the operands. It's a very simple translation. I cannot do that with an instruction like:
M(k+3)×R to cA
It completely obscures what the underlying binary code is supposed to be, which is totally antithetical to the purpose of assembly language. I cannot even look up the instruction code in an assembly manual, because the instruction code is never actually stated. This is reminiscent of other poorly designed assembly languages such as x86, where the same 'mov' mnemonic can represent 14 different underlying processor instructions (88, 89, 8A, 8B, 8C, 8E, A0, A1, A2, A3, B0, B8, C6, or C7) that must be inferred from the types of the operands. THAT makes it difficult to read.
Determining the operands' types should never be necessary to make the instruction distinct; the instruction should make the operands distinct. The instruction that moves values between registers should be different from the instruction that moves values from registers to memory, and different from the instruction that moves a value from memory into a register. Design the names of your operation codes correctly and you will not have any trouble determining what the operands should be.
Hi Tim, It's a fair challenge and I would probably agree for something like RISC-V, but with x86 and its proliferation of addressing modes you're looking at a lot of mnemonics to ensure a 1:1 correspondence between assembly and opcode, so a lot to remember!
It may be worth trying to dig out some assembly for the Apollo PRISM (that's "A88K", for the DN10000, not to be confused with DEC PRISM or Motorola M88K) if you'd like another example of a slightly more readable assembly language. (Unless I'm misremembering and am thinking of Pyramid Technology minicomputers...)
A fairly obvious point to anyone who's seen both: the Z80 incorporates the whole of the 8080 instruction set, but Z80 assembly uses more readable notation for the 8080 instructions than the original 8080 assembly, by far. (If you're ever coding for the 8080, do yourself a favour and use Z80 assembly for the purpose!)
Hi Stu, Thanks so much. I'll have a look.
I'm with you 100% on Z80 vs 8080, and really don't like having to read 8080 syntax code at all!
There is something to be said for the `mov ebx, eax` syntax: it reflects the instruction layout more accurately than `eax to ebx`. As far as opcode abbreviations are concerned, yeah it'd definitely be possible to give them more sane names.
Hi, I'd be with you entirely were it not for the fact that AT&T and Intel formats swap the order of the source and destination registers. As an occasional user of both formats I have an extra mental step to remind myself which format I'm using. With `eax to ebx` it's obvious from the code itself.
Understandable, though I will note that the Intel syntax is the more 'correct' one for x86, and the operand order matches the instruction set. AT&T syntax is actually a holdover from PDP-11.
Interesting. I do wish everyone would follow the Intel syntax. It would make life a lot easier!
Ultimately the goal of an assembly language is to provide a human-readable representation of the machine code. A good assembly language should provide a succinct, unambiguous representation of every instruction while still abstracting implementation details, such as the exact representation of the different addressing modes and things like the instruction prefixes that determine operand size. Some common operations may also have pseudo-instructions that combine a small number of instructions that are commonly used together (for example, loading arbitrary 32-bit immediates in RISC-V requires two instructions, but there is a pseudo-instruction that combines the two).
why don't we write a translator for it :)
Check out the assembly for the BELLMAC-8:
```
#define NBYTES 100
char array[NBYTES];
sum()
{
    b0 = &array;
    a1 = 0;
    for (a2 = 0; a2 < NBYTES; ++a2) {
        a1 =+ b0;
        ++b0;
    }
}
```
Yes, that's assembly, not C code.
Hi Maury, Thanks so much for this and sorry for the slow response. That's really interesting. I'm planning to have a look at the BELLMAC series in a little while, but I had no idea that the assembly could look like this. Thanks again.
The early 6502 standard for "load accumulator immediate mode" was not
`LDA #25`
but rather
`LDAim 25` (or `LdaIm 25`)
This matters because reputedly that (overlooked or extraneous) # ends up being the top reason for bugs AND misunderstood code in 6502.
There’s also a HUGE difference between writing 8-bit 6502 or Z80 assembly compared to modern x86-64 asm.