I've been struggling with understanding the ASCII adjust instructions from x86 assembly language.
I see information all over the internet telling me different things, but I guess it's just the same thing explained in different forms that I still don't get.
Can anyone explain why, in the pseudo-code of AAA and AAS, we have to add or subtract 6 from the low-order nibble in AL?
And can someone also explain the pseudo-code of AAM, AAD and the decimal adjust instructions in the Intel instruction set manuals: why are they like that, and what's the logic behind them?
And lastly, can someone give examples of when these instructions can be useful, or at least what applications they were useful in in the past?
I know that nowadays these instructions aren't used, but I still want to know how they work; it's good to know.
Because in hexadecimal each nibble has 16 distinct values, while BCD uses only 10 of them. When you do arithmetic in decimal, if a digit's sum reaches 10 you take it modulo 10 and carry to the next column. Similarly, in BCD arithmetic, when the result of adding two digits is greater than 9 you add 6 to skip over the 6 remaining "invalid" values and carry to the next digit. Conversely, you subtract 6 in subtractions.
For example, 27 + 36 in packed BCD: 0x27 + 0x36 = 0x5D, whose low nibble (0xD) is not a valid BCD digit, so you add 6 to get 0x63, the correct packed BCD encoding of 63.
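The packed-BCD correction step (what DAA does after an ADD) can be sketched in Python like this; `daa` is a hypothetical helper mimicking the instruction's pseudo-code, not real CPU code:

```python
def daa(al, cf=0, af=0):
    """Decimal Adjust after Addition: fix any nibble that left the 0-9 range."""
    if (al & 0x0F) > 9 or af:
        al += 0x06          # skip the 6 invalid values in the low nibble
    if (al >> 4) > 9 or cf:
        al += 0x60          # same correction for the high nibble
        cf = 1
    return al & 0xFF, cf

# 27 + 36 as packed BCD: 0x27 + 0x36 = 0x5D (low nibble 0xD is invalid)
raw = (0x27 + 0x36) & 0xFF
af = ((0x27 & 0x0F) + (0x36 & 0x0F)) > 0x0F   # auxiliary carry out of the low nibble
result, carry = daa(raw, cf=0, af=af)
print(hex(result), carry)   # 0x63 0 -> packed BCD for decimal 63
```

Note that the hardware tracks the auxiliary flag (AF) during the ADD itself; here it is recomputed by hand for the simulation.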
Doing unpacked addition is the same, except that you carry directly from the units digit to the tens digit, discarding the top nibble of each byte.
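The unpacked case (what AAA does after adding two ASCII digits) can be sketched the same way; again `aaa` is a hypothetical helper following the manual's pseudo-code:

```python
def aaa(al, ah, af=0):
    """ASCII Adjust after Addition: AL holds one unpacked digit, AH the next byte up."""
    if (al & 0x0F) > 9 or af:
        al += 6        # correct the low digit
        ah += 1        # carry directly into the tens byte
        cf = 1
    else:
        cf = 0
    al &= 0x0F         # discard the top nibble
    return al, ah & 0xFF, cf

# '7' + '6': adding the ASCII bytes gives 0x37 + 0x36 = 0x6D;
# AAA turns that into AL = 3 with a carry into AH, i.e. 13
al, ah, cf = aaa((0x37 + 0x36) & 0xFF, 0)
print(al, ah, cf)   # 3 1 1
```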
For more information you can read
AAM is just a conversion from binary to unpacked BCD. You do the multiplication normally in binary, then AAM divides the result by 10 and stores the quotient/remainder pair as two unpacked BCD digits in AH and AL.
For example, 7 × 9 = 63 = 0x3F; AAM divides that by 10 and leaves AH = 6, AL = 3, the two unpacked BCD digits of 63.
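In Python, AAM's effect amounts to a single `divmod` (the real instruction even encodes the base as an immediate operand, almost always 10); `aam` here is a hypothetical stand-in:

```python
def aam(al, base=10):
    """ASCII Adjust after Multiply: split a binary value into two unpacked BCD digits."""
    return al // base, al % base   # (AH, AL)

ah, al = aam(7 * 9)   # multiply in binary first, then adjust
print(ah, al)          # 6 3 -> unpacked BCD for 63
```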
AAD is the reverse: before the division, you call AAD to convert the dividend from unpacked BCD to binary, then do the division just like any other binary division.
For example, 87 / 5: with AH = 8, AL = 7 (unpacked BCD for 87), AAD sets AL = 8 × 10 + 7 = 87 and clears AH; dividing by 5 then gives quotient 17 and remainder 2.
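Sketched in Python, with `aad` as a hypothetical helper mirroring the manual's pseudo-code:

```python
def aad(ah, al, base=10):
    """ASCII Adjust before Division: combine two unpacked BCD digits into binary."""
    return 0, (ah * base + al) & 0xFF   # (AH, AL)

ah, al = aad(8, 7)       # unpacked BCD 8,7 -> binary 87 in AL
q, r = divmod(al, 5)     # the division itself is ordinary binary division
print(q, r)              # 17 2
```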
The reason for those instructions is that in the past memory was expensive, so you had to reduce memory usage as much as possible. Hence CISC CPUs were very common in that era: they provided lots of complex instructions to minimize the number of instructions needed for a task. Nowadays memory is much cheaper and modern architectures are almost all RISC-like, trading away some code density for lower CPU complexity.