A real question I've been asking myself lately: what design choices brought about x86 being a little-endian architecture instead of a big-endian one?
This is quite archeological, but it most likely was not Intel's choice. Intel designed processors with backward compatibility as a primary concern, making it easy to mechanically translate assembly code from the old architecture to the new one. That turns the clock back from the 8086 to the 8080 and, ultimately, to the first microprocessor where endianness mattered: the Intel 8008.
That processor got its start when CTC (later renamed Datapoint) came to Intel to ask for help with their data terminal product. Originally designed by Victor Poor and Harry Pyle, the terminal had a logical processor design implemented in MSI (many chips). They asked Intel to provide them with a storage solution using 512-bit shift registers.
That was not Intel's favorite kind of product; they took on these custom design jobs to survive the ramp-up time for their 1024-bit RAM chip. Ted Hoff, Stan Mazor and Larry Potter looked at the design and proposed an LSI processor with RAM instead. That eventually became the 8008. Poor and Pyle are credited with designing the instruction set.
That they chose little-endian is made credible by this interview with Poor. It skips through the subject rather quickly, and the interview is rather scatter-shot, but the relevant part is on page 24:
The "had no choice" remark is odd, that appears to only apply to the bit-serial design of the MSI processor. Also the reason they shopped for shift registers instead of RAM. It comes up again at page 34:
Ultimately CTC did not use the 8008; it was finished a year too late, and they had already implemented the MSI processor by then. The microprocessor design was certainly CTC's intellectual property, but they traded the rights to it to Intel in exchange for the design cost. Bit of a mistake :) Lawsuits about patent rights followed later.
So, as told, Intel ended up with little-endian because of the way serial ports worked.
Largely, for the same reason you start at the least significant digit (the right end) when you add—because carries propagate toward the more significant digits. Putting the least significant byte first allows the processor to get started on the add after having read only the first byte of an offset.
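To make that concrete, here is a minimal C sketch (my illustration, not anything from the original answer) of a byte-at-a-time add over a little-endian layout: because the least significant byte sits at the lowest address, the loop can start producing result bytes and propagating the carry as soon as the first byte of each operand has been read.

```c
#include <stdint.h>
#include <stdio.h>

/* Add two multi-byte integers stored little-endian (least significant
   byte at the lowest address), walking upward from address 0.
   The carry is known after reading only the first byte of each operand. */
static void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, int len)
{
    unsigned carry = 0;
    for (int i = 0; i < len; i++) {            /* i = 0 is the least significant byte */
        unsigned t = (unsigned)a[i] + b[i] + carry;
        sum[i] = (uint8_t)(t & 0xFF);
        carry = t >> 8;
    }
}

int main(void)
{
    /* 0x01FF and 0x0001 stored little-endian */
    uint8_t a[2] = { 0xFF, 0x01 };
    uint8_t b[2] = { 0x01, 0x00 };
    uint8_t s[2];

    add_le(a, b, s, 2);
    printf("0x%02X%02X\n", s[1], s[0]);        /* prints 0x0200 */
    return 0;
}
```

A big-endian layout would force the adder either to read the whole operand before starting or to work backwards from the highest address, which is exactly the head start the answer is describing.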
After you've done enough assembly coding and debugging you may come to the conclusion that it's not little endian that's the strange choice—it's odd that we humans use big endian.
It reflects the difference between considering memory to always be organized a byte at a time versus considering it to be organized a unit at a time, where the size of the unit can vary (byte, word, dword, etc.).
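A small C sketch (again mine, purely illustrative) of those two views: the same 32-bit unit, reinterpreted a byte at a time, reveals the order the machine actually stores it in.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x0A0B0C0D;             /* one 32-bit "unit" */
    uint8_t bytes[sizeof value];

    memcpy(bytes, &value, sizeof value);     /* the same memory, a byte at a time */

    for (size_t i = 0; i < sizeof value; i++)
        printf("address +%zu: 0x%02X\n", i, bytes[i]);

    /* On a little-endian machine such as x86 this prints
       0x0D, 0x0C, 0x0B, 0x0A: the least significant byte
       sits at the lowest address. */
    return 0;
}
```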