Why are 8 and 256 such important numbers in computing?

Posted 2020-05-25 06:22

I don't know much about RAM and HDD architecture, or how electronics deals with chunks of memory, but this has always triggered my curiosity: why did we choose to stop at 8 bits for the smallest element of a computer value?

My question may look very dumb because the answer seems obvious, but I'm not so sure...

Is it because 8 = 2^3 fits perfectly when addressing memory? Is the electronics specially designed to store chunks of 8 bits? If so, why not use wider words? Is it because 8 divides 32, 64 and 128, so that processor words can hold several of those smaller words? Is it just convenient to have 256 values in such a tiny space?

What do you think?

My question may be a little too metaphysical, but I want to make sure it's just a historical reason and not a technological or mathematical one.

As an aside, I was also thinking about the ASCII standard, in which most of the first characters are useless in the age of UTF-8, and I'm also trying to think about some tinier and faster character encoding...

10 answers
ら.Afraid
#2 · 2020-05-25 06:56

The important numbers here are binary 0 and 1. All your other questions are related to this.

Claude Shannon and George Boole did the fundamental work on what we now call information theory and Boolean arithmetic. In short, this is the basis of how a digital switch, with only the ability to represent 0 (OFF) and 1 (ON), can represent more complex information, such as numbers, logic and a JPEG photo. Binary is the basis of computers as we know them currently, but computers based on other number bases, and analog computers, are entirely possible.

In human decimal arithmetic, the powers of ten have significance: 10, 100, 1000 and 10,000 each seem important and useful. Once you have a computer based on binary, the powers of 2 likewise become important. 2^8 = 256 is enough for an alphabet, punctuation and control characters. (More precisely, 2^7 is enough for an alphabet, punctuation and control characters, and 2^8 has room for those 128 ASCII characters plus a check bit.)
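
To make the check-bit idea concrete, here is a minimal Python sketch (the helper names are my own, not from any standard library) that stores an even-parity check bit in the eighth bit of a 7-bit ASCII code:

```python
def add_even_parity(code7: int) -> int:
    """Pack a 7-bit ASCII code and an even-parity bit into one byte."""
    assert 0 <= code7 < 128                 # must fit in 7 bits
    parity = bin(code7).count("1") % 2      # 1 if the count of 1-bits is odd
    return code7 | (parity << 7)            # bit 7 makes the total count even

def parity_ok(byte: int) -> bool:
    """True if the byte has an even number of 1-bits (no single-bit error)."""
    return bin(byte).count("1") % 2 == 0

encoded = add_even_parity(ord("C"))         # 'C' = 0b1000011, three 1-bits
print(hex(encoded), parity_ok(encoded))     # 0xc3 True
```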

趁早两清
#3 · 2020-05-25 06:57

ASCII encoding required 7 bits, and EBCDIC required 8 bits. Extended ASCII codes (such as ANSI character sets) used the 8th bit to expand the character set with graphics, accented characters and other symbols. Some architectures made use of proprietary encodings; a good example of this is the DEC PDP-10, which had a 36-bit machine word. Some operating systems on this architecture used packed encodings that stored 6 characters in a machine word for various purposes such as file names.
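
As a rough Python illustration of that kind of packed encoding (the function name is hypothetical; real PDP-10 software did this in assembly), DEC's SIXBIT maps ASCII 32..95 to the codes 0..63, so six characters fit exactly into one 36-bit word:

```python
def pack_sixbit(text: str) -> int:
    """Pack six SIXBIT characters (ASCII 32..95) into a 36-bit word."""
    assert len(text) == 6
    word = 0
    for ch in text.upper():
        code = ord(ch) - 32              # SIXBIT: space..underscore -> 0..63
        assert 0 <= code < 64            # only that range is representable
        word = (word << 6) | code        # shift in six bits per character
    return word                          # 6 chars * 6 bits = 36 bits

print(f"{pack_sixbit('FILE01'):012o}")   # PDP-10 words were shown in octal
```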

By the 1970s, the success of the Data General Nova and DEC PDP-11, which were 16-bit architectures, and of IBM mainframes with 32-bit machine words, was pushing the industry towards an 8-bit character by default. The 8-bit microprocessors of the late 1970s were developed in this environment and this became a de facto standard, particularly as off-the-shelf peripheral chips such as UARTs, ROM chips and FDC chips were being built as 8-bit devices.

By the latter part of the 1970s the industry had settled on 8 bits as a de facto standard, and architectures such as the PDP-8, with its 12-bit machine word, became somewhat marginalised (although the PDP-8 ISA and derivatives still appear in embedded system products). 16- and 32-bit microprocessor designs such as the Intel 80x86 and MC68K families followed.

闹够了就滚
#4 · 2020-05-25 06:58

We normally count in base 10, where a single digit can have one of ten different values. Computer technology is based on (microscopic) switches, which can be either on or off. If one of these represents a digit, that digit can be either 1 or 0. This is base 2.

It follows that computers work with numbers built up as a series of two-value digits:

  • 1 digit, 2 values
  • 2 digits, 4 values
  • 3 digits, 8 values, etc.

When processors are designed, they have to pick a size that the processor will be optimized to work with. To the CPU, this is considered a "word". Early CPUs were based on word sizes of four bits, and soon after, 8 bits (1 byte). Today, CPUs are mostly designed to operate on 32-bit and 64-bit words. But really, the two-state "switch" is why all computer numbers tend to be powers of 2.
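
A few lines of Python make the doubling pattern explicit; each extra binary digit doubles the number of representable values:

```python
# Each additional bit doubles the number of distinct values: 2**n for n bits.
for bits in (1, 2, 3, 4, 8, 16, 32, 64):
    print(f"{bits:2d} bits -> {2 ** bits} values")
# e.g. 8 bits -> 256 values, 16 bits -> 65536 values
```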

Root(大扎)
#5 · 2020-05-25 06:59

I believe the main reason has to do with the original design of the IBM PC. The Intel 8080 CPU was an early precursor to the 8086, which would later be used in the IBM PC. It had 8-bit registers. Thus, a whole ecosystem of applications was developed around the 8-bit metaphor. In order to retain backward compatibility, Intel designed all subsequent architectures to retain 8-bit registers. Thus, the 8086 and all x86 CPUs after that kept their 8-bit registers for backwards compatibility, even though they added new 16-bit and 32-bit registers over the years.

The other reason I can think of is that 8 bits is a perfect fit for a basic Latin character set. You cannot fit it into 4 bits, but you can in 8. Thus, you get the whole 256-value extended ASCII charset. It is also the smallest power of 2 for which you have enough bits to fit a character set. Of course, these days many character sets are wider (e.g. Unicode, whose UTF-16 encoding uses 16-bit code units).
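
A quick Python sanity check of the "fits in one byte" point, using only standard str/int methods:

```python
code = ord("A")                  # 65
print(code.bit_length())         # 7 -> too wide for 4 bits, fits in 8
print(len("A".encode("ascii")))  # 1 -> exactly one byte
print(len("€".encode("utf-8")))  # 3 -> outside the 256 values, needs more bytes
```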

甜甜的少女心
#6 · 2020-05-25 07:06

Since computers work with binary numbers, all powers of two are important.

8-bit numbers can represent 256 (2^8) distinct values, enough for all the characters of English and quite a few extra ones. That made the numbers 8 and 256 quite important.
The fact that many CPUs processed (and still process) data in 8-bit units helped a lot.

Other important powers of two you might have heard about are 1024 (2^10 = 1k) and 65536 (2^16 = 64k).
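
In Python these constants are one bit shift away, which hints at why they recur everywhere:

```python
print(1 << 10)   # 1024  -> the "k" used in memory sizes (1 KiB)
print(1 << 16)   # 65536 -> the number of values in a 16-bit word
```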

姐就是有狂的资本
#7 · 2020-05-25 07:07

Not all bytes are 8 bits. Some are 7, some 9, some other values entirely. The reason 8 is important is that, in most modern computers, it is the standard number of bits in a byte. As Nikola mentioned, a bit is the actual smallest unit (a single binary value, true or false).

As Will mentioned, this article http://en.wikipedia.org/wiki/Byte describes the byte and its variable-size history in more detail.

The general reasoning behind why 8, 256, and other numbers are important is that they are powers of 2, and computers run using a base-2 (binary) system of switches.
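
One way to see the modern 8-bit convention directly from Python is that its bytes type only accepts the 256 values 0..255:

```python
b = bytes([0, 127, 255])     # fine: each value fits in 8 bits
try:
    bytes([256])             # 256 would need a ninth bit
except ValueError as err:
    print(err)               # "bytes must be in range(0, 256)"
```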
