Why is network byte order defined to be big-endian?

Asked 2019-01-07 06:58

As the heading says, my question is: why does TCP/IP use big-endian encoding when transmitting data, rather than the alternative little-endian scheme?

1 Answer
爱情/是我丢掉的垃圾 · answered 2019-01-07 07:07

RFC 1700 stated it must be so (and defined network byte order as big-endian):

The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.

The reference they make is to

Cohen, D., "On Holy Wars and a Plea for Peace," Computer.

The full paper is available as IEN-137, and the abstract can be found on the IEEE page for the Computer article.
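
In practice, this convention is what the standard htonl()/ntohl() helpers handle: a program converts values from the host's native order to network byte order before sending, and back again after receiving. Here is a minimal C sketch of that; the value 0x0A0B0C0D is just an arbitrary example for inspecting the octets.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <arpa/inet.h>  /* htonl(), ntohl() */

int main(void)
{
    uint32_t host = 0x0A0B0C0D;   /* arbitrary example value */
    uint32_t net  = htonl(host);  /* host order -> network (big-endian) order */

    /* Inspect the bytes as they would appear on the wire:
     * network byte order puts the most significant octet first. */
    const unsigned char *p = (const unsigned char *)&net;
    printf("on the wire: %02X %02X %02X %02X\n",
           (unsigned)p[0], (unsigned)p[1], (unsigned)p[2], (unsigned)p[3]);

    /* The receiver converts back before interpreting the value. */
    printf("received value: 0x%08" PRIX32 "\n", ntohl(net));
    return 0;
}
```

On any host, big- or little-endian, this prints the octets as 0A 0B 0C 0D, because htonl() is a no-op on big-endian machines and a byte swap on little-endian ones.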


Summary:

Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.

The paper concludes that either scheme, big-endian or little-endian, would have worked; neither is inherently better or worse, and either can be used in place of the other as long as it is applied consistently across the whole system/protocol.
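
To illustrate that last point, here is a hedged C sketch (not from the paper) that serializes a 32-bit value byte by byte in an explicitly chosen order; the big_endian flag and the example value are just for demonstration. As long as sender and receiver pick the same branch, the value survives regardless of either host's native endianness.

```c
#include <stdio.h>
#include <stdint.h>

/* Write a 32-bit value into buf using an explicitly chosen octet order.
 * The shifts make the result independent of the host's native endianness. */
static void put_u32(unsigned char *buf, uint32_t v, int big_endian)
{
    if (big_endian) {            /* network byte order: most significant octet first */
        buf[0] = (unsigned char)(v >> 24);
        buf[1] = (unsigned char)(v >> 16);
        buf[2] = (unsigned char)(v >> 8);
        buf[3] = (unsigned char)(v);
    } else {                     /* little-endian: equally workable if both sides agree */
        buf[0] = (unsigned char)(v);
        buf[1] = (unsigned char)(v >> 8);
        buf[2] = (unsigned char)(v >> 16);
        buf[3] = (unsigned char)(v >> 24);
    }
}

int main(void)
{
    unsigned char wire[4];
    put_u32(wire, 0x12345678, 1);  /* 1 = big-endian, matching the Internet convention */
    printf("%02X %02X %02X %02X\n",
           (unsigned)wire[0], (unsigned)wire[1], (unsigned)wire[2], (unsigned)wire[3]);
    return 0;
}
```

Either branch defines a perfectly usable wire format; the Internet protocols simply standardized on the first one.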
