As stated in the title, my question is: why does TCP/IP use big-endian encoding when transmitting data, rather than the alternative little-endian scheme?
RFC 1700 stated it must be so (and defined network byte order as big-endian).
The reference they make is to Danny Cohen's note "On Holy Wars and a Plea for Peace". The abstract can be found in IEN-137 or on the IEEE site.
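In practice, this is why socket code converts multi-byte fields to network byte order before putting them on the wire. A minimal sketch in C (assuming a POSIX system providing `<arpa/inet.h>`) using the standard `htons`/`htonl` conversions:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons, htonl: host-to-network byte order conversions */

int main(void) {
    uint16_t port = 8080;         /* port number in host byte order */
    uint32_t addr = 0xC0A80001;   /* 192.168.0.1 in host byte order */

    /* Convert to network byte order (big-endian) before transmission. */
    uint16_t port_be = htons(port);
    uint32_t addr_be = htonl(addr);

    /* Inspect the byte layout: in network order the most significant byte
       comes first, regardless of the host's native endianness. */
    unsigned char *a = (unsigned char *)&addr_be;
    unsigned char *p = (unsigned char *)&port_be;
    printf("192.168.0.1 on the wire: %u.%u.%u.%u\n", a[0], a[1], a[2], a[3]);
    printf("port 8080 on the wire:   0x%02x 0x%02x\n", p[0], p[1]);
    return 0;
}
```

On a little-endian host the conversion actually swaps bytes; on a big-endian host it is a no-op, but calling it keeps the code portable.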
Summary:
It concludes that either the big-endian or the little-endian scheme could have been chosen. Neither is inherently better or worse, and either can be used in place of the other as long as the choice is consistent across the whole system/protocol.
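To illustrate that consistency point, here is a small sketch of my own (not from the paper): a 32-bit value is serialized with an explicitly chosen byte order, and as long as the encoder and decoder agree on that order, either convention round-trips correctly.

```c
#include <stdio.h>
#include <stdint.h>

/* Write a 32-bit value into buf using the chosen byte order. */
static void put_u32(uint8_t *buf, uint32_t v, int big_endian) {
    for (int i = 0; i < 4; i++) {
        int shift = big_endian ? 8 * (3 - i) : 8 * i;
        buf[i] = (uint8_t)(v >> shift);
    }
}

/* Read the value back using the same byte order convention. */
static uint32_t get_u32(const uint8_t *buf, int big_endian) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++) {
        int shift = big_endian ? 8 * (3 - i) : 8 * i;
        v |= (uint32_t)buf[i] << shift;
    }
    return v;
}

int main(void) {
    uint8_t buf[4];
    uint32_t value = 0x12345678;

    /* Either convention works, provided both ends pick the same one. */
    put_u32(buf, value, 1);
    printf("big-endian round trip:    0x%08x\n", get_u32(buf, 1));

    put_u32(buf, value, 0);
    printf("little-endian round trip: 0x%08x\n", get_u32(buf, 0));
    return 0;
}
```

The problems only start when one side writes in one order and the other side reads in the other, which is exactly why the protocol fixes a single network byte order.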