I am playing with the Unix hexdump utility. My input file is UTF-8 encoded and contains a single character, ñ, which is C3 B1 in hexadecimal UTF-8.
hexdump test.txt
0000000 b1c3
0000002
Huh? This shows B1 C3 - the inverse of what I expected! Can someone explain?
To get the expected output, I do:
hexdump -C test.txt
00000000 c3 b1 |..|
00000002
I thought I understood encoding systems...
I found two ways to avoid that:
I think it is stupid that hexdump decided files are usually sequences of 16-bit little-endian words. Very confusing IMO.
This is because hexdump defaults to displaying 16-bit words and you are running on a little-endian architecture. The byte sequence c3 b1 is therefore read low byte first and displayed as the 16-bit word b1c3. The -C option forces hexdump to work with individual bytes instead of words.
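The word-versus-byte reading can be sketched in a few lines of Python (`'<H'` in the `struct` module means one little-endian unsigned 16-bit word; the byte values are the ones from the question):

```python
import struct

data = b'\xc3\xb1'  # UTF-8 bytes of 'ñ', as in the question's file

# Read the two bytes as one little-endian 16-bit word, which is what
# hexdump's default output does on a little-endian machine:
(word,) = struct.unpack('<H', data)
print(format(word, '04x'))  # b1c3 -- matches `hexdump test.txt`

# Read the bytes one at a time, which is what `hexdump -C` shows:
print(data.hex(' '))        # c3 b1
```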