I read in a comment to an answer by @Esailija to a question of mine that
ISO-8859-1 is the only encoding to fully retain the original binary data, with exact byte<->codepoint matches
I also read in this answer by @AaronDigulla that:
In Java, ISO-8859-1 (a.k.a. ISO-Latin1) is a 1:1 mapping
I need some insight on this. This will fail (as illustrated here):
// \u00F6 is ö
System.out.println(Arrays.toString("\u00F6".getBytes("utf-8")));
// prints [-61, -74]
System.out.println(Arrays.toString("\u00F6".getBytes("ISO-8859-1")));
// prints [-10]
Questions
- I admit I do not quite get it - why does the code above not get back the original byte?
- Most importantly, where is this byte-preserving behavior of ISO-8859-1 specified? Links to the source, or the JLS, would be nice. Is it the only encoding with this property?
- Is it related to ISO-8859-1 being the default default?
See also this question for nice counterexamples from other charsets.
For an encoding to retain the original binary data, it needs to map every unique byte sequence to a unique character sequence.
This rules out all multi-byte encodings (UTF-8/16/32, Shift-JIS, Big5, etc.) because not every byte sequence is valid in them; invalid sequences decode to some replacement character (usually ? or �), and there is no way to tell from the string what caused the replacement character after it has been decoded.
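For instance, decoding a byte that is invalid in UTF-8 shows the information loss directly (a minimal sketch; the byte 0xF6 is taken from the question's example):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ReplacementCharDemo {
    public static void main(String[] args) {
        // 0xF6 is 'ö' in ISO-8859-1, but as a lone byte it is invalid UTF-8
        byte[] bytes = { (byte) 0xF6 };
        String decoded = new String(bytes, StandardCharsets.UTF_8);

        // The invalid byte was replaced with U+FFFD, the replacement character
        System.out.println(Integer.toHexString(decoded.charAt(0))); // fffd

        // Re-encoding yields the UTF-8 form of U+FFFD, not the original byte
        System.out.println(Arrays.toString(decoded.getBytes(StandardCharsets.UTF_8)));
        // prints [-17, -65, -67]
    }
}
```

Once U+FFFD is in the string, the original 0xF6 is gone for good.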
Another option is to ignore the invalid bytes, but then infinitely many different byte sequences decode to the same string. You could replace invalid bytes with their hex encoding in the string, like "0xFF", but there is no way to tell whether the original bytes legitimately decoded to "0xFF", so that doesn't work either.

This leaves 8-bit encodings, where every sequence is just a single byte. The single byte is valid if there is a mapping for it. But many 8-bit encodings have holes and don't encode 256 different characters.
To retain original binary data, you need an 8-bit encoding that encodes 256 different characters. ISO-8859-1 is not unique in this. What it is unique in is that the decoded code point's value is also the value of the byte it was decoded from.
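The contrast can be sketched against another full 8-bit encoding. The example below assumes IBM437 (DOS code page 437) is available in the JRE - it ships with standard OpenJDK/Oracle builds, but availability is not guaranteed everywhere:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CodePointIdentity {
    public static void main(String[] args) {
        byte b = (byte) 0xF6; // unsigned value 246

        // ISO-8859-1: the decoded code point equals the unsigned byte value
        String latin1 = new String(new byte[]{ b }, StandardCharsets.ISO_8859_1);
        System.out.println((int) latin1.charAt(0)); // 246 (U+00F6, 'ö')

        // IBM437 also maps all 256 byte values, so it round-trips binary data
        // too - but the decoded code point differs from the byte value
        String cp437 = new String(new byte[]{ b }, Charset.forName("IBM437"));
        System.out.println((int) cp437.charAt(0)); // 247 (U+00F7, '÷')
    }
}
```

Both encodings round-trip every byte, but only ISO-8859-1 gives you the byte value back as the code point.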
So if you have the decoded string and the encoded bytes, then it is always

str.charAt(i) == (bytes[i] & 0xFF)

for arbitrary binary data, where str is new String(bytes, "ISO-8859-1") and bytes is a byte[].

It also has nothing to do with Java. I have no idea what his comment means; these are properties of character encodings, not programming languages.
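This property can be checked on arbitrary data; a small self-contained sketch (the fixed seed just keeps the run reproducible):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Random;

public class Latin1RoundTrip {
    public static void main(String[] args) {
        // Arbitrary binary data
        byte[] bytes = new byte[1024];
        new Random(42).nextBytes(bytes);

        String str = new String(bytes, StandardCharsets.ISO_8859_1);
        byte[] roundTripped = str.getBytes(StandardCharsets.ISO_8859_1);

        // The encode/decode round trip preserves every byte
        System.out.println(Arrays.equals(bytes, roundTripped)); // true

        // Each char's code point is the unsigned value of the original byte
        boolean identity = true;
        for (int i = 0; i < bytes.length; i++) {
            identity &= str.charAt(i) == (bytes[i] & 0xFF);
        }
        System.out.println(identity); // true
    }
}
```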
"\u00F6" is not a byte array. It's a string containing a single char. Execute the following test instead:

To check that this is true for any byte, just improve the code and loop through all the bytes:
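The original code blocks did not survive extraction; a sketch covering both steps - the single-byte test (-10 is 0xF6 unsigned, 'ö' in ISO-8859-1) and the loop over all 256 byte values - might look like:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class IsoRoundTripTest {
    public static void main(String[] args) {
        // Single byte: -10 is 0xF6 unsigned, which ISO-8859-1 decodes to 'ö'
        byte[] bytes = { -10 };
        String str = new String(bytes, StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.toString(str.getBytes(StandardCharsets.ISO_8859_1)));
        // prints [-10]

        // The same round trip holds for every possible byte value
        for (int i = 0; i < 256; i++) {
            byte[] original = { (byte) i };
            byte[] back = new String(original, StandardCharsets.ISO_8859_1)
                    .getBytes(StandardCharsets.ISO_8859_1);
            if (!Arrays.equals(original, back)) {
                throw new AssertionError("Round trip failed for byte " + i);
            }
        }
        System.out.println("All 256 byte values survive the round trip");
    }
}
```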
ISO-8859-1 is a standard encoding. So the language used (Java, C# or whatever) doesn't matter.
Here's a Wikipedia reference that claims that every byte is covered:
(emphasis mine)