Java: why "\uFFFF" converts to [-17, -65, -65] in UTF-8

Posted 2019-05-29 07:41

Question:

Why does "\uFFFF" (which is apparently 2 bytes long) convert to [-17,-65,-65] in UTF-8 and not [-1,-1]?

System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8)));

Is this because UTF-8 uses only 6 bits in every byte for codepoints larger than 127?

Answer 1:

0xFFFF has the bit pattern 11111111 11111111. Divide the bits up according to UTF-8's three-byte rule and the pattern becomes 1111 111111 111111. Now add UTF-8's prefix bits and the pattern becomes *1110*1111 *10*111111 *10*111111, which is 0xEF 0xBF 0xBF, aka 239 191 191, aka -17 -65 -65 in two's complement format (which is what Java uses for signed values; Java does not have unsigned data types).
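
A minimal sketch of that bit surgery in Java, assuming we hand-roll the three-byte encoding ourselves (the class and variable names are just for illustration):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8Manual {
    public static void main(String[] args) {
        int cp = 0xFFFF; // code point U+FFFF, bit pattern 11111111 11111111

        // Three-byte UTF-8 template: 1110xxxx 10xxxxxx 10xxxxxx
        byte b1 = (byte) (0xE0 | (cp >> 12));         // 1110 + top 4 bits  -> 0xEF
        byte b2 = (byte) (0x80 | ((cp >> 6) & 0x3F)); // 10 + middle 6 bits -> 0xBF
        byte b3 = (byte) (0x80 | (cp & 0x3F));        // 10 + low 6 bits    -> 0xBF

        // Java's byte is signed, so 0xEF prints as 239 - 256 = -17 and 0xBF as 191 - 256 = -65
        System.out.println(Arrays.toString(new byte[] { b1, b2, b3 }));                 // [-17, -65, -65]
        System.out.println(Arrays.toString("\uFFFF".getBytes(StandardCharsets.UTF_8))); // same bytes
    }
}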



Answer 2:

UTF-8 uses a different number of bytes depending on the character being represented. Code points up to U+007F are encoded in a single byte that matches 7-bit ASCII, for backwards compatibility. Other characters (such as Chinese characters) can take up to 4 bytes.

As the Wikipedia article on UTF-8 states, the character you referenced falls in the three-byte range (U+0800 to U+FFFF).
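
A small sketch illustrating the variable lengths; the sample characters are arbitrary picks from each range (escape sequences are used for the non-ASCII literals):

import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    public static void main(String[] args) {
        // One sample per UTF-8 length class
        String[] samples = {
            "A",            // U+0041 -> 1 byte (7-bit ASCII range)
            "\u00E9",       // U+00E9 (e with acute) -> 2 bytes (U+0080..U+07FF)
            "\u4E2D",       // U+4E2D (a Chinese character) -> 3 bytes (U+0800..U+FFFF, same range as U+FFFF)
            "\uD83D\uDE00"  // U+1F600 (emoji, encoded as a surrogate pair) -> 4 bytes
        };
        for (String s : samples) {
            int byteCount = s.getBytes(StandardCharsets.UTF_8).length;
            System.out.printf("U+%04X -> %d byte(s)%n", s.codePointAt(0), byteCount);
        }
    }
}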