Let's say I have a string in Python:
>>> s = 'python'
>>> len(s)
6
Now I encode this string like this:
>>> b = s.encode('utf-8')
>>> b16 = s.encode('utf-16')
>>> b32 = s.encode('utf-32')
What I get from the above operations is a bytes array -- that is, b, b16 and b32 are just arrays of bytes (each byte being 8 bits long, of course).
But we encoded the string. So, what does this mean? How do we attach the notion of "encoding" to the raw array of bytes?
The answer lies in the fact that each of these arrays of bytes is generated in a particular way. Let's look at these arrays:
>>> [hex(x) for x in b]
['0x70', '0x79', '0x74', '0x68', '0x6f', '0x6e']
>>> len(b)
6
This array indicates that we get one byte for each character (because all the characters fall in the ASCII range, below 128). Hence, we can say that "encoding" the string to 'utf-8' collects each character's corresponding code point and puts it into the array. If the code point cannot fit in one byte then UTF-8 uses more bytes (two, three or four). Hence UTF-8 consumes the smallest number of bytes possible.
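For example (just trying a couple of non-ASCII characters to see this), a character outside the ASCII range takes more than one UTF-8 byte:
>>> [hex(x) for x in 'é'.encode('utf-8')]
['0xc3', '0xa9']
>>> [hex(x) for x in '€'.encode('utf-8')]
['0xe2', '0x82', '0xac']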
>>> [hex(x) for x in b16]
['0xff', '0xfe', '0x70', '0x0', '0x79', '0x0', '0x74', '0x0', '0x68', '0x0', '0x6f', '0x0', '0x6e', '0x0']
>>> len(b16)
14 # (2 + 6*2)
Here we can see that "encoding to utf-16" first puts a two-byte BOM (FF FE) into the bytes array, and after that, for each character it puts two bytes into the array. (In our case, the second byte is always zero.)
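(If the string had contained a character with a larger code point, say '€', that second byte would not be zero; for example:)
>>> [hex(x) for x in '€'.encode('utf-16')]
['0xff', '0xfe', '0xac', '0x20']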
>>> [hex(x) for x in b32]
['0xff', '0xfe', '0x0', '0x0', '0x70', '0x0', '0x0', '0x0', '0x79', '0x0', '0x0', '0x0', '0x74', '0x0', '0x0', '0x0', '0x68', '0x0', '0x0', '0x0', '0x6f', '0x0', '0x0', '0x0', '0x6e', '0x0', '0x0', '0x0']
>>> len(b32)
28 # (2 + 6*4 + 2)
In the case of "encoding in utf-32", we first put the BOM, then for each character we put four bytes, and lastly we put two zero bytes into the array.
Hence, we can say that the "encoding process" collects 1, 2 or 4 bytes (depending on the encoding name) for each character in the string, and prepends and appends more bytes to them to create the final resulting array of bytes.
Now, my questions:
- Is my understanding of the encoding process correct or am I missing something?
- We can see that the memory representation of the variables b, b16 and b32 is actually a list of bytes. What is the memory representation of the string? Exactly what is stored in memory for a string?
- We know that when we do an encode(), each character's corresponding code point is collected (the code point corresponding to the encoding name) and put into an array of bytes. What exactly happens when we do a decode()?
- We can see that in utf-16 and utf-32, a BOM is prepended, but why are two zero bytes appended in the utf-32 encoding?
First of all, UTF-32 is a 4-byte encoding, so its BOM is a four-byte sequence too:
>>> import codecs
>>> codecs.BOM_UTF32
b'\xff\xfe\x00\x00'
And because different computer architectures order bytes differently (known as endianness), there are two variants of the BOM, little-endian and big-endian:
>>> codecs.BOM_UTF32_LE
b'\xff\xfe\x00\x00'
>>> codecs.BOM_UTF32_BE
b'\x00\x00\xfe\xff'
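If you pick the byte order yourself by using the suffixed codec names, Python writes no BOM at all; a quick sketch of my own:
>>> 'python'.encode('utf-32-le')
b'p\x00\x00\x00y\x00\x00\x00t\x00\x00\x00h\x00\x00\x00o\x00\x00\x00n\x00\x00\x00'
>>> len('python'.encode('utf-32-le'))
24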
The purpose of the BOM is to communicate that order to the decoder; read the BOM and you know if it is big or little endian. So, those last two null bytes in your UTF-32 string are part of the last encoded character.
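You can check that against your b32 (assuming the little-endian output you show): the trailing four bytes are simply the final 'n':
>>> b32[-4:]
b'n\x00\x00\x00'
>>> b32[-4:].decode('utf-32-le')
'n'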
The UTF-16 BOM is thus similar, in that there are two variants:
>>> codecs.BOM_UTF16
b'\xff\xfe'
>>> codecs.BOM_UTF16_LE
b'\xff\xfe'
>>> codecs.BOM_UTF16_BE
b'\xfe\xff'
It depends on your computer architecture which one is used by default.
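For instance, on a little-endian machine (which is what produced the output in your question), the generic constant is just an alias for the LE variant:
>>> import sys
>>> sys.byteorder
'little'
>>> codecs.BOM_UTF16 == codecs.BOM_UTF16_LE
True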
UTF-8 doesn't need a BOM at all; UTF-8 uses 1 or more bytes per character (adding bytes as needed to encode more complex values), but the order of those bytes is defined in the standard. Microsoft deemed it necessary to introduce a UTF-8 BOM anyway (so its Notepad application could detect UTF-8), but since the order of the BOM never varies its use is discouraged.
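Should you ever need to read or write that Microsoft-style signature anyway, Python exposes it as codecs.BOM_UTF8 together with the 'utf-8-sig' codec (a small illustration of my own):
>>> codecs.BOM_UTF8
b'\xef\xbb\xbf'
>>> 'python'.encode('utf-8-sig')
b'\xef\xbb\xbfpython'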
As for what is stored by Python for unicode strings: that actually changed in Python 3.3. Before 3.3, internally at the C level, Python stored either UTF-16 or UTF-32 code units, depending on whether Python was compiled with wide character support (see How to find out if Python is compiled with UCS-2 or UCS-4?; UCS-2 is essentially UTF-16, and UCS-4 is UTF-32). So each character takes either 2 or 4 bytes of memory.
As of Python 3.3, the internal representation uses the minimal number of bytes required to represent all characters in the string. For plain ASCII and Latin-1-encodable text, 1 byte per character is used; for the rest of the Basic Multilingual Plane (BMP), 2 bytes per character are used; and for text containing characters beyond that, 4 bytes per character are used. Python switches between these formats as needed, so storage has become a lot more efficient for most cases. For more detail see What's New in Python 3.3.
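You can observe this with sys.getsizeof(); the figures below are from one 64-bit CPython build and include fixed per-object overhead, so treat them purely as an illustration of the 1/2/4-byte growth:
>>> import sys
>>> sys.getsizeof('a' * 1000)           # ASCII text: about 1 byte per character
1049
>>> sys.getsizeof('\u20ac' * 1000)      # BMP text ('€'): about 2 bytes per character
2074
>>> sys.getsizeof('\U0001f600' * 1000)  # beyond the BMP: about 4 bytes per character
4076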
I can strongly recommend you read up on Unicode and Python with:
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
- The Python Unicode HOWTO
I'm going to assume you're using Python 3 (in Python 2 a "string" is really a byte array, which causes Unicode pain).
A (Unicode) string is conceptually a sequence of Unicode code points, which are abstract entities corresponding to 'characters'. You can see the actual C implementation in the Python repository. Since computers have no inherent concept of a code point, an 'encoding' specifies a partial bijection between code points and byte sequences.
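For example (my own illustration), indexing and len() operate on code points, no matter how many bytes a given encoding would need for them:
>>> t = 'pythön'
>>> len(t)
6
>>> [hex(ord(c)) for c in t]
['0x70', '0x79', '0x74', '0x68', '0xf6', '0x6e']
>>> len(t.encode('utf-8'))
7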
The encodings are set up so there is no ambiguity in the variable-width encodings -- if you see a byte, you always know whether it completes the current code point or whether you need to read another one. Technically this is called being prefix-free. So when you do a .decode(), Python walks the byte array, building up characters one at a time and outputting them.
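A small sketch of that: the leading byte of a UTF-8 sequence tells the decoder how many continuation bytes to expect, and it complains (with a UnicodeDecodeError) if they are missing:
>>> b'\xe2\x82\xac'.decode('utf-8')   # a complete three-byte sequence for U+20AC
'€'
>>> b'\xe2\x82'.decode('utf-8')       # truncated mid-sequence: raises UnicodeDecodeError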
The two zero bytes are part of the UTF-32 BOM, which is four bytes long, not two: big-endian UTF-32 would have 0x0 0x0 0xfe 0xff. Regrouped, your b32 is a four-byte BOM followed by four bytes per character, so nothing extra is appended at the end.
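That is also exactly what the decoder relies on: the generic 'utf-32' codec reads the BOM, picks the byte order and strips it, while the explicitly-ordered codecs expect no BOM at all (assuming the little-endian b32 from the question):
>>> b32.decode('utf-32')          # BOM read, byte order inferred, BOM stripped
'python'
>>> b32[4:].decode('utf-32-le')   # skip the four-byte BOM and state the order yourself
'python'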