I'm looking for a table that maps a given character encoding to the (maximum, in the case of variable length encodings) bytes per character. For fixed-width encodings this is easy enough, though I don't know, in the case of some of the more esoteric encodings, what that width is. For UTF-8 and the like it would also be nice to determine the maximum bytes per character depending on the highest codepoint in a string, but this is less pressing.
For some background (which you can ignore if you're not familiar with Numpy): I'm working on a prototype for an ndarray subclass that can, with some transparency, represent arrays of encoded bytes (including plain ASCII) as arrays of unicode strings without converting the entire array to UCS4 at once. The idea is that the underlying dtype is still an S&lt;N&gt; dtype, where &lt;N&gt; is the (maximum) number of bytes per string in the array, but item lookups and string methods decode the strings on the fly using the correct encoding. A very rough prototype can be seen here, though eventually parts of this will likely be implemented in C. The most important thing for my use case is efficient use of memory; repeated decoding and re-encoding of strings is acceptable overhead.
Anyway, because the underlying dtype is measured in bytes, it does not tell users anything useful about the lengths of strings that can be written to a given encoded text array. So having such a map for arbitrary encodings would be very useful for improving the user interface, if nothing else.
Note: I found an answer to essentially the same question, specific to Java, here: How can I programatically determine the maximum size in bytes of a character in a specific charset? However, I haven't been able to find any equivalent in Python, nor a useful database of information from which I might implement my own.
Although I accepted @dan04's answer, I am also adding my own answer here. It was inspired by @dan04's, but goes a little further: it gives the widths of encodings for all characters supported by a given encoding, along with the character ranges that encode to each width (where a width of 0 means the character is unsupported):

```python
from collections import defaultdict
```
For example:
For ranges of two or fewer characters, it records the codepoints themselves rather than the ranges, which is more useful for awkward encodings like shift_jis.
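A simplified sketch of the range-based approach might look like the following (the function name and the optional maxcp parameter are my own; unlike the full answer, this version always records ranges rather than collapsing short runs to individual codepoints):

```python
import sys
from collections import defaultdict

def encoding_ranges(encoding, maxcp=sys.maxunicode):
    """Map encoded width -> list of (start, end) codepoint ranges.

    A width of 0 means the encoding cannot represent those characters.
    """
    ranges = defaultdict(list)
    start = prev_width = None
    for cp in range(maxcp + 1):
        try:
            width = len(chr(cp).encode(encoding))
        except UnicodeEncodeError:
            width = 0  # unsupported codepoint
        if width != prev_width:
            # Close off the previous run of same-width codepoints.
            if prev_width is not None:
                ranges[prev_width].append((start, cp - 1))
            start, prev_width = cp, width
    ranges[prev_width].append((start, maxcp))
    return dict(ranges)
```

For example, encoding_ranges('ascii') yields width 1 for the range (0, 127) and width 0 for everything above it; a multi-byte codec like utf-8 produces one range per width from 1 through 4.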
The brute-force approach: iterate over all possible Unicode characters and track the greatest number of bytes used.
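A sketch of that brute force, under the assumption that errors='ignore' is used so unencodable characters contribute a length of 0 (the function name is illustrative):

```python
import sys

def max_bytes_per_char(encoding):
    """Return the maximum number of bytes one character can occupy."""
    # Encode every Unicode codepoint and take the longest result;
    # errors='ignore' maps unencodable characters to b'' (length 0).
    return max(len(chr(cp).encode(encoding, errors='ignore'))
               for cp in range(sys.maxunicode + 1))
```

This iterates over all 1,114,112 codepoints, so it is slow, but it needs no per-encoding knowledge. Note that BOM-prefixed codecs such as 'utf-16' will overcount, since each single-character encode includes the BOM.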