Somewhere I read (rephrased):
If we compare a UTF-8 encoded file with a UTF-16 encoded file, the UTF-8 file can sometimes be 50% to 100% larger.
Am I right to say that the article is wrong, because text encoded in UTF-8 will never be more than 50% larger than the same text encoded in UTF-16?
Yes, you are correct: code points in the range U+0800..U+FFFF take 3 bytes in UTF-8 but only 2 in UTF-16, which gives a +50% size increase per code point.
The answer is that in UTF-8, ASCII is just 1 byte per character, but in general most Western languages, English included, use a few characters here and there that require 2 bytes, so actual percentages vary. Text written in the Greek and Cyrillic scripts requires at least 2 bytes per character when encoded in UTF-8.
Characters from common Eastern languages require 3 bytes in UTF-8 but only 2 in UTF-16. Note, however, that “uncommon” Eastern characters, the ones outside the Basic Multilingual Plane, require 4 bytes in UTF-8 and UTF-16 alike.
3 is indeed only 50% greater than 2. But that is for a single code point only. It does not apply to an entire file.
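If you want to see those per-code-point numbers for yourself, here is a minimal Python check (the sample characters are my own picks, and "utf-16-le" is used so that no BOM gets counted):

    # Bytes needed for a single code point in each encoding.
    # "utf-16-le" avoids the 2-byte BOM that plain "utf-16" would prepend.
    samples = ["A", "é", "Ж", "中", "😀"]   # ASCII, Latin-1, Cyrillic, CJK (BMP), SMP

    for ch in samples:
        u8 = len(ch.encode("utf-8"))
        u16 = len(ch.encode("utf-16-le"))
        print(f"U+{ord(ch):04X}  UTF-8: {u8} byte(s)  UTF-16: {u16} byte(s)")

The CJK character comes out at 3 bytes versus 2, which is exactly the +50% figure, while the character beyond the BMP takes 4 bytes in both encodings.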
The actual percentage is impossible to state with precision, because you do not know whether the balance of code points lies down in the 1- or 2-byte UTF-8 range or up in the 4-byte range. If there is white space in the Asian text, that is only one byte of UTF-8, and yet it is a costly 2 bytes of UTF-16.
These things do vary. You can only get precise numbers on precise text, not on general text. Code points in Asian text take 1, 2, 3, or 4 bytes of UTF-8, while in UTF-16 they variously require 2 or 4 bytes apiece.
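As a rough illustration of how much the mix matters (the strings below are made up for this example, not taken from any real page), compare a pure-ASCII, a pure-Japanese, and a mixed string:

    # The overall UTF-8/UTF-16 ratio depends on the mix of code points.
    texts = {
        "pure ASCII":    "Tokyo is the capital of Japan.",
        "pure Japanese": "東京都は日本の首都",
        "mixed":         "東京 (Tokyo) は日本の首都です。",
    }

    for label, text in texts.items():
        u8 = len(text.encode("utf-8"))
        u16 = len(text.encode("utf-16-le"))
        print(f"{label:13}  UTF-8: {u8:3} B  UTF-16: {u16:3} B  ratio: {u8 / u16:.2f}")

The pure-Japanese string shows the full +50%, but as soon as ASCII spaces, digits, and punctuation enter the picture, the ratio drops quickly.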
Case Study
Compare the various languages’ Wikipedia pages on Tokyo to see what I mean. Even in Eastern languages, there is still plenty of ASCII going on. This alone makes your figures fluctuate. Consider:
Each of those is the Tokyo Wikipedia page saved as text, not as HTML. All text is in NFC, not in NFD. The meaning of each of the columns is as follows:
I’ve grouped the languages into Western Latin, Western non-Latin, and Eastern. Observations:
Western languages that use the Latin script suffer terribly upon conversion from UTF-8 to UTF-16, with English suffering the most by expanding by 96% and Hungarian the least by expanding by 80%. All are huge.
Western languages that do not use the Latin script still suffer, but only by 15-20%.
Eastern languages DO NOT SUFFER in UTF-8 the way everyone claims that they do! Behold:
I hope that answers your question. There is simply no +50% to +100% size increase for Eastern languages when encoded in UTF-8 compared to when these same texts are encoded in UTF-16. Only when taking individual code points do you ever see numbers like that, which is a completely unreasonable metric.
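For what it's worth, a comparison like the one above can be reproduced in a few lines of Python; the file names below are just placeholders for the saved article texts:

    # Compare the size of each saved article in UTF-8 and UTF-16.
    # The files are assumed to be plain text in UTF-8; NFC is applied as above.
    import unicodedata
    from pathlib import Path

    for path in sorted(Path(".").glob("tokyo_*.txt")):   # hypothetical names, e.g. tokyo_en.txt
        text = unicodedata.normalize("NFC", path.read_text(encoding="utf-8"))
        u8 = len(text.encode("utf-8"))
        u16 = len(text.encode("utf-16-le"))              # payload only, no BOM
        growth = (u16 - u8) / u8 * 100                   # growth when going UTF-8 -> UTF-16
        print(f"{path.name:16}  UTF-8: {u8:8} B  UTF-16: {u16:8} B  {growth:+.0f}%")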
If you have one byte for the character and add on a second byte, I'd call that a 100% increase, not 50%. I think that's what the author means.
If I write X characters with N bytes/character to a file, I'll have N × X bytes in that file. So you can see how doubling or tripling the number of bytes per character has a linear effect on the size of the file. Though UTF-8 characters may use up to 4 bytes (and the original design allowed even more), 4-byte sequences are not needed for the Basic Multilingual Plane, which includes "almost all modern languages".
So I guess a 100% overhead, though theoretically possible, is not something you would hit with a typical modern language. You'd have to use something exotic from the Supplementary Multilingual Plane, which uses 4 bytes in UTF-8, to achieve this.
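That claim is easy to check; the code-point ranges below come from the Unicode standard, and surrogates are skipped because they cannot be encoded on their own:

    # Maximum UTF-8 length of any Basic Multilingual Plane code point,
    # versus one example from the Supplementary Multilingual Plane.
    max_bmp = max(
        len(chr(cp).encode("utf-8"))
        for cp in range(0x0000, 0x10000)
        if not 0xD800 <= cp <= 0xDFFF     # surrogates are not encodable code points
    )
    print("max UTF-8 bytes in the BMP:", max_bmp)        # 3
    print("UTF-8 bytes for U+1D11E (SMP):",
          len("\U0001D11E".encode("utf-8")))             # 4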
For HTML documents or mixed text, it may not be necessary to switch to UTF-16 to save space:
See the UTF-8 to UTF-16 comparison on Wikipedia.
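As a quick, made-up illustration of why: even when the visible text is CJK, the ASCII markup around it tends to keep the UTF-8 version smaller than UTF-16.

    # A made-up HTML fragment: mostly ASCII markup around a little CJK text.
    html = '<p class="title"><a href="https://example.org/tokyo">東京都</a></p>'

    u8 = len(html.encode("utf-8"))
    u16 = len(html.encode("utf-16-le"))
    print(f"UTF-8: {u8} B  UTF-16: {u16} B")   # the ASCII-heavy markup favors UTF-8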
Joel Spolsky wrote a great article about Unicode; I can really recommend it:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)