Encoding issues are the one topic that has bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 default is in the game. (I usually work on Linux, which defaults to UTF-8; my colleagues mostly work on German Windows, which defaults to ISO-8859-1 or a similar Windows codepage.)
I believe that UTF-8 is a suitable standard for developing an i18n-able application. However, in my experience encoding bugs are usually discovered late (even though I'm located in Germany, and German has some special characters that, compared with ISO-8859-1, provide some detectable differences).
I believe that developers working with a completely non-ASCII character set (or who know a language that uses one) get a head start in providing test data. But there must be a way to ease this for the rest of us as well.
What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically?
Adding one possible answer upfront:
I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpısdn" *) and I'm planning to use it to provide easily verifiable UTF-8 character strings (as most of the characters used there sit at unusual binary encoding positions), but there surely must be more systematic tests, patterns, or techniques for ensuring UTF-8 compatibility/usage.
Note: Even though there's an accepted answer, I'd like to know of more techniques and patterns if there are any, so please add more answers if you have more ideas. It has not been easy choosing only one answer for acceptance; I've chosen the regexp answer because it tackles the problem from the least expected angle, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted.
Thank you for your input.
*) that's "upside down" written "upside down", for those who cannot see those characters due to font problems
In PHP we use the mb_ functions, such as mb_detect_encoding() and mb_convert_encoding(). They aren't perfect, but they get us 99.9% of the way there. Then we have a few regular expressions to strip out funky characters that somehow make their way in at times.
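A minimal sketch of how those two functions can be chained (the candidate encoding list is only an example; tune it to the encodings your input actually arrives in):

```php
<?php
// Guess the source encoding from a short candidate list, then normalize
// to UTF-8. The candidate list and the fallback are example assumptions.
function to_utf8(string $input): string
{
    $from = mb_detect_encoding($input, ['UTF-8', 'ISO-8859-1', 'Windows-1252'], true);
    if ($from === false) {
        $from = 'ISO-8859-1'; // last-resort assumption for undetectable input
    }
    return mb_convert_encoding($input, 'UTF-8', $from);
}
```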
If you are going international, you definitely want to use UTF-8. We have yet to find the perfect solution for getting all of our data into UTF-8, and I'm not sure one exists. You just have to keep tinkering with it.
Thank you for fliptitle!
I, too, am trying to lay out a proper test plan to make sure that an application supports Unicode strings throughout the system.
I am bilingual, but in two languages that only use ISO-8859-1. Therefore, I have been struggling to determine what is a "real-life," "meaningful" way to test the full range of Unicode possibilities.
I just came across this:
Follow-Up Post:
After devising some tests for my application, I realized that I had put together a small list of encoded values that might be helpful to others.
I am using the following international strings in my test:
(NOTE: here comes some UTF-8 encoded text... hopefully you can see this in your browser)
ユーザー別サイト
简体中文
크로스 플랫폼으로
מדורים מבוקשים
أفضل البحوث
Σὲ γνωρίζω ἀπὸ
Десятую Международную
แผ่นดินฮั่นเสื่อมโทรมแสนสังเวช
∮ E⋅da = Q, n → ∞, ∑ f(i) = ∏ g(i)
français langue étrangère
mañana olé
(End of UTF-8 foreign/non-English text)
However, at various points during testing, I realized that it was insufficient to know only how the strings were supposed to look when rendered in their respective foreign alphabets. I also needed to know the correct Unicode codepoint numbers, as well as the correct hexadecimal values for these strings in at least two encodings (UCS-2 and UTF-8).
Here is the equivalent code-point numbering and hex values:
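For any of the strings above, a short sketch like this (assuming PHP 7.4+ with the mbstring extension; the sample string is one from the list) prints each character's Unicode codepoint alongside its UTF-8 bytes:

```php
<?php
// Print each character of a test string with its Unicode codepoint
// and its UTF-8 byte sequence in hex.
$s = "ユーザー別サイト"; // one of the test strings above
foreach (mb_str_split($s, 1, 'UTF-8') as $char) {
    printf("%s  U+%04X  0x%s\n",
        $char,
        mb_ord($char, 'UTF-8'),
        strtoupper(bin2hex($char))
    );
}
```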
Also, here are a couple of images that show some common "mis-renderings" that can happen in various editors, even though the underlying bytes are well-formed UTF-8. If you see any of these renderings, it probably means that you correctly produced a UTF-8 string, but that your editor/viewer is interpreting it under some encoding other than UTF-8.
[Image: sample renderings no. 1]
[Image: sample renderings no. 2]
There is a regular expression to test if a string is valid UTF-8:
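One widely cited pattern is the one from the W3C i18n FAQ, shown here as a PHP sketch (the wrapper function name is my own; note the deliberate absence of the /u modifier, since the pattern inspects raw bytes):

```php
<?php
// Matches only well-formed UTF-8 byte sequences (pattern adapted from
// the W3C i18n FAQ). Runs on raw bytes, hence no /u modifier.
function is_well_formed_utf8(string $bytes): bool
{
    return preg_match('%\A(
         [\x09\x0A\x0D\x20-\x7E]            # ASCII
       | [\xC2-\xDF][\x80-\xBF]             # non-overlong 2-byte
       |  \xE0[\xA0-\xBF][\x80-\xBF]        # excluding overlongs
       | [\xE1-\xEC\xEE\xEF][\x80-\xBF]{2}  # straight 3-byte
       |  \xED[\x80-\x9F][\x80-\xBF]        # excluding surrogates
       |  \xF0[\x90-\xBF][\x80-\xBF]{2}     # planes 1-3
       | [\xF1-\xF3][\x80-\xBF]{3}          # planes 4-15
       |  \xF4[\x80-\x8F][\x80-\xBF]{2}     # plane 16
    )*\z%x', $bytes) === 1;
}
```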
But this doesn't ensure that the text actually is UTF-8.
An example: the letter ö (U+00F6) is encoded in UTF-8 as the byte sequence 0xC3 0xB6.
So when you get 0xC3 0xB6 as input, you can say that it is valid UTF-8. But you cannot say for sure that the letter ö has been submitted.
Imagine that not UTF-8 but ISO 8859-1 has been used instead. There the bytes 0xC3 and 0xB6 represent the characters Ã and ¶ respectively.
So the sequence 0xC3 0xB6 can represent either ö in UTF-8 or Ã¶ in ISO 8859-1 (although the latter is rather unusual).
So in the end it’s only guessing.
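A two-line sketch makes the ambiguity tangible: the very same bytes decode to different text depending on which encoding you assume.

```php
<?php
// The same two bytes, decoded under two different assumptions.
$bytes = "\xC3\xB6";
echo mb_convert_encoding($bytes, 'UTF-8', 'UTF-8'), "\n";       // ö
echo mb_convert_encoding($bytes, 'UTF-8', 'ISO-8859-1'), "\n";  // Ã¶
```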
Localization is pretty tough.
I think you are really asking two questions. One of them, how you get everybody to work correctly on an i18n application, is not technical but a project-management issue in my opinion. If you want people to use a common standard like UTF-8, then you will simply have to enforce it. Tools will help, but people first need to be told to do so.
Besides saying that UTF-8 is, in my opinion, the way to go, it is hard to answer the questions about tools. It really depends on the kind of project you are doing. If, for example, it is a Java project, then it is a simple matter of configuring the IDE to encode files in UTF-8 and of making sure your UTF-8 localizations live in external resource files.
One thing you can certainly do is write unit tests that check compliance. If your localized messages/labels are in resource files, then it is fairly easy, I think, to check that they are properly UTF-8 encoded.
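As a sketch of such a compliance check (in PHP for consistency with the examples above; the lang/ directory and the .properties glob are assumptions to adapt to your project layout):

```php
<?php
// Fail the build if any localization resource file is not valid UTF-8.
// The directory and file pattern are assumptions for illustration.
$bad = [];
foreach (glob('lang/*.properties') ?: [] as $file) {
    if (!mb_check_encoding(file_get_contents($file), 'UTF-8')) {
        $bad[] = $file;
    }
}
if ($bad) {
    fwrite(STDERR, "Not valid UTF-8: " . implode(', ', $bad) . "\n");
    exit(1);
}
echo "All resource files are valid UTF-8.\n";
```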
The real troublemaker with character encodings is quite often that there are multiple encoding-related bugs at once, and that some incorrect behavior has been introduced because of other bugs. I've lost count of how many times I have seen this happen.
The goal, as always, is to handle it correctly in every single place. So most of the time simple unit tests can do the trick; it doesn't even have to be a very complex character set. I flush out bugs just by testing with our national character "ø", because it maps differently in UTF-8 than in most other character sets.
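A sketch of why that single character is enough: its byte representation differs between UTF-8 and ISO-8859-1, so any layer that silently re-encodes it changes the bytes in an immediately visible way.

```php
<?php
// "ø" (U+00F8) is two bytes in UTF-8 but one byte in ISO-8859-1 (0xF8),
// so a single silent re-encoding anywhere in the pipeline shows up at once.
$utf8   = "\xC3\xB8";                                        // "ø" in UTF-8
$latin1 = mb_convert_encoding($utf8, 'ISO-8859-1', 'UTF-8');
printf("UTF-8: %d byte(s), ISO-8859-1: %d byte(s)\n",
    strlen($utf8), strlen($latin1));                         // 2 vs. 1
```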
The aggregate works fine when all the pieces do it correctly. I know this sounds trivial, but when it comes to character set issues this approach has always worked for me ;)