I'm primarily interested in Unix-like systems (e.g., portable POSIX), as it seems like Windows does strange things with wide characters.
Do the read and write wide character functions (like `getwchar()` and `putwchar()`) always "do the right thing", for example read from UTF-8 and write to UTF-8 when that is the set locale, or do I have to manually call `wcrtomb()` and print the string using e.g. `fputs()`? On my system (openSUSE 12.3), where `$LANG` is set to `en_GB.UTF-8`, they do seem to do the right thing: inspecting the output I see what looks like UTF-8, even though strings were stored using `wchar_t` and written using the wide character functions.
However, I am unsure whether this is guaranteed. For example, cprogramming.com states that:

> [wide characters] should not be used for output, since spurious zero bytes and other low-ASCII characters with common meanings (such as '/' and '\n') will likely be sprinkled throughout the data.

This seems to indicate that outputting wide characters (presumably using the wide character output functions) can wreak havoc.
Since the C standard does not seem to mention encoding at all, I really have no idea who/when/how an encoding is applied when using `wchar_t`. So my question is basically whether reading, writing and using wide characters exclusively is a proper thing to do when my application has no need to know about the encoding used. I only need string lengths and console widths (`wcswidth()`), so using `wchar_t` everywhere when dealing with text seems ideal to me.
The relevant text governing the behavior of the wide character stdio functions and their relationship to locale is from POSIX XSH 2.5.2 Stream Orientation and Encoding Rules:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_05_02
Basically, the wide character stdio functions always write in the encoding that's in effect (per the `LC_CTYPE` locale category) at the time the `FILE` stream becomes wide-oriented; this means the first time a wide stdio function is called on it, or `fwide` is used to set the orientation to wide. So as long as a proper `LC_CTYPE` locale is in effect matching the desired "system" encoding (e.g. UTF-8) when you start working with the stream, everything should be fine.
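For illustration, a minimal sketch of that order of operations (my example, not part of the quoted POSIX text):

```c
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* LC_CTYPE must already be correct when the stream becomes
     * wide-oriented; "" adopts the encoding from the environment. */
    setlocale(LC_ALL, "");

    /* Explicitly make stdout wide-oriented (the first wide stdio
     * call on it would also do this implicitly). */
    fwide(stdout, 1);

    /* Converted on output using the LC_CTYPE encoding, e.g. UTF-8. */
    wprintf(L"snowman: %lc\n", (wint_t)0x2603);
    return 0;
}
```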
However, one important consideration you should not overlook is that you must not mix byte- and wide-oriented operations on the same `FILE` stream. Failure to observe this rule is not a reportable error; it simply results in undefined behavior. As a good deal of library code assumes `stderr` is byte-oriented (and some even makes the same assumption about `stdout`), I would strongly discourage ever using wide-oriented functions on the standard streams. If you do, you need to be very careful about which library functions you use.

Really, I can't think of any reason at all to use wide-oriented functions.
`fprintf` is perfectly capable of sending wide-character strings to byte-oriented `FILE` streams using the `%ls` specifier.
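A minimal sketch of that approach (the string and names are mine):

```c
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");      /* %ls converts using the LC_CTYPE locale */

    const wchar_t *ws = L"wide data";

    /* stdout stays byte-oriented; the wide string is converted to
     * multibyte (e.g. UTF-8) during formatting. */
    printf("%ls\n", ws);
    return 0;
}
```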
Don't use `fputs` with anything other than ASCII.

If you want to write, let's say, UTF-8, then use a function that returns the real byte size of the UTF-8 string, and use `fwrite` to write exactly that many bytes, without worrying about vicious `'\0'` bytes inside the string.
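A sketch of what this suggests (assuming the bytes are already encoded as UTF-8; the string is mine):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Already-encoded UTF-8 bytes; strlen gives the byte length,
     * which is exactly the count fwrite needs. */
    const char *utf8 = "h\xc3\xa9llo!";   /* "héllo!": 7 bytes, 6 characters */

    fwrite(utf8, 1, strlen(utf8), stdout);
    fputc('\n', stdout);
    return 0;
}
```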
So long as the locale is set correctly, there shouldn't be any issues processing UTF-8 files on a UTF-8 system with the wide character functions. They'll be able to interpret things correctly, i.e. they'll treat a character as 1-4 bytes as necessary (in both input and output). You can test it out with something like this:
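(A reconstruction of what such a test might look like; the original snippet isn't preserved, so the string is an assumption.)

```c
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");

    wchar_t str[] = L"h\u00e9llo!";    /* "héllo!" held as wide characters */

    wprintf(L"%ls\n", str);            /* written as UTF-8 on a UTF-8 system */
    wprintf(L"%zu\n", wcslen(str));    /* prints 6: one unit per character */
    return 0;
}
```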
If you use the standard functions (in particular character functions) on multibyte strings carelessly, things will start to break, e.g. the equivalent:
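(Again a reconstruction: the same text as a plain byte string, handled by the narrow functions.)

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[] = "h\xc3\xa9llo!";      /* "héllo!": 7 bytes of UTF-8 */

    printf("%s\n", str);               /* still displays correctly */
    printf("%zu\n", strlen(str));      /* prints 7: bytes, not characters */
    return 0;
}
```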
The string still prints correctly here because it's essentially just a stream of bytes, and as the system is expecting UTF-8 sequences, they're translated perfectly. Of course, `strlen` is reporting the number of bytes in the string, 7 (plus the `\0`), with no understanding that a character and a byte aren't equivalent.

In this respect, because of the compatibility between ASCII and UTF-8, you can often get away with treating UTF-8 files as simply multibyte C strings, as long as you're careful.
There's a degree of flexibility as well. It's possible to convert a standard C string (as a multibyte string) to a wide character string easily:
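(One way to do it, as a sketch; the original snippet isn't shown. `mbstowcs` converts using the current locale.)

```c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");

    const char *mb = "h\xc3\xa9llo!";  /* multibyte (UTF-8) input */
    wchar_t wide[64];

    size_t n = mbstowcs(wide, mb, 64); /* returns (size_t)-1 on an invalid sequence */
    if (n != (size_t)-1)
        wprintf(L"%ls is %zu characters\n", wide, n);
    return 0;
}
```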
Once you've used a wide character function on a stream, it's set to wide orientation. If you later want to use standard byte I/O functions, you'll need to re-open the stream first. This is probably why the recommendation is not to use it on `stdout`. However, if you only use wide character functions on `stdin` and `stdout` (including in any code that you link to), you will not have any problems.
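(A hypothetical illustration of the re-opening point; the file name is mine. Per the C standard, a successful `freopen` removes any orientation.)

```c
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    FILE *f = fopen("out.txt", "w");
    if (!f) return 1;

    fwprintf(f, L"wide output\n");     /* f is now wide-oriented */

    f = freopen("out.txt", "a", f);    /* re-opening clears the orientation */
    if (!f) return 1;

    fprintf(f, "byte output\n");       /* byte I/O is fine again */
    fclose(f);
    return 0;
}
```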