I'm writing a wrapper layer to be used with mingw which provides the application with a virtual UTF-8 environment. Functions which deal with filenames are wrappers which convert from UTF-8 and call the corresponding "_w" functions, and so on. The big problem I've run into is that Windows' `wchar_t` is 16-bit.
For filesystem operations, it's not a big deal. I can just convert back and forth between UTF-8 and UTF-16, and everything will work. But the standard C multibyte/wide character conversion API does not allow multi-wchar_t characters.
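For illustration, a filesystem wrapper in this scheme looks roughly like the sketch below (names are illustrative; `utf8_to_wide` is a placeholder for whatever UTF-8 to UTF-16 conversion helper the layer provides):

```c
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

/* Placeholder for the layer's conversion helper: returns a
 * malloc'd UTF-16 copy of a UTF-8 string, or NULL on failure. */
wchar_t *utf8_to_wide(const char *s);

/* Wrapper: convert the UTF-8 arguments, then call the "_w" function. */
FILE *fopen_utf8(const char *path, const char *mode)
{
    FILE *f = NULL;
    wchar_t *wpath = utf8_to_wide(path);
    wchar_t *wmode = utf8_to_wide(mode);
    if (wpath && wmode)
        f = _wfopen(wpath, wmode);
    free(wpath);
    free(wmode);
    return f;
}
```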
Possible solutions:

1. Provide a CESU-8 environment instead of UTF-8. I really don't like this one.
2. Take the easy way out and only support the BMP, treating UTF-8 sequences of length 4 as invalid.
3. Extend the wrapper to replace mingw's `wchar_t` with `typedef int32_t wchar_t;` and deal with `WCHAR` and `wchar_t` being different. This is a pain, but it may be ideal for porting apps that expect a clean POSIX-type environment and don't use `wchar_t` for any Windows-API purposes.
4. The following hack: `mbrtowc` outputs a `wchar_t` corresponding to the high surrogate after reading the first 3 bytes of a 4-byte UTF-8 character, and keeps the remaining state in the `mbstate_t` object. Upon receiving the next byte, it combines it with the saved state to output the low surrogate. If the last byte turns out to be invalid, it returns -1 (with `EILSEQ`) and a lone surrogate ends up in the output stream (bad...).

   `wcrtomb` outputs the first 2 bytes of UTF-8 when it processes the high surrogate, and saves the remaining state in its `mbstate_t` object. When it subsequently processes the low surrogate, it combines that with the saved state to output the last 2 bytes of UTF-8. If a valid low surrogate is not received, it returns -1 (with `EILSEQ`) and an incomplete UTF-8 sequence ends up in the output stream (bad...).
The plus side of this hack is that it works as long as input is valid, and allows access to any UTF-8 character and thus any possible filename/argument/etc. text the application might need to work with.
The cons are that it's not strictly conformant to ISO C (a `wchar_t` string is not allowed to be stateful) and that it delays detection of malformed characters until incorrect partial output has already been written.
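For reference, the surrogate math itself is simple; the whole hack is really about where the two halves get emitted (a minimal sketch):

```c
#include <stdint.h>

/* Split a code point above the BMP (U+10000..U+10FFFF) into a
 * UTF-16 surrogate pair; the hack emits hi from one mbrtowc call
 * and parks lo in the mbstate_t until the next call. */
static void split_surrogates(uint32_t cp, uint16_t *hi, uint16_t *lo)
{
    cp -= 0x10000;
    *hi = (uint16_t)(0xD800 | (cp >> 10));
    *lo = (uint16_t)(0xDC00 | (cp & 0x3FF));
}
```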
I'm looking for feedback on the different options, and especially my proposed hack: whether it's reasonable, whether the cons are likely to cause severe errors, and whether there are any other cons I haven't yet considered which might keep the scheme from working entirely. I'd also be happy to hear any other possible solutions I haven't thought of.
If you are on Windows, you can convert between UTF-16 and UTF-8 a whole string at a time using `MultiByteToWideChar` and `WideCharToMultiByte`.
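For example (error handling kept minimal; `MB_ERR_INVALID_CHARS` makes the call fail on malformed UTF-8 rather than substituting a replacement character):

```c
#include <windows.h>
#include <stdlib.h>

/* Convert a NUL-terminated UTF-8 string to a newly allocated
 * UTF-16 string; returns NULL on invalid input or allocation failure. */
wchar_t *to_utf16(const char *utf8)
{
    int n = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, utf8, -1, NULL, 0);
    if (n == 0)
        return NULL;
    wchar_t *w = malloc(n * sizeof *w);
    if (w && !MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, utf8, -1, w, n)) {
        free(w);
        return NULL;
    }
    return w;
}
```

The measure-then-convert pattern (first call with a NULL output buffer) is the usual way to size the destination.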
While the default mode in GCC is a 32-bit `wchar_t`, there are compile switches that change that, and more generally the C and C++ specs don't specify the size of `wchar_t`; in fact, `wchar_t` can be the same size as `char`.
If you want to avoid using Windows APIs (in your Windows wrapper code!?), then use `mbstowcs` to convert an entire string at a time.
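For instance (note that `mbstowcs` interprets the input according to the current locale's multibyte encoding, which on Windows is the ANSI code page rather than UTF-8):

```c
#include <stdlib.h>
#include <wchar.h>

/* Whole-string conversion via the C runtime; returns a newly
 * allocated wide string, or NULL on invalid input or failure. */
wchar_t *dup_to_wide(const char *s)
{
    size_t n = mbstowcs(NULL, s, 0);        /* measure, excluding the NUL */
    if (n == (size_t)-1)
        return NULL;                        /* invalid multibyte sequence */
    wchar_t *w = malloc((n + 1) * sizeof *w);
    if (w)
        mbstowcs(w, s, n + 1);
    return w;
}
```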
I'd do something like #4, but don't generate any output until you're sure the input is valid.
`mbrtowc` should decode the entire character. If it's outside the BMP, then output the high surrogate and store the low surrogate in the `mbstate_t`. `wcrtomb` should store high surrogates in the `mbstate_t`, then output all 4 UTF-8 bytes if the character is valid.
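A rough sketch of the `mbrtowc` side, with an explicit state struct standing in for the opaque `mbstate_t` and a hypothetical `decode_utf8` helper that validates a complete character before anything is emitted:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative state; a real implementation packs this into mbstate_t. */
struct state { uint16_t pending_low; };

/* Hypothetical helper: decodes one complete, validated UTF-8 character
 * from the first n bytes of s into *cp. Returns the bytes consumed,
 * (size_t)-2 if the sequence is incomplete, (size_t)-1 if malformed. */
size_t decode_utf8(const char *s, size_t n, uint32_t *cp);

size_t my_mbrtowc(uint16_t *pwc, const char *s, size_t n, struct state *st)
{
    uint32_t cp;
    size_t len;

    if (st->pending_low) {
        /* The previous call validated the full 4-byte sequence but
         * reported only 3 bytes consumed; consume the last byte now
         * and emit the parked low surrogate. */
        *pwc = st->pending_low;
        st->pending_low = 0;
        return 1;
    }

    len = decode_utf8(s, n, &cp);
    if (len == (size_t)-1) { errno = EILSEQ; return (size_t)-1; }
    if (len == (size_t)-2) return (size_t)-2;   /* wait for more input */

    if (cp >= 0x10000) {
        /* Outside the BMP: emit the high surrogate now, park the low
         * surrogate for the next call. Nothing is written before the
         * whole character has validated. */
        cp -= 0x10000;
        *pwc = (uint16_t)(0xD800 | (cp >> 10));
        st->pending_low = (uint16_t)(0xDC00 | (cp & 0x3FF));
        return len - 1;   /* report 3 of the 4 bytes as consumed */
    }

    *pwc = (uint16_t)cp;
    return len;
}
```

Reporting 3 of the 4 bytes on the first call keeps `mbrtowc`'s byte-count return convention intact: the second call consumes the remaining byte and emits the low surrogate, and no output is produced before the whole character has been validated.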