How do I “decode” a UTF-8 character?

Posted 2019-04-02 12:45

Let's assume I want to write a function to compare two Unicode characters. How should I do that? I read some articles (like this one) but still didn't get it. Let's take '€' as input. It's in the range 0x0800 to 0xFFFF, so it will take 3 bytes to encode. How do I decode it? A bitwise operation to get the 3 bytes out of the wchar_t and store them into 3 chars? An example in C would be great.
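
For reference, this is the 3-byte layout I believe I should end up with for '€' (U+20AC), going by the articles (please correct me if I'm misreading them):

code point U+20AC  = 0010 0000 1010 1100
3-byte pattern     = 1110xxxx 10xxxxxx 10xxxxxx
byte 1: 1110 + 0010   (top 4 bits)  = 11100010 = 0xE2
byte 2: 10 + 000010   (next 6 bits) = 10000010 = 0x82
byte 3: 10 + 101100   (low 6 bits)  = 10101100 = 0xAC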

Here's my C code to "decode" it, but it obviously prints the wrong values...

#include <stdio.h>
#include <wchar.h>

void printbin(unsigned n);
int length(wchar_t c);
void print(struct Bytes *b);

// support for UTF8 which encodes up to 4 bytes only
struct Bytes
{
    char v1;
    char v2;
    char v3;
    char v4;
};

int main(void)
{
    struct Bytes bytes = { 0 };
    wchar_t c = '€';
    int len = length(c);

    //c = 11100010 10000010 10101100
    bytes.v1 = (c >> 24) << 4; // get first byte and remove leading "1110"
    bytes.v2 = (c >> 16) << 5; // skip over first byte and get 000010 from 10000010
    bytes.v3 = (c >> 8)  << 5; // skip over first two bytes and 10101100 from 10000010
    print(&bytes);

    return 0;
}

void print(struct Bytes *b)
{
    int v1 = (int) (b->v1);
    int v2 = (int)(b->v2);
    int v3 = (int)(b->v3);
    int v4 = (int)(b->v4);

    printf("v1 = %d\n", v1);
    printf("v2 = %d\n", v2);
    printf("v3 = %d\n", v3);
    printf("v4 = %d\n", v4);
}

int length(wchar_t c)
{
    if (c >= 0 && c < 0x007F)
        return 1;
    if (c >= 0x0080 && c <= 0x07FF)
        return 2;
    if (c >= 0x0800 && c <= 0xFFFF)
        return 3;
    if (c >= 0x10000 && c <= 0x1FFFFF)
        return 4;
    if (c >= 0x200000 && c <= 0x3FFFFFF)
        return 5;
    if (c >= 0x4000000 && c <= 0x7FFFFFFF)
        return 6;

    return -1;
}

void printbin(unsigned n)
{
    if (!n)
        return;

    printbin(n >> 1);
    printf("%c", (n & 1) ? '1' : '0');
}

Tags: c unicode utf-8

1 Answer

叼着烟拽天下 · 2019-04-02 13:07

It's not at all easy to compare UTF-8 encoded characters. Best not to try. Either:

  1. Convert them both to a wide format (a 32-bit integer) and compare arithmetically. See std::wstring_convert or your favorite vendor-specific function (a sketch of this approach follows the list); or

  2. Convert them into one-character strings and use a function that compares UTF-8 encoded strings. There is no standard way to do this in C++, but it is the preferred method in other languages such as Ruby and PHP.
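
To make option 1 concrete, here is a minimal sketch in plain C (the question is tagged c), assuming well-formed input and doing no validation; the helper name utf8_decode is my own, not a standard function:

#include <stdint.h>
#include <stdio.h>

/* Decode one UTF-8 sequence into a code point. Sketch only: assumes the
   input is valid UTF-8 and does no error checking. Returns the number of
   bytes consumed. */
static int utf8_decode(const unsigned char *s, uint32_t *cp)
{
    if (s[0] < 0x80) {                       /* 0xxxxxxx */
        *cp = s[0];
        return 1;
    } else if ((s[0] & 0xE0) == 0xC0) {      /* 110xxxxx 10xxxxxx */
        *cp = (uint32_t)(s[0] & 0x1F) << 6 | (s[1] & 0x3F);
        return 2;
    } else if ((s[0] & 0xF0) == 0xE0) {      /* 1110xxxx 10xxxxxx 10xxxxxx */
        *cp = (uint32_t)(s[0] & 0x0F) << 12 |
              (uint32_t)(s[1] & 0x3F) << 6  |
              (uint32_t)(s[2] & 0x3F);
        return 3;
    } else {                                 /* 11110xxx, 4-byte form */
        *cp = (uint32_t)(s[0] & 0x07) << 18 |
              (uint32_t)(s[1] & 0x3F) << 12 |
              (uint32_t)(s[2] & 0x3F) << 6  |
              (uint32_t)(s[3] & 0x3F);
        return 4;
    }
}

int main(void)
{
    const unsigned char euro[] = { 0xE2, 0x82, 0xAC };   /* U+20AC '€' */
    uint32_t a, b;

    utf8_decode(euro, &a);
    utf8_decode((const unsigned char *)"A", &b);

    /* Once both are plain 32-bit code points, comparison is ordinary arithmetic. */
    printf("a = U+%04X, b = U+%04X, equal: %d\n", (unsigned)a, (unsigned)b, a == b);
    return 0;
}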


Just to make it clear, the thing that is hard is to take raw bits/bytes/characters encoded as UTF-8 and compare them. This is because your comparison has to take account of the encoding to know whether to compare 8 bits, 16 bits or more. If you can somehow turn the raw data bits into a null-terminated string, then the comparison is trivially easy using regular string functions. This string may be more than one byte/octet in length, but it will represent a single character/code point.
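
For example, if each character is already in its own null-terminated string, plain strcmp does the job (a sketch; it assumes both strings hold valid UTF-8, and the "€" literal assumes the source file itself is saved as UTF-8):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Each "character" is a null-terminated UTF-8 string holding a single
       code point; strcmp compares the raw bytes, which is enough to test
       equality (and byte order matches code-point order for valid UTF-8). */
    const char *euro1 = "\xE2\x82\xAC";   /* U+20AC written out byte by byte */
    const char *euro2 = "€";              /* same character, if this file is UTF-8 */
    const char *a     = "A";

    printf("euro1 vs euro2: %d\n", strcmp(euro1, euro2));   /* 0: equal */
    printf("euro1 vs a:     %d\n", strcmp(euro1, a));       /* non-zero */
    return 0;
}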


Windows is a bit of a special case. Wide characters are short int (16-bit). Historically this meant UCS-2 but it has been redefined as UTF-16. This means that all valid characters in the Basic Multilingual Plane (BMP) can be compared directly, since they will occupy a single short int, but others cannot. I am not aware of any simple way to deal with 32-bit wide characters (represented as a simple int) outside the BMP on Windows.
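
If you do have to handle a character outside the BMP on Windows, it arrives as a surrogate pair of two 16-bit units; combining them into a 32-bit code point is just a little arithmetic (a sketch with no validation of the surrogate ranges):

#include <stdint.h>
#include <stdio.h>

/* Combine a UTF-16 surrogate pair into a code point. Sketch only: hi is
   assumed to be in 0xD800..0xDBFF and lo in 0xDC00..0xDFFF. */
static uint32_t from_surrogates(uint16_t hi, uint16_t lo)
{
    return 0x10000u + (((uint32_t)(hi - 0xD800) << 10) | (uint32_t)(lo - 0xDC00));
}

int main(void)
{
    /* U+1F600 lies outside the BMP; its UTF-16 encoding is D83D DE00. */
    uint32_t cp = from_surrogates(0xD83D, 0xDE00);
    printf("U+%X\n", (unsigned)cp);   /* prints U+1F600 */
    return 0;
}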
