Split unicode string into 300 byte chunks without destroying characters

Published 2020-02-12 03:42

I want to split u"an arbitrary unicode string" into chunks of say 300 bytes without destroying any characters. The strings will be written to a socket that expects utf8 using unicode_string.encode("utf8"). I don't want to destroy any characters. How would I do this?

5 answers
【Aperson】
#2 · 2020-02-12 03:45

If you can ensure that the UTF-8 representation of your characters is at most 2 bytes long, then you should be safe splitting the unicode string into chunks of 150 characters (this holds for most European languages). But UTF-8 is a variable-width encoding, so in general you might split the unicode string into single characters, convert each character to UTF-8, and fill your buffer until you reach the maximum chunk size... this may be inefficient and a problem if high throughput is a must...
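A minimal sketch of that character-by-character buffering approach (untested; split_by_char is just an illustrative name, and it returns unicode chunks that you would still encode("utf8") before writing them to the socket):

def split_by_char(u, max_bytes=300):
    """Split the unicode string u into pieces whose UTF-8
    encoding is at most max_bytes long, one character at a time."""
    chunks, current, current_len = [], u"", 0
    for ch in u:
        ch_len = len(ch.encode("utf8"))
        # Start a new chunk if adding this character would overflow the limit.
        if current_len + ch_len > max_bytes:
            chunks.append(current)
            current, current_len = u"", 0
        current += ch
        current_len += ch_len
    if current:
        chunks.append(current)
    return chunks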

Melony?
#3 · 2020-02-12 03:47

UTF-8 has a special property: all continuation bytes are in the range 0x80–0xBF (they start with the bits 10). So just make sure you don't split right before one.

Something along the lines of:

def split_utf8(s, n):
    # s is a UTF-8 encoded byte string, n the maximum chunk size in bytes.
    if len(s) <= n:
        return s, None
    # Back up while s[n] is a continuation byte (10xxxxxx), so the split
    # never lands in the middle of a multi-byte character.
    while ord(s[n]) >= 0x80 and ord(s[n]) < 0xc0:
        n -= 1
    return s[0:n], s[n:]

should do the trick.
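Note that this returns only the first chunk plus the remainder. To split a whole string you can call it in a loop, for example with a small wrapper along these lines (untested sketch; split_all is just an illustrative name, and s is the already UTF-8 encoded byte string, Python 2 style as above):

def split_all(s, n=300):
    # Keep peeling off chunks of at most n bytes until nothing is left.
    chunks = []
    while s is not None:
        chunk, s = split_utf8(s, n)
        chunks.append(chunk)
    return chunks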

倾城 Initia
#4 · 2020-02-12 03:54

Use a Unicode encoding that by design has a fixed length for each character, for example UTF-32:

>>> u_32 = u'Юникод'.encode('utf-32')
>>> u_32
'\xff\xfe\x00\x00.\x04\x00\x00=\x04\x00\x008\x04\x00\x00:\x04\x00\x00>\x04\x00\x004\x04\x00\x00'
>>> len(u_32)
28
>>> len(u_32)%4
0
>>>

After encoding you can send chunks of any size (as long as the size is a multiple of 4 bytes) without destroying characters.
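A minimal sketch of the chunking under this approach (split_utf32 is a made-up name; note that the 4-byte BOM only ends up in the first chunk, so the receiver should reassemble the chunks before decoding, or use an explicit 'utf-32-le'/'utf-32-be' codec):

def split_utf32(u, max_bytes=300):
    """Encode u as UTF-32 and yield chunks whose length is a
    multiple of 4 bytes, so no character is ever cut in half."""
    data = u.encode("utf-32")           # 4-byte BOM, then 4 bytes per character
    step = max_bytes - (max_bytes % 4)  # round the chunk size down to a multiple of 4
    for i in range(0, len(data), step):
        yield data[i:i + step]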

在下西门庆
#5 · 2020-02-12 04:00

Tested.

def split_utf8(s, n):
    """Yield successive chunks of the UTF-8 byte string s,
    each at most n bytes, without splitting any character."""
    assert n >= 4  # a UTF-8 character is at most 4 bytes long
    start = 0
    lens = len(s)
    while start < lens:
        if lens - start <= n:
            yield s[start:]
            return # StopIteration
        end = start + n
        # Back up while s[end] is a continuation byte (0x80-0xBF).
        while '\x80' <= s[end] <= '\xBF':
            end -= 1
        assert end > start
        yield s[start:end]
        start = end
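A usage sketch, assuming an already-connected socket object named sock (Python 2, matching the byte-string comparisons above):

data = u"an arbitrary unicode string".encode("utf8")
for chunk in split_utf8(data, 300):
    sock.sendall(chunk)  # sock is assumed to be a connected socket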
乱世女痞
#6 · 2020-02-12 04:05

UTF-8 is designed for this.

def split_utf8(s, n):
    """Split UTF-8 s into chunks of maximum length n."""
    # s is a UTF-8 encoded byte string (Python 2 str).
    while len(s) > n:
        k = n
        # Back up while s[k] is a continuation byte (10xxxxxx),
        # so we never cut a multi-byte character in half.
        while (ord(s[k]) & 0xc0) == 0x80:
            k -= 1
        yield s[:k]
        s = s[k:]
    yield s

Not tested, but the idea is simple: you find a place to split, then backtrack until you reach the beginning of a character.

However, if a user might ever want to see an individual chunk, you may want to split on grapheme cluster boundaries instead. This is significantly more complicated, but not intractable. For example, in "é", you might not want to split apart the "e" and the "´". Or you might not care, as long as they get stuck together again in the end.
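As a rough illustration of the extra work involved, here is a simplified sketch that only refuses to split between a base character and the combining marks that follow it (split_avoiding_combining is a made-up name; real grapheme clusters also cover ZWJ emoji sequences, Hangul jamo and more, so treat this as an approximation):

import unicodedata

def split_avoiding_combining(u, n):
    """Yield unicode chunks whose UTF-8 encoding is at most n bytes,
    never separating a base character from its combining marks."""
    start = 0
    while start < len(u):
        end, size = start, 0
        while end < len(u):
            # Group the base character with any combining marks after it.
            cluster_end = end + 1
            while cluster_end < len(u) and unicodedata.combining(u[cluster_end]):
                cluster_end += 1
            cluster_size = len(u[end:cluster_end].encode("utf8"))
            # Close the chunk if adding this group would overflow it
            # (an oversized single group is still emitted, to keep moving).
            if size and size + cluster_size > n:
                break
            end, size = cluster_end, size + cluster_size
        yield u[start:end]
        start = end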
