I've been trying to debug this for far too long, and I obviously have no idea what I'm doing, so hopefully someone can help. I'm not even sure what I should be asking, but here goes:
I'm trying to send Apple Push Notifications, which have a payload size limit of 256 bytes. After subtracting the overhead of the rest of the JSON payload, I'm left with roughly 100 English characters for the main message content.
So if a message is longer than the max, I truncate it:
MAX_PUSH_LENGTH = 100
body = (body[:MAX_PUSH_LENGTH]) if len(body) > MAX_PUSH_LENGTH else body
So that's fine and dandy, and no matter how long a message I have (in English), the push notification sends successfully. However, now I have an Arabic string:
str = "هيك بنكون
عيش بجنون تون تون تون هيك بنكون
عيش بجنون تون تون تون
أوكي أ"
>>> print len(str)
109
So that should truncate. But I always get an invalid payload size error! Curious, I kept lowering the MAX_PUSH_LENGTH threshold to see what it would take to succeed, and it's not until I set the limit to around 60 that the push notification went through.
I'm not exactly sure if this has something to do with the byte size of languages other than English. It is my understanding that an English character takes one byte; does an Arabic character take two bytes? Might this have something to do with it?
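For reference, the mismatch between character count and UTF-8 byte count can be demonstrated directly (Python 3 shown here; in Python 2 the same applies to unicode strings):

```python
# Character count vs. UTF-8 byte count (Python 3).
english = "hello"
arabic = "هيك بنكون"

print(len(english), len(english.encode('utf-8')))  # 5 5
print(len(arabic), len(arabic.encode('utf-8')))    # 9 17 -- Arabic letters are 2 bytes each in UTF-8
```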
Also, the string is JSON encoded before it is sent off, so it ends up looking something like this: \u0647\u064a\u0643 \u0628\u0646\u0643\u0648\u0646 \n\u0639\u064a\u0634 ...
Could it be that it is being interpreted as a raw string, and just u0647 is 5 bytes?
What should I be doing here? Are there any obvious errors or am I not asking the right question?
You need to cut to a byte length, so you first need to .encode('utf-8') your string, and then cut it at a code-point boundary. In UTF-8, ASCII bytes (<= 127) are single-byte characters. Bytes with the two or more most significant bits set (>= 192) start a multi-byte character, and the number of leading 1 bits tells you how many bytes the character occupies in total. Everything in between (128-191) is a continuation byte. A problem arises if you cut a multi-byte sequence in the middle; if a character did not fit completely, it should be cut off completely, back to its starting byte.
Here's some working code:
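A minimal sketch of that approach (the helper names are my own; written for Python 3, where indexing a bytes object yields ints):

```python
def utf8_lead_byte(b):
    """A UTF-8 lead byte is any byte that is NOT a continuation byte.
    Continuation bytes look like 10xxxxxx, i.e. 0x80-0xBF."""
    return (b & 0xC0) != 0x80

def utf8_byte_truncate(text, max_bytes):
    """Encode text as UTF-8 and cut it so that no multi-byte
    sequence is split in the middle."""
    utf8 = text.encode('utf-8')
    if len(utf8) <= max_bytes:
        return utf8
    i = max_bytes
    # Back up until we land on a lead byte, so a partially
    # included character is dropped entirely.
    while i > 0 and not utf8_lead_byte(utf8[i]):
        i -= 1
    return utf8[:i]
```

For example, utf8_byte_truncate("héllo", 2) drops the partially fitting "é" (2 bytes in UTF-8) and returns just b'h'.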
Now test: if .decode() succeeds on the result, we have made a correct cut. You can verify that the code works with plain ASCII as well.
For a unicode string s, you would need to use something like len(s.encode('utf-8')) to get its length in bytes; len(s) just returns the number of (unencoded) characters.
Update: After further research I discovered that Python has support for incremental encoding, which makes it possible to write a reasonably fast function to trim off excess characters while avoiding the corruption of any multi-byte encoding sequence within the string. Here's example code using it for this task:
Using the algorithm I posted on your other question, this will encode a Unicode string as UTF-8 and truncate only whole UTF-8 sequences, to arrive at an encoding length less than or equal to a maximum length:
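The referenced code did not survive here; one way that algorithm could look (Python 3, names are illustrative):

```python
def truncate_to_utf8_bytes(s, max_bytes):
    """Walk the string character by character, accumulating the
    UTF-8 byte size, and cut before the first character that
    would push the encoding past max_bytes."""
    total = 0
    for i, ch in enumerate(s):
        total += len(ch.encode('utf-8'))
        if total > max_bytes:
            return s[:i]
    return s
```

Note this returns a (unicode) string whose UTF-8 encoding fits the budget, rather than the encoded bytes themselves.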
If you have a Python unicode value and you want to truncate it, the following is a very short, general, and efficient way to do it in Python.
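The snippet itself is missing here; the short, general approach being described is presumably the encode-slice-decode idiom (shown for Python 3, function name is mine):

```python
def safe_truncate(s, max_bytes):
    # Slice the UTF-8 bytes, then decode with errors='ignore' so
    # that a partial multi-byte sequence at the cut is silently
    # dropped instead of raising UnicodeDecodeError.
    return s.encode('utf-8')[:max_bytes].decode('utf-8', 'ignore')
```

For example, safe_truncate("héllo", 2) yields "h", because the 2-byte "é" would be split by the slice and is dropped by the lenient decode.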