I downloaded my Facebook Messenger data (in your Facebook account, go to Settings, then Your Facebook Information, then Download Your Information, then create a file with at least the Messages box checked) to do some cool statistics.
However, there is a small problem with the encoding. I'm not sure, but it looks like Facebook used bad encoding for this data. When I open it with a text editor I see something like this: Rados\u00c5\u0082aw. When I try to open it with Python (UTF-8) I get RadosÅ\x82aw. However, I should get: Radosław.
My python script:
with open(os.path.join(subdir, file), encoding='utf-8') as text:
    conversations.append(json.load(text))
I tried a few of the most common encodings. Example data is:
{
    "sender_name": "Rados\u00c5\u0082aw",
    "timestamp": 1524558089,
    "content": "No to trzeba ostatnie treningi zrobi\u00c4\u0087 xD",
    "type": "Generic"
}
I can indeed confirm that the Facebook download data is incorrectly encoded; it is a case of Mojibake. The original data is UTF-8 encoded but was decoded as Latin-1 instead. I'll make sure to file a bug report.
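You can reproduce the damage yourself; a minimal sketch, using the name from your sample: encoding the real text as UTF-8 and mis-decoding those bytes as Latin-1 yields exactly the characters in your file.

```python
import json

# 'ł' is U+0142; its UTF-8 encoding is the two bytes 0xC5 0x82.
# Decoding those bytes as Latin-1 turns them into U+00C5 and U+0082.
mangled = 'Radosław'.encode('utf-8').decode('latin-1')

# json.dumps escapes the non-ASCII characters, reproducing the file contents:
print(json.dumps(mangled))  # "Rados\u00c5\u0082aw"
```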
In the meantime, you can repair the damage in two ways:
Decode the data as JSON, then re-encode any strings as Latin-1 and decode them again as UTF-8:
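A minimal sketch of this first approach, applied inline to your sample object (only top-level string values are repaired here):

```python
import json

sample = '{"sender_name": "Rados\\u00c5\\u0082aw", "timestamp": 1524558089}'
data = json.loads(sample)

# Each mis-decoded string round-trips back to the real UTF-8 text:
# the Latin-1 bytes of the mangled string are the UTF-8 bytes of the original.
fixed = {k: v.encode('latin-1').decode('utf-8') if isinstance(v, str) else v
         for k, v in data.items()}

print(fixed['sender_name'])  # Radosław
```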
Load the data as binary, replace all \u00hh sequences with the byte the last two hex digits represent, decode as UTF-8, and then decode as JSON.
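A sketch of this binary approach; the regex targets only \u00hh escapes, i.e. code points below 0x100, which are exactly the mis-decoded bytes:

```python
import json
import re

raw = rb'{"sender_name": "Rados\u00c5\u0082aw", "timestamp": 1524558089}'

# Replace each literal \u00hh escape with the single byte 0xhh it stands for,
# turning the data back into valid UTF-8 at the byte level.
repaired = re.sub(rb'\\u00([0-9a-fA-F]{2})',
                  lambda m: bytes.fromhex(m.group(1).decode('ascii')),
                  raw)

data = json.loads(repaired.decode('utf-8'))
print(data['sender_name'])  # Radosław
```

Note this byte-level substitution would also rewrite a legitimately escaped backslash followed by u00hh, which is unlikely in chat text but worth knowing.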
My solution for parsing objects uses the object_hook callback of the json.load/json.loads functions:
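A minimal sketch of that approach (the parse_obj name is my own; object_hook is called for every decoded JSON object, bottom-up):

```python
import json

def parse_obj(obj):
    # Re-encode every string value as Latin-1, then decode it as UTF-8.
    return {key: value.encode('latin-1').decode('utf-8')
            if isinstance(value, str) else value
            for key, value in obj.items()}

sample = '{"sender_name": "Rados\\u00c5\\u0082aw", "type": "Generic"}'
data = json.loads(sample, object_hook=parse_obj)
print(data['sender_name'])  # Radosław
```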
That solution does not work for strings nested inside lists, so here is an updated solution:
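An updated sketch that also repairs strings nested in lists, recursing through list elements before fixing string values (function names are my own):

```python
import json

def fix_value(value):
    # Strings get the Latin-1 -> UTF-8 round trip; lists are fixed element by element.
    if isinstance(value, str):
        return value.encode('latin-1').decode('utf-8')
    if isinstance(value, list):
        return [fix_value(item) for item in value]
    return value

def parse_obj(obj):
    # object_hook is called for every decoded JSON object, bottom-up,
    # so nested dicts need no extra recursion here.
    return {key: fix_value(value) for key, value in obj.items()}

sample = '{"participants": ["Rados\\u00c5\\u0082aw", "Gra\\u00c5\\u00bcyna"]}'
data = json.loads(sample, object_hook=parse_obj)
print(data['participants'])  # ['Radosław', 'Grażyna']
```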