How to solve this weird Python encoding issue?

Asked 2019-05-29 01:49

I'm doing some NLP on a corpus of strings from the web, and as you'd expect, there are encoding issues. Here are a few examples:

they don’t serve sushi : the apostrophe in don't is not the plain ASCII ' but the three bytes \xe2\x80\x99
Delicious food – Wow   : the character before Wow is not an ASCII hyphen but the bytes \xe2\x80\x93 (an en dash)
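
Those byte sequences are just the UTF-8 encodings of ordinary punctuation characters, which a quick check in the interpreter confirms:

>>> '\xe2\x80\x99'.decode('utf-8')
u'\u2019'
>>> import unicodedata
>>> unicodedata.name(u'\u2019')
'RIGHT SINGLE QUOTATION MARK'
>>> unicodedata.name(u'\u2013')
'EN DASH'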

So now I'm going to read such lines, pass them to NLTK for parsing, and use the parse information to train a CRF model through MALLET.

Let's begin with the solutions I've been seeing everywhere on Stack Overflow. Here are a few experiments:

st = "they don’t serve sushi"

st.encode('utf-8')
Out[2]: 'they don\xc3\xa2\xe2\x82\xac\xe2\x84\xa2t serve sushi'

st.decode('utf-8')
Out[3]: u'they don\u2019t serve sushi'

So these are just trial-and-error attempts to see if something might work.

I finally took the encoded sentence and passed it to the next step, POS tagging with NLTK: posTags = nltk.pos_tag(tokens). It throws an ugly exception known to everyone:

 File "C:\Users\user\workspacePy\_projectname_\CRF\FeatureGen.py", line 95, in getSentenceFeatures
    posTags = nltk.pos_tag(tokens)
  File "C:\Users\user\Anaconda\lib\site-packages\nltk\tag\__init__.py", line 101, in pos_tag
    return tagger.tag(tokens)
  File "C:\Users\user\Anaconda\lib\site-packages\nltk\tag\sequential.py", line 61, in tag
    tags.append(self.tag_one(tokens, i, tags))
  File "C:\Users\user\Anaconda\lib\site-packages\nltk\tag\sequential.py", line 81, in tag_one
    tag = tagger.choose_tag(tokens, index, history)
  File "C:\Users\user\Anaconda\lib\site-packages\nltk\tag\sequential.py", line 634, in choose_tag
    featureset = self.feature_detector(tokens, index, history)
  File "C:\Users\user\Anaconda\lib\site-packages\nltk\tag\sequential.py", line 736, in feature_detector
    'prevtag+word': '%s+%s' % (prevtag, word.lower()),
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)

And when I tried decoding, it says UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 42: ordinal not in range(128) in the line where I'm decoding the string.

So my current solution is to remove all the non-ASCII characters. But that changes the words entirely, which causes a serious loss of data for a unigram/bigram (word combination) based model.
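
Concretely, the stripping step looks something like this (a sketch; the regex is my assumption of the usual approach), and it shows the damage:

import re

st = 'they don\xe2\x80\x99t serve sushi'
re.sub(r'[^\x00-\x7f]', '', st)     # drop every non-ASCII byte
# -> 'they dont serve sushi'  -- "don't" has silently become "dont"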

What should be the right approach?

3 Answers
放荡不羁爱自由 · 2019-05-29 02:04

In your example st is a str (a sequence of bytes). To get there it was encoded in some form (UTF-8, by the looks of it), but think of it as a sequence of bytes, and you need to know how it was encoded in order to decode it (though UTF-8 is generally a good first guess).

>>> st = "they don’t serve sushi"
>>> st
'they don\xe2\x80\x99t serve sushi'
>>> type(st)
<type 'str'>

>>> st.encode('utf8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 8: ordinal not in range(128)

So st.encode is nonsensical here. It's already encoded (as UTF-8 by the interpreter, by the looks of things). For some mad reason, in Python 2 str.encode will first decode to unicode and then encode back to a str. It chooses to decode as ASCII by default, but your data is encoded as UTF-8. So the error you're seeing is in the decode step of your encode operation! It's looking at that sequence of bytes e2,80,99 and saying, 'hmmm, those aren't real ASCII characters'.
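
A minimal sketch of that hidden step (the decode('ascii') is what Python 2 inserts for you before the encode you actually asked for):

>>> st = 'they don\xe2\x80\x99t serve sushi'
>>> st.decode('ascii').encode('utf8')    # roughly what st.encode('utf8') does
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 8: ordinal not in range(128)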

Let's start with unicode data instead (notice the u):

>>> st = u"they don’t serve sushi"
>>> st
u'they don\u2019t serve sushi'
>>> type(st)
<type 'unicode'>
>>> st.encode('utf8')
'they don\xe2\x80\x99t serve sushi'

Really, all of this is Python 2's fault. Python 3 won't let you get away with these shenanigans of treating unicode and str as the same thing.

The rule of thumb is: always work with unicode within your code. Only encode/decode when you're getting data into and out of the system, and generally encode as UTF-8 unless you have some other specific requirement.
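
A minimal sketch of that boundary pattern in Python 2 (the file names are made up; assumes the corpus is stored as UTF-8 text):

import io

# decode on the way in: every line arrives as unicode
with io.open('corpus.txt', 'r', encoding='utf-8') as f:
    lines = [line.strip() for line in f]

# ... all processing happens on unicode objects ...

# encode on the way out
with io.open('output.txt', 'w', encoding='utf-8') as f:
    for line in lines:
        f.write(line + u'\n')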

In Python 2 you can ensure that a plain literal like 'data' in your code is automatically unicode (u'data'):

from __future__ import unicode_literals

>>> st = "they don’t serve sushi"
>>> st
u'they don\u2019t serve sushi'
>>> type(st)
<type 'unicode'>
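
One caveat worth adding: in a script file (as opposed to the interactive prompt), a literal containing non-ASCII characters also needs a source-encoding declaration, so the interpreter knows how to decode the bytes of the file itself:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

st = "they don’t serve sushi"    # a unicode object, assuming the file is saved as UTF-8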
来,给爷笑一个 · 2019-05-29 02:11

The trick that always works for me, related to @Aidan Kane's answer, is to first do text = text.decode('utf-8') so that the string is unicode while you do the string manipulation, and then text = text.encode('utf-8') when you want to write out to a file or something.
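
A minimal sketch of that round trip (the normalization step in the middle is just an example of mine):

raw = 'they don\xe2\x80\x99t serve sushi'    # bytes as they came off the web
text = raw.decode('utf-8')                   # unicode: safe to manipulate
text = text.replace(u'\u2019', u"'")         # e.g. normalize the curly apostrophe
data = text.encode('utf-8')                  # back to bytes for writing out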

时光不老,我们不散 · 2019-05-29 02:16

This is not an easy problem with a magical solution. You can read more about it at http://blog.luminoso.com/2012/08/20/fix-unicode-mistakes-with-python/ TL;DR: use Python's unicodedata module to find the category of each character, and assume words don't mix categories.
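
A rough sketch of that idea (the heuristic and the category choices here are my own, not taken from the post):

import unicodedata

def looks_garbled(token):
    # 'L*' categories are letters, 'S*' categories are symbols
    cats = [unicodedata.category(ch) for ch in token]
    has_letters = any(c.startswith('L') for c in cats)
    has_symbols = any(c.startswith('S') for c in cats)
    # a single "word" mixing letters with symbols is suspicious
    return has_letters and has_symbols

>>> looks_garbled(u'donâ€™t')    # mojibake for don't
True
>>> looks_garbled(u'sushi')
False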
