Character detection in a text file in Python using the Universal Encoding Detector (chardet)

Posted 2020-02-07 19:26

I am trying to use the Universal Encoding Detector (chardet) in Python to detect the most probable character encoding in a text file ('infile') and use that in further processing.

While chardet is designed primarily for detecting the character encoding of webpages, I have found an example of it being used on individual text files.

However, I cannot work out how to tell the script to set the most likely character encoding to the variable 'charenc' (which is used several times throughout the script).

My code, based on a combination of the aforementioned example and chardet's own documentation, is as follows:

import chardet    
rawdata=open(infile,"r").read()
chardet.detect(rawdata)

Character detection is necessary as the script goes on to run the following (as well as several similar uses):

inF=open(infile,"rb")
s=unicode(inF.read(),charenc)
inF.close()

Any help would be greatly appreciated.

1 Answer
看我几分像从前
#2 · 2020-02-07 19:37

chardet.detect() returns a dictionary which provides the encoding as the value associated with the key 'encoding'. So you can do this:

import chardet    
rawdata = open(infile, 'rb').read()
result = chardet.detect(rawdata)
charenc = result['encoding']

The chardet documentation is not explicit about whether the module expects text strings or byte strings, but it stands to reason that if you already have a text string there is nothing to detect, so you should be passing byte strings. Hence the binary mode flag (b) in the call to open(). Depending on your versions of Python and of the library, chardet.detect() may also accept a text string, so if you omit the b it might still appear to work, even though you are technically doing something wrong.
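As a usage example, here is one way to tie the detection step back to the decoding step from the question. This is only a minimal sketch: it uses bytes.decode() instead of the Python 2 unicode() builtin so it works on either Python version, and it assumes a UTF-8 fallback in case chardet cannot make a guess.

import chardet

# Read the raw bytes once so that detection and decoding see the same data.
with open(infile, 'rb') as inF:
    rawdata = inF.read()

# detect() may return None for 'encoding' if it has no guess; fall back to UTF-8 here.
charenc = chardet.detect(rawdata)['encoding'] or 'utf-8'

# Decode the bytes to text using the detected encoding.
s = rawdata.decode(charenc)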
