I have to write a script that supports reading a file which can be saved as either Unicode or ANSI (using MS Notepad).
I have no indication of the encoding format in the file; how can I support both encoding formats? (A generic way of reading files without knowing the format in advance.)
MS Notepad gives the user a choice of 4 encodings, expressed in clumsy confusing terminology:

- "Unicode" is UTF-16, written little-endian. "Unicode big endian" is UTF-16, written big-endian. In both UTF-16 cases, this means that the appropriate BOM will be written. Use `utf-16` to decode such a file.
- "UTF-8" is UTF-8; Notepad explicitly writes a "UTF-8 BOM". Use `utf-8-sig` to decode such a file.
- "ANSI" is a shocker. This is MS terminology for "whatever the default legacy encoding is on this computer".
Here is a list of Windows encodings that I know of and the languages/scripts that they are used for:

- cp874: Thai
- cp932: Japanese
- cp936: Chinese (simplified)
- cp949: Korean
- cp950: Chinese (traditional)
- cp1250: Central and Eastern Europe
- cp1251: Cyrillic
- cp1252: Western Europe
- cp1253: Greek
- cp1254: Turkish
- cp1255: Hebrew
- cp1256: Arabic
- cp1257: Baltic languages
- cp1258: Vietnamese
If the file has been created on the computer where it is being read, then you can obtain the "ANSI" encoding from `locale.getpreferredencoding()`. Otherwise, if you know where it came from, you can specify what encoding to use if it's not UTF-16. Failing that, guess.

Be careful using `codecs.open()` to read files on Windows. The docs say: """Note: Files are always opened in binary mode, even if no binary mode was specified. This is done to avoid data loss due to encodings using 8-bit values. This means that no automatic conversion of '\n' is done on reading and writing.""" This means that your lines will end in `\r\n` and you will need/want to strip those off.

Putting it all together:
Sample text file, saved with all 4 encoding choices, looks like this in Notepad:
Here is some demo code:
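The original demo code is not reproduced here; the following is a sketch in the same spirit, assuming the `read_notepad.py` interface implied by the command below (first argument a codec name, or an empty string meaning "guess from the BOM, falling back to the locale default"):

```python
import glob
import locale
import sys

def guess_codec(raw):
    """Pick a codec from the BOM, falling back to the 'ANSI' default."""
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    if raw.startswith(b"\xff\xfe") or raw.startswith(b"\xfe\xff"):
        return "utf-16"  # the codec reads the BOM to get the endianness
    return locale.getpreferredencoding(False)

def main():
    # Hypothetical usage, mirroring the command shown below:
    #   python read_notepad.py CODEC file [file ...]
    codec_arg = sys.argv[1]
    for pattern in sys.argv[2:]:
        for path in glob.glob(pattern):
            with open(path, "rb") as f:  # binary: no newline translation
                raw = f.read()
            codec = codec_arg or guess_codec(raw)
            text = raw.decode(codec)
            print(path, codec)
            for line in text.splitlines():  # splitlines() strips the \r\n
                print("   ", repr(line))

if __name__ == "__main__" and len(sys.argv) > 2:
    main()
```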
and here is the output when run in a Windows "Command Prompt" window using the command `\python27\python read_notepad.py "" t1-*.txt`
Things to be aware of:

(1) `mbcs` is a file-system pseudo-encoding which has no relevance at all to decoding the contents of files. On a system where the default encoding is `cp1252`, it makes like `latin1` (aarrgghh!!); see below.

(2) `chardet` is very good at detecting encodings based on non-Latin scripts (Chinese/Japanese/Korean, Cyrillic, Hebrew, Greek) but not much good at Latin-based encodings (Western/Central/Eastern Europe, Turkish, Vietnamese) and doesn't grok Arabic at all.

Notepad saves Unicode files with a byte order mark. This means that the first bytes of the file will be:

- `EF BB BF` -- UTF-8
- `FF FE` -- UTF-16, little-endian ("Unicode")
- `FE FF` -- UTF-16, big-endian ("Unicode big endian")
Other text editors may or may not have the same behavior, but if you know for sure Notepad is being used, this will give you a decent heuristic for auto-selecting the encoding. All these sequences are valid in the ANSI encoding as well, however, so it is possible for this heuristic to make mistakes. It is not possible to guarantee that the correct encoding is used.
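To see the ambiguity concretely, here is a small sketch (with cp1252 chosen as a representative "ANSI" encoding, my choice for illustration) showing that each BOM also decodes as ordinary legacy text:

```python
# Each BOM is also a valid cp1252 byte sequence, so BOM sniffing can
# misfire on a legacy-encoded file that happens to start with these bytes.
boms = [
    (b"\xef\xbb\xbf", "UTF-8 BOM"),
    (b"\xff\xfe",     "UTF-16 LE BOM"),
    (b"\xfe\xff",     "UTF-16 BE BOM"),
]
for bom, name in boms:
    # e.g. the UTF-8 BOM decodes as the cp1252 text 'ï»¿'
    print(name, "as cp1252 text:", repr(bom.decode("cp1252")))
```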