This question already has an answer here: Determine the encoding of text in Python
I'm writing some mail-processing software in Python that is encountering strange bytes in header fields. I suspect this is just malformed mail; the message itself claims to be us-ascii, so I don't think there is a true encoding, but I'd like to get out a unicode string approximating the original one without throwing a UnicodeDecodeError.
So, I'm looking for a function that takes a str and optionally some hints, and does its darndest to give me back a unicode. I could write one myself, of course, but if such a function already exists, its author has probably thought more deeply about the best way to go about this.
I also know that Python's design prefers explicit to implicit and that the standard library is designed to avoid implicit magic in decoding text. I just want to explicitly say "go ahead and guess".
You may be interested in Universal Encoding Detector.
+1 for the chardet module (suggested by @insin). It is not in the standard library, but you can easily install it with the following command:
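    pip install chardet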
Example:
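(A minimal sketch, assuming Python 2 as in the question; chardet.detect takes a byte string and returns a dict with the guessed encoding and a confidence score.)

    import chardet

    raw = "Fran\xc3\xa7ais caf\xc3\xa9"        # bytes whose encoding we pretend not to know
    guess = chardet.detect(raw)                 # e.g. {'encoding': 'utf-8', 'confidence': 0.87}
    print guess['encoding'], guess['confidence']
    # Decode with the guess, falling back to ascii with replacement characters.
    text = raw.decode(guess['encoding'] or 'ascii', 'replace')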
See Installing Pip if you don't have it.
As far as I can tell, the standard library doesn't have such a function, though it's not too difficult to write one as suggested above. I think what I was really looking for was a way to decode a string with a guarantee that it wouldn't throw an exception. The errors parameter to str.decode does that.
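For example (a sketch in Python 2; 'header' is a hypothetical byte string, and 'replace' substitutes the Unicode replacement character U+FFFD for bytes that can't be decoded):

    # A byte string that claims to be us-ascii but contains a stray non-ASCII byte.
    header = 'Subject: caf\xe9'
    text = header.decode('ascii', 'replace')     # u'Subject: caf\ufffd' -- never raises
    stripped = header.decode('ascii', 'ignore')  # u'Subject: caf'       -- drops the bad byte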
The best way I've found to do this is to iteratively try decoding the prospective string with each of the most common encodings, inside a try/except block.
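Something along these lines (a sketch; the candidate list and its order are assumptions you would tune to your own data):

    def guess_decode(data, encodings=('ascii', 'utf-8', 'latin-1')):
        """Return a unicode version of data, trying each candidate encoding in turn."""
        for enc in encodings:
            try:
                return data.decode(enc)
            except UnicodeDecodeError:
                continue
        # latin-1 above never fails, but keep a safe fallback for other candidate lists.
        return data.decode('ascii', 'replace')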