Looking for an easy way to get the charset/encoding information of an HTTP response using Python urllib2, or any other Python library.
>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?
I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is
>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'
I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.
>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]
That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?
To parse the HTTP header you could use cgi.parse_header():
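For instance (a minimal sketch; note that the cgi module is deprecated since Python 3.11 and removed in 3.13):

```python
import cgi  # deprecated in Python 3.11+, removed in 3.13

# parse_header() splits a header value into the main value
# and a dict of its parameters
mimetype, params = cgi.parse_header('text/html; charset=utf-8')
print(mimetype)               # 'text/html'
print(params.get('charset'))  # 'utf-8', or None if the parameter is absent
```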
Or using the response object:
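On Python 2, urllib2's header object supports parameter lookup directly via conn.headers.getparam('charset'). Here is a minimal offline sketch of the equivalent email.message API (which Python 3's urllib uses), with a hand-built message standing in for a live response:

```python
from email.message import Message

# stand-in for response.headers / response.info(); no network needed
headers = Message()
headers['Content-Type'] = 'text/html; charset=utf-8'
print(headers.get_param('charset'))   # 'utf-8'
print(headers.get_content_charset())  # 'utf-8'
```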
In general the server may lie about the encoding, not report it at all (the default depends on the content type), or the encoding might be specified inside the response body, e.g., in a <meta> element in HTML documents or in the XML declaration for XML documents. As a last resort, the encoding can be guessed from the content itself.

You could use requests to get Unicode text:
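A sketch, assuming the requests package is installed; the live network call is shown only as a comment, and get_encoding_from_headers is the helper requests itself uses to fill in r.encoding:

```python
from requests.utils import get_encoding_from_headers

# live usage would be:
#   r = requests.get('http://example.com/')
#   r.encoding  # charset detected from the headers
#   r.text      # body decoded to Unicode using that encoding
# the header parsing behind r.encoding:
print(get_encoding_from_headers({'content-type': 'text/html; charset=utf-8'}))
```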
Or BeautifulSoup to parse HTML (and convert to Unicode as a side effect):
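A sketch, assuming bs4 is installed; here the encoding comes from the document's own <meta> tag:

```python
from bs4 import BeautifulSoup

raw = b'<html><head><meta charset="utf-8"></head><body>caf\xc3\xa9</body></html>'
soup = BeautifulSoup(raw, 'html.parser')
print(soup.original_encoding)  # the encoding BeautifulSoup detected
print(soup.body.get_text())    # already Unicode: 'caf\xe9'
```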
Or bs4.UnicodeDammit directly, for arbitrary content (not necessarily HTML):

Charsets can be specified in many ways, but it's often done in the headers.
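For example, via response.info().get_content_charset() — shown offline here, with parsed header text standing in for live responses; a real call would be urlopen(url).info().get_content_charset():

```python
from email.parser import Parser

# stand-ins for urlopen(url).info(); one response declares a charset,
# the other does not
with_charset = Parser().parsestr('Content-Type: text/html; charset=utf-8\n\n')
no_charset = Parser().parsestr('Content-Type: image/png\n\n')
print(with_charset.get_content_charset())  # 'utf-8'
print(no_charset.get_content_charset())    # None
```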
That last one didn't specify a charset anywhere, so get_content_charset() returned None.
The requests library makes this easy:

If you happen to be familiar with the Flask/Werkzeug web development stack, you will be happy to know that the Werkzeug library has an answer for exactly this kind of HTTP header parsing, and it accounts for the case where the content type is not specified at all, as you wanted.
So then you can do:
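For instance (a sketch; assumes Werkzeug is installed):

```python
from werkzeug.http import parse_options_header

mimetype, options = parse_options_header('text/html; charset=utf-8')
print(mimetype)                # 'text/html'
print(options.get('charset'))  # 'utf-8'
```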
Note that if charset is not supplied, this will produce an empty options dict instead. It even works if you don't supply anything but an empty string or None:
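A sketch of those edge cases (same assumption that Werkzeug is installed):

```python
from werkzeug.http import parse_options_header

print(parse_options_header('text/html'))  # ('text/html', {}) -- no charset key
print(parse_options_header(''))           # ('', {})
print(parse_options_header(None))         # ('', {})
```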
Thus it seems to be EXACTLY what you were looking for! If you look at the source code, you will see they had your purpose in mind: https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/http.py#L320-329
Hope this helps someone some day! :)
To properly decode HTML (i.e. in a browser-like way; we can't do better) you need to take into account:

- the charset from the Content-Type HTTP header;
- BOM marks;
- <meta> tags in the page body;
- as a last resort, encoding detection based on the content itself.

All of the above is implemented in the w3lib.encoding.html_to_unicode function: it has the signature

html_to_unicode(content_type_header, html_body_str, default_encoding='utf8', auto_detect_fun=None)

and returns a (detected_encoding, unicode_html_content) tuple.

requests, BeautifulSoup, UnicodeDammit, chardet and Flask's parse_options_header are not correct solutions, as they all fail at some of these points.