A good way to get the charset/encoding of an HTTP response in Python

Posted 2019-01-02 21:29

Looking for an easy way to get the charset/encoding information of an HTTP response using Python urllib2, or any other Python library.

>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?

I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is

>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'

I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.

>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]

That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?

5 Answers

泪湿衣 · #2 · 2019-01-02 21:38

To parse an HTTP header you could use cgi.parse_header():

import cgi

_, params = cgi.parse_header('text/html; charset=utf-8')
print params['charset']  # -> utf-8

Or using the response object:

response = urllib2.urlopen('http://example.com')
response_encoding = response.headers.getparam('charset')
# or in Python 3: response.headers.get_content_charset(default)

In general the server may lie about the encoding or not report it at all (the default depends on the content type), or the encoding might be specified inside the response body, e.g., in a <meta> element for html documents or in the xml declaration for xml documents. As a last resort the encoding can be guessed from the content itself.
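
For illustration, here is a minimal sketch of that fallback chain (header, then <meta>, then a statistical guess). detect_encoding is a hypothetical helper, and the regex is a crude stand-in for real html parsing:

import cgi
import re

import chardet  # pip install chardet

def detect_encoding(response, body):
    # 1. charset from the Content-Type HTTP header, if present
    _, params = cgi.parse_header(response.headers.get('content-type', ''))
    if 'charset' in params:
        return params['charset']
    # 2. charset from a <meta> tag in the body (crude regex, not a real parse)
    match = re.search(br'<meta[^>]+charset=["\']?([\w-]+)', body, re.IGNORECASE)
    if match:
        return match.group(1).decode('ascii')
    # 3. last resort: guess from the bytes themselves
    return chardet.detect(body)['encoding']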

You could use requests to get Unicode text:

import requests # pip install requests

r = requests.get(url)
unicode_str = r.text # may use `chardet` to auto-detect encoding

Or BeautifulSoup to parse html (and convert to Unicode as a side-effect):

from bs4 import BeautifulSoup # pip install beautifulsoup4

soup = BeautifulSoup(urllib2.urlopen(url), "html.parser") # may use `cchardet` for speed
# ...

Or bs4.UnicodeDammit directly for arbitrary content (not necessarily html):

from bs4 import UnicodeDammit

dammit = UnicodeDammit(b"Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# -> Sacré bleu!
print(dammit.original_encoding)
# -> utf-8

何处买醉 · #3 · 2019-01-02 21:40

Charsets can be specified in many ways, but it's often done in the headers.

>>> urlopen('http://www.python.org/').info().get_content_charset()
'utf-8'
>>> urlopen('http://www.google.com/').info().get_content_charset()
'iso-8859-1'
>>> urlopen('http://www.python.com/').info().get_content_charset()
>>> 

That last one didn't specify a charset anywhere, so get_content_charset() returned None.
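
Since get_content_charset() may return None, a small sketch with a default (the utf-8 fallback is my assumption; HTTP's actual default depends on the content type):

from urllib.request import urlopen  # Python 3

response = urlopen('http://www.python.org/')
# use the declared charset, or fall back to utf-8 if none was declared
charset = response.info().get_content_charset() or 'utf-8'
text = response.read().decode(charset)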


素衣白纱 · #4 · 2019-01-02 21:50

The requests library makes this easy:

>>> import requests
>>> r = requests.get('http://some.url.value')
>>> r.encoding
'utf-8' # e.g.
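
If the header declares no charset, requests can also guess from the body; a sketch using r.apparent_encoding as a fallback:

>>> encoding = r.encoding or r.apparent_encoding  # header value, else a guess
>>> text = r.content.decode(encoding)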

笑指拈花 · #5 · 2019-01-02 22:02

If you happen to be familiar with the Flask/Werkzeug web development stack, you will be happy to know that the Werkzeug library has an answer for exactly this kind of HTTP header parsing, and it accounts for the case where the content type is not specified at all, as you wanted.

 >>> from werkzeug.http import parse_options_header
 >>> import requests
 >>> url = 'http://some.url.value'
 >>> resp = requests.get(url)
 >>> if resp.status_code == requests.codes.ok:
 ...     content_type_header = resp.headers.get('content-type')
 ...     print content_type_header
 text/html; charset=utf-8
 >>> parse_options_header(content_type_header)
 ('text/html', {'charset': 'utf-8'})

So then you can do:

 >>> header = parse_options_header(content_type_header)
 >>> header[1].get('charset')
 'utf-8'

Note that if no charset is supplied, this instead produces:

 >>> parse_options_header('text/html')
 ('text/html', {})
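
If you want a default when the charset is absent, a plain dict lookup with a fallback works (the utf-8 default here is my assumption, not something Werkzeug supplies):

 >>> mimetype, options = parse_options_header('text/html')
 >>> options.get('charset', 'utf-8')
 'utf-8'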

It even works if you supply nothing but an empty string or dict:

 >>> parse_options_header({})
 ('', {})
 >>> parse_options_header('')
 ('', {})

Thus it seems to be EXACTLY what you were looking for! If you look at the source code, you will see they had your purpose in mind: https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/http.py#L320-329

def parse_options_header(value):
    """Parse a ``Content-Type`` like header into a tuple with the content
    type and the options:
    >>> parse_options_header('text/html; charset=utf8')
    ('text/html', {'charset': 'utf8'})
    This should not be used to parse ``Cache-Control`` like headers that use
    a slightly different format.  For these headers use the
    :func:`parse_dict_header` function.
    ...

Hope this helps someone some day! :)


永恒的永恒 · #6 · 2019-01-02 22:05

To properly decode html (i.e. in a browser-like way; we can't do better) you need to take into account:

  1. Content-Type HTTP header value;
  2. BOM marks;
  3. <meta> tags in page body;
  4. Differences between encoding names used on the web and encoding names available in the Python stdlib;
  5. As a last resort, if everything else fails, guessing based on statistics is an option.

All of the above is implemented in the w3lib.encoding.html_to_unicode function: it has the signature html_to_unicode(content_type_header, html_body_str, default_encoding='utf8', auto_detect_fun=None) and returns a (detected_encoding, unicode_html_content) tuple.
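
A minimal usage sketch (the URL is a placeholder; assumes w3lib is installed):

import urllib2

from w3lib.encoding import html_to_unicode  # pip install w3lib

conn = urllib2.urlopen('http://some.url.value')
content_type = conn.headers.getheader('content-type')
encoding, html = html_to_unicode(content_type, conn.read())
print(encoding)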

requests, BeautifulSoup, UnicodeDammit, chardet, and Werkzeug's parse_options_header are not correct solutions on their own, as they all fail on some of these points.
