How to decode and encode web pages with Python?

Asked 2019-02-19 22:42

I use BeautifulSoup and urllib2 to download web pages, but different pages use different encodings, such as utf-8, gb2312 and gbk. I use urllib2 to get Sohu's home page, which is encoded with gbk, yet in my code I also decode it this way:

self.html_doc = self.html_doc.decode('gb2312','ignore')

But how can I know which encoding a page uses before I use BeautifulSoup to decode it to unicode? On most Chinese websites there is no charset in the HTTP Content-Type header.

1 Answer
小情绪 Triste * · 2019-02-19 23:11

Using BeautifulSoup you can parse the HTML and access the original_encoding attribute:

import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen('http://www.sohu.com').read()
soup = BeautifulSoup(html)

>>> soup.original_encoding
u'gbk'

And this agrees with the encoding declared in the <meta> tag in the HTML's <head>:

<meta http-equiv="content-type" content="text/html; charset=GBK" />

>>> soup.meta['content']
u'text/html; charset=GBK'
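As the question notes, many Chinese sites omit the charset from the HTTP Content-Type header, but when it is present you can read it from the response before parsing. A minimal sketch (Python 2; the variable names are only illustrative):

import urllib2

response = urllib2.urlopen('http://www.sohu.com')
# getparam() pulls a parameter out of the Content-Type header, e.g.
# "text/html; charset=GBK" -> "GBK"; it returns None when no charset is sent.
header_charset = response.headers.getparam('charset')
html = response.read()

If header_charset is None, you fall back to the <meta> tag or to detection, which is exactly the situation described in the question.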

Now you can decode the HTML:

decoded_html = html.decode(soup.original_encoding)

but there's not much point, since the HTML is already available as unicode:

>>> soup.a['title']
u'\u641c\u72d0-\u4e2d\u56fd\u6700\u5927\u7684\u95e8\u6237\u7f51\u7ad9'
>>> print soup.a['title']
搜狐-中国最大的门户网站
>>> soup.a.text
u'\u641c\u72d0'
>>> print soup.a.text
搜狐
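If you eventually need bytes again, for example to write the page out to a file, you can encode the unicode back into whatever encoding you prefer. A small sketch (the target encoding and file name are just examples):

output = soup.prettify()              # unicode
with open('sohu.html', 'w') as f:
    f.write(output.encode('utf-8'))   # bytes, now in utf-8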

It is also possible to try to detect the encoding using the chardet module (although it is a bit slow):

>>> import chardet
>>> chardet.detect(html)
{'confidence': 0.99, 'encoding': 'GB2312'}
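If you prefer to trust chardet's guess over BeautifulSoup's own detection, you can pass it in explicitly via the from_encoding argument of the BeautifulSoup constructor (a sketch, reusing the html bytes from above):

detected = chardet.detect(html)['encoding']
soup = BeautifulSoup(html, from_encoding=detected)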