urllib2 - fetch and show a page in any language, encoding issues

Posted 2019-04-02 03:50

I'm using Google App Engine (Python) to fetch HTML pages and display them. My aim is to be able to fetch any page in any language. Now I have a problem with encoding:

The simple

result = urllib2.urlopen(url).read() 

leaves artifacts in place of special letters and

urllib2.urlopen(url).read().decode('utf8')

throws error:

'utf8' codec can't decode bytes in position 3544-3546: invalid data

So how can I solve this? Is there a library that detects a page's encoding and converts it so it's readable?

4 Answers
Luminary・发光体
#2 · 2019-04-02 04:29

rajax suggested, in "How to download any(!) webpage with correct charset in python?", using the chardet library from http://chardet.feedparser.org/

This code seems to work now:

import urllib2
import chardet

def fetch(url):
    try:
        result = urllib2.urlopen(url)
        rawdata = result.read()
        # chardet.detect() returns a dict such as
        # {'encoding': 'utf-8', 'confidence': 0.99}
        detected = chardet.detect(rawdata)
        return rawdata.decode(detected['encoding'])
    except urllib2.URLError, e:
        handleError(e)  # handleError() is your own error handler
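
For example (the URL here is just a placeholder):

html = fetch('http://www.example.com/')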
小情绪 Triste *
#3 · 2019-04-02 04:33

This doesn't directly answer your question, but I think that urllib2.urlopen in Python 2.5 (and therefore in App Engine) is a mess. For starters, all 2xx status codes other than 200 itself raise an exception (http://bugs.python.org/issue1177).

I found it much easier to fetch pages using GAE's urlfetch.
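
A minimal sketch of that approach (the URL is a placeholder):

from google.appengine.api import urlfetch

result = urlfetch.fetch('http://www.example.com/')
if result.status_code == 200:
    rawdata = result.content  # raw bytes of the response body
    # The Content-Type response header, when present, names the charset.
    content_type = result.headers.get('content-type', '')

You still have to decode result.content yourself, so the encoding advice in the other answers applies here too.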

chillily
#4 · 2019-04-02 04:34

So how can I solve this?

Well, you have to get the raw bytes first. Once you've downloaded them, you can print them and look at them to see what the problem actually is.

Is there a library that detects a page's encoding and converts it so it's readable?

The page itself says what its encoding is. You could assume it's UTF-8, but that's not always true.

If the page is XML or XHTML, the <?xml declaration at the beginning includes the encoding, e.g. <?xml version="1.0" encoding="ISO-8859-1"?>.

The HTTP response has a Content-Type header, e.g. Content-Type: text/plain; charset="UTF-8", which names the encoding.

It's quite easy to properly decode a page.

Step 1. Don't assume the page is UTF-8.

Step 2. Get the content, read the headers.

Step 3. Use the encoding specified in the header, not an assumed encoding of UTF-8, as in the sketch below.
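
A minimal sketch of reading the charset from the response headers with urllib2 (the URL is a placeholder):

import urllib2

response = urllib2.urlopen('http://www.example.com/')
rawdata = response.read()
# In Python 2, response.info() returns the message headers; getparam()
# pulls the charset parameter out of the Content-Type header, if any.
charset = response.info().getparam('charset')
if charset:
    text = rawdata.decode(charset)

If the header names no charset, you're back to guessing (or to chardet, as in the answer above).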

做自己的国王
#5 · 2019-04-02 04:37

Yes, it looks like urllib2 just ignores the Content-Type header.

Since most web pages are UTF-8 encoded nowadays, I just use a quick and dirty method to also handle ISO-8859-1 pages. Obviously, if you're trying to scrape Chinese pages that aren't UTF-8 encoded, this won't work.

It's not pretty, but it works for me:

import urllib2

def read_url(url):
    reader_req = urllib2.Request(url)
    reader_resp = urllib2.urlopen(reader_req)
    reader_resp_content = reader_resp.read()
    reader_resp.close()

    # Try UTF-8 first, since most pages use it nowadays.
    try:
        return reader_resp_content.decode('utf-8')
    except UnicodeDecodeError:
        pass

    # Fall back to ISO-8859-1 (Latin-1), which maps every byte value,
    # so this decode cannot itself fail.
    try:
        iso_string = reader_resp_content.decode('iso-8859-1')
        print 'UTF-8 decoding failed, but ISO-8859-1 decoding succeeded'
        return iso_string
    except Exception, e:
        print e
        raise

Edit: I have since realized that this is too much of a hack and started using the Requests library, which seems to handle encoding just fine: http://docs.python-requests.org/

r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
t = r.text
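
For this question's use case it's just as simple (the URL is a placeholder):

import requests

r = requests.get('http://www.example.com/')
# r.text is already decoded; requests reads the charset from the
# Content-Type response header when one is present.
html = r.text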