I'm using Python Google App Engine to simply fetch HTML pages and show them. My aim is to be able to fetch any page in any language. Now I have a problem with encoding:
Simple
result = urllib2.urlopen(url).read()
leaves artifacts in place of special letters and
urllib2.urlopen(url).read().decode('utf8')
throws an error:
'utf8' codec can't decode bytes in position 3544-3546: invalid data
So how do I solve it? Is there any lib that would check what encoding the page is in and convert it so it's readable?
rajax suggested, at "How to download any(!) webpage with correct charset in python?", using the chardet lib from http://chardet.feedparser.org/
This code seems to work now.
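The actual snippet wasn't preserved here, but a minimal sketch of the chardet approach could look like this (the hard-coded Latin-1 bytes stand in for a downloaded page):

```python
import chardet

# Stand-in for urllib2.urlopen(url).read() -- Latin-1 bytes that
# would make a plain .decode('utf8') blow up.
raw = u'caf\xe9 au lait'.encode('latin-1')

guess = chardet.detect(raw)                 # dict with 'encoding' and 'confidence'
encoding = guess['encoding'] or 'latin-1'   # fall back if detection gives up
text = raw.decode(encoding, 'replace')      # 'replace' so stray bytes can't raise
```

chardet only guesses from byte statistics, so on short inputs its confidence can be low; checking `guess['confidence']` before trusting the result is a reasonable extra step.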
This doesn't directly answer your question, but I think that urllib2.urlopen in Python 2.5 (and therefore in App Engine) is a mess. For starters, all 2xx status codes other than 200 itself throw an exception (http://bugs.python.org/issue1177).
I found it much easier to fetch pages using GAE's urlfetch.
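For example, a sketch of the urlfetch approach (this only runs inside App Engine, and `fetch_page` is just an illustrative name):

```python
from google.appengine.api import urlfetch

def fetch_page(url):
    """Fetch a page with GAE's urlfetch instead of urllib2.

    Unlike urllib2 on Python 2.5, non-200 responses don't raise;
    you inspect status_code yourself.
    """
    result = urlfetch.fetch(url)
    if result.status_code == 200:
        return result.content  # raw bytes; decode per the response headers
    raise RuntimeError('fetch failed with status %d' % result.status_code)
```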
Well, you have to get the raw bytes first. Once you have downloaded the raw bytes, you can print them and look at them to see what the problem actually is.
The page itself says what its encoding is. You can assume it's UTF-8, but that's not always true.
If the page is XML or XHTML, the
<?xml
declaration at the beginning includes the encoding. The page also has a Content-Type header,
Content-Type: text/plain; charset="UTF-8"
which names the encoding. It's quite easy to properly decode a page.
Step 1. Don't assume the page is UTF-8.
Step 2. Get the content, read the headers.
Step 3. Use the encoding specified in the header, not an assumed encoding of UTF-8.
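Step 3 can be sketched with the stdlib email machinery, which already knows how to parse charset parameters out of a Content-Type value (the header strings below are just examples):

```python
from email.message import Message

def charset_from_header(content_type, default='utf-8'):
    """Extract the charset parameter from a Content-Type header value.

    Trust the header's declared encoding rather than assuming UTF-8;
    fall back to a default only when no charset is declared at all.
    """
    msg = Message()
    msg['Content-Type'] = content_type
    return msg.get_content_charset() or default

charset_from_header('text/plain; charset="UTF-8"')    # 'utf-8'
charset_from_header('text/html; charset=ISO-8859-1')  # 'iso-8859-1'
```

`get_content_charset` returns the charset lowercased, or None when the header carries no charset parameter, which is where the fallback kicks in.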
Yes, it looks like urllib2 just ignores the
Content-Type
header. Since most web pages are UTF-8 encoded nowadays, I just use a quick and dirty method to also handle ISO-8859-1 pages. Obviously if you're trying to scrape Chinese pages that aren't UTF-8 encoded, this won't work.
It's not pretty, but it works for me:
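The snippet itself wasn't included, but the quick-and-dirty fallback described above amounts to something like this:

```python
def decode_page(raw):
    """Try UTF-8 first; fall back to ISO-8859-1 for older pages.

    Quick and dirty: Latin-1 can decode any byte sequence, so this
    never raises -- but it will silently mangle pages that are neither
    UTF-8 nor Latin-1 (e.g. GBK-encoded Chinese).
    """
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode('iso-8859-1')
```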
Edit: I have since realized that this is too much of a hack and started using the Requests library, which seems to handle encoding just fine: http://docs.python-requests.org/
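With Requests the whole dance collapses to something like this (a sketch with a hypothetical `url`; it needs network access to actually run):

```python
import requests

def fetch_text(url):
    """Requests picks the encoding from the Content-Type header
    (with charset detection as a fallback) and decodes for you.
    """
    r = requests.get(url)
    r.raise_for_status()
    return r.text  # already unicode; the raw bytes stay in r.content
```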