I'm trying to download some content from a dictionary site like http://dictionary.reference.com/browse/apple?s=t
The problem I'm having is that the original paragraph has all those squiggly lines, and reverse letters, and such, so when I read the local files I end up with those funny escape characters like \x85, \xa7, \x8d, etc.
My question is: is there any way I can convert all those escape characters into their respective UTF-8 characters? E.g., if there is an 'à', how do I convert that into a standard 'a'?
Python calling code:
import os
word = 'apple'
os.system(r'wget.lnk --directory-prefix=G:/projects/words/dictionary/urls/ --output-document=G:\projects\words\dictionary\urls/' + word + '-dict.html http://dictionary.reference.com/browse/' + word)
I'm using wget-1.11.4-1 on a Windows 7 system (don't kill me, Linux people, it was a client requirement), and the wget executable is fired off from a Python 2.6 script.
Assume you have loaded your unicode text into a variable called my_unicode; normalizing à into a is then as simple as the snippet sketched below, explicit example included.
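A minimal sketch of that normalization, assuming Python 2 and only the standard-library unicodedata module; my_unicode and the sample word are placeholder values:

# -*- coding: utf-8 -*-
import unicodedata

my_unicode = u'Málaga'  # placeholder: any unicode text containing accented characters
# NFD decomposition splits u'á' into u'a' plus a combining accent mark; encoding to
# ASCII with 'ignore' then silently drops the combining marks.
print(unicodedata.normalize('NFD', my_unicode).encode('ascii', 'ignore'))  # -> Malaga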
How it works
unicodedata.normalize('NFD', "insert-unicode-text-here")
performs a Canonical Decomposition (NFD) of the unicode text; then we use str.encode('ascii', 'ignore') to transform the NFD-mapped characters into ASCII (ignoring errors).

I needed something like this, but to remove only accented characters while ignoring the special ones, and I did a small function along the lines sketched below.
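A minimal sketch of such a function, assuming the goal is to strip combining accent marks while leaving other non-ASCII characters untouched; the name remove_accents is just a placeholder:

import unicodedata

def remove_accents(text):
    # Decompose each accented character into its base letter plus a combining mark,
    # then keep only the characters that are not combining marks.
    decomposed = unicodedata.normalize('NFD', text)
    return u''.join(c for c in decomposed if not unicodedata.combining(c))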
I like that function because you can customize it in case you need to ignore other characters.
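For example, a sketch of how the filter could be customized; the sample text and the extra condition are only illustrations:

# -*- coding: utf-8 -*-
# Accents are stripped, but special characters such as the euro sign survive:
remove_accents(u'Málaga café, 5€')   # -> u'Malaga cafe, 5€'

# To ignore other characters as well, extend the condition inside the join, e.g.:
#     if not unicodedata.combining(c) and c not in u'€™'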
The given URL returns UTF-8, as the HTTP response headers clearly indicate.
Investigating the saved file using vim also reveals that the data is correctly UTF-8 encoded; the same is true when fetching the URL with Python.
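One way to double-check this from Python 2 itself, as a sketch assuming urllib2 and that the site still serves the page with charset=utf-8:

import urllib2

response = urllib2.urlopen('http://dictionary.reference.com/browse/apple?s=t')
# The Content-Type header is expected to name the charset, e.g. '...; charset=UTF-8'.
print(response.info().getheader('Content-Type'))
html = response.read().decode('utf-8')  # decode the raw bytes using that charset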