I've been fighting with this for an hour now. I'm parsing an XML string with iterparse. However, the data is not encoded properly, and I am not the provider of it, so I can't fix the encoding.
Here's the error I get:
lxml.etree.XMLSyntaxError: line 8167: Input is not proper UTF-8, indicate encoding !
Bytes: 0xEA 0x76 0x65 0x73
How can I simply ignore this error and still continue parsing? I don't mind if one character isn't saved properly; I just need the data.
Here's what I've tried, all picked up from the internet:
data = data.encode('UTF-8','ignore')
data = unicode(data,errors='ignore')
data = unicode(data.strip(codecs.BOM_UTF8), 'utf-8', errors='ignore')
Edit:
I can't show the url, as it's a private API and involves my API key, but this is how I obtain the data:
ur = urlopen(url)
data = ur.read()
The character that causes the problem is å; I guess that ä, ö, etc. would also break it.
Here's the part where I try to parse it:
def fast_iter(context, func):
    for event, elem in context:
        func(elem)
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
    del context

def process_element(elem):
    print elem.xpath('title/text()')

context = etree.iterparse(StringIO(data), tag='item')
fast_iter(context, process_element)
Edit 2:
This is what happens when I try to parse it in PHP. Just to clarify, F***ing Åmål is a drama movie =D
The file starts with <?xml version="1.0" encoding="UTF-8" ?>
Here's what I get from print repr(data[offset-10:offset+60]):
ence des r\xeaves, La</title>\n\t\t<year>2006</year>\n\t\t<imdb>0354899</imdb>\n
You say:
The character that causes the problem is: å,
How do you know that? What are you viewing your text with?
So you can't publish the URL and your API key; what about reading the data, writing it to a file (in binary mode), and publishing that?
When you open that file in your web browser, what encoding does it detect?
At the very least, do this:
data.decode('utf8') # where data is what you get from ur.read()
This will produce an exception that will tell you the byte offset of the non-UTF-8 stuff.
Then do this:
print repr(data[offset-10:offset+60])
and show us the results.
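If it helps, here's a minimal sketch that does both steps in one go (assuming data is the raw bytes returned by ur.read()):

try:
    data.decode('utf8')
except UnicodeDecodeError as e:
    offset = e.start  # byte offset of the first byte that isn't valid UTF-8
    print repr(data[offset-10:offset+60])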
Assuming the encoding is actually cp1252, and decoding the bytes in the lxml error message:
>>> guff = "\xEA\x76\x65\x73"
>>> from unicodedata import name
>>> [name(c) for c in guff.decode('1252')]
['LATIN SMALL LETTER E WITH CIRCUMFLEX', 'LATIN SMALL LETTER V', 'LATIN SMALL LETTER E', 'LATIN SMALL LETTER S']
>>>
So are you seeing e-circumflex followed by "ves", or a-ring followed by "ves", or a-ring followed by something else?
Does the data start with an XML declaration like <?xml version="1.0" encoding="UTF-8"?>? If not, what does it start with?
Clues for encoding guessing/confirmation: What language is the text written in? What country?
UPDATE based on further information supplied.
Based on the snippet that you showed in the vicinity of the error, the movie title is "La science des rêves" (the science of dreams).
Funny how PHP gags on "F***ing Åmål" but Python chokes on French dreams. Are you sure that you did the same query?
You should have told us it was IMDB up front; you would have got your answer much sooner.
SOLUTION: before you pass data to the lxml parser, do this:
data = data.replace('encoding="UTF-8"', 'encoding="iso-8859-1"')
That's based on the encoding that they declare on their website, but that may be a lie too. In that case, try cp1252 instead. It's definitely not iso-8859-2.
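Putting it together, a rough sketch of the whole fix, reusing the fast_iter and process_element functions from the question (iso-8859-1 is a guess until you confirm the real encoding):

from urllib2 import urlopen
from StringIO import StringIO
from lxml import etree

data = urlopen(url).read()
# overwrite the wrong declared encoding with the one the bytes actually use
data = data.replace('encoding="UTF-8"', 'encoding="iso-8859-1"')
context = etree.iterparse(StringIO(data), tag='item')
fast_iter(context, process_element)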
However, the data is not encoded properly, and I am not the provider of it, so I can't fix the encoding.
It is encoded somehow. Determine the actual encoding, and specify that encoding instead of UTF-8 (since UTF-8 is obviously not the encoding).
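One way to make that determination, if you don't already know the encoding (my suggestion, not something this answer names), is the chardet library, which guesses an encoding from the raw bytes:

import chardet

guess = chardet.detect(data)  # data is the raw bytes from ur.read()
print guess['encoding'], guess['confidence']  # e.g. ISO-8859-1 0.73 (illustrative values)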
Iterparse allows you to override the XML encoding declared in the document using its encoding keyword argument (see https://lxml.de/api/lxml.etree.iterparse-class.html).
In your code above, you could also write
context = etree.iterparse(StringIO(data), tag='item', encoding='iso-8859-1')
to handle the Western European characters in the file.
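For example, with the fast_iter and process_element functions from the question, only the iterparse call changes, and the document string doesn't need to be edited at all (iso-8859-1 is still an assumption until the real encoding is confirmed):

from StringIO import StringIO
from lxml import etree

context = etree.iterparse(StringIO(data), tag='item', encoding='iso-8859-1')
fast_iter(context, process_element)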
You can decode with errors='replace' -
>>> unicode('\x80abc', errors='replace')
this way the bad character is replaced with the Unicode replacement character -
u'\ufffdabc'
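A sketch of how that would fit into the question's code: decode the whole response with errors='replace' (losing the odd bad character), re-encode it as genuine UTF-8 so the bytes match the declared encoding, and parse that:

from StringIO import StringIO
from lxml import etree

cleaned = data.decode('utf-8', 'replace').encode('utf-8')
context = etree.iterparse(StringIO(cleaned), tag='item')
fast_iter(context, process_element)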
To recover from errors during parsing you could use the recover option (some data might be ignored in this case):
import urllib2
from lxml import etree

data = urllib2.urlopen(URL).read()
root = etree.fromstring(data, parser=etree.XMLParser(recover=True))
for item in root.iter('item'):
    print item.findtext('title')  # process item here
To override the document encoding you could use:
parser=etree.XMLParser(encoding=ENCODING)
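For example, combining both options (the encoding value here is a placeholder for whatever the real encoding turns out to be):

parser = etree.XMLParser(recover=True, encoding='iso-8859-1')
root = etree.fromstring(data, parser=parser)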
Here's how feedparser detects character encoding (it is not trivial).