I have some random HTML and I used BeautifulSoup to parse it, but in most cases (>70%) it chokes. I tried BeautifulSoup 3.0.8 and 3.2.0 (there were some problems with 3.1.0 and upwards), but the results are almost the same.
I can recall several HTML parser options available in Python off the top of my head:
- BeautifulSoup
- lxml
- pyquery
I intend to test all of these, but I wanted to know which one, in your tests, comes out as the most forgiving and can even try to parse bad HTML.
They all are. I have yet to come across an HTML page found in the wild that lxml.html couldn't parse. If lxml barfs on the pages you're trying to parse, you can always preprocess them with some regexps to keep it happy.
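For illustration, a minimal sketch of that workflow (the sample markup and the regexp pre-clean step are made-up assumptions, not taken from a real page):

```python
# Pre-clean the markup with a regexp, then hand it to lxml.html,
# which tolerates broken HTML.
import re
import lxml.html

raw = "<p>unclosed paragraph <b>bold <i>nested</p>"

# Illustrative pre-processing step: strip NUL bytes that can upset parsers.
cleaned = re.sub(r"\x00", "", raw)

doc = lxml.html.fromstring(cleaned)
print(doc.text_content())  # "unclosed paragraph bold nested"
```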
lxml itself is fairly strict, but lxml.html is a different parser and can deal with very broken HTML. For extremely broken HTML, lxml also ships with lxml.html.soupparser, which interfaces with the BeautifulSoup library.
Some approaches to parsing broken HTML with lxml.html are described here: http://lxml.de/elementsoup.html
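A short sketch of that fallback, assuming BeautifulSoup is installed alongside lxml (the sample markup is invented):

```python
from lxml import etree
import lxml.html
from lxml.html import soupparser

broken = "<html><body><p>a stray </div> closing tag and no end tags"

try:
    root = lxml.html.fromstring(broken)
except etree.ParserError:
    # If lxml's own recovering parser ever refuses the input, fall back
    # to the BeautifulSoup-backed parser for the truly hopeless cases.
    root = soupparser.fromstring(broken)

print(lxml.html.tostring(root))
```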
With pages that don't work with anything else (those that contain nested <form> elements come to mind) I've had success with MinimalSoup and ICantBelieveItsBeautifulSoup. Each can handle certain types of error that the other one can't, so you'll often need to try both.
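A rough sketch of that try-both approach, assuming the old BeautifulSoup 3 package (the markup and the sanity check are illustrative assumptions):

```python
from BeautifulSoup import MinimalSoup, ICantBelieveItsBeautifulSoup

html = "<form><form><input name='q'></form></form>"  # nested forms

for cls in (MinimalSoup, ICantBelieveItsBeautifulSoup):
    soup = cls(html)
    # Neither class raises on bad markup; they just build different
    # trees, so check whether the element you need actually survived.
    if soup.find("input", {"name": "q"}) is not None:
        break
```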
I ended up using BeautifulSoup 4.0 with html5lib for parsing, and it is much more forgiving. With some modifications to my code, it's now working considerably well. Thanks all for the suggestions.
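For anyone landing here later, the setup looks roughly like this (the packages are beautifulsoup4 and html5lib on pip; the sample markup is invented):

```python
from bs4 import BeautifulSoup

html = "<p>unclosed <b>tags <i>everywhere"

# Ask BeautifulSoup 4 to delegate to html5lib, which repairs the
# tree the way a browser would.
soup = BeautifulSoup(html, "html5lib")
print(soup.p.get_text())  # "unclosed tags everywhere"
```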
If BeautifulSoup doesn't fix your HTML problem, the next best option is regular expressions. lxml, ElementTree, and minidom are very strict in their parsing, and they are actually right to be.
Other tips:
I feed the HTML to the lynx browser from the command line, take out the text version of the page, and parse that with regexes.
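Something along these lines, assuming lynx is installed and on your PATH (the URL and the regex are placeholders):

```python
import re
import subprocess

# Dump the rendered text of the page; -nolist drops the link index.
text = subprocess.check_output(
    ["lynx", "-dump", "-nolist", "http://example.com/"],
    universal_newlines=True,
)

# Illustrative regex: grab the first non-empty line as a crude "title".
match = re.search(r"^\s*(\S.*)$", text, re.MULTILINE)
if match:
    print(match.group(1))
```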
Converting HTML to text or HTML to Markdown strips all the tags and leaves you with plain text, which is easy to parse.
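One way to do that conversion, assuming the third-party html2text package (pip install html2text); any similar converter would do, and the sample markup is invented:

```python
import html2text

converter = html2text.HTML2Text()
converter.ignore_links = True  # keep the output plain

markdown = converter.handle("<h1>Title</h1><p>Some <b>bold</b> text</p>")
print(markdown)  # "# Title" followed by "Some **bold** text"
```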