I want to do some screen-scraping with Python 2.7, and I have no context for the differences between HTMLParser, SGMLParser, and Beautiful Soup.
Are these all trying to solve the same problem, or do they exist for different reasons? Which is simplest, which is most robust, and which (if any) is the default choice?
Also, please let me know if I have overlooked a significant option.
Edit: I should mention that I'm not especially experienced in HTML parsing, and I'm mostly interested in which will get me moving the quickest, with the goal of parsing HTML on one particular site.
I use and would recommend lxml and pyquery for parsing HTML. I had to write a web-scraping bot a few months ago, and of all the popular alternatives I tried, including HTMLParser and BeautifulSoup, I went with lxml plus the syntactic sugar of pyquery. I haven't tried SGMLParser, though.
From what I've seen, lxml is more or less the most feature-rich library, and its underlying C core is quite fast compared to the alternatives. As for pyquery, I really like its jQuery-inspired syntax, which makes navigating the DOM much more pleasant.
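To give a feel for the combo, here is a minimal sketch; the URL and the selectors are placeholders, not anything from a real site:

    # pyquery builds its document tree with lxml and exposes
    # jQuery-style traversal on top of it
    from pyquery import PyQuery as pq

    # pq() accepts raw markup, a filename, or a url= keyword
    doc = pq(url='http://example.com/')

    print(doc('h1').text())    # text of the first <h1>

    # CSS selectors, with .items() to iterate matched elements
    for link in doc('a').items():
        print('%s -> %s' % (link.text(), link.attr('href')))

And since the tree underneath is still a regular lxml document, you can always drop down to raw lxml and XPath when a CSS selector isn't enough.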
The official lxml and pyquery documentation are good starting points in case you decide to give them a try.
Well, that's my 2c :) I hope this helps.
Well, software is like cars... there are different flavors, but they all drive!
Go with BeautifulSoup (4).
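For a one-off scrape of a single site, something like this minimal sketch is usually all you need (the URL and the tags pulled out are placeholders):

    # BeautifulSoup 4 on Python 2.7: urllib2 fetches the page,
    # bs4 parses it with the stdlib parser
    import urllib2
    from bs4 import BeautifulSoup

    html = urllib2.urlopen('http://example.com/').read()
    soup = BeautifulSoup(html, 'html.parser')

    print(soup.title.string)        # contents of the <title> tag
    for a in soup.find_all('a'):    # every link on the page
        print(a.get('href'))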
Take a look at Scrapy. It is a Python framework built specifically for scraping. It makes it very easy to extract information using XPath selectors, and it has some very useful capabilities such as defining models for the scraped data (so you can export it in different formats), handling authentication, and recursively following links.
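As a rough sketch of what a spider looks like (the spider name, URL, and XPath expressions below are invented for illustration):

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = 'example'
        start_urls = ['http://example.com/']

        def parse(self, response):
            # XPath selectors pull data straight out of the response
            for row in response.xpath('//table//tr'):
                yield {'first_cell': row.xpath('./td[1]/text()').extract_first()}

            # recursively follow pagination links
            next_page = response.xpath('//a[@rel="next"]/@href').extract_first()
            if next_page:
                yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

Save that as example_spider.py and run "scrapy runspider example_spider.py -o items.json" to get the scraped items exported as JSON.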
BeautifulSoup in particular is built for the dirty HTML found in the wild: it will parse just about anything, but it is slow.
A very popular choice these days is lxml.html, which is fast and can use BeautifulSoup's parser (via lxml.html.soupparser) when needed.
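A small sketch of both paths, assuming BeautifulSoup is installed for the soupparser route (the URL and markup are placeholders):

    from lxml import html

    # lxml.html can fetch and parse a URL directly
    tree = html.parse('http://example.com/')
    print(tree.findtext('.//title'))

    # for really broken markup, route it through BeautifulSoup's
    # more forgiving parser instead
    from lxml.html.soupparser import fromstring
    root = fromstring('<p>Unclosed <b>tags everywhere')
    print(html.tostring(root))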