I'm trying to crawl about a thousand web sites, from which I'm interested in the HTML content only.
Then I transform the HTML into XML, which I parse with XPath to extract the specific content I'm interested in.
I've been using the Heritrix 2.0 crawler for a few months, but I ran into huge performance, memory, and stability problems (Heritrix crashes about every day, and no attempts to limit memory usage with JVM parameters were successful).
From your experience in the field, which crawler would you use for extracting and parsing content from a thousand sources?
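To illustrate the kind of HTML-to-XPath extraction I mean, here is a minimal sketch (using Python with requests and lxml purely as an example; the URL and XPath expression below are placeholders, not my real targets):

```python
# Minimal example of the HTML -> XPath extraction step
# (illustrative only; the URL and XPath are placeholders).
import requests
from lxml import html

def extract(url, xpath_expr):
    # Fetch the page; a real crawler would add retries, politeness delays, etc.
    response = requests.get(url, timeout=10)
    # lxml tolerates malformed HTML and returns an element tree.
    tree = html.fromstring(response.content)
    # Run the XPath query against the tree and return the matches.
    return tree.xpath(xpath_expr)

# e.g. extract("http://example.com", "//h1/text()")
```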
Wow. State-of-the-art crawlers, like the ones the search engines use, crawl and index around 1 million URLs a day on a single box. Sure, the HTML-to-XML rendering step takes a bit of time, but I agree with you on the performance. I've only used private crawlers, so I can't recommend one you'll be able to use, but I hope these performance numbers help in your evaluation.
I would not use the 2.x branch (which has been discontinued) or the 3.x branch (currently in development) for any 'serious' crawling unless you want to help improve Heritrix or just like being on the bleeding edge.
Heritrix 1.14.3 is the most recent stable release, and it really is stable; it is used by many institutions for both small- and large-scale crawling. I'm using it to run crawls against tens of thousands of domains, collecting tens of millions of URLs in under a week.
The 3.x branch is getting closer to a stable release, but even then I'd wait a bit until general use at the Internet Archive and elsewhere has improved its performance and stability.
Update: Since someone upvoted this recently, I feel it is worth noting that Heritrix 3.x is now stable and is the recommended version for those starting out with Heritrix.
I would suggest writing your own crawler in Python using Scrapy with either lxml or BeautifulSoup. You should find a few good tutorials on Google for those. I use Scrapy + lxml at work to spider ~600 websites, checking for broken links.
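To give a feel for how little code that takes, here is a rough sketch of a Scrapy spider (assuming a recent Scrapy version; the start URL and XPath expression are placeholders, not the setup I actually run):

```python
# Rough sketch of a Scrapy spider; start_urls and the XPath are placeholders.
import scrapy

class ContentSpider(scrapy.Spider):
    name = "content"
    # In practice you would load your ~1000 start URLs from a file or database.
    start_urls = ["http://example.com"]

    def parse(self, response):
        # Scrapy responses act as selectors, so XPath queries work on them directly.
        for heading in response.xpath("//h1/text()").extract():
            yield {"url": response.url, "heading": heading}
```

You can run a standalone spider like that with `scrapy runspider spider.py -o items.json`. Scrapy takes care of scheduling, concurrency, and retries; throttling is just a settings matter (e.g. DOWNLOAD_DELAY).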