Scrapy - how to identify already scraped URLs

Posted 2019-01-14 12:49

Question:

I'm using Scrapy to crawl a news website on a daily basis. How do I restrict Scrapy from scraping URLs it has already scraped? Also, is there any clear documentation or are there examples for SgmlLinkExtractor?

Answer 1:

You can actually do this quite easily with the scrapy snippet located here: http://snipplr.com/view/67018/middleware-to-avoid-revisiting-already-visited-items/

To use it, copy the code from the link into a file in your Scrapy project, then reference it by adding a line to your settings.py:

SPIDER_MIDDLEWARES = { 'project.middlewares.ignore.IgnoreVisitedItems': 560 }

The reasoning behind the order number you assign to the middleware is explained here: http://doc.scrapy.org/en/latest/topics/downloader-middleware.html

Finally, you'll need to modify your items.py so that each item class has the following fields:

visit_id = Field()
visit_status = Field()

And I think that's it. The next time you run your spider, it should automatically start avoiding sites it has already visited.

Good luck!



Answer 2:

I think jama22's answer is a little incomplete.

In the snippet's check if self.FILTER_VISITED in x.meta:, you can see that the FILTER_VISITED key must be present in a Request's meta for that request to be filtered. This lets you differentiate between navigation links that you want to keep traversing and item links that, well, you don't want to see again.
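
A minimal sketch of what that looks like in a spider, assuming the snippet's FILTER_VISITED key is the string 'filter_visited' (the spider name, URLs, and selectors here are illustrative):

import scrapy

class NewsSpider(scrapy.Spider):
    name = 'news'
    start_urls = ['https://example.com/news']

    def parse(self, response):
        for href in response.css('a.article::attr(href)').getall():
            # Only requests carrying the FILTER_VISITED key in meta are
            # skipped by the middleware; plain navigation requests are not.
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse_article,
                meta={'filter_visited': True},
            )

    def parse_article(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}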



Answer 3:

Scrapy automatically filters out URLs that have already been scraped, doesn't it? Note, though, that this built-in duplicate filter only works within a single run (unless you persist its state, e.g., with the JOBDIR setting), and that different URLs pointing to the same page will not be filtered, such as "www.xxx.com/home/" and "www.xxx.com/home/index.html".
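
One way to handle such aliases is to normalize URLs yourself before issuing requests. Here is a minimal sketch; the strip-index rule is purely illustrative and must be adapted to the aliases your target site actually uses:

from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    # Collapse simple aliases such as '/home/index.html' -> '/home/'.
    parts = urlsplit(url)
    path = parts.path
    if path.endswith('/index.html'):
        path = path[:-len('index.html')]
    # Drop the fragment; keep scheme, host, path, and query as-is.
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ''))

assert normalize('http://www.xxx.com/home/index.html') == 'http://www.xxx.com/home/'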



Answer 4:

This is straightforward. Maintain all your previously crawled URLs in a Python set (a dict works too, but a set is the idiomatic choice for membership tests). The next time you encounter a URL, check whether it is already in the set; if not, crawl it.

def load_urls(path):
    # Load previously crawled URLs from a plain-text file (one URL per
    # line) into a set, which gives O(1) membership checks.
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def fresh_crawl(prev_urls, new_urls):
    for url in new_urls:
        if url not in prev_urls:
            crawl(url)           # your existing fetch/parse routine
            prev_urls.add(url)   # remember it for the next run

def main():
    purls = load_urls('seen_urls.txt')
    fresh_crawl(purls, nurls)    # nurls: the URLs discovered today
    # Persist the updated set so tomorrow's run can skip these URLs.
    with open('seen_urls.txt', 'w') as f:
        f.write('\n'.join(sorted(purls)))

The above code was typed in the SO text editor, a.k.a. the browser, so you might need to make a few changes for your setup (crawl() and nurls are placeholders for your own fetching code and the day's URL list). But the logic is there...

NOTE: Beware that some websites constantly change their content, so you may sometimes have to recrawl a particular webpage (i.e., the same URL) just to get the updated content.