Scrapy-redis framework: the requests stored in redis (the xxx:requests key) have all been crawled, but the program keeps running. How do I stop it automatically instead of letting it run forever?
The log while it runs:
2017-08-07 09:17:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-08-07 09:18:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
I use scrapy-redis to crawl a site. scrapy-redis does not shut down automatically: it keeps asking for URLs even though there are none left, so the log keeps printing scraped 0 items (at 0 items/min) forever.
Well, scrapy-redis is made to stay open and wait for more URLs to be pushed into the redis queue, but if you want to close the spider you can do it with a pipeline. Here is how it works: in open_spider the pipeline reads the total number of entries in the redis queue into a redis_len variable; in process_item it decrements redis_len, and when it reaches zero it sends a close signal on the last item.
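(No code came through with this answer, so here is a minimal sketch of the pipeline it describes. It assumes the queue at spider.redis_key is a redis list, so llen gives its length; if you configured scrapy-redis to store it as a set or sorted set, use scard or zcard instead. The class name is mine, and the 1:1 mapping between queued requests and scraped items is this answer's simplification, not a general guarantee.)

```python
class CloseOnQueueDrainedPipeline:
    """Close the spider after scraping as many items as there were
    entries in the redis queue when the spider opened."""

    def open_spider(self, spider):
        # scrapy-redis spiders expose their redis client as spider.server.
        # Assumes the queue is a redis list; use zcard/scard for zset/set.
        self.redis_len = spider.server.llen(spider.redis_key)

    def process_item(self, item, spider):
        # Assume one scraped item per queued request and count down.
        self.redis_len -= 1
        if self.redis_len <= 0:
            # On the last item, ask the engine to close the spider.
            spider.crawler.engine.close_spider(spider, reason='redis queue drained')
        return item
```

Enable it like any other pipeline in settings.py, e.g. ITEM_PIPELINES = {'myproject.pipelines.CloseOnQueueDrainedPipeline': 300} (the module path is hypothetical).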
scrapy-redis will always wait for new URLs to be pushed into the redis queue. When the queue is empty, the spider goes into an idle state and waits for new URLs. That is what I used to close my spider once the queue is empty: when the spider is idle (i.e. when it is doing nothing), I check whether there is anything left in the redis queue, and if not I close the spider with close_spider. The following code is located in the spider class:
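(The answer's code block did not survive above; the following is a minimal sketch of the idle check it describes, assuming a scrapy-redis RedisSpider, where self.server is the redis client and self.redis_key is the list holding the start URLs. The handler is deliberately named idle rather than spider_idle, so it does not override scrapy-redis's own built-in spider_idle handler.)

```python
from scrapy import signals
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    name = 'myspider'
    redis_key = 'myspider:start_urls'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Also run our own check every time the engine reports the spider
        # idle (scrapy-redis keeps its own idle handler connected too).
        crawler.signals.connect(spider.idle, signal=signals.spider_idle)
        return spider

    def idle(self):
        # Idle means nothing is scheduled or being downloaded; if the
        # redis queue is empty as well, there is nothing left to crawl.
        if self.server.llen(self.redis_key) == 0:
            self.crawler.engine.close_spider(self, reason='redis queue is empty')

    def parse(self, response):
        # ... extract your items here ...
        yield {'url': response.url}
```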