According to https://doc.scrapy.org/en/latest/topics/media-pipeline.html, both Scrapy's Files Pipeline and Images Pipeline "avoid re-downloading media that was downloaded recently".
I have a spider which I'm running with a job directory (JOBDIR) in order to pause and resume crawls. Initially I was scraping items without downloading files; later on, I added a Files Pipeline. However, I forgot to delete the JOBDIR before re-running the spider 'for real' with the pipeline in place.
What I'm afraid of is that the requests.seen file in the JOBDIR now contains fingerprints of items that have been scraped but for which no file was downloaded (because the pipeline was not yet in place when they were scraped). What I'm considering is removing the JOBDIR and starting the crawl again from a clean slate.
My question is: will this work without re-downloading all the files? Or does the FilesPipeline rely on the JOBDIR to skip files that have already been downloaded recently? (My FILES_STORE is an S3 bucket, by the way.)
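For context, the FilesPipeline's "recently downloaded" check is driven by the storage settings rather than the job directory: it stats the target file in FILES_STORE and compares its age against FILES_EXPIRES. A minimal settings sketch (the bucket name is hypothetical):

```python
# settings.py sketch -- the freshness check uses FILES_STORE and
# FILES_EXPIRES; JOBDIR only affects request deduplication/scheduling.
ITEM_PIPELINES = {
    "scrapy.pipelines.files.FilesPipeline": 1,
}
FILES_STORE = "s3://my-bucket/files/"  # hypothetical bucket
FILES_EXPIRES = 90  # days; files newer than this are not re-downloaded
```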
As far as I know, Scrapy derives the file name from the URL (by default, the SHA-1 hash of the request URL), and if a file with that name already exists in the store, Scrapy does not try to download it again.
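A rough sketch of that naming scheme, mimicking the default FilesPipeline.file_path behavior with only the standard library (SHA-1 of the request URL under a "full/" prefix; extension handling is omitted here for brevity):

```python
import hashlib


def default_file_path(url: str) -> str:
    # Mirrors the default FilesPipeline naming: the SHA-1 hex digest of
    # the request URL, stored under the "full/" prefix in FILES_STORE.
    media_guid = hashlib.sha1(url.encode("utf-8")).hexdigest()
    return f"full/{media_guid}"


# The path depends only on the URL, so a re-crawl of the same URLs
# targets the same keys in the store and can skip fresh files.
print(default_file_path("https://example.com/report.pdf"))
```

Because the path is a pure function of the URL, deleting the JOBDIR does not change where the pipeline looks for previously downloaded files.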