I'm using Scrapy to crawl multiple pages on a site.
The variable start_urls
defines the pages to be crawled.
I initially start with the 1st page, so I define start_urls = [1st page]
in the file example_spider.py
Upon getting more info from the 1st page, I determine the next pages to be crawled and assign start_urls
accordingly. Hence, I have to overwrite the above example_spider.py with the change start_urls = [1st page, 2nd page, ..., Kth page]
, then run scrapy crawl again.
Is that the best approach, or is there a better way to dynamically assign start_urls
using the Scrapy API without having to overwrite example_spider.py
?
Thanks.
The start_urls
class attribute contains the start URLs, nothing more. If you have extracted URLs of other pages you want to scrape, yield the corresponding requests from the parse
callback with [another] callback. If you still want to customize how the start requests are created, override the method BaseSpider.start_requests()