I am using Scrapy to crawl old sites that I own, with the code below as my spider. I don't mind having a file output for each webpage, or a database with all the content in it. But I do need the spider to be able to crawl the whole site without me having to put in every single URL, as I currently have to do.
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["www.example.com"]
    start_urls = [
        "http://www.example.com/contactus"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
To crawl the whole site you should use a CrawlSpider instead of scrapy.Spider. Here's an example.

For your purposes, try using something like this:
Also, take a look at this article.