How to access a specific start_url in a Scrapy CrawlSpider?

Posted 2020-06-18 05:12

Question:

I'm using Scrapy, in particular its CrawlSpider class, to scrape web links that contain certain keywords. I have a fairly long start_urls list that gets its entries from a SQLite database connected to a Django project. I want to save the scraped web links to this database.

I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.

The web links model has a many-to-one relation to the start url model, i.e. the web links model has a ForeignKey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect, as I still have to specify the start url explicitly.
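
For reference, here is a minimal sketch of the two models described above. The model and field names (StartUrl, ScrapedLink, url) are illustrative placeholders, not taken from the actual project:

from django.db import models


class StartUrl(models.Model):
    # One row per entry in the spider's start_urls list, e.g. http://example.com
    url = models.URLField(unique=True)


class ScrapedLink(models.Model):
    # Many-to-one: every scraped link belongs to exactly one start url.
    start_url = models.ForeignKey(StartUrl, on_delete=models.CASCADE,
                                  related_name='links')
    url = models.URLField()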

In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!

Answer 1:

By default you cannot access the original start url.

But you can override the make_requests_from_url method and put the start url into the request's meta. Then you can extract it in your parse callback (if you yield subsequent requests from that callback, don't forget to forward the start url in them).


I haven't worked with CrawlSpider, and maybe what Maxim suggests will work for you, but keep in mind that response.url contains the url after possible redirections.

Here is an example of how I would do it; it's just an example (adapted from the Scrapy tutorial) and has not been tested:

# Import paths as they were in the Scrapy versions current at the time
# (pre-1.0), matching SgmlLinkExtractor and HtmlXPathSelector used below.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request
from scrapy.item import Item, Field
from scrapy.selector import HtmlXPathSelector


class MyItem(Item):
    # A bare Item() has no fields, so declare them explicitly.
    id = Field()
    name = Field()
    description = Field()
    start_url = Field()


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=(r'category\.php', ), deny=(r'subsection\.php', ))),

        # Extract links matching 'item.php' and parse them with the spider's parse_item method.
        Rule(SgmlLinkExtractor(allow=(r'item\.php', )), callback='parse_item'),
    )

    def parse(self, response):
        # When writing crawl spider rules, avoid using parse as a callback, since
        # CrawlSpider uses the parse method itself to implement its logic; if you
        # override it, the crawl spider will no longer work. Here we override it
        # deliberately, but delegate back to CrawlSpider.parse and only forward
        # the start_url meta key to every request it generates.
        for request_or_item in CrawlSpider.parse(self, response):
            if isinstance(request_or_item, Request):
                request_or_item = request_or_item.replace(meta={'start_url': response.meta['start_url']})
            yield request_or_item

    def make_requests_from_url(self, url):
        """A method that receives a URL and returns a Request object (or a list of Request objects) to scrape.
        This method is used to construct the initial requests in the start_requests() method,
        and is typically used to convert urls to requests.
        """
        return Request(url, dont_filter=True, meta={'start_url': url})

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = MyItem()
        item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
        item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()
        item['start_url'] = response.meta['start_url']
        return item

Ask if you have any questions. By the way, using PyDev's 'Go to definition' feature you can look at the Scrapy sources and understand what parameters Request, make_requests_from_url and other classes and methods expect. Getting into the code helps and saves you time, even though it might seem difficult at the beginning.



Answer 2:

If I understand the problem correctly, you can get the url from response.url and then write it to item['url'].

In the spider: item['url'] = response.url

And in the pipeline: url = item['url'].

Or put response.url into meta, as warvariuc wrote.
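
A minimal sketch of that approach (the spider, item and pipeline names are illustrative, and the pipeline would still have to be enabled in ITEM_PIPELINES):

import scrapy


class LinkItem(scrapy.Item):
    url = scrapy.Field()


class LinkSpider(scrapy.Spider):
    name = 'links'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        item = LinkItem()
        # Note: response.url is the final url, i.e. after any redirections.
        item['url'] = response.url
        yield item


class SaveLinkPipeline(object):
    def process_item(self, item, spider):
        url = item['url']  # read the url back out in the pipeline
        # ... save `url` to the database here ...
        return item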



Answer 3:

Looks like warvariuc's answer requires a slight modification as of Scrapy 1.3.3: you need to override _parse_response instead of parse. Overriding make_requests_from_url is no longer necessary.
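
An untested sketch of that variant, assuming the private _parse_response(self, response, callback, cb_kwargs, follow=True) signature from Scrapy 1.3.3; here the start url is taken from response.url on the first response (so no make_requests_from_url override) and then forwarded via meta:

from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        Rule(LinkExtractor(allow=(r'item\.php', )), callback='parse_item'),
    )

    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # Responses to the initial requests carry no start_url in meta yet,
        # so fall back to response.url; afterwards keep forwarding it.
        start_url = response.meta.get('start_url', response.url)
        for request_or_item in super(MySpider, self)._parse_response(
                response, callback, cb_kwargs, follow):
            if isinstance(request_or_item, Request):
                request_or_item = request_or_item.replace(
                    meta=dict(request_or_item.meta, start_url=start_url))
            yield request_or_item

    def parse_item(self, response):
        start_url = response.meta.get('start_url', response.url)
        self.log('start url for %s: %s' % (response.url, start_url))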