Scrapy: Program organization when interacting with a secondary site

Posted 2019-09-12 13:07

I'm working with Scrapy 1.1 on a project where spider '1' scrapes site A (where I acquire 90% of the information needed to fill my items). Depending on the results of the site A scrape, however, I may need to scrape additional information from site B. In terms of program organization, does it make more sense to scrape site B within spider '1', or would it be possible to interact with site B from within a pipeline object? I prefer the latter, since it decouples the scraping of the two sites, but I'm not sure whether that's possible or the best way to handle this use case. Another approach might be to use a second spider (spider '2') for site B, but then I assume I would have to let spider '1' run, save to the db, and then run spider '2'. Any advice would be appreciated.

Tags: python scrapy
1 Answer
走好不送
#2 · 2019-09-12 14:03

Both approaches are very common, and this is just a question of preference. For your case, containing everything in one spider sounds like a straightforward solution.
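For illustration, here is a minimal sketch of that single-spider approach (the URLs, selectors, and field names are made up): parse site A first, and only follow up with a request to site B when the item is missing something, carrying the partial item along in meta.

import scrapy

class SpiderOne(scrapy.Spider):
    name = 'spider1'
    start_urls = ['http://site-a.example.com/']  # hypothetical start URL

    def parse(self, response):
        # fill the item with the 90% that site A provides
        item = {'title': response.css('h1::text').extract_first()}
        # hypothetical condition: site A links out to extra data on site B
        site_b_url = response.css('a.more::attr(href)').extract_first()
        if site_b_url:
            yield scrapy.Request(response.urljoin(site_b_url),
                                 callback=self.parse_site_b,
                                 meta={'item': item})
        else:
            yield item

    def parse_site_b(self, response):
        # complete the item with the data from site B
        item = response.meta['item']
        item['extra'] = response.css('p.extra::text').extract_first()
        yield item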

Alternatively, you can add a url field to your item and then schedule and parse it later in the pipeline:

from scrapy import Request
from scrapy.exceptions import DropItem


class MyPipeline(object):
    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_item(self, item, spider):
        extra_url = item.get('extra_url', None)
        if not extra_url:
            return item
        req = Request(url=extra_url,
                      callback=self.custom_callback,
                      meta={'item': item})
        # schedule the request directly on the running engine
        self.crawler.engine.crawl(req, spider)
        # you have to drop the item here since you will return it later anyway
        raise DropItem()

    def custom_callback(self, response):
        # retrieve the partially filled item
        item = response.meta['item']
        # do something to add to the item
        item['some_extra_stuff'] = ...
        # remove extra_url so the item passes through the pipeline this time
        del item['extra_url']
        yield item

What the code above does is check whether the item has an extra_url field; if it does, it drops the item and schedules a new request. That request fills the item with the extra data and sends it back through the pipeline.
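To round this out, the spider only has to set extra_url when site B needs to be visited, and the pipeline has to be registered in the settings. A sketch, with hypothetical item and project names:

# items.py -- assumed item definition with the fields used above
import scrapy

class MyItem(scrapy.Item):
    title = scrapy.Field()
    extra_url = scrapy.Field()        # set only when site B must be scraped
    some_extra_stuff = scrapy.Field()

# in the spider's callback (sketch):
#     item = MyItem(title=...)
#     if some_condition:
#         item['extra_url'] = site_b_url
#     yield item

# settings.py -- enable the pipeline, e.g.:
# ITEM_PIPELINES = {'myproject.pipelines.MyPipeline': 300}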
