How to scrape the items loaded via a “view more” button

Published 2019-06-12 07:16

Here is the markup of the "View More" button on a website, taken from the browser's element inspector. I can crawl the data that is shown on the page initially, but I also want to crawl the items that are hidden behind the "View More" button. How do I do that?

    <div id="view-more" class="p20px pt10px">
        <div id="view-more-loader" class="tac"></div>
        <a href="javascript:void(0);" onclick="add_more_product_classified();$('#load_more_a_id').hide();" class="xxxxlarge ffrc lightbginfo gbiwb bdr darkbdrinfo p10px20px db w180px m0a tac" id="load_more_a_id" style="display: block;"><b class="icon-refresh xsmall mr5px"></b>View More Products..</a>
    </div>

My Scrapy code:

import scrapy


class DummymartSpider(scrapy.Spider):
    name = 'dummymart'
    # allowed_domains must match the crawled domain, otherwise follow-up
    # requests get dropped by the offsite middleware.
    allowed_domains = ['dummymart.com']
    start_urls = ['https://www.dummymart.com/catalog/car-dvd-player_cid100001018.html']

    def parse(self, response):
        products = response.xpath('//div[@class="attr"]/h2/a/@title').extract()
        companies = response.xpath('//div[@class="supplier"]/p/a/@title').extract()
        countries = response.xpath('//*[@class="location a-color-secondary"]/span/text()').extract()
        categories = response.xpath('//*[@class="attr category hide--mobile"]/span/a/text()').extract()

        # Only items present in the initial HTML are paired up here; anything
        # loaded later by the "view more" button never reaches this response.
        for product, company, country, category in zip(products, companies, countries, categories):
            yield {
                'Product': product,
                'Company': company,
                'Country': country,
                'Category': category,
            }

1 Answer
Evening l夕情丶
#2 · 2019-06-12 07:30

The usual solution for a problem like this is:

  1. Fire up the Developer Tools in your browser;
  2. Go to the Network panel so that you can view the requests made by your browser;
  3. Click the "view more" button on the page and check which request your browser made to fetch the data;
  4. Make the same request from your spider (a sketch follows below).

This blog post may help you.
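
For step 4, here is a minimal sketch of what that usually looks like in Scrapy, assuming the button turns out to fire a plain GET request to a paginated URL. The page_url pattern below is a made-up placeholder; substitute whatever endpoint the Network panel actually shows for your site:

import scrapy


class DummymartSpider(scrapy.Spider):
    name = 'dummymart'
    allowed_domains = ['dummymart.com']
    # Hypothetical endpoint: replace it with the real URL you see in the
    # Network panel when you click "View More Products..".
    page_url = 'https://www.dummymart.com/catalog/car-dvd-player_cid100001018.html?page={}'

    def start_requests(self):
        yield scrapy.Request(self.page_url.format(1), meta={'page': 1})

    def parse(self, response):
        page = response.meta['page']
        products = response.xpath('//div[@class="attr"]/h2/a/@title').extract()
        companies = response.xpath('//div[@class="supplier"]/p/a/@title').extract()
        countries = response.xpath('//*[@class="location a-color-secondary"]/span/text()').extract()
        categories = response.xpath('//*[@class="attr category hide--mobile"]/span/a/text()').extract()

        for product, company, country, category in zip(products, companies, countries, categories):
            yield {
                'Product': product,
                'Company': company,
                'Country': country,
                'Category': category,
            }

        # As long as a page returns items, request the next one -- this is
        # the spider-side equivalent of clicking "view more" repeatedly.
        if products:
            yield scrapy.Request(self.page_url.format(page + 1),
                                 meta={'page': page + 1})

If the Network panel shows a POST instead (common for onclick handlers like add_more_product_classified()), use scrapy.FormRequest with the form data the browser sent; the paging loop stays the same.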
