scrapy crawl spider ajax pagination

Posted 2020-06-22 21:55

I was trying to scrape a link that uses an AJAX call for pagination. I am trying to crawl the http://www.demo.com link, and in my .py file I provided this code for restrict_xpaths:

# -*- coding: utf-8 -*-
import scrapy

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from sum.items import sumItem

class Sumspider1(CrawlSpider):
    name = 'sumDetailsUrls'
    allowed_domains = ['sum.com']
    start_urls = ['http://www.demo.com']
    rules = (
        Rule(LinkExtractor(restrict_xpaths='.//ul[@id="pager"]/li[8]/a'), callback='parse_start_url', follow=True),
    )

    # override parse_start_url so the spider also parses the first page
    def parse_start_url(self, response):
        print '********************************************1**********************************************'
        #//div[@class="showMoreCars hide"]/a
        #.//ul[@id="pager"]/li[8]/a/@href
        self.log('Inside - parse_item %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = sumItem()
        item['page'] = response.url
        title = hxs.xpath('.//h1[@class="page-heading"]/text()').extract() 
        print '********************************************title**********************************************',title
        urls = hxs.xpath('.//a[@id="linkToDetails"]/@href').extract()
        print '**********************************************2***url*****************************************',urls

        finalurls = []       

        for url in urls:
            print '---------url-------',url
            finalurls.append(url)          

        item['urls'] = finalurls
        return item

My items.py file contains

from scrapy.item import Item, Field


class sumItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    page = Field()
    urls = Field()

Still, I'm not getting the expected output; the spider is not able to fetch all the pages when I crawl it.

2 Answers

男人必须洒脱 · 2020-06-22 22:12

I hope the code below helps.

somespider.py

# -*- coding: utf-8 -*-
import scrapy
import re
from scrapy.selector import Selector
from demo.items import DemoItem
from selenium import webdriver

def removeUnicodes(strData):
    # encode to utf-8 and collapse newlines/tabs into single spaces
    if strData:
        strData = strData.encode('utf-8').strip()
        strData = re.sub(r'[\n\r\t]', r' ', strData.strip())
    return strData

class demoSpider(scrapy.Spider):
    name = "domainurls"
    allowed_domains = ["domain.com"]
    start_urls = ['http://www.domain.com/used/cars-in-trichy/']

    def __init__(self):
        # connect to a running Selenium RC / standalone server (see the note below)
        self.driver = webdriver.Remote("http://127.0.0.1:4444/wd/hub", webdriver.DesiredCapabilities.HTMLUNITWITHJS)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(5)
        hxs = Selector(response)  # static HTML response; only used for the page heading
        item = DemoItem()
        finalurls = []
        while True:
            try:
                # keep clicking "show more cars"; once the button disappears,
                # find_element raises and we fall through to the except below
                next = self.driver.find_element_by_xpath('//div[@class="showMoreCars hide"]/a')
                next.click()
                # get the data and write it to scrapy items
                item['pageurl'] = response.url
                item['title'] =  removeUnicodes(hxs.xpath('.//h1[@class="page-heading"]/text()').extract()[0])
                urls = self.driver.find_elements_by_xpath('.//a[@id="linkToDetails"]')

                for url in urls:
                    url = url.get_attribute("href")
                    finalurls.append(removeUnicodes(url))          

                item['urls'] = finalurls

            except:
                # no more "show more cars" button: all pages are loaded
                break

        self.driver.close()
        return item

items.py

from scrapy.item import Item, Field

class DemoItem(Item):
    page = Field()
    urls = Field()
    pageurl = Field()
    title = Field()

Note: You need a Selenium RC server running, because HTMLUNITWITHJS works only through Selenium RC when used from Python.

Start your Selenium RC server with the command:

java -jar selenium-server-standalone-2.44.0.jar

Run your spider with the command:

scrapy crawl domainurls -o someoutput.json
爷、活的狠高调 · 2020-06-22 22:23

You can check with your browser how the requests are made.

Behind the scenes, right after you click that "show more cars" button, your browser requests JSON data to feed the next page. You can take advantage of this fact and deal directly with the JSON data, without needing a JavaScript engine such as Selenium or PhantomJS.

In your case, as a first step, you should simulate a user scrolling down the page given by your start_url and, at the same time, profile your network requests to discover the endpoint the browser uses to request that JSON. To discover this endpoint there is usually an XHR (XMLHttpRequest) section in the browser's profiling tool, as in Safari, where you can navigate through all the resources/endpoints used to request the data.

Once you discover this endpoint, it's a straightforward task: you give your spider the endpoint you just discovered as its start_url, and as you process and navigate through the JSON responses you can work out whether there is a next page to request.

P.S.: I checked for you; the endpoint URL is http://www.carwale.com/webapi/classified/stockfilters/?city=194&kms=0-&year=0-&budget=0-&pn=2

In this case my browser requested the second page, as you can see in the pn parameter. It is important that you set some header parameters before you send the request. I noticed that in your case the headers are:

Accept: text/plain, */*; q=0.01
Referer: http://www.carwale.com/used/cars-in-trichy/
X-Requested-With: XMLHttpRequest
sourceid: 1
User-Agent: Mozilla/5.0...
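
A minimal sketch of that approach could look like the following. This assumes a reasonably recent Scrapy; the JSON key names ('stocks', 'url') and the stop condition are placeholders I made up, so inspect the real payload in the browser to find the actual structure.

# -*- coding: utf-8 -*-
import json

import scrapy


class StockSpider(scrapy.Spider):
    name = "stockjson"
    allowed_domains = ["carwale.com"]

    # headers copied from the XHR request observed in the browser
    headers = {
        'Accept': 'text/plain, */*; q=0.01',
        'Referer': 'http://www.carwale.com/used/cars-in-trichy/',
        'X-Requested-With': 'XMLHttpRequest',
        'sourceid': '1',
        'User-Agent': 'Mozilla/5.0',
    }

    # endpoint discovered in the network profiler; pn selects the page
    base_url = ('http://www.carwale.com/webapi/classified/stockfilters/'
                '?city=194&kms=0-&year=0-&budget=0-&pn=%d')

    def start_requests(self):
        yield scrapy.Request(self.base_url % 1, headers=self.headers,
                             meta={'pn': 1})

    def parse(self, response):
        data = json.loads(response.text)

        # NOTE: 'stocks' and 'url' are illustrative key names; check the
        # real JSON in the browser to find the actual fields.
        stocks = data.get('stocks', [])
        for stock in stocks:
            yield {'url': stock.get('url')}

        # keep paging until the endpoint returns no more entries
        if stocks:
            pn = response.meta['pn'] + 1
            yield scrapy.Request(self.base_url % pn, headers=self.headers,
                                 meta={'pn': pn})

You would then run it like any other spider, e.g. scrapy crawl stockjson -o stocks.json.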
