I have a problem with my spider. I use Splash with Scrapy to get the link to the "Next page", which is generated by JavaScript. After downloading the information from the first page, I want to download information from the following pages, but the LinkExtractor does not work properly; it looks like the start_requests function isn't working at all. Here is the code:
class ReutersBusinessSpider(CrawlSpider):
    name = 'reuters_business'
    allowed_domains = ["reuters.com"]
    start_urls = (
        'http://reuters.com/news/archive/businessNews?view=page&page=1',
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def use_splash(self, request):
        request.meta['splash'] = {
            'endpoint': 'render.html',
            'args': {
                'wait': 0.5,
            }
        }
        return request

    def process_value(value):
        m = re.search(r'(\?view=page&page=[0-9]&pageSize=10)', value)
        if m:
            return urlparse.urljoin('http://reuters.com/news/archive/businessNews', m.group(1))

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[@class="pageNext"]', process_value='process_value'), process_request='use_splash', follow=False),
        Rule(LinkExtractor(restrict_xpaths='//h2/*[contains(@href,"article")]', process_value='process_value'), callback='parse_item'),
    )

    def parse_item(self, response):
        l = ItemLoader(item=PajaczekItem(), response=response)
        l.add_xpath('articlesection', '//span[@class="article-section"]/text()', MapCompose(unicode.strip), Join())
        l.add_xpath('date', '//span[@class="timestamp"]/text()', MapCompose(parse))
        l.add_value('url', response.url)
        l.add_xpath('articleheadline', '//h1[@class="article-headline"]/text()', MapCompose(unicode.title))
        l.add_xpath('articlelocation', '//span[@class="location"]/text()')
        l.add_xpath('articletext', '//span[@id="articleText"]//p//text()', MapCompose(unicode.strip), Join())
        return l.load_item()
Logs:
2016-02-12 08:20:29 [scrapy] INFO: Spider opened
2016-02-12 08:20:29 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-02-12 08:20:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-02-12 08:20:38 [scrapy] DEBUG: Crawled (200) <POST localhost:8050/render.html> (referer: None)
2016-02-12 08:20:38 [scrapy] DEBUG: Filtered offsite request to 'localhost': <GET http://localhost:8050/render.html?page=2&pageSize=10&view=page>
2016-02-12 08:20:38 [scrapy] INFO: Closing spider (finished)
Where is the mistake? Thanks for the help.
At a quick glance, you're not sending your start_requests through Splash... for example, you should be using SplashRequest.

Given that you have Splash set up appropriately, that is, in settings you have enabled the necessary middlewares, pointed SPLASH_URL at the correct address, and configured the dupe filter and HTTP cache correctly (no, I have not run your code), you should be good to go.
EDIT: BTW... the next-page link is not JS-generated.

So... unless there is some other reason you're using Splash, I see no reason to use it. A simple for loop in the initial parsing of the article requests would do, like...