The website that I am crawling contains many players, and when I click on any player, I can go to his page.
The website structure is like this:
<main page>
<link to player 1>
<link to player 2>
<link to player 3>
..
..
..
<link to player n>
</main page>
And when I click on any link, I go to the player's page, which is like this:
<player name>
<player team>
<player age>
<player salary>
<player date>
I want to scrape all the players whose age is between 20 and 25 years.
What I am doing
1. Scrape the main page using the first spider.
2. Get the links using the first spider.
3. Crawl each link using the second spider.
4. Get the player information using the second spider.
5. Save this information to a JSON file using a pipeline (a minimal sketch follows).
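For step 5, the pipeline is a straightforward JSON writer. A minimal sketch (the file name and class name are mine):

```python
import json

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        self.file = open('players.json', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # write each player as one JSON object per line
        self.file.write(json.dumps(dict(item)) + '\n')
        return item
```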
My question
How can I return the `date` value from the second spider to the first spider?
What I have tried
I built my own middleware and overrode `process_spider_output`. It allows me to print the request, but I don't know what else I should do in order to return that `date` value to my first spider.
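Roughly what the middleware looks like (the class name is mine; the rest is what I described):

```python
from scrapy.http import Request

class MyMiddleware(object):
    def process_spider_output(self, response, result, spider):
        for element in result:
            if isinstance(element, Request):
                print(element)  # I can print the request here...
            yield element       # ...but how do I get its date back to the first spider?
```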
Any help is appreciated.
Edit
Here is some of the code:
```python
from scrapy.http import Request
from scrapy.selector import Selector

def parse(self, response):
    sel = Selector(response)
    container = sel.css('div[MyDiv]')
    for player in container:
        # extract LINK and TITLE from the player element
        yield Request(LINK, meta={'Title': TITLE}, callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE into player
    return player
```
Something like (based on Robin's answer):
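Roughly, the chaining I have in mind (the extraction steps are left as comments):

```python
def parse(self, response):
    links = []  # extract all player LINKs from the main page into this list
    # queue only the first player; each callback re-queues the next one
    if links:
        yield Request(links[0],
                      meta={'remaining': links[1:]},
                      callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE and the other fields into player
    yield player
    remaining = response.meta['remaining']
    if remaining:  # follow the next player link, in order
        yield Request(remaining[0],
                      meta={'remaining': remaining[1:]},
                      callback=self.parsePlayer)
```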
There are two possible cases here:

1. You want to discard players outside a range of dates. All you need to do is check the `date` in `parsePlayer` and return only the relevant players (see the sketch after this list).
2. You want to scrape every link in order and stop when some date is reached. For example, if you have performance issues (you are scraping way too many links and you don't need the ones after some limit).
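For the first case, the check is a couple of lines (a sketch; `date_in_range` stands for whatever range test you need):

```python
def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE and the other fields into player
    if date_in_range(player['date']):  # placeholder range test
        return player                  # keep only the relevant players
    # players outside the range are dropped by returning nothing
```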
Given that Scrapy works with asynchronous requests, there is no really good way to do the second. The only way you have is to try to force linear behavior instead of the default parallel requests.
Let me explain. When you have two callbacks like that, by default Scrapy will first parse the first page (the main page) and put all the requests for the player pages in its queue. Without waiting for that first page to finish being scraped, it will start treating these requests for player pages (not necessarily in the order it found them).
Therefore, when you get the information that the player page `p` is out of date, Scrapy has already sent internal requests for `p+1`, `p+2` ... `p+m` (`m` is basically a random number) AND has probably started treating some of these requests, possibly even `p+1` before `p` (no fixed order, remember). So there is no way to stop exactly at the right page if you keep this pattern, and no way to interact with `parse` from `parsePlayer`.

What you can do is force it to follow the links in order, so that you have full control. The drawback is that this takes a big toll on performance: if Scrapy follows each link one after the other, it can't treat them simultaneously as it usually does, and it slows things down.
The code could be something like:
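A sketch of the idea (not tested; the CSS selector, the `date_in_range` helper and the single entry in `start_urls` are assumptions to adapt):

```python
def parse(self, response):
    # index of the next player link to follow, passed along in meta
    index = response.meta.get('index', 0)
    links = Selector(response).css('div[MyDiv] a::attr(href)').extract()
    if index < len(links):
        yield Request(links[index],
                      meta={'index': index},
                      callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE and the other fields into player
    if date_in_range(player['date']):  # placeholder for your date check
        yield player
        # go back to the main page and ask for the next player;
        # dont_filter=True lets us re-request the same URL
        yield Request(self.start_urls[0],
                      meta={'index': response.meta['index'] + 1},
                      callback=self.parse,
                      dont_filter=True)
    # once a date is out of range, no new request is made and the spider stops
```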
That way Scrapy will get the main page, then the first player, then the main page again, then the second player, then the main page, and so on, until it finds a date that doesn't fit the criteria. Then no callback to the main page is made and the spider stops.
This gets a little more complex if you also have to increment the index of the main page (if there are n main pages, for example), but the idea stays the same.
First of all, I want to thank @warwaruk and @Robin for helping me with this issue.
And the biggest thanks go to my great teacher @pault.
I found the solution and here is the algorithm:
In the callback for each player:
1. Extract the player's information.
2. Check whether the date is in the range. If not: do nothing. If yes: check whether this is the last player in the main page's player list; if it is, call back to the next main page.
Simple code:
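A sketch of that callback chain (the selector, the next-page URL and `date_in_range` are placeholders for my real code):

```python
def parse(self, response):
    links = Selector(response).css('div[MyDiv] a::attr(href)').extract()
    next_page = 'http://example.com/players?page=2'  # placeholder: the next main page URL
    for i, link in enumerate(links):
        # tell each callback whether it handles the last player of this page
        yield Request(link,
                      meta={'is_last': i == len(links) - 1,
                            'next_page': next_page},
                      callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract the player's information
    if date_in_range(player['date']):  # placeholder range check
        yield player
        # only the last player of a page decides whether to crawl the next page
        if response.meta['is_last']:
            yield Request(response.meta['next_page'], callback=self.parse)
```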
It works perfectly :)