So, my problem is relatively simple. I have one spider crawling multiple sites, and I need it to return the data in the order the urls are written in my code, which is posted below.
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from mlbodds.items import MlboddsItem

class MLBoddsSpider(BaseSpider):
    name = "sbrforum.com"
    allowed_domains = ["sbrforum.com"]
    start_urls = [
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@id="col_3"]//div[@id="module3_1"]//div[@id="moduleData4952"]')
        items = []
        for site in sites:
            item = MlboddsItem()
            item['header'] = site.select('//div[@class="scoreboard-bar"]//h2//span[position()>1]//text()').extract()
            item['game1'] = site.select('/*//table[position()=1]//tr//td[@class="tbl-odds-c2"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c4"]//text() | /*//table[position()=1]//tr//td[@class="tbl-odds-c6"]//text()').extract()
            items.append(item)
        return items
The results are returned in a random order; for example, it returns the 29th, then the 28th, then the 30th. I've tried changing the scheduler order from DFO to BFO, just in case that was the problem, but that didn't change anything.
Thanks in advance.
There is a much easier way to make scrapy follow the order of start_urls: just uncomment the concurrent-requests line in settings.py and change it to 1.
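Concretely, assuming the standard setting name from the generated settings.py template, that means:

# settings.py
# With only one request in flight at a time, responses come back in the
# order the requests were scheduled, so the start_urls order is preserved.
CONCURRENT_REQUESTS = 1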
start_urls defines the urls which are used in the start_requests method. Your parse method is called with a response for each start url when that page has been downloaded. But you cannot control loading times: the first start url might be the last to arrive at parse.
A solution: override the start_requests method and add a meta with a priority key to the generated requests. In parse, extract this priority value and add it to the item. In the pipeline, do something based on this value. (I don't know why and where you need these urls to be processed in this order.)

Or make it kind of synchronous: store these start urls somewhere, put only the first of them in start_urls, and in parse process the first response and yield the item(s), then take the next url from your storage and make a request for it with parse as the callback. Sketches of both approaches appear after the link below.
I doubt it's possible to achieve what you want otherwise, unless you play with scrapy internals. There are some similar discussions on the scrapy-users google group, e.g.
http://groups.google.com/group/scrapy-users/browse_thread/thread/25da0a888ac19a9/1f72594b6db059f4?lnk=gst
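A minimal sketch of the priority-meta approach; the spider name and the extra priority field on the item are illustrative additions, not from the original post:

from scrapy.http import Request
from scrapy.spider import BaseSpider
from mlbodds.items import MlboddsItem

class PriorityMetaSpider(BaseSpider):
    name = "prioritymeta"
    start_urls = [
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/",
    ]

    def start_requests(self):
        # record each url's position so a pipeline can restore the order later
        for index, url in enumerate(self.start_urls):
            yield Request(url, meta={'priority': index}, callback=self.parse)

    def parse(self, response):
        item = MlboddsItem()
        item['priority'] = response.meta['priority']  # assumes a priority Field on the item
        # ... fill header/game1 exactly as in the question ...
        yield item

A pipeline can then buffer the items and sort them by item['priority'] before writing them out. And a sketch of the synchronous variant, where pending_urls is a made-up name for the storage:

from scrapy.http import Request
from scrapy.spider import BaseSpider

class ChainedSpider(BaseSpider):
    name = "chained"
    # every url except the first, in the desired order
    pending_urls = [
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/",
    ]
    start_urls = ["http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/"]

    def parse(self, response):
        # ... yield the item(s) extracted from this response ...
        if self.pending_urls:
            # request the next url only after the current one has been parsed
            yield Request(self.pending_urls.pop(0), callback=self.parse)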
The google group discussion suggests using the priority attribute in the Request object. Scrapy guarantees the urls are crawled in DFO by default, but it does not ensure that the urls are visited in the order they were yielded within your parse callback.

Instead of yielding Request objects, you want to return a list of Requests from which objects are popped until it is empty.

Can you try something like that? For instance:
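A minimal start_requests override to paste into the question's spider; Request's priority keyword is part of scrapy (higher values are dequeued earlier), the arrangement around it is only a sketch:

from scrapy.http import Request

def start_requests(self):
    # earlier urls get higher priority, so the scheduler hands them out first
    total = len(self.start_urls)
    for index, url in enumerate(self.start_urls):
        yield Request(url, priority=total - index, callback=self.parse)

Note that priority only orders scheduling; with concurrent downloads the responses can still finish out of order, so this is most effective combined with CONCURRENT_REQUESTS = 1.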
Personally, I like @user1460015's implementation after I managed to get my own workaround solution working.

My solution is to use a Python subprocess to call scrapy url by url until all urls have been taken care of.

In my code, if the user does not specify that the urls should be parsed sequentially, we can start the spider in the normal way. If the user specifies that it needs to be done sequentially, we can do this:
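A sketch of that branch, assuming the spider accepts the url to crawl as an -a spider argument named start_url (a hypothetical name, not from the original answer):

import subprocess

urls = [
    "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
    "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
    "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/",
]
for url in urls:
    # run one blocking crawl per url; the next run starts only after
    # the previous scrapy process has exited (no error checking here)
    subprocess.call(['scrapy', 'crawl', 'sbrforum.com', '-a', 'start_url=' + url])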
Note that this implementation does not handle errors.
Of course you can control it. The top secret is how you feed the greedy engine/scheduler. Your requirement is just a little one: see the list named task_urls added in the sketch below.
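A sketch of the idea, reusing the question's spider; only the task_urls name comes from the answer, the wiring around it is an assumption:

from scrapy.http import Request
from scrapy.spider import BaseSpider

class MLBoddsSpider(BaseSpider):
    name = "sbrforum.com"
    allowed_domains = ["sbrforum.com"]
    # the full crawl order lives here instead of in start_urls
    task_urls = [
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110328/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110329/",
        "http://www.sbrforum.com/mlb-baseball/odds-scores/20110330/",
    ]
    # feed the greedy engine only the first url
    start_urls = task_urls[:1]

    def parse(self, response):
        # ... extract and yield items exactly as in the question ...
        # then schedule the next task url, if any remain
        # (assumes no redirect has changed response.url)
        index = self.task_urls.index(response.url)
        if index + 1 < len(self.task_urls):
            yield Request(self.task_urls[index + 1], callback=self.parse)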
If you want a more complicated case, please see my project: https://github.com/wuliang/TiebaPostGrabber