My Scrapy project "drills down" from list pages, retrieving data for the listed items at varying depths, up to several levels deep. There can be many pages of listed items, with a handful of different items/links on each page. I'm collecting details on each item (and storing them in a single CSV file for Excel) from: the page it is listed on, the page that list entry links to (the "more details" page), and yet another page beyond that - say, the original listing by the item's manufacturer.
Because I am building a CSV file, it would be VERY helpful to get each item's data onto a single line before my parse process moves along to the next item. I could do that cleanly if only I could launch a Request on demand, while I am still writing the CSV line for that item on the list page it appears on. I would simply "drill down" as many levels as needed, with a different parse function for each level if necessary, staying with a single item all the way until I have the complete CSV line it needs.
Instead of it being that easy, it appears I am going to have to rewrite the CSV file for EVERY item at EVERY level, because Scrapy doesn't give me the responses for an item's "more details" links until I have exited the entire parse function for the page listing the items. By then the end of my CSV file is no longer at the item being processed, so I have to put a unique field on each line, look each item up again at every level, rewrite the file, and so on.
Understand, I can't know which callback level will be the last one for any particular item; that is determined on an item-by-item basis, and some items won't have "deeper" levels at all. The only idea I have left is a single recursive callback function that handles all callback levels. Is that the way the rest of you do this kind of thing, or does Scrapy have some means of "issue a Request and wait for the response", or something similar? I'm not wanting to install an SQL database on my laptop, never having set one up before.
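To make that last idea concrete, here is roughly the shape I have in mind - an untested sketch, where the "more details" XPath is just a placeholder and the list-page parse would seed the first Request's meta with the item:

    def parse_level(self, response):
        # one callback handles every level; the item rides along in meta
        item = response.meta['item']
        sel = Selector(response)
        # ...collect whatever fields this level has into item here...
        deeper_href = sel.xpath("//a[@class='more-details']/@href").extract()
        if deeper_href:
            # this item goes one level deeper, carrying its partial data
            yield Request(self.prefixhref + deeper_href[0], meta={'item': item},
                          callback=self.parse_level, dont_filter=True)
        else:
            # no deeper level: this item's CSV line is finally complete
            yield item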
Thank you!!!
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.exporter import CsvItemExporter
import csv
from meow.items import meowItem, meowPage
from scrapy.http import Request
import os
from mmap import mmap
class meowlistpage(Spider):
name="melist"
prefixhref='http://www.meow.com'
#add '2_p/', '3_p/', or '4_p/', etc. to get to meow's other pages
start_urls = [prefixhref+"/homes/for_sale/CO/house,mobile,land_type/10_rid/3000-30000_price/11-117_mp/800000-8000000_lot/lot_sort/46.377254,-96.82251,30.845647,-114.312744_rect/5_zm/1_p/1_rs/"]
print 'Retrieving first page...'
def parse(self, response):
print 'First page retrieved'
name="melist";prefixhref='http://www.meow.com';
csvfilename = 'C:\\Python27\\My scripts\\meow\\'+name+'.csv';csvfile = open(csvfilename, 'w');pass;csvfile.close()
hxs = Selector(response)
page_tags=hxs.xpath("//div[@id='search-results']/article")
for page_tags in page_tags:
item = meowItem()
item['ad_link']=prefixhref+str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/@href").extract())[3:-2]
idendplace=str(item['ad_link']).index('_zpid')-12; backhashstr=str(item['ad_link'])[idendplace:];
idstartplace=backhashstr.index('/')+1; idendplace=len(backhashstr)-backhashstr.index('_zpid');
item['zpid']=str(backhashstr)[idstartplace:-idendplace]
item['sale_sold']=str(page_tags.xpath(".//div[1]/dl[1]/dt[1]/@class").extract())[8:-17]#"recentlySold" or "forSale"
item['prop_price']=str(page_tags.xpath(".//div[1]/dl[1]/dt[2]/strong/text()").extract())[3:-2]
if (str(item['sale_sold'])=='recentlySold'):item['prop_price']=str(item['prop_price'])+str(page_tags.xpath(".//div[1]/dl[1]/dt[1]/strong/text()").extract())[3:-2]
try:
dollrsgn=item['prop_price'].index('$');item['prop_price']=str(item['prop_price'])[dollrsgn:]
except:pass
item['ad_title']=str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/@title").extract())[3:-2]
prop_latitude1=page_tags.xpath("@latitude").extract();item['prop_latitude']=str(prop_latitude1)[3:-8]+'.'+str(prop_latitude1)[5:-2]
prop_longitude1=page_tags.xpath("@longitude").extract();item['prop_longitude']=str(prop_longitude1)[3:-8]+'.'+str(prop_longitude1)[7:-2]
item['prop_address']=str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/span[1]/text()").extract())[3:-2]+', '+str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/span[2]/text()").extract())[3:-2]+', '+str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/span[3]/text()").extract())[3:-2]+' '+str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/span[4]/text()").extract())[3:-2]
mightmentionacres = str(page_tags.xpath(".//div[1]/dl[2]/dt[2]/text()").extract())[3:-2]+' | '+str(page_tags.xpath(".//div[1]/dl[2]/dt[2]/text()").extract())[3:-2]+' | '+str(page_tags.xpath(".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/@title").extract())[3:-2]+' | '#+str()[3:-2]#this last segment comes from full ad
item['prop_acres'] = mightmentionacres
#Here is where I'm talking about
            yield Request(str(item['ad_link']), meta={'csvfilename': csvfilename, 'item': item},
                          dont_filter=True, callback=self.getthispage)
            # By this point I wanted all the callbacks to have executed, but they
            # haven't - Scrapy waits to launch them until after this function completes
csvfile = open(csvfilename, 'ab')
outwriter = csv.writer(csvfile, delimiter=';', quotechar='|', quoting=csv.QUOTE_MINIMAL)
            outwriter.writerow([item['zpid'], item['sale_sold'], item['prop_price'], item['ad_title'],
                                item['prop_address'], item['prop_latitude'],
                                item['prop_longitude'], item['prop_acres'],
                                item['ad_link']])  # 'parcelnum' and 'lot_width' aren't available until the deeper callbacks run
csvfile.close()
#retrieve href of next page of ads
next_results_pg=1
page_tags=hxs.xpath("//div[@id='list-container']/div[@id='search-pagination-wrapper-2']/ul[1]")
while (str(page_tags.xpath(".//li["+str(next_results_pg)+"]/@class").extract())[3:-2]!='current'):
next_results_pg+=1;
if (next_results_pg>80):
break
next_results_pg+=1#;item['next_results_pg'] = next_results_pg
if (str(page_tags.xpath(".//li["+str(next_results_pg)+"]/@class").extract())[3:-2]=='next'):return
next_results_pg_href = prefixhref+str(page_tags.xpath(".//li["+str(next_results_pg)+"]/a/@href").extract())[3:-2]#
        if next_results_pg_href != prefixhref:  # also need to avoid launching pages otherwise not desired
            page = meowPage()
            page['next_results_pg_href'] = next_results_pg_href
            print 'Retrieving page ' + next_results_pg_href
            # yield Request(next_results_pg_href, dont_filter=True, callback=self.parse)
        return
# if (item['next_results_pg_href']==prefixhref):
# print 'No results pages found after this one, next+results_pg='+str(next_results_pg)
# else:
# print 'Next page to parse after this one is '+str(item['next_results_pg_href'])
    def getthispage(self, response):
        # Even though the yield statement was used, nothing here actually
        # executes until the first parse function resumes and then
        # finishes completely.
        return
My solution, using the standard sqlite3 module that is packaged with Python 2.7:
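In outline (a minimal sketch of the approach, with placeholder table and column names; the real version folds this into the spider's callbacks):

    # Minimal sketch only -- the table layout and column names are placeholders.
    # sqlite3 ships with Python 2.7, so nothing extra has to be installed.
    import csv
    import sqlite3

    conn = sqlite3.connect('meow.sqlite')
    conn.execute("CREATE TABLE IF NOT EXISTS listings "
                 "(zpid TEXT PRIMARY KEY, prop_price TEXT, parcelnum TEXT, lot_width TEXT)")

    def save_fields(zpid, **fields):
        # called from any callback level: creates the row the first time the
        # item is seen, then fills in whichever columns that level collected
        conn.execute("INSERT OR IGNORE INTO listings (zpid) VALUES (?)", (zpid,))
        for column, value in fields.items():
            # column names come from my own code, never from scraped data,
            # so string interpolation is acceptable here
            conn.execute("UPDATE listings SET %s = ? WHERE zpid = ?" % column,
                         (value, zpid))
        conn.commit()

    def dump_csv(path):
        # one pass when the spider is done writes each item as a single line
        with open(path, 'wb') as csvfile:
            outwriter = csv.writer(csvfile, delimiter=';')
            for row in conn.execute(
                    "SELECT zpid, prop_price, parcelnum, lot_width FROM listings"):
                outwriter.writerow(row)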
I've rearranged your spider code a bit to make the "item in meta" handoff a bit clearer (I hope).
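In outline it looks like this (placeholder XPaths; your real extraction code goes where the comments are, and only the last callback for an item yields it):

    from scrapy.spider import Spider
    from scrapy.selector import Selector
    from scrapy.http import Request
    from meow.items import meowItem

    class meowlistpage(Spider):
        name = "melist"
        prefixhref = 'http://www.meow.com'
        start_urls = [prefixhref + "/homes/for_sale/..."]  # same start URL as before

        def parse(self, response):
            # level 1: the list page
            for page_tag in Selector(response).xpath("//div[@id='search-results']/article"):
                item = meowItem()
                # ...fill in zpid, price, address, etc. from page_tag as before...
                item['ad_link'] = self.prefixhref + page_tag.xpath(
                    ".//div[1]/dl[2]/dt[1]/span[1]/span[1]/a/@href").extract()[0]
                # hand the half-finished item to the next level instead of
                # writing any CSV here
                yield Request(item['ad_link'], meta={'item': item},
                              dont_filter=True, callback=self.parse_details)
            # (the Request for the next results page would also be yielded
            # here, with callback=self.parse)

        def parse_details(self, response):
            # level 2: the "more details" page
            item = response.meta['item']
            # ...fill in parcelnum, lot_width, etc....
            deeper = Selector(response).xpath("//a[@id='manufacturer-link']/@href").extract()
            if deeper:
                yield Request(self.prefixhref + deeper[0], meta={'item': item},
                              dont_filter=True, callback=self.parse_manufacturer)
            else:
                yield item  # complete item -> one CSV row from the feed exporter

        def parse_manufacturer(self, response):
            # level 3 (only for items that have one)
            item = response.meta['item']
            # ...fill in the manufacturer fields...
            yield item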
Invoking your spider with
scrapy crawl melist -o melist_items.csv -t csv
should give you your items in CSV format
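Because the feed exporter writes one row for every item the spider yields, the spider never has to open the CSV file itself, and since each item is yielded exactly once - at whatever level turns out to be its last - every item ends up on exactly one line. If you need a fixed column order, newer Scrapy versions let you set FEED_EXPORT_FIELDS in settings.py; otherwise the exporter takes the column list from the first item it sees.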