Empty .json file

Published 2019-03-02 17:48

Question:

I have written this short spider to extract titles from the Hacker News front page (http://news.ycombinator.com/).

import scrapy

class HackerItem(scrapy.Item): #declaring the item
    hackertitle = scrapy.Field()


class HackerSpider(scrapy.Spider):
    name = 'hackernewscrawler'
    allowed_domains = ['news.ycombinator.com'] # website we chose
    start_urls = ['http://news.ycombinator.com/']

    def parse(self, response):
        sel = scrapy.Selector(response)  # selector to help us extract the titles
        item = HackerItem()  # the item declared above

        # xpath of the titles
        item['hackertitle'] = sel.xpath("//tr[@class='athing']/td[3]/a[@href]/text()").extract()

        # printing titles using a print statement.
        print(item['hackertitle'])

However, when I run `scrapy crawl hackernewscrawler -o hntitles.json -t json`,

I get an empty .json file with no content in it.

Answer 1:

You should change the print statement to yield:

import scrapy

class HackerItem(scrapy.Item): #declaring the item
    hackertitle = scrapy.Field()


class HackerSpider(scrapy.Spider):
    name = 'hackernewscrawler'
    allowed_domains = ['news.ycombinator.com'] # website we chose
    start_urls = ['http://news.ycombinator.com/']

    def parse(self, response):
        sel = scrapy.Selector(response)  # selector to help us extract the titles
        item = HackerItem()  # the item declared above

        # xpath of the titles
        item['hackertitle'] = sel.xpath("//tr[@class='athing']/td[3]/a[@href]/text()").extract()

        # yield the item
        yield item

Then run:

scrapy crawl hackernewscrawler -o hntitles.json -t json
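The reason this works: Scrapy iterates over whatever `parse()` yields and hands each item to the feed exporter, which serializes it into the output file. Anything merely print()-ed goes to stdout and is never collected. A minimal plain-Python sketch (no Scrapy; the function and variable names here are made up for illustration) showing the difference:

```python
def parse_with_print(titles):
    # Mimics the broken spider: output goes to stdout, nothing is returned.
    for t in titles:
        print({'hackertitle': t})

def parse_with_yield(titles):
    # Mimics the fixed spider: each item is handed back to the caller,
    # which is what lets Scrapy's feed exporter write it to the JSON file.
    for t in titles:
        yield {'hackertitle': t}

titles = ['Title A', 'Title B']

print(parse_with_print(titles))        # returns None -> nothing to export
print(list(parse_with_yield(titles)))  # [{'hackertitle': 'Title A'}, {'hackertitle': 'Title B'}]
```

Iterating the generator recovers every item, which is exactly what the `-o hntitles.json` feed export does with the yielded items.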