Scrapy CrawlSpider doesn't follow links

Asked 2019-07-29 10:25

I based this on the CrawlSpider example in the Scrapy documentation: http://doc.scrapy.org/en/latest/topics/spiders.html

I want to extract links from a web page and follow them to parse a table of statistics, but as far as I can tell no links are being extracted or followed to the pages that hold the data. Here is my script:

from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request



class Basketspider(CrawlSpider):
    name = "basketsp"
    allowed_domains = ["euroleague.net"]
    start_urls = ["http://www.euroleague.net/main"]
    rules = (
        Rule(SgmlLinkExtractor(allow=("results/by-date?seasoncode=E2000")), follow=True),
        Rule(SgmlLinkExtractor(allow=("showgame?gamecode=165&seasoncode=E2000#!boxscore")), callback='parse_item'),
    )


    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        sel = HtmlXPathSelector(response)
        items=[]
        item = BasketbaseItem()
        item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
        item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
        item['stage'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Stage of tournament
        item['home'] = sel.select('//div[@class="gs-teams"]/a[@class="localClub"]/text()').extract() #Home team
        item['guest'] = sel.select('//div[@class="gs-teams"]/a[@class="roadClub"]/text()').extract() #Visitor team
        item['referees'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() #Referees
        item['attendance'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblAudience"]/text()').extract()
        item['fst'] = sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[2]/td[2][@class="AlternatingColumn"]/text()').extract()+sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[3]/td[2][@class="AlternatingColumn"]/text()').extract()
        item['snd'] = sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[2]/td[3][@class="NormalColumn"]/text()').extract()+sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[3]/td[3][@class="NormalColumn"]/text()').extract()
        item['trd'] = sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[2]/td[4][@class="AlternatingColumn"]/text()').extract()+sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[3]/td[4][@class="AlternatingColumn"]/text()').extract()
        item['tth'] = sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[2]/td[5][@class="NormalColumn"]/text()').extract()+sel.select('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]//tr[3]/td[5][@class="NormalColumn"]/text()').extract()
        item['xt1'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt2'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt3'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt4'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['game_id'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() # Game ID construct
        item['arena'] = sel.select('//div[@class="gs-dates"]/text()').extract() #Arena
        item['result'] = sel.select('//span[@class="score"]/text()').extract() #Result
        item['league'] = sel.select('//div[@class="gs-dates"]/text()').extract() #League
        print item['date'],item['time'], item['stage'], item['home'],item['guest'],item['referees'],item['attendance'],item['fst'],item['snd'],item['trd'],item['tth'],item['result']
        items.append(item)
        return items  # return the collected items so the engine can process them

And here is the output from the terminal:

scrapy crawl basketsp
2013-11-17 01:40:15+0200 [scrapy] INFO: Scrapy 0.16.2 started (bot: basketbase)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled item pipelines: 
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider opened
2013-11-17 01:40:15+0200 [basketsp] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-11-17 01:40:15+0200 [basketsp] DEBUG: Crawled (200) <GET http://www.euroleague.net/main> (referer: None)
2013-11-17 01:40:15+0200 [basketsp] INFO: Closing spider (finished)
2013-11-17 01:40:15+0200 [basketsp] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 228,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 9018,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 496752),
     'log_count/DEBUG': 7,
     'log_count/INFO': 4,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 229125)}
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider closed (finished)

What am I doing wrong here? Any ideas would be a great help. I tried leaving SgmlLinkExtractor() empty so that all links would be followed, but I get the same result. There's no indication that the crawl spider is following links at all.

I'm running Scrapy 0.16.2 on Python 2.7.2+.

2 Answers
Rolldiameter · 2019-07-29 11:01

prepend "www" to allowed domains.

等我变得足够好 · 2019-07-29 11:21

Scrapy is misinterpreting the content type of the start URL.

You can verify this by using scrapy shell:

$ scrapy shell 'http://www.euroleague.net/main' 
2013-11-18 16:39:26+0900 [scrapy] INFO: Scrapy 0.21.0 started (bot: scrapybot)
...

AttributeError: 'Response' object has no attribute 'body_as_unicode'

See my previous answer about the missing body_as_unicode attribute. I notice that the server does not set a Content-Type header.
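
You can check both points from the same shell session; the exact output below is illustrative, based on the behaviour described above:

>>> type(response)                        # a plain Response, not HtmlResponse
<class 'scrapy.http.response.Response'>
>>> response.headers.get('Content-Type')  # returns None: the server sent no Content-Type header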

CrawlSpider ignores non-HTML responses, so the response is never processed and no links are followed.
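
The relevant guard in CrawlSpider looks roughly like this (paraphrased from the Scrapy source of that era, so treat it as a sketch):

def _requests_to_follow(self, response):
    # Link extraction is skipped entirely unless the response
    # was recognised as HTML.
    if not isinstance(response, HtmlResponse):
        return
    ...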

I would suggest opening an issue on GitHub, as I think Scrapy should be able to handle this case transparently.

As a workaround, you could override the CrawlSpider parse method, wrap the incoming response in an HtmlResponse, and pass that to the superclass parse method, as sketched below.
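
A minimal sketch of that workaround, assuming the spider from the question (untested; HtmlResponse tells Scrapy to treat the body as HTML despite the missing header):

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import HtmlResponse

class Basketspider(CrawlSpider):
    # name, allowed_domains, start_urls and rules as in the question

    def parse(self, response):
        # Re-wrap the plain Response as an HtmlResponse so that
        # CrawlSpider's link extraction will process it.
        if not isinstance(response, HtmlResponse):
            response = HtmlResponse(url=response.url, status=response.status,
                                    headers=response.headers, body=response.body,
                                    request=response.request)
        return super(Basketspider, self).parse(response)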
