Scrapy - doesn't crawl


Question:

I'm trying to get a recursive crawl running, and since the one I wrote wasn't working, I pulled an example from the web and tried that instead. I really don't know where the problem is, but the crawl doesn't display any ERRORS. Can anyone help me with this?

Also, is there any step-by-step debugging tool that can help me understand the crawl flow of a spider?

Any help regarding this is greatly appreciated.
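
For step-by-step inspection, Scrapy ships an interactive shell (scrapy shell) that fetches a single page and lets you try selectors and link extractors against it by hand, and there is also a "scrapy parse <url>" command that runs one URL through a spider's callbacks. The following is only a rough sketch against the start URL used here, assuming a 0.16-era shell where response and hxs are pre-bound:

$ scrapy shell "http://sfbay.craigslist.org/npo/"
>>> # quick check of an XPath against the fetched page
>>> hxs.select("//p/a/text()").extract()[:5]
>>> # see which links a given extractor would actually pick up from this page
>>> from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
>>> SgmlLinkExtractor(allow=("d00\.html",)).extract_links(response)

The full console output of the crawl run is below.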

MacBook:spiders hadoop$ scrapy crawl craigs -o items.csv -t csv
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/zope/__init__.py:1: UserWarning: Module pkg_resources was already imported from /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.pyc, but /Library/Python/2.6/site-packages is being added to sys.path__import__('pkg_resources').declare_namespace(__name__)
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/zope/__init__.py:1: UserWarning: Module site was already imported from /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site.pyc, but /Library/Python/2.6/site-packages is being added to sys.path__import__('pkg_resources').declare_namespace(__name__)
2013-02-08 20:35:55+0530 [scrapy] INFO: Scrapy 0.16.4 started (bot: myspider)
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Enabled item pipelines: 
2013-02-08 20:35:55+0530 [craigs] INFO: Spider opened
2013-02-08 20:35:55+0530 [craigs] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-02-08 20:35:55+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-02-08 20:35:58+0530 [craigs] DEBUG: Crawled (200) <GET http://sfbay.craigslist.org/npo/> (referer: None)
2013-02-08 20:35:58+0530 [craigs] INFO: Closing spider (finished)
2013-02-08 20:35:58+0530 [craigs] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 230,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 7291,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 2, 8, 15, 5, 58, 415553),
     'log_count/DEBUG': 7,
     'log_count/INFO': 4,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 2, 8, 15, 5, 55, 343482)}
2013-02-08 20:35:58+0530 [craigs] INFO: Spider closed (finished)

The code I have used is as follows:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
#from craigslist_sample.items import CraigslistSampleItem

class MySpider(CrawlSpider):
    name = "craigs"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["sfbay.craigslist.org/npo/"]

    rules = (
        Rule(SgmlLinkExtractor(allow=("d00\.html", ), restrict_xpaths=('//p[@id="nextpage"]',)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//p")
        items = []
        for title in titles:
            item = CraigslistSampleItem()
            item["title"] = title.select("a/text()").extract()
            item["link"] = title.select("a/@href").extract()
            items.append(item)
        return items

Answer 1:

  1. Modify your SgmlLinkExtractor as payala suggested
  2. Remove the restrict_xpaths argument from the link extractor

These two changes will fix the issue you are experiencing. I'd also suggest the following change to the XPath used to select titles, since the current //p selector also picks up the next-page links and therefore produces empty items:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    titles = hxs.select("//p[@class='row']")


Answer 2:

Try substituting "d00\.html" in your SgmlLinkExtractor with ".*00\.html" or "index\d+00\.html".
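
For example, keeping the rest of the rule unchanged, that would look roughly like this (a sketch only):

rules = (
    Rule(SgmlLinkExtractor(allow=("index\d+00\.html", ), restrict_xpaths=('//p[@id="nextpage"]',)),
         callback="parse_items", follow=True),
)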