Why Am I Getting a KeyError in Scrapy?

Posted 2019-09-15 03:08

I am running Scrapy spiders inside Celery, and I randomly get errors like the following:

Unhandled Error
Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 428, in fireEvent
      DeferredList(beforeResults).addCallback(self._continueFiring)
    File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 321, in addCallback
      callbackKeywords=kw)
    File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 310, in addCallbacks
      self._runCallbacks()
    File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
      current.result = callback(current.result, *args, **kw)
  --- <exception caught here> ---
    File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 441, in _continueFiring
      callable(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 667, in disconnectAll
      selectables = self.removeAll()
    File "/usr/lib/python2.7/site-packages/twisted/internet/epollreactor.py", line 191, in removeAll
      [self._selectables[fd] for fd in self._reads],
  exceptions.KeyError: 94

The number changes from case to case (94 in this run might be 97 in another, and so on).

I am using:

celery==3.1.19
Django==1.9.4
Scrapy==1.3.0

This is how I run Scrapy inside Celery:

from billiard import Process  # billiard, not multiprocessing, so the process can be forked from a Celery worker
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

class MyCrawlerScript(Process):
    def __init__(self, **kwargs):
        Process.__init__(self)
        # get_project_settings() takes no arguments; the project is resolved
        # from the SCRAPY_SETTINGS_MODULE environment variable
        settings = get_project_settings()
        self.crawler = CrawlerProcess(settings)
        self.spider_name = kwargs.get('spider_name')
        self.kwargs = kwargs

    def run(self):
        # forward the keyword arguments to the spider's __init__
        self.crawler.crawl(self.spider_name, **self.kwargs)
        self.crawler.start()

def my_crawl_manager(**kwargs):
    # run each crawl in its own process: a Twisted reactor cannot be
    # restarted once stopped, so each crawl needs a fresh process
    crawler = MyCrawlerScript(**kwargs)
    crawler.start()
    crawler.join()

Inside a Celery task, I am calling:

my_crawl_manager(spider_name='my_spider', url='www.google.com/any-url-here')
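For reference, a minimal sketch of the enclosing task, assuming Celery's shared_task decorator (the task function name and URL here are illustrative, not my actual code):

from celery import shared_task

@shared_task
def run_spider(url):
    # each task invocation spawns its own crawl process
    my_crawl_manager(spider_name='my_spider', url=url)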

Does anyone have any idea why this is happening?

1 Answer
可以哭但决不认输i
Answered 2019-09-15 03:39

I had this issue once.

Check whether you have an empty __init__.py file in your spiders folder. It should be there; without it, Python cannot import the spiders package, so Scrapy's spider loader may fail in confusing ways.
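A quick way to check (and create) the file, sketched in Python; the my_scraper package name is an assumption, so substitute your own project's:

import os

# hypothetical project layout: <project>/my_scraper/spiders/__init__.py
spiders_init = os.path.join('my_scraper', 'spiders', '__init__.py')

if not os.path.exists(spiders_init):
    # create the empty marker file so spiders/ is importable as a package
    open(spiders_init, 'a').close()
    print('created %s' % spiders_init)
else:
    print('%s already exists' % spiders_init)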
