Terminate Scrapy crawl manually

Posted 2019-09-10 00:20

When trying to run a spider in Scrapy, after having run it before with other parameters, I get this error message:

crawl: error: running 'scrapy crawl' with more than one spider is no longer supported

I interpret this as the first crawl still running in some sense. I am looking for some way to terminate all running Scrapy processes, in order to start clean with a new crawl.

Tags: python scrapy
2 Answers
手持菜刀,她持情操
#2 · 2019-09-10 00:47

You are probably passing the command-line parameters in the wrong way. Simply running scrapy crawl <spidername> works fine on its own; Scrapy raises this error when it sees extra positional arguments and interprets them as additional spider names. If you are passing arguments to the spider, you may have missed the -a specifier.
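For illustration, with placeholder names (myspider, category=electronics), the difference looks like this:

    # WRONG: the extra positional token is parsed as a second spider name,
    # which triggers the "more than one spider" error
    scrapy crawl myspider category=electronics

    # RIGHT: spider arguments go through the -a option
    scrapy crawl myspider -a category=electronics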

To terminate all running Scrapy processes on Linux, find and kill them with the following command in a terminal:

pkill scrapy 

On Windows, use Sysinternals PsKill instead.
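A rough sketch for Windows (note that a crawl usually runs under python.exe rather than a process named scrapy, so identify the PID first; the PID below is a placeholder):

    :: List running Python processes to find the crawl's PID
    tasklist /fi "imagename eq python.exe"

    :: Then kill that process with Sysinternals PsKill
    pskill 1234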

ゆ 、 Hurt°
#3 · 2019-09-10 00:58

I use an incremented counter to break the loop when I'm testing:

    def parse(self, response):
        # enumerate() tracks the index, so no manual counter is needed;
        # this stops after processing the first three matches
        for i, sel in enumerate(response.xpath('something')):
            if i > 2:
                break
            #something
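A cleaner way to stop a crawl from inside the spider itself is Scrapy's CloseSpider exception. A minimal sketch along the same lines (the XPath and processing are placeholders, as above):

    from scrapy.exceptions import CloseSpider

    def parse(self, response):
        for i, sel in enumerate(response.xpath('something')):
            if i > 2:
                # Ask the engine to shut the spider down gracefully:
                # in-flight requests finish and close signals still fire
                raise CloseSpider('reached test limit')
            #something

The built-in CloseSpider extension can do the same declaratively via settings such as CLOSESPIDER_PAGECOUNT or CLOSESPIDER_ITEMCOUNT, which keeps the test limit out of your callback code.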