Scrapy cmdline.execute stops script

Posted 2019-07-07 05:37

When I call

from scrapy import cmdline

cmdline.execute("scrapy crawl website".split())
print("Hello World")

the script stops right after cmdline.execute and never reaches the rest of the code, so "Hello World" is never printed. How do I fix this?

3 Answers
ゆ 、 Hurt°
#2 · 2019-07-07 06:08

You can run the crawl with subprocess.call instead. For example, on Windows via PowerShell:

import subprocess

subprocess.call([
    r'C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe',
    '-ExecutionPolicy', 'Unrestricted',
    'scrapy crawl website -o items.json -t json',
])
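
The same idea works on any platform, because the crawl runs in a separate process and Scrapy's internal sys.exit only terminates that child process. A minimal sketch (assuming the scrapy executable is on your PATH and the script is run from the project directory):

import subprocess

# Run the crawl as a child process; this call blocks until scrapy exits.
subprocess.call(["scrapy", "crawl", "website"])

print("Hello World")  # still reached, our own process never exited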

女痞
#3 · 2019-07-07 06:11

I have just tried the following code, and it works for me:

import os
os.system("scrapy crawl website")
print("Hello World")
ゆ 、 Hurt°
#4 · 2019-07-07 06:15

By taking a look at the execute function in Scrapy's cmdline.py, you'll see the final line is:

sys.exit(cmd.exitcode)
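
Because sys.exit simply raises SystemExit, one possible workaround (not part of this answer's approach, just a minimal sketch) is to catch that exception around the call:

from scrapy import cmdline

try:
    cmdline.execute("scrapy crawl website".split())
except SystemExit:
    # execute() ends with sys.exit(cmd.exitcode); swallowing it lets the script continue
    pass

print("Hello World")

Keep in mind that the Twisted reactor cannot be restarted, so this only lets code after a single crawl run.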

So if you call the execute function directly, you always have to work around that exit somehow, either by catching it as above or by monkey-patching, and neither is pretty. A better option is to avoid calling the execute function entirely and instead use the custom function below:

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler as ScrapyCrawler
from scrapy.xlib.pydispatch import dispatcher
from scrapy.utils.project import get_project_settings

def scrapy_crawl(name):

    # Stop the Twisted reactor once the spider has finished, so that
    # reactor.run() below returns instead of blocking forever.
    def stop_reactor():
        reactor.stop()

    dispatcher.connect(stop_reactor, signal=signals.spider_closed)

    # Build a crawler from the project settings and schedule the named spider.
    scrapy_settings = get_project_settings()
    crawler = ScrapyCrawler(scrapy_settings)
    crawler.configure()
    spider = crawler.spiders.create(name)
    crawler.crawl(spider)
    crawler.start()

    # Start logging and run the reactor; this blocks until stop_reactor fires,
    # then control returns to the caller instead of the process exiting.
    log.start()
    reactor.run()

And you can call it like this:

scrapy_crawl("your_crawler_name")
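
Note that the code above targets older Scrapy releases; scrapy.log and scrapy.xlib.pydispatch were removed in later versions. On a recent Scrapy, a rough equivalent is a sketch built on CrawlerProcess (assuming it is run from inside your Scrapy project so the settings can be found):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def scrapy_crawl(name):
    # CrawlerProcess manages the Twisted reactor itself and never calls sys.exit
    process = CrawlerProcess(get_project_settings())
    process.crawl(name)  # spider name as registered in the project
    process.start()      # blocks until the crawl finishes, then returns

scrapy_crawl("your_crawler_name")
print("Hello World")  # reached once the crawl is done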