The docs say I can only execute the crawl command inside the project dir:
scrapy crawl tutor -o items.json -t json
but I really need to execute it from my own Python code (the Python file is not inside the current project dir).
Is there any approach that fits my requirement?
My project tree:
.
├── etao
│   ├── etao
│   │   ├── __init__.py
│   │   ├── items.py
│   │   ├── pipelines.py
│   │   ├── settings.py
│   │   └── spiders
│   │       ├── __init__.py
│   │       └── etao_spider.py
│   ├── items.json
│   ├── scrapy.cfg
│   └── start.py
└── start.py  <-------------- I want to execute the script here.
And here's my code, which follows this link, but it doesn't work:
#!/usr/bin/env python
import os

# Must be set at the top, before the other Scrapy imports
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'project.settings')

from scrapy import project
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess

class CrawlerScript():
    def __init__(self):
        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()

    def crawl(self, spider_name):
        spider = self.crawler.spiders.create(spider_name)  # <--- line 19
        if spider:
            self.crawler.queue.append_spider(spider)
        self.crawler.start()
        self.crawler.stop()

# main
if __name__ == '__main__':
    crawler = CrawlerScript()
    crawler.crawl('etao')
The error is:
line 19: KeyError: 'Spider not found: etao'
You can actually instantiate and run the CrawlerProcess yourself from your own script. Note that your snippet points SCRAPY_SETTINGS_MODULE at 'project.settings' (a placeholder) instead of 'etao.settings', so Scrapy never loads your project's spider registry — hence the KeyError: 'Spider not found: etao'.
Credits to @warwaruk.