How do I use the Python Scrapy module to list all the URLs from my website?

Posted 2019-01-21 14:45

Question:

I want to use the Python Scrapy module to scrape all the URLs from my website and write the list to a file. I looked in the examples but didn't see any simple example to do this.

Answer 1:

Here's the Python spider that worked for me:

import scrapy
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    def parse(self, response):
        # Grab every href on the page, resolve relative links against
        # the current page, print the absolute URL, and follow it.
        for href in response.xpath('//a/@href').extract():
            url = response.urljoin(href)
            print(url)
            yield Request(url, callback=self.parse)

Save this in a file called spider.py.

You can then use a shell pipeline to post-process this text:

bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls

This gives me a list of all the unique URLs on my site.
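Since the question also asks to write the list to a file, a small variation (a sketch, reusing the MySpider class above) is to yield each URL as an item and let Scrapy's feed exports write the output, instead of printing and redirecting:

    def parse(self, response):
        for href in response.xpath('//a/@href').extract():
            url = response.urljoin(href)
            yield {'url': url}                       # collected by the feed exporter
            yield Request(url, callback=self.parse)  # keep following links

Run it with the -o option and Scrapy writes the file itself:

bash$ scrapy runspider spider.py -o urls.csv

Note that the same URL can still appear once per page it is linked from, so you may still want to de-duplicate afterwards (e.g. with sort | uniq).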



Answer 2:

Something cleaner (and maybe more useful) is to use LinkExtractor. Replace the parse method of the spider above with:

from scrapy.linkextractors import LinkExtractor

    def parse(self, response):
        # An empty LinkExtractor() returns every link on the page;
        # see the documentation for the filtering options it accepts.
        le = LinkExtractor()
        for link in le.extract_links(response):
            yield Request(link.url, callback=self.parse)
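
For completeness, the same idea is also a natural fit for Scrapy's built-in CrawlSpider, which wires a LinkExtractor up through a Rule. A minimal sketch (the spider name, domain, and parse_item callback are placeholders):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class SiteLinkSpider(CrawlSpider):
    name = 'sitelinks'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    # follow=True keeps crawling the extracted links;
    # parse_item records each visited URL as an item.
    rules = (Rule(LinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        yield {'url': response.url}

Running it with scrapy runspider and -o, as above, gives one row per crawled URL.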