Scrapy - Recursively Scrape Webpages and Save Content

Posted 2019-05-11 23:41

I am using Scrapy to extract information from tags of web pages and then save those web pages as HTML files. For example, http://www.austlii.edu.au/au/cases/cth/HCA/1945/ lists web pages related to judicial cases. I want to follow each link and save only the content related to that particular judicial case as an HTML page, e.g. go to http://www.austlii.edu.au/au/cases/cth/HCA/1945/1.html and save the information related to that case.

Is there a way to do this recursively in Scrapy and save the content as HTML pages?

Tags: scrapy
1 Answer
孤傲高冷的网名
#2 · 2019-05-12 00:22

Yes, you can do this with Scrapy; link extractors will help:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class AustliiSpider(CrawlSpider):
    name = "austlii"
    allowed_domains = ["austlii.edu.au"]
    start_urls = ["http://www.austlii.edu.au/au/cases/cth/HCA/1945/"]
    rules = (
        # Follow every case link like .../1945/<number>.html and pass the
        # response to parse_item (note the escaped dot in the pattern).
        Rule(LinkExtractor(allow=r"au/cases/cth/HCA/1945/\d+\.html"),
             follow=True, callback="parse_item"),
    )

    def parse_item(self, response):
        # do whatever with the HTML content (response.body holds the raw page bytes)
        pass
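
For the saving step the question asks about, here is a minimal sketch of a parse_item that writes each fetched page to disk. The spider name austlii_save, the cases/ output directory, and deriving the file name from the last segment of the URL are my choices for illustration, not part of the original answer:

import os

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class AustliiSaveSpider(CrawlSpider):
    name = "austlii_save"
    allowed_domains = ["austlii.edu.au"]
    start_urls = ["http://www.austlii.edu.au/au/cases/cth/HCA/1945/"]
    rules = (
        Rule(LinkExtractor(allow=r"au/cases/cth/HCA/1945/\d+\.html"),
             follow=True, callback="parse_item"),
    )

    def parse_item(self, response):
        # Derive a file name from the last path segment of the URL,
        # e.g. ".../1945/1.html" -> "1.html".
        filename = response.url.rstrip("/").rsplit("/", 1)[-1]
        os.makedirs("cases", exist_ok=True)  # output directory is an assumption
        path = os.path.join("cases", filename)
        with open(path, "wb") as f:
            f.write(response.body)  # raw bytes of the fetched page
        self.logger.info("Saved %s", path)

You can run this without a full project via scrapy runspider. Note it saves the whole page; trimming it down to just the judgment text would additionally need an XPath or CSS selector for the relevant part of the page.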

Hope that helps.
