How to bypass robots.txt with Apache Nutch 2.2.1

Posted 2019-09-15 07:38

Can anyone please tell me if there is any way for Apache Nutch to ignore or bypass robots.txt while crawling? I am using Nutch 2.2.1. I found that "RobotRulesParser.java" (full path: src/plugin/lib-http/src/java/org/apache/nutch/protocol/http/api/RobotRulesParser.java) is responsible for reading and parsing robots.txt. Is there any way to modify this file so that it ignores robots.txt and carries on with the crawl?

Or is there any other way to achieve the same?

1 Answer

干净又极端
2019-09-15 08:09
  1. First of all, you should respect the robots.txt file if you are crawling any external sites. Otherwise you put yourself at risk: your IP may get banned or, worse, you could face legal action.

  2. If your site is internal and not exposed to the external world, you should change the robots.txt file to allow your crawler (a sample robots.txt sketch follows after this list).

  3. If your site is exposed to the Internet and the data is confidential, you can try the following option, because in this case you cannot risk loosening the robots.txt file: an external crawler could use your crawler's agent name and crawl the site.

    In the Fetcher.java file:

    if (!rules.isAllowed(fit.u.toString())) { }
    

    This is the block responsible for skipping URLs that robots.txt disallows. You can work around your issue by modifying this code block (a hedged sketch follows after this list).
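Here is a minimal sketch of what the loop around that check might look like once the robots.txt test is disabled. Only rules.isAllowed(fit.u.toString()) comes from the snippet above; the class name, the FetchItem stand-in, and the loop structure are assumptions made so the example compiles on its own, and they will differ from the actual Nutch 2.2.1 Fetcher code.

    // Hedged sketch, not the real Fetcher.java: shows how guarding the
    // robots.txt check with a flag effectively bypasses it.
    import crawlercommons.robots.BaseRobotRules;

    public class FetcherRobotsBypassSketch {

      // stand-in for Nutch's internal fetch item type (hypothetical)
      static class FetchItem {
        java.net.URL u;
        FetchItem(java.net.URL u) { this.u = u; }
      }

      // set to true to restore the original robots.txt behaviour
      static final boolean HONOR_ROBOTS = false;

      static void fetchLoop(java.util.List<FetchItem> queue, BaseRobotRules rules) {
        for (FetchItem fit : queue) {
          // Original Fetcher.java behaviour: drop URLs disallowed by robots.txt.
          if (HONOR_ROBOTS && !rules.isAllowed(fit.u.toString())) {
            // in Nutch this is where the item would be reported as robots-denied
            continue;
          }
          // ... fetch the URL here ...
          System.out.println("would fetch: " + fit.u);
        }
      }
    }

Only crawl this way against hosts you own or have explicit permission to crawl; the robots.txt check exists for a reason.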
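For option 2 above, here is a minimal robots.txt sketch that lets only your own crawler in and keeps everyone else out. The agent name "mynutchcrawler" is a placeholder; it has to match the agent name your Nutch instance advertises (the http.agent.name / http.robots.agents properties in nutch-site.xml).

    # Allow only our internal Nutch crawler; block all other robots.
    User-agent: mynutchcrawler
    Disallow:

    User-agent: *
    Disallow: /

An empty Disallow: line means nothing is disallowed for that agent, so the named crawler can fetch everything while other well-behaved crawlers see the site as fully blocked.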
