Use crawler4j to download .js files

Posted 2019-08-01 09:53

I'm trying to use crawler4j to download some websites. The only problem is that even though I return true for all .js files in the shouldVisit method, they never get downloaded.

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    @Override
    public boolean shouldVisit(WebURL url) {
        return true; // accept every URL, including .js files
    }

    @Override
    public void visit(Page page) {
        String url = page.getWebURL().getURL();
        System.out.println("URL: " + url);
    }
}

The URLs of .js files never get printed.

2 Answers
Ridiculous、
Answer #2 · 2019-08-01 10:09

I noticed that <script> tags do not get processed by crawler4j, and that is where all of the .js references occurred. So I don't think the problem is limited to .js files: it affects anything inside <script> tags (which usually happens to be .js files).

Initially it looks like modifying HtmlContentHandler's Enumeration and startElement() method would solve the problem. I tried that, and it did not work. While debugging, I observed that either the Tika parser or TagSoup (which Tika uses) is not picking up the script tags, so they never even reach crawler4j to be processed.

As a workaround, I used Jsoup in my visit() method to parse the HTML for all <script> tags, and then scheduled a crawl of those files (sketched below).
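Roughly, it looks like the following inside the same WebCrawler subclass. This is a minimal sketch, assuming crawler4j 4.x and Jsoup on the classpath; feeding the URLs back via getMyController().addSeed() is my interpretation of "schedule a crawl":

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();
    System.out.println("URL: " + url);

    if (page.getParseData() instanceof HtmlParseData) {
        HtmlParseData htmlData = (HtmlParseData) page.getParseData();
        // Parse the raw HTML ourselves, because crawler4j's own
        // outgoing-link extraction skips <script> tags.
        Document doc = Jsoup.parse(htmlData.getHtml(), url);
        for (Element script : doc.select("script[src]")) {
            String scriptUrl = script.absUrl("src"); // resolve relative URLs
            if (!scriptUrl.isEmpty()) {
                // Feed the .js URL back into the crawl frontier.
                getMyController().addSeed(scriptUrl);
            }
        }
    }
}

Depending on the Content-Type the server sends for the .js responses, you may also need crawlConfig.setIncludeBinaryContentInCrawling(true) so crawler4j does not discard them before visit() is called.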

I think the real solution is identifying why Tika (or TagSoup) is not picking up the script tags; it could be the way crawler4j invokes it. Once that is resolved, modifying HtmlContentHandler will work.
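If it helps, my guess (an assumption, not verified against crawler4j's actual call site) is that Tika's HtmlParser applies its DefaultHtmlMapper, which drops <script> elements before the SAX events reach any downstream handler. A standalone Tika sketch showing the IdentityHtmlMapper override that lets <script> through:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.html.HtmlMapper;
import org.apache.tika.parser.html.HtmlParser;
import org.apache.tika.parser.html.IdentityHtmlMapper;
import org.apache.tika.sax.ToHTMLContentHandler;

public class ScriptTagDemo {
    public static void main(String[] args) throws Exception {
        String html = "<html><body><script src=\"app.js\"></script></body></html>";

        ParseContext context = new ParseContext();
        // Without this line Tika applies DefaultHtmlMapper, which silently
        // discards <script> elements, so no handler ever sees them.
        context.set(HtmlMapper.class, new IdentityHtmlMapper());

        ToHTMLContentHandler handler = new ToHTMLContentHandler();
        new HtmlParser().parse(
                new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)),
                handler, new Metadata(), context);

        // With the identity mapper, the <script> tag survives the parse.
        System.out.println(handler.toString());
    }
}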

三岁会撩人
Answer #3 · 2019-08-01 10:10

Looking at the source, the reason can be found in the HtmlContentHandler class.

This class is responsible for extracting links from downloaded web pages, and the script tag is never processed.

If you want to download .js files, I suggest you clone the project and extend this class, which is quite simple. You also need to modify WebCrawler, which calls the HtmlContentHandler. A rough sketch of the idea follows.
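Written as a standalone SAX handler rather than a patch to the real class (the names here are illustrative, not crawler4j's actual private fields, and note the caveat from the previous answer: Tika must actually deliver <script> events for this branch to fire):

import java.util.ArrayList;
import java.util.List;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ScriptAwareContentHandler extends DefaultHandler {
    // In the real HtmlContentHandler, these URLs would be appended to the
    // outgoing-URLs list that crawler4j already maintains.
    private final List<String> scriptUrls = new ArrayList<>();

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes) {
        // New branch: treat <script src="..."> like <img src="...">.
        if ("script".equalsIgnoreCase(localName)) {
            String src = attributes.getValue("src");
            if (src != null) {
                scriptUrls.add(src);
            }
        }
        // ... the original handling of a/area/link/img/etc. stays as-is ...
    }

    public List<String> getScriptUrls() {
        return scriptUrls;
    }
}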
