I am new to Apache Nutch and I would like to know whether it is possible to crawl only a selected area of a web page, for instance selecting a particular div and crawling only the contents of that div. Any help would be appreciated. Thanks!
Answer 1:
You will have to write a plugin that extends HtmlParseFilter to achieve your goal.
You will have to do some of the work yourself: parsing the specific section of the HTML, extracting the URLs you want, and adding them as outlinks.
HtmlParseFilter implementation: (Code below gives the general idea)
public ParseResult filter(Content content, ParseResult parseResult,
                          HTMLMetaTags metaTags, DocumentFragment doc) {
    // Get the raw HTML content of the fetched page.
    String htmlContent = new String(content.getContent(), StandardCharsets.UTF_8);

    // Parse htmlContent using jsoup or any other HTML library and
    // select the section of the page you are interested in.

    String url = content.getUrl();
    Parse parse = parseResult.get(url);
    ParseData parseData = parse.getData();
    Outlink[] links = parseData.getOutlinks();

    // Keep/modify only the required outlinks, then return the
    // ParseResult with the modified outlinks.
    return parseResult;
}
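To make the "parse the specific section, extract the URLs" step concrete, here is a minimal, self-contained sketch using only the JDK (in a real plugin you would use jsoup instead of string matching; the class name, div id, and sample HTML below are made-up illustrations):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DivLinkExtractor {
    // Naive href pattern; fine for a sketch, but use jsoup in real code.
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    /** Returns the raw markup of the first div with the given id, or "" if absent. */
    static String divSection(String html, String divId) {
        String marker = "<div id=\"" + divId + "\"";
        int start = html.indexOf(marker);
        if (start < 0) return "";
        int end = html.indexOf("</div>", start); // assumes no nested divs inside
        return end < 0 ? html.substring(start) : html.substring(start, end + "</div>".length());
    }

    /** Collects the href values found inside the selected section. */
    static List<String> linksIn(String section) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(section);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    public static void main(String[] args) {
        String html = "<body><div id=\"nav\"><a href=\"http://a.example/\">a</a></div>"
                    + "<div id=\"content\"><a href=\"http://b.example/\">b</a></div></body>";
        // Only the link inside the "content" div survives.
        System.out.println(linksIn(divSection(html, "content")));
    }
}

Inside the filter above, you would run this kind of selection over htmlContent and then replace parseData's outlinks with only the links found in the chosen div.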
Hope this will be helpful.
If you are new to plugins, I have written a simple plugin, "nutch-fetch-page", which saves HTML pages and text content to a local drive using the HtmlParseFilter interface. You can fork/download and modify the code.
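One more note on wiring the plugin in: Nutch discovers extensions through a plugin.xml descriptor, so a parse filter like the one sketched above also needs one. A minimal sketch (the plugin id, class, and jar names here are assumptions, not from the original answer):

```xml
<plugin id="div-parse-filter" name="Div Parse Filter" version="1.0.0">
  <runtime>
    <!-- Jar built from your plugin source. -->
    <library name="div-parse-filter.jar">
      <export name="*"/>
    </library>
  </runtime>
  <!-- Register the class against the HtmlParseFilter extension point. -->
  <extension id="org.example.nutch.divfilter" name="Div Parse Filter"
             point="org.apache.nutch.parse.HtmlParseFilter">
    <implementation id="DivParseFilter" class="org.example.nutch.DivParseFilter"/>
  </extension>
</plugin>
```

You also need to add the plugin id to the plugin.includes property in nutch-site.xml so Nutch loads it at parse time.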