Wikipedia: Java library to remove Wikipedia markup

Posted 2019-01-26 09:46

Question:

I downloaded a Wikipedia dump and now want to remove the wiki markup from the contents of each page. I tried writing regular expressions, but there are too many cases to handle. I found a Python library, but I need a Java library because I want to integrate it into my code.

Thank you.

Answer 1:

Do it in two steps:

  1. let some existing tool convert the MediaWiki markup into plain HTML;
  2. convert the plain HTML into text.

The following demo:

import net.java.textilej.parser.MarkupParser;
import net.java.textilej.parser.builder.HtmlDocumentBuilder;
import net.java.textilej.parser.markup.mediawiki.MediaWikiDialect;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;
import java.io.StringReader;
import java.io.StringWriter;

public class Test {

    public static void main(String[] args) throws Exception {

        String markup = "This is ''italic'' and '''that''' is bold. \n"+
                "=Header 1=\n"+
                "a list: \n* item A \n* item B \n* item C";

        StringWriter writer = new StringWriter();

        // Step 1: MediaWiki markup -> HTML (emit a fragment, not a full document).
        HtmlDocumentBuilder builder = new HtmlDocumentBuilder(writer);
        builder.setEmitAsDocument(false);

        MarkupParser parser = new MarkupParser(new MediaWikiDialect());
        parser.setBuilder(builder);
        parser.parse(markup);

        final String html = writer.toString();
        final StringBuilder cleaned = new StringBuilder();

        // Step 2: HTML -> plain text; the callback collects only the text nodes.
        HTMLEditorKit.ParserCallback callback = new HTMLEditorKit.ParserCallback() {
                @Override
                public void handleText(char[] data, int pos) {
                    cleaned.append(new String(data)).append(' ');
                }
        };
        new ParserDelegator().parse(new StringReader(html), callback, false);

        System.out.println(markup);
        System.out.println("---------------------------");
        System.out.println(html);
        System.out.println("---------------------------");
        System.out.println(cleaned);
    }
}

produces:

This is ''italic'' and '''that''' is bold. 
=Header 1=
a list: 
* item A 
* item B 
* item C
---------------------------
<p>This is <i>italic</i> and <b>that</b> is bold. </p><h1 id="Header1">Header 1</h1><p>a list: </p><ul><li>item A </li><li>item B </li><li>item C</li></ul>
---------------------------
This is  italic  and  that  is bold. Header 1 a list: item A item B item C 

Where do you download the Java packages you are importing?

Here: Web Archive link of download.java.net/maven/2/net/java/textile-j/2.2



Answer 2:

If you need plain text, you should use the WikiClean library: https://github.com/lintool/wikiclean.

I had the same problem, and this looked like the only efficient solution that worked for me in Java.

There are two use cases:

1) When the text is not in XML format, you need to add the XML tags required for processing. Suppose you parsed the XML file earlier and now have the content without the XML structure; then you just wrap it in xmlStartTag and xmlEndTag as in the code below, and WikiClean processes it:

// WikiClean (org.wikiclean.WikiClean) expects a page wrapped in a <text> element.
String xmlStartTag = "<text xml:space=\"preserve\">";
String xmlEndTag = "</text>";
String articleWithXml = xmlStartTag + article.getText() + xmlEndTag;
WikiClean cleaner = new WikiClean.Builder().build();
String plainWikiText = cleaner.clean(articleWithXml);

2) When you are reading the Wikipedia dump file directly (the XML file), you can just pass the file contents through:

WikiClean cleaner = new WikiClean.Builder().build();
String plainWikiText = cleaner.clean(XMLFileContents);
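
For completeness, here is a minimal sketch of use case 2, assuming the dump has been unpacked to a plain XML file small enough to read into memory (real dumps are very large, so in practice you would stream them page by page); the file name is hypothetical:

import org.wikiclean.WikiClean;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WikiCleanDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical file name; point this at your unpacked dump or one extracted page.
        byte[] bytes = Files.readAllBytes(Paths.get("enwiki-pages-articles.xml"));
        String xmlFileContents = new String(bytes, "UTF-8");

        WikiClean cleaner = new WikiClean.Builder().build();
        String plainWikiText = cleaner.clean(xmlFileContents);
        System.out.println(plainWikiText);
    }
}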


Answer 3:

Mylyn WikiText can convert various wiki syntaxes into HTML and other formats. It also supports MediaWiki syntax, which is what Wikipedia uses. Although Mylyn WikiText is primarily an Eclipse plugin, it is also available as a standalone library.
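
A minimal sketch of the standalone usage, assuming Mylyn WikiText 3.x (package names differ in older releases):

import org.eclipse.mylyn.wikitext.mediawiki.MediaWikiLanguage;
import org.eclipse.mylyn.wikitext.parser.MarkupParser;

public class MylynDemo {
    public static void main(String[] args) {
        MarkupParser parser = new MarkupParser(new MediaWikiLanguage());
        // parseToHtml returns a complete HTML document as a string.
        String html = parser.parseToHtml("This is ''italic'' and '''that''' is bold.");
        System.out.println(html);
    }
}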



Answer 4:

Try the MediaWiki text-to-plain-text approach. You will probably have to adapt the PlainTextConverter class to your needs. Combined with the example for converting Wikipedia text to HTML, you can also transclude template contents.
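
PlainTextConverter comes from the Bliki engine (info.bliki.wiki); here is a minimal sketch, assuming a Bliki release where render takes a converter and the raw wiki text (newer versions may also declare IOException):

import info.bliki.wiki.filter.PlainTextConverter;
import info.bliki.wiki.model.WikiModel;

public class BlikiDemo {
    public static void main(String[] args) throws Exception {
        // The constructor arguments are link templates for images and internal pages.
        WikiModel model = new WikiModel("${image}", "${title}");
        String plain = model.render(new PlainTextConverter(),
                "This is ''italic'' and '''that''' is bold, with a [[Target|link]].");
        System.out.println(plain);
    }
}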