Get first lines of Wikipedia Article

Posted 2019-02-03 16:30

I have a Wikipedia article and I want to fetch its first z lines (or the first x characters, or the first y words; it doesn't matter).

The problem: I can get either the raw wikitext (via the API) or the parsed HTML (via a direct HTTP request, possibly for the print version), but how can I find the first lines that are actually displayed? Normally the source (both the HTML and the wikitext) starts with infoboxes and images, and the first real text to display is somewhere further down in the code.

For example: Albert Einstein on Wikipedia (print version). Look at the code: the first real text line, "Albert Einstein (pronounced /ˈælbərt ˈaɪnstaɪn/; German: [ˈalbɐt ˈaɪ̯nʃtaɪ̯n]; 14 March 1879 – 18 April 1955) was a theoretical physicist.", is not at the start. The same applies to the wiki source: it starts with the same infobox and so on.

So how would you accomplish this task? The programming language is Java, but that shouldn't matter.

One solution that came to my mind was to use an XPath query, but such a query would be rather complicated if it had to handle all the edge cases. [update]It wasn't that complicated; see my solution below![/update]

Thanks!

9 Answers
Root(大扎)
#2 · 2019-02-03 16:35

Well, when using the wiki source itself, you could just strip out all templates at the start. This might work well enough for most articles that have infoboxes or some messages at the top.

However, some articles might put the starting blurb into a template itself so that would be a little difficult there.
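A rough sketch of that stripping step in Java (my own illustration, not from the answer): skip leading {{…}} blocks by counting brace depth, since templates can nest.

// Illustrative helper (not from the original answer): skips templates
// ({{...}} blocks, which may nest) at the start of the wikitext.
static String stripLeadingTemplates(String wikitext) {
    String s = wikitext.trim();
    while (s.startsWith("{{")) {
        int depth = 0, i = 0;
        while (i < s.length()) {
            if (s.startsWith("{{", i)) { depth++; i += 2; }
            else if (s.startsWith("}}", i)) { depth--; i += 2; if (depth == 0) break; }
            else { i++; }
        }
        s = s.substring(i).trim();
    }
    return s;
}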

Another way, perhaps more reliable, would be to take the contents of the first <p> tag that appears directly in the article text (i.e. not nested in a table or similar). That should strip out infoboxes and other leading material, as those are probably (I'm not exactly sure) <table>s or <div>s.
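As an illustration of that idea (using jsoup, which is my assumption and not part of the original answer), one could take the first <p> under the content div that has no <table> ancestor:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class FirstParagraph {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://en.wikipedia.org/wiki/Albert_Einstein").get();
        for (Element p : doc.getElementById("bodyContent").select("p")) {
            // Skip paragraphs nested inside tables (infoboxes are usually <table>s)
            boolean inTable = false;
            for (Element ancestor : p.parents()) {
                if (ancestor.tagName().equals("table")) { inTable = true; break; }
            }
            if (!inTable && !p.text().isEmpty()) {
                System.out.println(p.text());
                break;
            }
        }
    }
}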

Generally, Wikipedia is written for human consumption with only very minimal support for anything semantic. That makes automatic extraction of specific information from the articles pretty painful.

别忘想泡老子
#3 · 2019-02-03 16:37

For example, if you have the result in a string, you would search for the text:

<div id="bodyContent">

and after that index you would find the first

<p>

That occurrence marks the start of the first paragraph you mentioned.
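In Java, that string search might look like this (a sketch; assume html holds the fetched page source):

// Sketch (my illustration): 'html' holds the fetched page source as one string
int div = html.indexOf("<div id=\"bodyContent\">");
int start = html.indexOf("<p>", div);     // first <p> after the content div
int end = html.indexOf("</p>", start);    // its closing tag
// Note: the substring may still contain inline HTML tags
String firstParagraph = html.substring(start + "<p>".length(), end);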

Try this URL: Link to the content (just works in the browser).

冷血范
#4 · 2019-02-03 16:40

You don't need to.

The API's exintro parameter returns only the first (zeroth) section of the article.

Example: api.php?action=query&prop=extracts&exintro&explaintext&titles=Albert%20Einstein

There are other parameters, too:

  • exchars Length of extracts in characters.
  • exsentences Number of sentences to return.
  • exintro Return only zeroth section.
  • exsectionformat What section heading format to use for plaintext extracts:

    wiki — e.g., == Wikitext ==
    plain — no special decoration
    raw — this extension's internal representation
    
  • exlimit Maximum number of extracts to return. Because excerpt generation can be slow, the limit is capped at 20 for intro-only extracts and 1 for whole-page extracts.
  • explaintext Return plain-text extracts.
  • excontinue When more results are available, use this parameter to continue.

Source: https://www.mediawiki.org/wiki/Extension:MobileFrontend#prop.3Dextracts
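A minimal Java sketch of such a request (assuming the English Wikipedia endpoint and format=json, which are my additions; the intro ends up in query.pages.<pageid>.extract):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IntroFetcher {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: the English Wikipedia API with JSON output
        String url = "https://en.wikipedia.org/w/api.php"
                + "?action=query&prop=extracts&exintro&explaintext"
                + "&format=json&titles=Albert%20Einstein";
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", "IntroFetcher/1.0 (example)")
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        // The plain-text intro is in query.pages.<pageid>.extract of this JSON
        System.out.println(resp.body());
    }
}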

我命由我不由天
#5 · 2019-02-03 16:41

I worked out the following solution: use an XPath query on the XHTML source code (I took the print version because it is shorter, but it also works on the normal version).

//html/body//div[@id='bodyContent']/p[1]

This works on both the German and the English Wikipedia, and I haven't found an article where it doesn't output the first paragraph. The solution is also quite fast. I also considered taking only the first x characters of the XHTML, but that would render the XHTML invalid.

For anyone looking for the Java code, here it is:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

private static DocumentBuilderFactory dbf;
static {
    dbf = DocumentBuilderFactory.newInstance();
    // Skip fetching the external DTD; parsing is much faster without validation
    dbf.setAttribute("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
}
private static XPathFactory xpathf = XPathFactory.newInstance();
private static String xexpr = "//html/body//div[@id='bodyContent']/p[1]";

// Assumes a 'logger' field is declared elsewhere, e.g. via SLF4J:
// private static final Logger logger = LoggerFactory.getLogger(WikiSummary.class);

private static String getPlainSummary(String url) {
    try {
        // Open the wiki page
        URL u = new URL(url);
        URLConnection uc = u.openConnection();
        uc.setRequestProperty("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1) Gecko/20090616 Firefox/3.5");
        InputStream uio = uc.getInputStream();
        InputSource src = new InputSource(uio);

        // Construct the builder and parse the XHTML
        DocumentBuilder builder = dbf.newDocumentBuilder();
        Document docXML = builder.parse(src);

        // Apply the XPath expression
        XPath xpath = xpathf.newXPath();
        XPathExpression xpathe = xpath.compile(xexpr);
        String s = xpathe.evaluate(docXML);

        // Return the extracted text, or null if nothing matched
        if (s.length() == 0) {
            return null;
        } else {
            return s;
        }
    }
    catch (IOException ioe) {
        logger.error("Can't get XML", ioe);
        return null;
    }
    catch (ParserConfigurationException pce) {
        logger.error("Can't get DocumentBuilder", pce);
        return null;
    }
    catch (SAXException se) {
        logger.error("Can't parse XML", se);
        return null;
    }
    catch (XPathExpressionException xpee) {
        logger.error("Can't parse XPath", xpee);
        return null;
    }
}

Use it by calling getPlainSummary("http://de.wikipedia.org/wiki/Uma_Thurman");

The star
#6 · 2019-02-03 16:50

Wikipedia offers an Abstracts download. While this is quite a large file (currently 2.5 GB), it offers exactly the info you want, for all articles.

Bombasti
#7 · 2019-02-03 16:51

As you expected, you will probably have to end up parsing the wiki source, the rendered HTML, or both. However, the Wikipedia:Lead_section guideline may give you some indication of what to expect in well-written articles.
