How to parse/extract data from a MediaWiki marked-up article via Python

Posted 2019-03-19 07:16

Question:

Source Mediawiki markup

Right now I'm using a variety of regexes to "parse" the data in the MediaWiki markup into lists/dictionaries so that elements within the article can be used.

This is hardly the best method, as the number of cases that have to be handled is large.

How would one parse an article's MediaWiki markup into a variety of Python objects so that the data within can be used?

Example being:

  • Extract all headlines into a dictionary, keyed by their section.
  • Grab all interwiki links and put them into a list (I know this can be
    done via the API, but I'd rather make only one API call to reduce
    bandwidth use).
  • Extract all image names and key them by their section.

A variety of regexes can achieve the above, but I'm finding the number I have to make rather large.

Here's the unofficial MediaWiki markup specification (I don't find the official specification as useful).
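
For reference, the kind of regex-based extraction I'm trying to move away from looks roughly like this (a minimal sketch that only covers the simplest form of each construct, which is exactly the problem; "article.wiki" is just a placeholder file name):

import re

markup = open("article.wiki").read()  # raw MediaWiki markup for one article

# Headlines: = H1 =, == H2 == ... (plain headings only);
# findall returns (level_markers, title) tuples
headings = re.findall(r"^(={1,6})\s*(.+?)\s*\1\s*$", markup, re.MULTILINE)

# Internal/interwiki links: [[Target]] or [[Target|caption]]
links = re.findall(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]", markup)

# Image names: [[File:Name.jpg|...]] or [[Image:Name.jpg|...]]
images = re.findall(r"\[\[(?:File|Image):([^\]|]+)", markup, re.IGNORECASE)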

Answer 1:

mwlib - MediaWiki parser and utility library

pediapress/mwlib:

mwlib provides a library for parsing MediaWiki articles and converting them to different output formats. mwlib is used by Wikipedia's "Print/export" feature to generate PDF documents from Wikipedia articles.

Here's the documentation page. The older doc page used to have a one-liner example:

from mwlib.uparser import simpleparse

# parse a short snippet of wiki markup into a tree of node objects
simpleparse("=h1=\n*item 1\n*item2\n==h2==\nsome [[Link|caption]] there\n")

If you want to see how it's used in action, see the test cases that come with the code (mwlib/tests/test_parser.py in the git repository):

from mwlib import parser, uparser

# simpleparse turns raw wiki markup into a parse tree of node objects
parse = uparser.simpleparse

def test_headings():
    r = parse(u"""
= 1 =
== 2 ==
= 3 =
""")

    # Only top-level sections are direct children of the article node;
    # "2" is nested inside section "1", so it does not appear here.
    sections = [x.children[0].asText().strip() for x in r.children if isinstance(x, parser.Section)]
    assert sections == [u"1", u"3"]
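
Building on the same API used in that test (node .children lists, parser.Section, and asText()), a rough sketch of walking the parse tree to collect every section title, which is the kind of extraction the question asks about, might look like this. The helper function is my own, not part of mwlib's documented examples:

from mwlib import parser, uparser

def collect_section_titles(node, titles=None):
    # Recursively walk a mwlib parse tree, collecting the title text of
    # every Section node (nested subsections included).
    if titles is None:
        titles = []
    if isinstance(node, parser.Section) and node.children:
        titles.append(node.children[0].asText().strip())
    for child in getattr(node, "children", None) or []:
        collect_section_titles(child, titles)
    return titles

tree = uparser.simpleparse(u"\n= 1 =\n== 2 ==\n= 3 =\n")
print(collect_section_titles(tree))  # titles for sections 1, 2 and 3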

Also see Markup spec and Alternative parsers for more information.



Answer 2:

This question is old, but for anyone else coming here: there is a MediaWiki parser written in Python on GitHub. It makes it very easy to transform articles into plain text, something that, if I remember correctly, I couldn't manage with mwlib in the past.
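
The answer doesn't name the library, but assuming it is mwparserfromhell (a widely used MediaWiki parser written in Python and hosted on GitHub), the extraction tasks from the question look roughly like this:

import mwparserfromhell

# A small sample of markup; in practice this would be the full article source
markup_text = "== Heading ==\nSome [[Link|caption]] and a [[File:Example.jpg|thumb]] image."

wikicode = mwparserfromhell.parse(markup_text)

plain = wikicode.strip_code()  # markup stripped down to plain text
headings = [str(h.title).strip() for h in wikicode.filter_headings()]
links = [str(l.title) for l in wikicode.filter_wikilinks()]  # link targets

# Image links are just wikilinks whose target starts with File:/Image:
images = [str(l.title) for l in wikicode.filter_wikilinks()
          if str(l.title).startswith(("File:", "Image:"))]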



Answer 3:

I was searching for a similar solution to parse a certain wiki and stumbled upon Pandoc, which takes multiple input formats and generates multiple output formats too.

From the site:

Pandoc - a universal document converter

If you need to convert files from one markup format into another, pandoc is your swiss-army knife. Pandoc can convert documents in markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, TWiki markup, OPML, Emacs Org-Mode, Txt2Tags, Microsoft Word docx, EPUB, or Haddock markup to

HTML formats: XHTML, HTML5, and HTML slide shows using Slidy, reveal.js, Slideous, S5, or DZSlides.
Word processor formats: Microsoft Word docx, OpenOffice/LibreOffice ODT, OpenDocument XML
Ebooks: EPUB version 2 or 3, FictionBook2
Documentation formats: DocBook, GNU TexInfo, Groff man pages, Haddock markup
Page layout formats: InDesign ICML
Outline formats: OPML
TeX formats: LaTeX, ConTeXt, LaTeX Beamer slides
PDF via LaTeX
Lightweight markup formats: Markdown (including CommonMark), reStructuredText, AsciiDoc, MediaWiki markup, DokuWiki markup, Emacs Org-Mode, Textile
Custom formats: custom writers can be written in lua.
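
For the original question's use case, the relevant direction is MediaWiki markup in and something machine-friendly out; for example, pandoc -f mediawiki -t plain converts wiki markup to plain text. A minimal sketch of driving that from Python (assuming pandoc is installed and on the PATH):

import subprocess

wiki_markup = "== Heading ==\nSome [[Link|caption]] text."

# Pipe MediaWiki markup to pandoc's stdin and read the converted plain text from stdout
result = subprocess.run(
    ["pandoc", "-f", "mediawiki", "-t", "plain"],
    input=wiki_markup,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)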


Answer 4:

Wiki Parser parses Wikipedia dumps into XML that preserves all content and article structure. Use that, and then process the resulting XML with your Python program.
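
A sketch of streaming over such a dump-derived XML file with the standard library; note that "articles.xml" and the "article"/"title" names below are hypothetical placeholders, since the real file and element names depend on the schema Wiki Parser actually emits:

import xml.etree.ElementTree as ET

# Stream through the file instead of loading it all at once, since dumps can be huge.
# "articles.xml", "article" and "title" are hypothetical; check Wiki Parser's real output.
for event, elem in ET.iterparse("articles.xml", events=("end",)):
    if elem.tag == "article":
        title = elem.findtext("title")
        # ... extract whatever you need from this article element here ...
        elem.clear()  # free memory for elements already processed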