Right now I'm using a variety of regexes to "parse" the data in the MediaWiki markup into lists/dictionaries, so that elements within the article can be used.
This is hardly the best method, as the number of cases that have to be handled is large.
How would one parse an article's mediawiki markup into a variety of python objects so that the data within can be used?
Example being:
- Extract all headlines to a dictionary, hashing them with their sections.
- Grab all interwiki links and stick them into a list (I know this can be done from the API, but I'd rather have only one API call to reduce bandwidth use).
- Extract all image names and hash them with their sections.
A variety of regexes can achieve the above, but the number of them I have to write is getting rather large.
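For illustration, here is the kind of pattern collection this approach ends up needing. These two regexes are rough approximations (headings and internal links only), not a complete grammar, and they miss plenty of edge cases:

```python
import re

# Approximate patterns; MediaWiki markup has many edge cases these ignore.
HEADING_RE = re.compile(r"^(={2,6})\s*(.*?)\s*\1\s*$", re.MULTILINE)
WIKILINK_RE = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def headlines(wikitext):
    """Map heading text to its level (number of '=' signs)."""
    return {m.group(2): len(m.group(1)) for m in HEADING_RE.finditer(wikitext)}

def link_targets(wikitext):
    """Collect link targets, ignoring any display text after '|'."""
    return [m.group(1) for m in WIKILINK_RE.finditer(wikitext)]
```

Every additional element type (images, templates, tables, interwiki prefixes) needs yet another pattern, which is exactly the problem.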
Here's the unofficial MediaWiki markup specification (I don't find the official specification as useful).
This question is old, but for others coming here: there is a MediaWiki parser written in Python on GitHub. It makes it very easy to transform articles into plain text, something which, if I remember correctly, I couldn't manage in the past with mwlib.
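One such parser (it may or may not be the one meant above) is mwparserfromhell; a minimal sketch of pulling plain text, headings, and wikilinks out of an article looks like this:

```python
import mwparserfromhell

def parse_article(wikitext):
    code = mwparserfromhell.parse(wikitext)
    plain_text = code.strip_code()                    # article as plain text
    headings = [str(h.title).strip() for h in code.filter_headings()]
    links = [str(l.title) for l in code.filter_wikilinks()]
    return plain_text, headings, links
```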
mwlib - MediaWiki parser and utility library (pediapress/mwlib on GitHub)
Here's the documentation page. The older doc page used to have a one-liner example.
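If memory serves, it was a call to mwlib.uparser.simpleparse, roughly along these lines (the markup string is only an illustration):

```python
from mwlib.uparser import simpleparse

# Parse a small snippet of MediaWiki markup and print the resulting tree.
simpleparse("=h1=\n*item 1\n*item 2\n==h2==\nsome [[Link|caption]] there\n")
```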
If you want to see it used in action, see the test cases that come with the code (mwlib/tests/test_parser.py in the git repository).
Also see Markup spec and Alternative parsers for more information.
I was searching for a similar solution to parse a certain wiki and stumbled upon Pandoc, which accepts multiple input formats and can generate multiple output formats as well.
From the site: "If you need to convert files from one markup format into another, pandoc is your swiss-army knife."
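Assuming pandoc is installed and on the PATH, it can be driven from Python by shelling out to the CLI with the mediawiki reader and the plain-text writer (the function below is just a sketch of that call):

```python
import subprocess

def mediawiki_to_plain(wikitext):
    """Convert MediaWiki markup to plain text via the pandoc CLI."""
    result = subprocess.run(
        ["pandoc", "--from=mediawiki", "--to=plain"],
        input=wikitext,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```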
Wiki Parser parses Wikipedia dumps into XML that preserves all content and article structure. Use that, then process the resulting XML with your Python program.
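The output schema would need to be checked against Wiki Parser's documentation, so the tag names below ("article", "title") are hypothetical placeholders, but the general streaming pattern with xml.etree.ElementTree.iterparse for a large dump file would be something like:

```python
import xml.etree.ElementTree as ET

def iter_articles(xml_path):
    """Stream articles out of a large XML file without loading it all at once.
    NOTE: 'article' and 'title' are placeholder tag names; adjust them to
    whatever Wiki Parser actually emits."""
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == "article":                       # placeholder tag name
            title = elem.findtext("title", default="")  # placeholder child tag
            yield title, elem
            elem.clear()                                # free memory as we go
```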