I've been searching for about two months now to find a script that gets the Wikipedia description section only. (It's for a bot I'm building, not for IRC.) That is, when I say
/wiki bla bla bla
it will go to the Wikipedia page for bla bla bla, get the following, and return it to the chatroom:
"Bla Bla Bla" is the name of a song made by Gigi D'Agostino. He described this song as "a piece I wrote thinking of all the people who talk and talk without saying anything". The prominent but nonsensical vocal samples are taken from UK band Stretch's song "Why Did You Do It"
Here is the closest I've found, but it only gets the URL:
import json
import urllib.request, urllib.parse

def google(searchfor):
    query = urllib.parse.urlencode({'q': searchfor})
    url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
    search_response = urllib.request.urlopen(url)
    search_results = search_response.read().decode("utf8")
    results = json.loads(search_results)
    data = results['responseData']
    hits = data['results']
    if len(hits) > 0:
        return hits[0]['url']
    else:
        return "No results found."
(Python 3.1)
You can try the BeautifulSoup HTML parsing library for Python, but you'll have to write a simple parser.
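A minimal sketch of what such a parser could look like. The HTML snippet is a stand-in for a real page fetch, and the mw-parser-output class name is an assumption about Wikipedia's article markup, not something from the answer above:

```python
# Hypothetical sketch: pull the first paragraph out of a Wikipedia article's
# HTML with BeautifulSoup. In real use, `html` would come from urllib.
from bs4 import BeautifulSoup

html = """
<div class="mw-parser-output">
  <table class="infobox"><tr><td>ignored</td></tr></table>
  <p>"Bla Bla Bla" is the name of a song made by Gigi D'Agostino.</p>
  <p>Second paragraph.</p>
</div>
"""

def first_paragraph(page_html):
    soup = BeautifulSoup(page_html, "html.parser")
    # The lead text is the first <p> inside the article body container.
    body = soup.find("div", class_="mw-parser-output")
    return body.find("p").get_text()

print(first_paragraph(html))
```

The infobox table is skipped automatically because `find("p")` only matches paragraph elements.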
DBpedia is the perfect solution for this problem. Look at http://dbpedia.org/page/Metallica, for example: the data is cleanly organised using RDF. You can query for anything at http://dbpedia.org/sparql using SPARQL, the query language for RDF. There's always a way to find the page ID to get the descriptive text, but this should do for the most part.
There will be a learning curve for RDF and SPARQL before you can write useful code, but this is the perfect solution.
For example, a query run for Metallica returns an HTML table with the abstract in several different languages.
Change "Metallica" to any resource name (the name as it appears in wikipedia.org/wiki/ResourceName) to query for that page's abstract.
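The original query text was not preserved here, but a sketch of the kind of query meant, built in Python, might look like this. The query body is my own; the dbr:/dbo: prefixes are standard DBpedia prefixes predefined on the endpoint:

```python
# Sketch: build a SPARQL query URL that asks DBpedia for the English
# abstract of a resource. Only the endpoint URL comes from the answer above;
# the query itself is an illustrative assumption.
import urllib.parse

SPARQL_ENDPOINT = "http://dbpedia.org/sparql"

def abstract_query_url(resource):
    query = """
        SELECT ?abstract WHERE {
          dbr:%s dbo:abstract ?abstract .
          FILTER (lang(?abstract) = "en")
        }
    """ % resource
    params = urllib.parse.urlencode({"query": query,
                                     "format": "application/json"})
    return SPARQL_ENDPOINT + "?" + params

print(abstract_query_url("Metallica"))
```

Fetching that URL with urllib.request would return the abstract as JSON; requesting the endpoint in a browser instead returns the HTML table mentioned above.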
Use the MediaWiki API, which runs on Wikipedia. You will have to do some parsing of the data yourself.
For instance, you can request a page's content through a query URL and parse the JSON that comes back.
You will probably want to search for the query and use the first result, to handle spelling errors and the like.
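To handle that search step, one option is the documented opensearch action of the MediaWiki API; a sketch of building such a request (the function name is mine, the parameters are from the API):

```python
# Sketch: use action=opensearch to resolve a rough search string to a real
# page title, which helps with spelling errors as suggested above.
import urllib.parse

API = "https://en.wikipedia.org/w/api.php"

def search_url(term):
    params = urllib.parse.urlencode({
        "action": "opensearch",  # title search with fuzzy matching
        "search": term,
        "limit": 1,              # we only want the best match
        "format": "json",
    })
    return API + "?" + params

print(search_url("bla bla bla"))
```

Fetching this URL returns a JSON array whose second element lists matching titles; take the first and feed it into the content query.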
You can fetch just the first section using the API's parse action with section=0.
This will give you raw wikitext; you'll have to deal with templates and markup.
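The exact URL from the original answer was not preserved, but a sketch built from documented action=parse parameters (section=0 limits the result to the lead section, prop=wikitext asks for raw wikitext) could look like this:

```python
# Sketch: build an API URL that returns only the first (lead) section of a
# page as raw wikitext, wrapped in JSON.
import urllib.parse

API = "https://en.wikipedia.org/w/api.php"

def first_section_wikitext_url(title):
    params = urllib.parse.urlencode({
        "action": "parse",
        "page": title,
        "section": 0,        # 0 = the lead section only
        "prop": "wikitext",  # raw wikitext rather than rendered HTML
        "format": "json",
    })
    return API + "?" + params

print(first_section_wikitext_url("Bla Bla Bla"))
```

Swapping prop=wikitext for prop=text would return the rendered HTML instead, which is the whole-page approach described next.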
Or you can fetch the whole page rendered into HTML, which has its own pros and cons as far as parsing goes.
I can't see an easy way to get parsed HTML of the first section in a single call, but you can do it with two calls: pass the wikitext you receive from the first URL back to the API with
text=
in place of the
page=
parameter in the second URL.
UPDATE
Sorry, I neglected the "plain text" part of your question. Get the part of the article you want as HTML; it's much easier to strip HTML than to strip wikitext!
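Stripping the tags doesn't even need a third-party library; a small sketch using the standard-library HTMLParser (the class and function names are mine):

```python
# Sketch: once you have the section as HTML, a tiny HTMLParser subclass is
# enough to drop the tags and keep the plain text.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for every run of text between tags; collect it all.
        self.parts.append(data)

def strip_html(html):
    extractor = TextExtractor()
    extractor.feed(html)
    return "".join(extractor.parts)

sample = '<p>"Bla Bla Bla" is a song by <a href="/wiki/Gigi">Gigi D\'Agostino</a>.</p>'
print(strip_html(sample))
```

This keeps link text (like the artist's name) while discarding the markup around it, which is usually what you want for a chatroom reply.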
You can try WikiExtractor: http://medialab.di.unipi.it/wiki/Wikipedia_Extractor
It's for Python 2.7 and 3.3+.
There is also the option of consuming Wikipedia pages through a wrapper API such as JSONpedia. It works both live (asking for the current JSON representation of a wiki page) and storage-based (querying multiple pages previously ingested into Elasticsearch and MongoDB). The output JSON also includes the plain rendered page text.