How to speed up Pywikibot?

Posted 2019-04-25 04:59

Question:

I've built some report tools using Pywikibot. As things are growing, it now takes up to 2 hours to finish the reports, so I'm looking to speed things up. Main ideas:

  • Disable throttling; the script is read-only, so page.get(throttle=False) handles this
  • Cache
  • Direct database access

Unfortunately I can't find much documentation about caching and DB access. The only way seems to be diving into the code, and there's limited information about database access in user-config.py. Where can I find good documentation about Pywikibot caching and direct DB access, if any exists?

And, are there other ways to speed things up?

Answer 1:

Use PreloadingGenerator so that pages are loaded in batches, or MySQLPageGenerator if you use direct DB access.

See examples here.
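A minimal sketch of the batching approach (the site and titles here are just placeholders):

import pywikibot
from pywikibot import pagegenerators

site = pywikibot.Site('en', 'wikipedia')   # placeholder wiki
titles = ['Alpha', 'Beta', 'Gamma']        # placeholder titles
pages = (pywikibot.Page(site, t) for t in titles)

# PreloadingGenerator fetches the page texts in batched API requests,
# so the loop below does not trigger one HTTP request per page.
for page in pagegenerators.PreloadingGenerator(pages):
    print(page.title(), len(page.text))    # page.text is already loaded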



Answer 2:

Looks like pagegenerators is indeed a good way to speed things up. The best documentation for it is directly in the source.

Even there it's not immediately clear where to put the MySQL connection details. (I will hopefully update this.)
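For what it's worth, on the Pywikibot versions I've looked at the connection details live in user-config.py. The exact option names vary between releases (check pywikibot/config.py in your install), so treat this as a sketch:

# user-config.py -- option names may differ in your Pywikibot release
db_hostname = 'localhost'
db_username = 'wikiuser'
db_password = 'secret'
db_name_format = '{0}'   # how the database name is derived from the wiki code

MySQLPageGenerator then takes a query that must return page_namespace and page_title for each row:

from pywikibot import pagegenerators

query = "SELECT page_namespace, page_title FROM page WHERE page_namespace = 0 LIMIT 100"
for page in pagegenerators.MySQLPageGenerator(query):
    print(page.title())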



Answer 3:

I'm using the "-pt:1" option in the command to make one edit per second.

I'm currently running the command

python pwb.py category add -pt:1 -file:WX350.txt -to:"Taken with Sony DSC-WX350"

https://www.mediawiki.org/wiki/Manual:Pywikibot/Global_Options
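The same throttle can also be set in user-config.py instead of on the command line; a minimal sketch:

# user-config.py
put_throttle = 1   # seconds to wait between write operations (same effect as -pt:1)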



Answer 4:

Using PreloadingGenerator from pagegenerators is the simplest way to speed up programs that need to read a lot from online wikis, as other answers have already pointed out.

Alternative ways are:

  • Download a dump of the wiki and read it locally. Wikimedia projects offer dumps updated about once a week.
  • Create an account on Wikimedia Labs and work from there, enjoying a faster connection to the Wikipedias and up-to-date dumps.

Modifying the throttle might put you in danger of getting blocked if the target wiki has a policy against it - and I'm afraid Wikipedia has such a policy.



Answer 5:

You can download all the data in advance as a dump file from http://dumps.wikimedia.org. You can then use two passes: the first pass reads the data from the local dump, and the second pass reads only the remote pages for which you found issues in the local dump.

Example:

import pywikibot
from pywikibot import pagegenerators
from pywikibot.xmlreader import XmlDump

site = pywikibot.Site()  # the wiki the dump was taken from
dump_file = 'hewiktionary-latest-pages-articles.xml.bz2'

# First pass: scan the local dump (entries have .title and .text) for problems.
all_wiktionary = XmlDump(dump_file).parse()
gen = (pywikibot.Page(site, p.title) for p in all_wiktionary if report_problem(p))
# Second pass: fetch only the flagged pages from the live wiki, in batches.
gen = pagegenerators.PreloadingGenerator(gen)
for page in gen:
    report_problem(page)