Gathering Formatted Content From Multiple Webpages

Published 2019-10-19 17:24

I'm working on a research project and need the data from a show's transcripts. The problem is that the transcripts are formatted for a specific wiki (the Arrested Development wiki), and I need them to be machine-readable.

What is the best way to download all of these transcripts and reformat them? Is Python's HTMLParser my best option?
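(For context, the standard-library parser the question mentions is `html.parser` in Python 3; it is used by subclassing `HTMLParser` and overriding handler methods. A minimal sketch that collects the text of `<b>` tags, which is where this kind of wiki typically marks up speaker names:)

```python
from html.parser import HTMLParser

class BoldTextCollector(HTMLParser):
    """Collect the text content of every <b>...</b> element."""

    def __init__(self):
        super().__init__()
        self.in_bold = False
        self.bold_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "b":
            self.in_bold = True

    def handle_endtag(self, tag):
        if tag == "b":
            self.in_bold = False

    def handle_data(self, data):
        # Only keep text that appears inside a <b> element
        if self.in_bold:
            self.bold_text.append(data)

parser = BoldTextCollector()
parser.feed("<p><b>Michael:</b> I have an idea.</p>")
print(parser.bold_text)  # ['Michael:']
```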

Answer 1:

I wrote a Python script that takes a link to a wiki transcript as input and gives you a plain-text version of the conversation in a text file as output. I hope this helps with your project.

import re
import urllib.request

link = input("Link to transcript: ")
filename = link.split("/")[-1] + ".txt"

# Download the page HTML
with urllib.request.urlopen(link) as response:
    html = response.read().decode("utf-8", errors="replace")

# Speaker names on the wiki are bolded and end with ':</b>'
speaker_positions = [m.start() for m in re.finditer(':</b>', html)]

with open(filename, 'w') as out:
    for pos in speaker_positions:
        # Skip matches that are immediately followed by another tag
        if html[pos + 5] == "<":
            continue

        # Scan backwards from the match to the preceding '>'
        # to recover the speaker's name
        searchpos = pos - 1
        name = ""
        while html[searchpos] != ">":
            name = html[searchpos] + name
            searchpos -= 1

        # Scan forwards from the end of ':</b>' to the next '<'
        # to recover the spoken line
        searchpos = pos + 5
        text = ""
        while html[searchpos] != "<":
            text += html[searchpos]
            searchpos += 1

        # Strip non-breaking-space entities and write one line per speaker
        line = (name + ": " + text).replace("&#160;", "")
        out.write(line + "\n")
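The character-by-character scan above can also be expressed as a single regular expression, which is more compact and easier to adjust. This is a sketch under the same assumption about the wiki's markup, namely that each spoken line looks like `<b>Name:</b> dialogue`:

```python
import re

# Sample markup in the assumed '<b>Name:</b> dialogue' shape
html = '<p><b>Michael:</b>&#160;I have an idea. <b>Gob:</b> Come on!</p>'

# One regex captures the bolded speaker name and the dialogue that
# follows it, up to the next tag
pattern = re.compile(r'<b>([^<]+):</b>([^<]*)')

lines = []
for name, text in pattern.findall(html):
    lines.append((name + ":" + text).replace("&#160;", " ").strip())

print(lines)  # ['Michael: I have an idea.', 'Gob: Come on!']
```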


Source: Gathering Formatted Content From Multiple Webpages