How to make XPath select multiple table elements with the same id

Posted 2019-05-30 05:30

I'm currently trying to extract information from a badly formatted web page. Specifically, the page has used the same id attribute for multiple table elements. The markup is equivalent to something like this:

<body>
    <div id="random_div">
        <p>Some content.</p>
        <table id="table_1">
            <tr>
                <td>Important text 1.</td>
            </tr>
        </table>
        <h4>Some heading in between</h4>
        <table id="table_1">
            <tr>
                <td>Important text 2.</td>
                <td>Important text 3.</td>
            </tr>
        </table>
        <p>How about some more text here.</p>
        <table id="table_1">
            <tr>
                <td>Important text 4.</td>
                <td>Important text 5.</td>
            </tr>
        </table>
    </div>
</body>

Clearly this is incorrectly formatted HTML, due to the multiple use of the same id for an element.

I'm using XPath to try and extract all the text in the various table elements, utilising the language through the Scrapy framework.

My call looks something like this:

hxs.select('//div[contains(@id, "random_div")]//table[@id="table_1"]//text()').extract()

Thus the XPath expression is: //div[contains(@id, "random_id")]//table[@id="table_1"]//text()

This returns: [u'Important text 1.'], i.e., the contents of the first table that matches the id value "table_1". It seems to me that once it has come across an element with a certain id it ignores any future occurrences in the markup. Can anyone confirm this?
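For reference, a duplicated id is invalid HTML, but XPath itself has no notion of id uniqueness, so an expression like the one above should match all three tables. A minimal sketch of that claim using the standard-library ElementTree (which supports only a limited XPath subset, hence the simpler `.//table[@id='table_1']` path) on a well-formed copy of the markup above:

```python
import xml.etree.ElementTree as ET

# Well-formed copy of the snippet from the question.
markup = """
<body>
  <div id="random_div">
    <p>Some content.</p>
    <table id="table_1"><tr><td>Important text 1.</td></tr></table>
    <h4>Some heading in between</h4>
    <table id="table_1"><tr><td>Important text 2.</td><td>Important text 3.</td></tr></table>
    <p>How about some more text here.</p>
    <table id="table_1"><tr><td>Important text 4.</td><td>Important text 5.</td></tr></table>
  </div>
</body>
"""

root = ET.fromstring(markup)

# The attribute predicate matches every table with that id, not just the first.
tables = root.findall(".//table[@id='table_1']")

# Collect the non-whitespace text nodes under each matched table.
texts = [t.strip() for tbl in tables for t in tbl.itertext() if t.strip()]
```

Here `tables` has length 3 and `texts` contains all five "Important text" strings, which is the behaviour the question expects.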

UPDATE

Thanks for the fast responses below. I have tested my code on a page hosted locally, which has the same markup as the example above, and the correct response is returned, i.e.,

`[u'Important text 1.', u'Important text 2.', . . . . ,u'Important text 5.']`

There is therefore nothing wrong with either the XPath expression or the Python calls I'm making.

I guess this means that there is a problem on the webpage itself which is tripping up either XPath or the HTML parser, which is libxml2.

Does anyone have any advice as to how I can dig into this a bit more?

UPDATE 2

I have successfully isolated the problem. It is actually with the underlying parsing library, which is lxml (the Python bindings for the libxml2 C library).

The problem is that the parser is unable to deal with vertical tabs. I have no idea who coded up the site I am dealing with, but it is full of vertical tabs. Web browsers seem to ignore them, which is why running the XPath queries from Firebug against the site in question, for example, succeeds.

Further, because the simplified example above doesn't contain vertical tabs, it works fine. For anyone who comes across this issue in Scrapy (or in Python generally), the following fix worked for me to remove vertical tabs from the HTML responses:

def parse_item(self, response):
    # Remove all vertical tabs from the HTML response. Response.body is
    # read-only in Scrapy, so build a new response with the cleaned body.
    response = response.replace(body=response.body.replace("\v", ""))
    hxs = HtmlXPathSelector(response)
    return hxs.select('//div[contains(@id, "random_div")]'
                      '//table[@id="table_1"]//text()').extract()
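A note for current Scrapy versions and Python 3: the response body is bytes, and a plain `bytes.replace` is clearer than `filter()`. A minimal sketch of just the cleaning step (the markup here is illustrative):

```python
# Vertical tab is ASCII 0x0b; strip it from the raw body before parsing.
dirty = b'<table id="table_1">\x0b<tr><td>Important\x0b text 1.</td></tr></table>'
cleaned = dirty.replace(b"\x0b", b"")

# In Scrapy, Response.body is read-only, so swap the cleaned body in with
# response.replace(body=cleaned) before building the selector.
```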

2 Answers
Deceive 欺骗
Answer 2 · 2019-05-30 06:10
count(//div[@id = "random_div"]/table[@id= "table_1"])

This XPath expression returns 3 for your sample input, so your problem is not with the XPath itself but rather with the functions you use to extract the nodes.

做自己的国王
Answer 3 · 2019-05-30 06:21

With Firebug, this expression:

//table[@id='table_1']//td/text()

gives me this:

[<TextNode textContent="Important text 1.">,
 <TextNode textContent="Important text 2.">,
 <TextNode textContent="Important text 3.">,
 <TextNode textContent="Important text 4.">,
 <TextNode textContent="Important text 5.">]

I included the td filtering to give a nicer result, since otherwise, you would get the whitespace and newlines between the tags. But all in all, it seems to work.

What I noticed was that you query for //div[contains(@id, "random_id")], while your HTML snippet has a tag that reads <div id="random_div"> -- the _id and _div being different. I don't know Scrapy so I can't really say if that does something, but couldn't that be your issue as well?
