Parsing HTML in Python - lxml or BeautifulSoup? Which of these is better for what kinds of purposes?

Posted 2019-01-03 05:10

From what I can make out, the two main HTML parsing libraries in Python are lxml and BeautifulSoup. I've chosen BeautifulSoup for a project I'm working on, but I chose it for no particular reason other than finding the syntax a bit easier to learn and understand. But I see a lot of people seem to favour lxml and I've heard that lxml is faster.

So I'm wondering what are the advantages of one over the other? When would I want to use lxml and when would I be better off using BeautifulSoup? Are there any other libraries worth considering?

7 Answers
Root(大扎)
#2 · 2019-01-03 05:16

In summary, lxml is positioned as a lightning-fast, production-quality HTML and XML parser that, by the way, also includes a soupparser module to fall back on BeautifulSoup's functionality. BeautifulSoup is a one-person project, designed to save you time by quickly extracting data from poorly-formed HTML or XML.

The lxml documentation says that both parsers have advantages and disadvantages. For this reason, lxml provides a soupparser so you can switch back and forth. Quoting:

BeautifulSoup uses a different parsing approach. It is not a real HTML parser but uses regular expressions to dive through tag soup. It is therefore more forgiving in some cases and less good in others. It is not uncommon that lxml/libxml2 parses and fixes broken HTML better, but BeautifulSoup has superior support for encoding detection. It very much depends on the input which parser works better.

In the end, they conclude:

The downside of using this parser is that it is much slower than the HTML parser of lxml. So if performance matters, you might want to consider using soupparser only as a fallback for certain cases.

If I understand them correctly, it means that the soup parser is more robust --- it can deal with a "soup" of malformed tags by using regular expressions --- whereas lxml is more straightforward and just parses things and builds a tree as you would expect. I assume this also applies to BeautifulSoup itself, not just to the soupparser for lxml.
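That suggests a simple pattern, sketched below under the assumption that both lxml and BeautifulSoup are installed: try lxml's fast native parser first, and fall back to the soupparser only when it fails.

from lxml import etree
import lxml.html
from lxml.html import soupparser

def parse_html(tag_soup):
    # Fast path: lxml's own HTML parser.
    try:
        return lxml.html.fromstring(tag_soup)
    except etree.ParserError:
        # Fallback: soupparser delegates to BeautifulSoup, which is
        # slower but more forgiving with severely malformed markup.
        return soupparser.fromstring(tag_soup)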

They also show how to benefit from BeautifulSoup's encoding detection, while still parsing quickly with lxml:

>>> import lxml.html
>>> from BeautifulSoup import UnicodeDammit

>>> def decode_html(html_string):
...     converted = UnicodeDammit(html_string, isHTML=True)
...     if not converted.unicode:
...         raise ValueError("Failed to detect encoding, tried [%s]" %
...                          ', '.join(converted.triedEncodings))
...     # print converted.originalEncoding
...     return converted.unicode

>>> root = lxml.html.fromstring(decode_html(tag_soup))

(Same source: http://lxml.de/elementsoup.html).
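Note that the snippet above targets BeautifulSoup 3. In BeautifulSoup 4, UnicodeDammit lives in the bs4 package and the attribute names changed; a rough modern equivalent (a sketch of my own, not from the lxml docs) looks like this:

import lxml.html
from bs4 import UnicodeDammit  # BeautifulSoup 4

def decode_html_bs4(html_bytes):
    # Let UnicodeDammit guess the encoding and return unicode text.
    converted = UnicodeDammit(html_bytes, is_html=True)
    if converted.unicode_markup is None:
        raise ValueError("Failed to detect encoding, tried [%s]" %
                         ', '.join(str(e) for e in converted.tried_encodings))
    return converted.unicode_markup

root = lxml.html.fromstring(decode_html_bs4(b"<p>caf\xc3\xa9</p>"))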

In the words of Beautiful Soup's creator:

That's it! Have fun! I wrote Beautiful Soup to save everybody time. Once you get used to it, you should be able to wrangle data out of poorly-designed websites in just a few minutes. Send me email if you have any comments, run into problems, or want me to know about your project that uses Beautiful Soup.

 --Leonard

Quoted from the Beautiful Soup documentation.

I hope this is now clear. Beautiful Soup is a brilliant one-person project designed to save you time extracting data from poorly-designed websites. The goal is to save you time right now and get the job done, not necessarily to save you time in the long term, and definitely not to optimize the performance of your software.

Also, from the lxml website:

lxml has been downloaded from the Python Package Index more than two million times and is also available directly in many package distributions, e.g. for Linux or MacOS-X.

And, from Why lxml?:

The C libraries libxml2 and libxslt have huge benefits:... Standards-compliant... Full-featured... fast. fast! FAST! ... lxml is a new Python binding for libxml2 and libxslt...

Root(大扎)
#3 · 2019-01-03 05:20

A somewhat outdated speed comparison can be found here, which clearly recommends lxml, as the speed differences seem drastic.
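If you want to check the numbers on your own documents, a minimal timing harness is easy to write (a sketch; the input file name is hypothetical):

import timeit

setup = "html = open('sample.html').read()"  # hypothetical input file

lxml_time = timeit.timeit("lxml.html.fromstring(html)",
                          setup="import lxml.html; " + setup, number=100)
bs_time = timeit.timeit("BeautifulSoup(html, 'html.parser')",
                        setup="from bs4 import BeautifulSoup; " + setup,
                        number=100)

print("lxml: %.3fs  BeautifulSoup: %.3fs" % (lxml_time, bs_time))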

闹够了就滚
#4 · 2019-01-03 05:21

I've used lxml with great success for parsing HTML. It seems to do a good job of handling "soupy" HTML, too. I'd highly recommend it.

Here's a quick test I had lying around to try handling of some ugly HTML:

import unittest
from io import StringIO  # Python 3; on Python 2 this was: from StringIO import StringIO
from lxml import etree

class TestLxmlStuff(unittest.TestCase):
    bad_html = """
        <html>
            <head><title>Test!</title></head>
            <body>
                <h1>Here's a heading
                <p>Here's some text
                <p>And some more text
                <b>Bold!</b></i>
                <table>
                   <tr>row
                   <tr><td>test1
                   <td>test2
                   </tr>
                   <tr>
                   <td colspan=2>spanning two
                </table>
            </body>
        </html>"""

    def test_soup(self):
        """Test lxml's parsing of really bad HTML"""
        parser = etree.HTMLParser()
        tree = etree.parse(StringIO(self.bad_html), parser)
        self.assertEqual(len(tree.xpath('//tr')), 3)
        self.assertEqual(len(tree.xpath('//td')), 3)
        self.assertEqual(len(tree.xpath('//i')), 0)
        #print(etree.tostring(tree.getroot(), pretty_print=False, method="html"))

if __name__ == '__main__':
    unittest.main()
兄弟一词,经得起流年.
#5 · 2019-01-03 05:22

For starters, BeautifulSoup is no longer actively maintained, and the author even recommends alternatives such as lxml.

Quoting from the linked page:

Version 3.1.0 of Beautiful Soup does significantly worse on real-world HTML than version 3.0.8 does. The most common problems are handling tags incorrectly, "malformed start tag" errors, and "bad end tag" errors. This page explains what happened, how the problem will be addressed, and what you can do right now.

This page was originally written in March 2009. Since then, the 3.2 series has been released, replacing the 3.1 series, and development of the 4.x series has gotten underway. This page will remain up for historical purposes.

tl;dr

Use 3.2.0 instead.

Luminary・发光体
#6 · 2019-01-03 05:24

Pyquery provides the jQuery selector interface to Python (using lxml under the hood).

http://pypi.python.org/pypi/pyquery

It's really awesome; I don't use anything else anymore.
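To give a feel for the interface, here's a tiny illustration of the jQuery-style selectors (a sketch; the sample HTML is made up):

from pyquery import PyQuery as pq

d = pq("<div><p class='title'>Hello</p><p>World</p></div>")
print(d('p.title').text())  # -> Hello
print(d('p').eq(1).text())  # -> World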

【Aperson】
#7 · 2019-01-03 05:24

Don't use BeautifulSoup on its own; use lxml.soupparser. That way you're sitting on top of the power of lxml while still getting the good bit of BeautifulSoup: its handling of really broken, crappy HTML.
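For example (a minimal sketch; the tag soup is made up):

from lxml.html import soupparser

# soupparser hands the markup to BeautifulSoup and converts the
# result into an lxml tree, so xpath/findtext work as usual.
root = soupparser.fromstring("<p>Unclosed <b>tag soup")
print(root.findtext('.//b'))  # expected: 'tag soup'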
