Finding all tags and attributes in an HTML page

Posted 2019-05-25 13:58

Question:

I am a newbie looking at HTML code for the first time. For my research, I need to know the number of tags and attributes in a webpage.

I looked at various parsers and found Beautiful Soup to be one of the most preferred. The following code (taken from Parsing HTML using Python) shows one way to parse a page:

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen('http://www.google.com/')
soup = BeautifulSoup(page)

x = soup.body.find('div', attrs={'class' : 'container'}).text
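
(For reference, a rough Python 3 equivalent of the snippet above, assuming the bs4 package is installed, would be:)

import urllib.request          # urllib2 became urllib.request in Python 3
from bs4 import BeautifulSoup  # the package is now imported as bs4

page = urllib.request.urlopen('http://www.google.com/')
soup = BeautifulSoup(page, 'html.parser')

x = soup.body.find('div', attrs={'class': 'container'}).text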

I found find_all quite useful, but it needs an argument to find something.

Can someone guide me on how to get the count of all tags and attributes in an HTML page?

Can Google Developer Tools help in that regard?

Answer 1:

If you call find_all() without any arguments, it finds all elements on the page recursively. Demo:

>>> from bs4 import BeautifulSoup
>>> 
>>> data = """
... <html><head><title>The Dormouse's story</title></head>
... <body>
... <p class="title"><b>The Dormouse's story</b></p>
... 
... <p class="story">Once upon a time there were three little sisters; and their names were
... <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
... <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
... <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
... and they lived at the bottom of a well.</p>
... 
... <p class="story">...</p>
... """
>>> 
>>> soup = BeautifulSoup(data)
>>> for tag in soup.find_all():
...     print(tag.name)
... 
html
head
title
body
p
b
p
a
a
a
p
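
If all you need are the totals rather than the tag names, you could take the length of that list and sum the attribute dictionaries. A minimal sketch, reusing the soup object from the demo above:

tags = soup.find_all()
print(len(tags))                            # number of tags (11 for the demo markup)
print(sum(len(t.attrs) for t in tags))      # number of attributes (12 for the demo markup)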

Padraic showed you how to count elements and attributes via BeautifulSoup. In addition to that, here is how to do the same with lxml.html:

from lxml.html import fromstring

root = fromstring(data)  # data is the same HTML string as in the demo above
print(int(root.xpath("count(//*)")) + int(root.xpath("count(//@*)")))
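
If you prefer plain iteration over XPath, an equivalent sketch reusing the root object above would be to walk the tree with root.iter() and sum up the attribute dictionaries (note that iter() also yields comments and processing instructions, which this markup does not contain):

print(sum(1 + len(el.attrib) for el in root.iter()))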

As a bonus, I've made a simple benchmark demonstrating that the latter approach is much faster (on my machine, with my setup, and without specifying a parser, which would otherwise make BeautifulSoup use lxml under the hood; a lot of things can affect the results, but anyway):

$ python -mtimeit -s'import test' 'test.count_bs()'
1000 loops, best of 3: 618 usec per loop
$ python -mtimeit -s'import test' 'test.count_lxml_html()'
10000 loops, best of 3: 114 usec per loop

where test.py contains:

from bs4 import BeautifulSoup
from lxml.html import fromstring

data = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

def count_bs():
    # one for the tag itself plus one per attribute, summed over all tags
    return sum(len(ele.attrs) + 1 for ele in BeautifulSoup(data).find_all())


def count_lxml_html():
    # count(//*) counts elements, count(//@*) counts attributes
    root = fromstring(data)
    return int(root.xpath("count(//*)")) + int(root.xpath("count(//@*)"))


Answer 2:

If you want the combined count of all tags and attributes:

sum(len(ele.attrs) + 1 for ele in BeautifulSoup(page).find_all())
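
In case you want the two numbers separately rather than the combined total, a small sketch building on the same idea (passing "html.parser" here only makes the parser choice explicit; any installed parser works):

soup = BeautifulSoup(page, "html.parser")
tags = soup.find_all()
print("tags:", len(tags))
print("attributes:", sum(len(tag.attrs) for tag in tags))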