Python code to remove HTML tags from a string [duplicate]

Posted 2019-01-16 06:59

This question already has an answer here:

I have a text like this:

text = """<div>
<h1>Title</h1>
<p>A long text........ </p>
<a href=""> a link </a>
</div>"""

Using pure Python, with no external module, I want to get this:

>>> print remove_tags(text)
Title A long text..... a link

I know I can do it using lxml.html.fromstring(text).text_content(), but I need to achieve the same in pure Python, using only builtins or the standard library, on Python 2.6+.

How can I do that?

5 Answers
骚年 ilove
Answer 2 · 2019-01-16 07:23

Python has several XML modules built in. The simplest one for the case that you already have a string with the full HTML is xml.etree, which works (somewhat) similarly to the lxml example you mention:

import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())
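
A quick check against the question's sample (a sketch, assuming text is the string from the question). Note that fromstring() expects well-formed markup, so sloppy real-world HTML with unclosed tags or undefined entities such as &nbsp; will raise a ParseError:

>>> remove_tags(text)
'\nTitle\nA long text........ \n a link \n'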
甜甜的少女心
Answer 3 · 2019-01-16 07:30

Note that this isn't perfect, since if you had something like, say, <a title=">"> it would break. However, it's about the closest you'd get in non-library Python without a really complex function:

import re

TAG_RE = re.compile(r'<[^>]+>')

def remove_tags(text):
    return TAG_RE.sub('', text)
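
To make the caveat concrete, here is roughly what happens on such input (the > inside the attribute value ends the match early), while the question's sample comes out fine:

>>> remove_tags('<a title=">">link</a>')
'">link'
>>> remove_tags(text)
'\nTitle\nA long text........ \n a link \n'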

However, as lvc mentions, xml.etree is available in the Python standard library, so you could probably just adapt it to serve like your existing lxml version:

import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())
Melony?
Answer 4 · 2019-01-16 07:38

Using a regex

Using a regex, you can strip everything between < and >:

import re

def cleanhtml(raw_html):
    cleanr = re.compile('<.*?>')
    cleantext = re.sub(cleanr, '', raw_html)
    return cleantext
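
One small caveat with this variant: . does not match newlines by default, so a tag whose attributes wrap across lines survives the substitution. A hedged tweak (not part of the original answer) is to compile with re.DOTALL:

cleanr = re.compile('<.*?>', re.DOTALL)  # also match tags that span multiple lines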

Using BeautifulSoup

You could also use the BeautifulSoup package (an additional install) to extract all the raw text.

You will need to explicitly set a parser when calling BeautifulSoup. I recommend "lxml", as mentioned in other answers, since it is much more robust than the default 'html.parser' (which is available without an additional install).

from bs4 import BeautifulSoup
cleantext = BeautifulSoup(raw_html, "lxml").text
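
If you prefer whitespace between the pieces of text rather than having them run together, get_text() accepts a separator. A minimal sketch, assuming bs4 is installed and raw_html holds the question's string; the built-in 'html.parser' is used here to avoid the extra lxml install, at some cost in robustness:

from bs4 import BeautifulSoup

cleantext = BeautifulSoup(raw_html, "html.parser").get_text(separator=' ', strip=True)
# -> 'Title A long text........ a link'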

But this still relies on external libraries, which the question rules out, so I recommend the first solution.

ら.Afraid
Answer 5 · 2019-01-16 07:47
# Accumulates the stripped text across recursive calls.
temp = ''
s = ' '

def remove_strings(text):
    global temp
    if text == '':
        return temp
    start = text.find('<')
    end = text.find('>')
    # No tags left: keep the remaining text and stop.
    if start == -1 and end == -1:
        temp = temp + text
        return temp
    # Drop everything up to and including the first '>'.
    newstring = text[end+1:]
    # Keep the text that appears before the next '<'.
    fresh_start = newstring.find('<')
    if newstring[:fresh_start] != '':
        temp += s + newstring[:fresh_start]
    remove_strings(newstring[fresh_start:])
    return temp
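
A quick usage sketch on the question's sample; note that because temp is a module-level global, it has to be reset between calls or text from earlier calls accumulates:

temp = ''
print remove_strings(text)   # roughly: ' Title  A long text........   a link ' plus stray newlines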
SAY GOODBYE
Answer 6 · 2019-01-16 07:48

There's a simple way to do this in any C-like language. The style is not Pythonic, but it works with pure Python:

def remove_html_markup(s):
    tag = False
    quote = False
    out = ""

    for c in s:
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out
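
A quick sanity check, including the attribute case that breaks the regex answers above (the quote flag keeps the > inside the attribute value from ending the tag):

>>> remove_html_markup('<a title=">">link</a>')
'link'
>>> remove_html_markup(text)
'\nTitle\nA long text........ \n a link \n'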

The idea is based on a simple finite-state machine and is explained in detail here: http://youtu.be/2tu9LTDujbw

You can see it working here: http://youtu.be/HPkNPcYed9M?t=35s

PS: If you're interested in the course (about smart debugging with Python), here's a link: http://www.udacity.com/overview/Course/cs259/CourseRev/1. It's free!
