Python's urllib2 follows 3xx redirects to get the final content. Is there a way to make urllib2 (or some other library, such as httplib2) also follow meta refreshes? Or do I need to parse the HTML for the refresh meta tag manually?
Answer 1:
Here is a solution using BeautifulSoup and httplib2 (and certificate-based authentication):
import BeautifulSoup
import httplib2

def meta_redirect(content):
    soup = BeautifulSoup.BeautifulSoup(content)
    result = soup.find("meta", attrs={"http-equiv": "Refresh"})
    if result:
        # content typically looks like "5; url=http://example.com/"
        wait, text = result["content"].split(";")
        text = text.strip()
        if text.lower().startswith("url="):
            url = text[4:]
            return url
    return None

def get_content(url, key, cert):
    h = httplib2.Http(".cache")
    h.add_certificate(key, cert, "")
    resp, content = h.request(url, "GET")
    # follow the chain of meta refresh redirects
    while meta_redirect(content):
        resp, content = h.request(meta_redirect(content), "GET")
    return content
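For example, the helper above could be invoked like this (the URL and the key/certificate file names are placeholders, assuming PEM-format files as expected by httplib2's add_certificate):

content = get_content("https://example.com/start", "client_key.pem", "client_cert.pem")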
Answer 2:
A similar solution using the requests and lxml libraries. It also does a simple check that the thing being tested is actually HTML (a requirement in my implementation), and it can capture and pass on cookies by using the requests library's session (sometimes necessary if redirects plus cookies are used as an anti-scraping mechanism).
import magic
import mimetypes
import requests
from lxml import html
from urlparse import urljoin

def test_for_meta_redirections(r):
    mime = magic.from_buffer(r.content, mime=True)
    extension = mimetypes.guess_extension(mime)
    if extension == '.html':
        html_tree = html.fromstring(r.text)
        # case-insensitive match on http-equiv="refresh"
        attr = html_tree.xpath("//meta[translate(@http-equiv, 'REFSH', 'refsh') = 'refresh']/@content")
        if attr:
            wait, text = attr[0].split(";")
            text = text.strip()
            if text.lower().startswith("url="):
                url = text[4:]
                if not url.startswith('http'):
                    # Relative URL, adapt
                    url = urljoin(r.url, url)
                return True, url
    return False, None
def follow_redirections(r, s):
    """
    Recursive function that follows meta refresh redirections if they exist.
    """
    redirected, url = test_for_meta_redirections(r)
    if redirected:
        r = follow_redirections(s.get(url), s)
    return r
Usage:
s = requests.session()
r = s.get(url)
# test for and follow meta redirects
r = follow_redirections(r, s)
Answer 3:
OK, it seems no library supports it, so I have been using this code:
import urllib2
import urlparse
import re

def get_hops(url):
    redirect_re = re.compile('<meta[^>]*?url=(.*?)["\']', re.IGNORECASE)
    hops = []
    while url:
        if url in hops:
            url = None
        else:
            hops.insert(0, url)
            response = urllib2.urlopen(url)
            if response.geturl() != url:
                hops.insert(0, response.geturl())
            # check for redirect meta tag
            match = redirect_re.search(response.read())
            if match:
                url = urlparse.urljoin(url, match.groups()[0].strip())
            else:
                url = None
    return hops
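get_hops returns the visited URLs with the most recent hop first, so the final destination can be read like this (the URL is a placeholder):

hops = get_hops("http://example.com/start")
final_url = hops[0] if hops else None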
Answer 4:
If you don't want to use BS4, you can use lxml like this:
from lxml.html import soupparser

def meta_redirect(content):
    root = soupparser.fromstring(content)
    result_url = root.xpath('//meta[@http-equiv="refresh"]/@content')
    if result_url:
        result_url = str(result_url[0])
        # the content attribute may spell the target as either "URL=" or "url="
        urls = result_url.split('URL=') if len(result_url.split('url=')) < 2 else result_url.split('url=')
        url = urls[1] if len(urls) >= 2 else None
    else:
        return None
    return url
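A quick sketch of how this version behaves (the HTML string is made up for illustration):

page = '<html><head><meta http-equiv="refresh" content="0; URL=http://example.com/next"></head></html>'
print(meta_redirect(page))  # prints http://example.com/next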
Answer 5:
Parse the HTML using BeautifulSoup or lxml.
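Answer 1 already shows this approach in full with the old BeautifulSoup module; here is a minimal sketch of the same idea with the newer bs4 package (the function name refresh_target and the parsing details are illustrative, not from the original post):

from bs4 import BeautifulSoup

def refresh_target(page_html):
    soup = BeautifulSoup(page_html, "html.parser")
    # match http-equiv="refresh" case-insensitively
    tag = soup.find("meta", attrs={"http-equiv": lambda v: v and v.lower() == "refresh"})
    if tag and "url=" in tag.get("content", "").lower():
        # content typically looks like "5; url=http://example.com/next"
        return tag["content"].split("=", 1)[1].strip()
    return None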
Source: how to follow meta refreshes in Python