Question:
Some servers have a robots.txt file to stop web crawlers from crawling their websites. Is there a way to make a web crawler ignore robots.txt? I am using mechanize for Python.
Answer 1:
The documentation for mechanize has this sample code:
import mechanize

br = mechanize.Browser()
# ...
# Ignore robots.txt. Do not do this without thought and consideration.
br.set_handle_robots(False)
That does exactly what you want.
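To put it in context, here is a minimal, self-contained sketch of how that flag fits into a typical fetch; the URL and the user-agent string are placeholders of mine, not part of the original answer:

import mechanize

br = mechanize.Browser()
# Ignore robots.txt. Do not do this without thought and consideration.
br.set_handle_robots(False)
# Many sites also reject mechanize's default user-agent; the value below
# is a placeholder, not a recommendation.
br.addheaders = [('User-agent', 'Mozilla/5.0')]
response = br.open('http://example.com/')  # placeholder URL
print(response.read())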
Answer 2:
This looks like what you need:
from mechanize import Browser
br = Browser()
# Ignore robots.txt
br.set_handle_robots(False)
…but make sure you know what you're doing.
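For completeness: if you leave robots handling enabled, mechanize raises mechanize.RobotExclusionError when robots.txt disallows a URL, so another option is to catch that error per request instead of disabling the check globally. A rough sketch, with a placeholder URL:

import mechanize

br = mechanize.Browser()  # handle_robots stays at its default (True)
try:
    br.open('http://example.com/private/')  # placeholder URL
except mechanize.RobotExclusionError:
    # robots.txt disallows this URL; skip it or handle as appropriate
    pass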