Stop abusive bots from crawling?

Posted 2019-02-24 10:59

Question:

Is this a good idea?

http://browsers.garykeith.com/stream.asp?RobotsTXT

What does abusive crawling mean? How is that bad for my site?

Answer 1:

Not really. Most "bad bots" ignore the robots.txt file anyway.
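For context, a robots.txt file is purely advisory. A polite crawler will honour directives like the sketch below (the bot name here is hypothetical, not from the original question), while an abusive scraper simply ignores the file:

```
# Hypothetical robots.txt - compliant crawlers honour this; abusive ones ignore it
User-agent: BadScraperBot
Disallow: /

User-agent: *
Disallow: /private/
```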

Abusive crawling usually means scraping: bots that show up to harvest email addresses or, more commonly, your content.

As for how to stop them: it's tricky and often unwise. Anti-crawl techniques tend to be imperfect and end up causing problems for regular human visitors.

Sadly, like "shrinkage" in retail, it's a cost of doing business on the web.

Answer 2:

A user-agent (which includes crawlers) is under no obligation to honour your robots.txt. The best you can do is try to identify abusive access patterns (via your web server logs, etc.) and block the corresponding IPs.
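As a minimal sketch of that approach: the script below counts requests per IP in an access log and flags the heaviest clients as candidates for blocking. The log path, the assumption of the Apache/nginx "combined" log format, and the threshold are all illustrative choices, not part of the original answer.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # hypothetical path, adjust to your server
THRESHOLD = 1000                         # requests; tune for your traffic levels

# In combined log format, the client IP is the first whitespace-delimited field.
ip_pattern = re.compile(r"^(\S+)")

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = ip_pattern.match(line)
        if match:
            counts[match.group(1)] += 1

# Print candidate firewall rules for the noisiest IPs; review before applying.
for ip, hits in counts.most_common():
    if hits < THRESHOLD:
        break
    print(f"{ip}\t{hits} requests  ->  e.g. iptables -A INPUT -s {ip} -j DROP")
```

Review the output by hand before blocking anything, since a busy IP may just be a proxy or a legitimate crawler you want to keep.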