Not really. Most "bad bots" ignore the robots.txt file anyway.
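For illustration, a typical robots.txt looks like the fragment below. Note that compliance is entirely voluntary: a polite crawler like Googlebot will honour it, but a scraper can simply ignore it (the paths here are hypothetical examples).

```text
# robots.txt — advisory only; nothing enforces these rules
User-agent: *
Disallow: /private/
Disallow: /admin/

# A misbehaving bot can still request /private/ and your
# server will happily serve it unless you block it yourself.
```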
Abusive crawling usually means scraping. These bots show up to harvest email addresses or, more commonly, content.
As for how to stop them? That's really tricky, and often unwise to attempt. Anti-crawl techniques tend to be imperfect and end up causing problems for regular humans.
Sadly, like "shrinkage" in retail, it's a cost of doing business on the web.
A user-agent (which includes crawlers) is under no obligation to honour your robots.txt. The best you can do is try to identify abusive access patterns (via your web logs, etc.) and block the corresponding IPs.
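As a rough sketch of what "identify abusive access patterns via web logs" might look like, here's a minimal Python example that counts requests per client IP in a common-log-format access log and flags heavy hitters. The threshold and log format are assumptions; tune both to your own server and traffic.

```python
# Sketch: flag IPs with unusually high request counts in an access log.
# Assumes Apache/nginx combined log format, where the client IP is the
# first whitespace-separated field on each line.
from collections import Counter

def flag_abusive_ips(log_lines, threshold=1000):
    """Return IPs that appear more than `threshold` times.

    `threshold` is a made-up default; a sensible value depends
    entirely on your normal traffic levels.
    """
    counts = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    return [ip for ip, n in counts.items() if n > threshold]

# Example with a few synthetic log lines (RFC 5737 documentation IPs):
sample = [
    '203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /page HTTP/1.1" 200 512',
    '203.0.113.5 - - [10/Oct/2023:13:55:37 +0000] "GET /page HTTP/1.1" 200 512',
    '198.51.100.7 - - [10/Oct/2023:13:55:38 +0000] "GET / HTTP/1.1" 200 1024',
]
print(flag_abusive_ips(sample, threshold=1))  # → ['203.0.113.5']
```

In practice you'd feed the flagged IPs into a firewall rule or a deny list (e.g. via `iptables` or your web server config), and you'd want to watch for false positives such as shared corporate proxies or NAT gateways.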