How to allow known web crawlers and block spammers

Posted 2019-03-20 17:48

How can I configure my site to allow crawling by well-known robots like Google, Bing, Yahoo, and Alexa, while stopping other harmful spammers and robots?

Should I block particular IPs? Please discuss any pros and cons. Is there anything to be done in web.config or IIS?

Can I do it server-wide if I have a VPS with root access?

Thanks.

4 Answers
Aperson · 2019-03-20 18:34

Blocking by IP can be useful, but the method I use is blocking by user-agent; that way you can catch many different IPs running applications you don't want, especially site grabbers. I won't provide our list, as you need to concentrate on the agents that affect you. For our use we have identified more than 130 applications, neither web browsers nor search engines, that we don't want accessing our site. You can start with a web search on user-agents used for site grabbing.
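As a minimal sketch of that approach, assuming an Apache server with mod_rewrite enabled (the question also mentions IIS, which has its own request-filtering equivalent), rules like these can go in .htaccess. The three agent names are just well-known site grabbers used as examples, not a recommended list:

RewriteEngine On
# Return 403 Forbidden to any request whose User-Agent matches the pattern.
RewriteCond %{HTTP_USER_AGENT} (HTTrack|WebCopier|WebZIP) [NC]
RewriteRule .* - [F]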

放荡不羁爱自由 · 2019-03-20 18:37

The simplest way of doing this is to use a robots.txt file in the root directory of the website.

The syntax of the robots.txt file is as follows:

User-agent: *
Disallow: /

which effectively disallows all robots that respect the robots.txt convention from the entire site.
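Since the question is about letting known crawlers in while keeping everyone else out, a variation along these lines leaves Googlebot and Bingbot (their published user-agent tokens) unrestricted and asks all other robots to stay away; an empty Disallow means "nothing is off limits" for that agent:

User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

User-agent: *
Disallow: /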

The thing to remember, though, is that not all web crawlers respect this convention.

It can be very useful for preventing bots from hitting the server an insane number of times, and it can also keep away some bots that you would prefer didn't touch the site at all, but it is unfortunately not a cure-all. As has been mentioned already, there is no such animal; spam is a constant headache.

For more info, have a look at http://www.robotstxt.org/

在下西门庆 · 2019-03-20 18:40

I'd recommend that you take a look at the answer I posted to a similar question: How to identify web-crawler?

Robots.txt
The robots.txt is useful for polite bots, but spammers are generally not polite, so they tend to ignore it. It is still worth having a robots.txt, since it helps the polite bots; just be careful not to block the wrong path, which can keep the good bots from crawling content you actually want them to crawl.

User-Agent
Blocking by user-agent is not fool-proof either, because spammers often impersonate browsers and other popular user agents (such as the Google bots). In fact, spoofing the user agent is one of the easiest things a spammer can do.
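To make the point concrete, here is a short Python sketch of how trivially a client can claim to be Googlebot (the target URL is a placeholder):

import urllib.request

# Any HTTP client can send whatever User-Agent string it likes; this
# request will show up as Googlebot in the server's logs.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                           "+http://www.google.com/bot.html)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)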

Bot Traps
This is probably the best way to protect yourself from bots that are not polite and that don't correctly identify themselves with the User-Agent. There are at least two types of traps:

  • The robots.txt trap (which only works if the bot reads the robots.txt): dedicate an off-limits directory in the robots.txt and set up your server to block the IP address of any entity that tries to visit that directory (see the sketch below).
  • Create "hidden" links in your web pages that also lead to the forbidden directory; any bot that crawls those links AND doesn't abide by your robots.txt will step into the trap and get its IP blocked.

A hidden link is one that is not visible to a person, such as an anchor tag with no text: <a href="http://www.mysite.com/path/to/bot/trap"></a>. Alternatively, you can put text in the anchor tag but make the font tiny and set the text color to match the background so that humans can't see the link. The hidden link trap can catch any non-human bot, so I'd recommend combining it with the robots.txt trap so that you only catch the bad bots.
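Here is a minimal sketch of such a trap as a Flask application; the /bot-trap/ path and the in-memory blocklist are illustrative only (a real deployment would persist the list and block at the server or firewall level). The matching robots.txt entry:

User-agent: *
Disallow: /bot-trap/

and the application side:

# bot_trap.py -- illustrative sketch; requires Flask.
from flask import Flask, abort, request

app = Flask(__name__)
blocked_ips = set()  # in-memory for the sketch; persist it in practice

@app.before_request
def reject_blocked():
    # Refuse every request from an IP that previously fell into the trap.
    if request.remote_addr in blocked_ips:
        abort(403)

@app.route("/bot-trap/")
def bot_trap():
    # Anything that reaches this URL either ignored robots.txt or followed
    # a hidden link, so it is almost certainly not a human: block its IP.
    blocked_ips.add(request.remote_addr)
    abort(403)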

Verifying Bots
The above steps will probably help you get rid of 99.9% of the spammers, but there might be a handful of bad bots that impersonate a popular bot (such as Googlebot) AND abide by your robots.txt; those bots can eat up the number of requests you've allocated for Googlebot and may cause you to temporarily disallow Google from crawling your website. In that case you have one more option: verify the identity of the bot. Most major crawlers (the ones you'd want to be crawled by) offer a way for you to identify their bots; here is Google's recommendation for verifying theirs: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
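Google's documented procedure is a reverse DNS lookup on the claimed IP followed by a forward lookup to confirm the result. A sketch of that check in Python (no caching or timeout handling, which production code would need):

import socket

def is_real_googlebot(ip):
    # Reverse-then-forward DNS check, per Google's verification guidance.
    try:
        host = socket.gethostbyaddr(ip)[0]
        # The reverse record must belong to googlebot.com or google.com.
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # The host name must resolve back to the original IP address.
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:  # covers socket.herror and socket.gaierror
        return False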

Any bot that impersonates another major bot and fails verification can be blocked by IP. That should probably get you closer to preventing 99.99% of the bad bots from crawling your site.

forever°为你锁心 · 2019-03-20 18:42

I like to use the .htaccess file. Once you have a list of known bad sources, add these lines to the bottom of the file (note that these rules match on the HTTP referrer, so they target referrer spam rather than crawler user-agents):

RewriteEngine On
RewriteCond %{HTTP_REFERER} ^https?://([^.]+\.)*suspectIP [NC,OR]
RewriteCond %{HTTP_REFERER} ^https?://([^.]+\.)*suspectURL\.com [NC]
RewriteRule .* - [F]
