I'm trying to get accurate download counts for some files on a web server. Looking at the user agents, some are clearly bots or web crawlers, but for many I can't tell whether they are crawlers or not, and they account for a large share of the downloads, so it's important for me to know.
Is there a list somewhere of known web crawlers with documentation such as user agent, IPs, behavior, etc.?
I'm not interested in the official ones like Google's, Yahoo's, or Microsoft's; those are generally well behaved and self-identified.
I usually use http://www.user-agents.org/ as a reference; hope this helps you out.
You can also try http://www.robotstxt.org/db.html or http://www.botsvsbrowsers.com.
I'm maintaining a list of crawler user-agent patterns at https://github.com/monperrus/crawler-user-agents/.
It's collaborative; you can contribute to it with pull requests.
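As a rough sketch of how such a list can be applied, here is a minimal Python example that matches user-agent strings against a handful of regex patterns. The patterns below are illustrative placeholders, not the actual contents of the repository's list, and the function name `is_crawler` is my own choice:

```python
import re

# Illustrative crawler patterns only; a real deployment would load the
# full pattern list (e.g. from the crawler-user-agents project) instead.
CRAWLER_PATTERNS = [
    r"Googlebot",
    r"bingbot",
    r"AhrefsBot",
    r"SemrushBot",
    r"python-requests",
]

def is_crawler(user_agent: str, patterns=CRAWLER_PATTERNS) -> bool:
    """Return True if any known crawler pattern matches the user-agent string."""
    return any(re.search(p, user_agent, re.IGNORECASE) for p in patterns)

# Check a couple of example user-agent strings from an access log:
print(is_crawler("Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/)"))  # True
print(is_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0"))  # False
```

You would run something like this over each request's User-Agent header (or over access-log lines) and exclude the matches from your download counts.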
Unfortunately, we've found that bot activity is too numerous and varied to filter accurately. If you want accurate download counts, your best bet is to require JavaScript to trigger the download. That's basically the only thing that reliably filters out bots, and it's also why all site traffic analytics engines these days are JavaScript based.
http://www.robotstxt.org/db.html is a good place to start. They also have an automatable raw feed if you need that. http://www.botsvsbrowsers.com/ is helpful as well.