Lines of code per file, methods per class, cyclomatic complexity, and so on. Developers resist, and work around, most if not all of them! There is a good Joel article on this (no time to find it now).
What code metric(s) do you recommend for automatically identifying "crappy code"?
What could convince most developers (you can't convince all of us that some metric is right! :O)) that their code is "crap"?
Only metrics that can be measured automatically count!
Developers are always concerned about metrics being used against them, and calling their code "crappy" is not a good start. This is important: if you are worried about your developers gaming the metrics, then don't use the metrics for anything that works to their advantage or disadvantage.
This works best when you don't let the metric tell you where the code is crappy, but instead use the metric to determine where you need to look. You look by holding a code review, and the decision on how to fix the issue is made between the developer and the reviewer. I would also err on the side of the developer over the metric: if the code still trips the metric but the reviewers think it is good, leave it alone.
But it is important to keep this gaming effect in mind when your metrics start to improve. Great, I now have 100% coverage, but are the unit tests any good? The metric tells me I am OK, but I still need to check it out and look at what got us there.
Bottom line, the human trumps the machine.
Not an automated solution, but I find WTFs per minute useful.
WTFs Per Minute http://www.osnews.com/images/comics/wtfm.jpg
My bet: a combination of cyclomatic complexity (CC) and code coverage from automated tests (TC).
crap4j is a possible tool (for Java) with an explanation of the concept ... still in search of a C#-friendly tool :(
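For the curious, crap4j boils those two numbers down to a single C.R.A.P. score per method. Here is a minimal Java sketch of the published formula; the class and method names are illustrative, not crap4j's actual API:

```java
/**
 * Sketch of the per-method C.R.A.P. score that crap4j computes:
 * CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
 * where comp is cyclomatic complexity and cov is test coverage in percent.
 */
public class CrapScore {

    /**
     * @param complexity cyclomatic complexity of the method
     * @param coverage   automated-test coverage of the method, 0-100
     * @return the C.R.A.P. score; crap4j flags methods above a threshold (~30)
     */
    static double crap(int complexity, double coverage) {
        double uncovered = 1.0 - coverage / 100.0;
        return Math.pow(complexity, 2) * Math.pow(uncovered, 3) + complexity;
    }

    public static void main(String[] args) {
        // A moderately complex but fully tested method scores low...
        System.out.println(crap(5, 100)); // 5.0
        // ...while a complex, untested method scores as "crap".
        System.out.println(crap(15, 0));  // 240.0
    }
}
```

Note how coverage cubes away the complexity penalty: complex code stops counting as "crap" once it is well tested, which is exactly why this combination is harder to game than either metric alone.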
Non-existent tests (revealed by code coverage). It's not necessarily an indicator that the code is bad, but it's a big warning sign.
Profanity in comments.
Number of warnings the compiler spits out when I do a build.
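If you want that number to stay at zero automatically, most compilers can be told to treat warnings as errors; for example, with the JDK compiler (the file name is just a placeholder):

```
javac -Xlint:all -Werror MyClass.java
```

-Xlint:all enables all recommended warnings and -Werror fails the build if any are emitted, turning this metric into an enforced gate rather than a number someone has to watch.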