Has anyone done any comparisons of the static code analysis tools available for Linux? What are the strengths and weaknesses of the following tools:
- Lintian,
- Sparse,
- Splint,
- RATS,
- Using the -Wall option.
Would you consider that using just one of these tools is adequate?
I'm not looking for recommendations (I can find plenty of those) but direct comparisons between available tools.
Using -Wall should be a matter of course for every C developer, and enabling -Wextra on top of it is a good idea too.
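To illustrate (a contrived example, not taken from the question): on a stock gcc the following compiles silently, while gcc -Wall -Wextra -c warn.c flags each of the commented problems.

    /* warn.c -- silent with "gcc -c warn.c", noisy with -Wall -Wextra. */
    #include <stdio.h>

    static long scale(long value, int factor)   /* factor never used: -Wextra */
    {
        int leftover;                           /* never used: -Wall          */
        return value * 2;
    }

    int main(void)
    {
        long big = 1234567890L;
        printf("%d\n", big);                    /* %d with a long: -Wall      */
        return (int)(scale(21, 2) - 42);
    }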
Splint can find other potential weaknesses in your application, but in most cases it prints false warnings, so you really have to understand what Splint means by each warning, and most of the time you have to insert annotations such as /*@out@*/ or /*@unused@*/ into your code so that Splint stops complaining. With Splint you should also filter out the warnings that are not important to you, otherwise you will spend too much time analyzing and scrolling through piles of messages.
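For example, the annotations are stylized comments placed in declarations. In this made-up snippet, /*@out@*/ tells Splint that the pointed-to storage is written rather than read, and /*@unused@*/ marks a parameter that is deliberately ignored; without them Splint would warn about passing undefined storage and about the untouched argument.

    #include <stdio.h>

    /* "out": *result need not be defined on entry, the function sets it.
       "unused": the flags parameter is intentionally ignored.            */
    void get_answer(/*@out@*/ int *result, /*@unused@*/ int flags)
    {
        *result = 42;
    }

    int main(void)
    {
        int answer;              /* deliberately left uninitialized */
        get_answer(&answer, 0);  /* accepted by Splint because of the
                                    "out" annotation on the parameter */
        printf("%d\n", answer);
        return 0;
    }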
Note that these tools only do static code checking; you should use valgrind to find runtime problems such as memory leaks.
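For instance, none of the compile-time tools above is guaranteed to notice the leak in this contrived program, whereas running it under valgrind --leak-check=full ./a.out reports the unfreed block together with the allocation site:

    #include <stdlib.h>
    #include <string.h>

    /* leak.c -- buf is allocated but never freed; valgrind's leak
     * checker reports it as "definitely lost" when the program exits. */
    int main(void)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return 1;
        strcpy(buf, "this block is never freed");
        return 0;
    }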
There is of course the Wikipedia list of static analysis tools. That list is just that, a list, not a comparison, but one of the links on the page seems to at least partially answer your question and (very briefly) mentions a couple of the programs you listed.
I have used splint a couple of times and found it too verbose: I disabled most of the warnings. The tool can provide interesting results if you annotate your code correctly; without annotations it is not very helpful.
I sometimes use sparse and consider it a valuable tool. It provides a wrapper around gcc called "cgcc", so it is simple to run sparse on a program even if it contains many source files:

    export CC=cgcc

and voilà. Sparse works best when analyzing kernel source code; a sketch of the kind of diagnostics it emits is at the end of this answer.

As a side note, I also use pmccabe on a regular basis. pmccabe is not a static analyzer: it computes cyclomatic complexity. It can help you find the most complex functions in your program, which are the ones likely to be error-prone and hard to test.
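Coming back to sparse, to give a feel for what it reports (a hypothetical file, with the diagnostics paraphrased from memory, so treat the exact wording as an assumption): it is particularly picky about plain integers used as null pointers and about file-scope symbols that could be made static.

    /* example.c -- compile with "cgcc -c example.c" (or CC=cgcc make). */

    int scratch[16];            /* not declared in any header: sparse asks
                                   whether the symbol should be static    */

    const char *find_name(int id)
    {
        if (id < 0)
            return 0;           /* plain 0 used as a null pointer: sparse
                                   warns here, plain gcc does not         */
        return "anonymous";
    }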