Pitfalls of code coverage [closed]

Posted 2019-03-08 07:34

I'm looking for real world examples of some bad side effects of code coverage.

I noticed this happening at work recently because of a policy to achieve 100% code coverage. Code quality has definitely been improving, but at the same time the testers seem to be writing more lax test plans because 'well, the code is fully unit tested'. Some logic bugs managed to slip through as a result. They were a REALLY BIG PAIN to debug because, again, 'well, the code is fully unit tested'.

I think that was partly because our tool only measured statement coverage. Still, the time could have been better spent.

If anyone has other negative side effects of having a code coverage policy please share. I'd like to know what kind of other 'problems' are happening out there in the real-world.

Thanks in advance.

EDIT: Thanks for all the really good responses. There are a few which I would mark as the answer but I can only mark one unfortunately.

13 Answers
叛逆
#2 · 2019-03-08 08:04

Sometimes corner cases are so rare that they're not worth testing, yet a strict code-coverage rule requires you to test them anyway.

For example, Java has the MD5 algorithm built in, but MessageDigest.getInstance("MD5") still declares a checked NoSuchAlgorithmException. In practice it is never thrown for MD5, and your test would have to go through significant gyrations to exercise that path.

It would be a lot of work wasted.
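
To make the point concrete, here is a rough sketch of the kind of unreachable branch being described (the wrapping class and method names are made up for illustration):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class Hashing {
    static byte[] md5(byte[] input) {
        try {
            return MessageDigest.getInstance("MD5").digest(input);
        } catch (NoSuchAlgorithmException e) {
            // Every conforming Java platform is required to provide MD5,
            // so this branch is effectively dead code - yet a 100% coverage
            // rule still demands a test that somehow forces it to run.
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }
}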

走好不送
#3 · 2019-03-08 08:04

The idea that 100% code coverage means well-tested code is a complete myth. As developers we know the hard/complex/delicate parts of a system, and I would much rather see those areas properly tested and only get 50% coverage than have the meaningless figure that every line has been run at least once.

In terms of a real-world example, the only team I was on that had 100% coverage wrote some of the worst code I've ever seen. 100% coverage was used to replace code review - the result was predictably awful, to the extent that most of the code was thrown away, even though it passed the tests.

爱情/是我丢掉的垃圾
#4 · 2019-03-08 08:04

We have good tools for measuring code coverage from unit tests, so it's tempting to treat 100% coverage as meaning you're "done testing." This is not true.

As other folks have mentioned, 100% code coverage doesn't prove that you have tested adequately, nor does 50% code coverage necessarily mean that you haven't tested adequately.

Measuring lines of code executed by tests is just one metric. You also have to test for a reasonable variety of function inputs, and also how the function or class behaves depending on some other external state. For example, some code functions differently based on the data in a database or in a file.
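
As a hypothetical sketch (DiscountService and CustomerRepository are invented names, not from the post above): a single test that stubs the repository to return "gold" executes every line of discountFor and reports 100% coverage, while saying nothing about how other tiers or unknown customers behave.

interface CustomerRepository {
    String tierOf(String customerId);
}

class DiscountService {
    private final CustomerRepository repo;

    DiscountService(CustomerRepository repo) {
        this.repo = repo;
    }

    double discountFor(String customerId) {
        // The same lines execute no matter what the repository returns,
        // so line coverage alone cannot distinguish the behaviors.
        return "gold".equals(repo.tierOf(customerId)) ? 0.20 : 0.0;
    }
}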

I've also blogged about this recently: http://karwin.blogspot.com/2009/02/unit-test-coverage.html

#5 · 2019-03-08 08:07

100% code coverage doesn't mean you're done with unit tests.

int divide(int a, int b) {
    return a / b;
}

With just 1 unit test, I get 100% code coverage for this function:

assert divide(4, 2) == 2;

Now, nobody would argue that this unit test, despite giving 100% coverage, shows that the feature works just fine.
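
For instance, a zero divisor is never exercised even though coverage already reads 100%. A minimal sketch of the missing case, assuming JUnit 5 is available (divide is repeated here so the sketch stands on its own):

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DivideTest {
    // Same divide as above, repeated so this compiles on its own.
    static int divide(int a, int b) {
        return a / b;
    }

    @Test
    void divisionByZeroWasNeverSpecified() {
        // 100% line coverage was reached without this case ever being written.
        assertThrows(ArithmeticException.class, () -> divide(4, 0));
    }
}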

I think code coverage is a good way to spot obvious code paths you've missed, but I would use it carefully.

神经病院院长
#6 · 2019-03-08 08:09

There are tools out there, Jumble for one, that take this a step further with mutation testing: they mutate your code and check whether your tests fail for all the different permutations.

Directly from their website:

Jumble is a class level mutation testing tool that works in conjunction with JUnit. The purpose of mutation testing is to provide a measure of the effectiveness of test cases. A single mutation is performed on the code to be tested, the corresponding test cases are then executed. If the modified code fails the tests, then this increases confidence in the tests. Conversely, if the modified code passes the tests this indicates a testing deficiency.
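
As a rough illustration of the idea (this is not Jumble's actual output or mutation set, just a sketch):

class NonNegative {
    // Code under test; a typical mutation would rewrite ">=" as ">".
    static boolean isNonNegative(int x) {
        return x >= 0;
    }

    public static void main(String[] args) {
        // Both checks pass against the original and against the ">" mutant,
        // so the mutant survives and the tool flags the missing boundary
        // test for x == 0.
        assert isNonNegative(5);
        assert !isNonNegative(-3);
    }
}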

不美不萌又怎样
#7 · 2019-03-08 08:10

I know this isn't a direct answer to your question, but...

Any testing, regardless of what type, is insufficient by itself. Unit testing/code coverage is for developers. QA still needs to test the system as a whole. Business users still need to test the system as a whole as well.

The converse - "QA tests the code completely, so developers shouldn't test" - is equally bad. Testing is complementary, and different kinds of tests provide different things. Each test type can miss things that another might find.

Just like the rest of development, don't take shortcuts with testing; it'll only let bugs through.
