Is there such a thing as excessive unit testing?

Posted 2020-02-09 02:00

I'm not brand new to the concept of unit testing but at the same time I've not yet mastered them either.

The one question that has been going through my head recently as I've been writing unit tests while writing my code using the TDD methodology is: to what level should I be testing?

Sometimes I wonder if I'm being excessive in the use of unit testing.

At what point should a developer stop writing unit tests and get actual work done?

I might need to clarify that question before people assume I'm against using TDD...

What I'm struggling with is the granularity of my tests...

  • When my app has a config file, do I test that values can be retrieved from the file? I lean towards yes... but...
  • Do I then write a unit test for each possible config value that will be present? I.e., check that each one exists and can be parsed to the correct type?
  • When my app writes errors to a log, do I need to test that it is able to write to the log? Do I then need to write tests to verify that entries are actually made to the log?

I want to be able to use my unit tests to verify the behavior of my app...but I'm not quite sure where to stop. Is it possible to write tests that are too trivial?

18 Answers
Rolldiameter
#2 · 2020-02-09 02:03

Unit tests need to test each piece of functionality, edge cases and sometimes corner cases.

If you find that after testing edge and corner cases, you're doing "middle" cases, then that's probably excessive.

Moreover, depending on your environment, unit tests might be either quite time consuming to write, or quite brittle.

Tests require ongoing maintenance, so every test you write may break in the future and need fixing, even when it hasn't detected an actual bug. Achieving sufficient coverage with the minimum number of tests is a good goal. But don't needlessly cobble several tests into one: test one thing at a time.
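The "test one thing at a time" advice can be sketched in Python's stdlib `unittest`; `parse_port` here is a hypothetical helper invented for illustration:

```python
import unittest

def parse_port(value):
    """Hypothetical helper: parse a TCP port number from a config string."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    # One behavior per test: a failure pinpoints exactly what broke,
    # instead of one combined test that stops at its first failing assert.
    def test_parses_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_port("http")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")
```

Run with `python -m unittest <module>`. Three small tests like these are usually cheaper to maintain than one large one, because each failure message names the exact broken behavior.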

Root(大扎)
#3 · 2020-02-09 02:04

It's very possible, but the problem isn't having too many tests; it's testing stuff you don't care about, or investing heavily in tests where fewer, simpler ones would have been enough.

My guiding principle is the level of confidence I have when changing a piece of code: if it can never fail, I don't need a test. If it's straightforward, a simple sanity check will do. If it's tricky, I crank up the tests until I feel confident enough to make changes.

够拽才男人
#4 · 2020-02-09 02:05

I believe that a good test tests a bit of the specification. Any test that covers something that is not part of a specification is worthless and should be omitted, e.g. tests of methods that are merely means of implementing the unit's specified functionality. It is also questionable whether it is worthwhile to test truly trivial functionality such as getters and setters, although you never know how long they will stay trivial.

The problem with testing according to specification is that many people use tests as specifications, which is wrong for several reasons. Partly, it stops you from knowing what you should and shouldn't test; another important reason is that tests only ever check some examples, while a specification should define behaviour for all possible inputs and states.

If you have proper specifications for your units (and you should), then it should be obvious what needs testing and anything beyond that is superfluous and thus waste.

smile是对你的礼貌
#5 · 2020-02-09 02:07

To determine how much testing effort to put into a program, I define coverage criteria for the testing campaign in terms of what is to be tested: all branches of the code, all functions, all input or output domains, all features...

Given this, my test work is done when my criteria are entirely covered.

I just need to be aware that certain goals are impossible to reach, such as covering all program paths or all input values.

We Are One
#6 · 2020-02-09 02:11

It's definitely possible to overdo unit tests, and testing features is a good place to start. But don't overlook testing error handling as well. Your units should respond sensibly when given inputs that don't meet their precondition. If your own code is responsible for the bad input, an assertion failure is a sensible response. If a user can cause the bad input, then you'll need to be unit testing exceptions or error messages.
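That split (assertion failure for our own bugs, exception for input a user can cause) might look like the following sketch; `load_profile` and its rules are made up for illustration:

```python
def load_profile(path):
    """Hypothetical loader: path's *value* comes from the user, its *type* from our code."""
    # Precondition on our own code: a non-string here is a programming bug,
    # so an assertion failure is the sensible response.
    assert isinstance(path, str), "path must be a str"
    # Bad input a user can cause: raise an exception the caller can surface.
    if not path.endswith(".profile"):
        raise ValueError(f"unsupported file type: {path}")
    return {"source": path}

# Unit-testing the user-facing error path:
try:
    load_profile("settings.txt")
    assert False, "expected ValueError"
except ValueError as e:
    assert "unsupported file type" in str(e)
```

The test pins down not just that an error occurs, but that its message is one a user could act on.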

Every reported bug should result in at least one unit test.

Regarding some of your specifics: I would definitely test my config-file parser to see that it can parse every value of every expected type. (I tend to rely on Lua for config files and parsing, but that still leaves me with some testing to do.) But I wouldn't write a unit test for every entry in the config file; instead I'd write a table-driven test framework that would describe each possible entry and would generate the tests from that. I would probably generate documentation from the same description. I might even generate the parser.
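A table-driven version of that idea might look like this sketch (the schema entries and parsers are invented for illustration; `subTest` is stdlib `unittest`):

```python
import unittest

# One row per config entry: (key, sample raw value, expected parsed value, parser).
# Adding a row to this table adds a test; the same table could also drive
# generated documentation, or even the parser itself.
CONFIG_SCHEMA = [
    ("timeout_seconds", "30", 30, int),
    ("max_retries", "5", 5, int),
    ("verbose", "true", True, lambda s: s.strip().lower() == "true"),
]

class ConfigEntryTests(unittest.TestCase):
    def test_every_entry_parses_to_expected_value(self):
        for key, raw, expected, parse in CONFIG_SCHEMA:
            with self.subTest(key=key):  # reports each failing key separately
                self.assertEqual(parse(raw), expected)
```

One test method covers every entry, yet `subTest` keeps the failure report per-key, so you get table-sized coverage without table-sized test code.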

When your app writes entries to a log you are veering into integration tests. A better approach would be to have a separate logging component like syslog. Then you can unit test the logger, put it on the shelf, and reuse it. Or even better, reuse syslog. A short integration test can then tell you whether your app is interoperating correctly with syslog.
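One way to keep log verification at the unit level is to inject the logger into the component under test. In this sketch the component and handler names are invented, but `logging.Handler` and `record.getMessage()` are standard library:

```python
import logging

class ListHandler(logging.Handler):
    """Test double: capture log records in memory for assertions."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

def halve_or_warn(x, logger):
    # Hypothetical unit: takes its logger as a parameter, so a test can
    # inject a capturing handler instead of reading a real log file.
    if x % 2:
        logger.error("cannot halve odd value: %d", x)
        return None
    return x // 2

handler = ListHandler()
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.propagate = False  # keep test output off stderr

assert halve_or_warn(4, logger) == 2
assert handler.records == []          # success path logs nothing
assert halve_or_warn(3, logger) is None
assert handler.records[0].getMessage() == "cannot halve odd value: 3"
```

The unit test then asserts on the captured records; whether those records actually reach disk or syslog is the job of a separate, short integration test.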

In general if you find yourself writing a lot of unit tests, perhaps your units are too large and not orthogonal enough.

I hope some of this helps.

唯我独甜
#7 · 2020-02-09 02:11

Of course one can overtest, the same way one can over-engineer.

As you follow Test-Driven Development, you should gain confidence about your code and stop when you're confident enough. When in doubt, add a new test.

About trivial tests, an eXtreme Programming saying is to "test everything that could break".

查看更多