I'm having a hard time understanding why there is only one test per function in most professional TDD code that I have seen. When I approached TDD initially I tended to group 4-5 tests per function if they were related but I see that doesn't seem to be the standard. I know that it is more descriptive to have just one test per function because you can more easily narrow down what the problem is, but I find myself struggling to come up with function names to differentiate the different tests since many are so similar.
So my question is: Is it truly a bad practice to put multiple tests in one function and if so why? Is there a consensus out there? Thanks
Edit: Wow, tons of great answers. I'm convinced. You really do need to separate them all out. I went through some recent tests I had written and separated them all, and lo and behold it was way easier to read and helped me understand MUCH better what I was testing. Also, giving the tests their own long, verbose names gave me ideas like "Oh wait, I didn't test this other thing", so all around I think it's the way to go.
Great Answers. Gonna be hard to pick a winner
High granularity of tests is recommended, not just for ease of identification of problems, but because sequencing tests inside a function can accidentally hide problems. Suppose for example that calling method foo with argument bar is supposed to return 23 -- but due to a bug in the way the object initializes its state, it returns 42 instead if it's called as the very first method on the newly constructed object (after that, it does correctly switch to returning 23). If your test of foo doesn't come right after the object's creation, you're going to miss this problem; and if you bunch tests up 5 at a time, you only have a 20% chance of accidentally getting it right. With one test per function (and a setup/teardown arrangement that resets and rebuilds everything cleanly each time, of course), you'll nail the bug immediately. Now this is an artificially simple problem, just for reasons of exposition, but the general issue -- that tests should not influence each other, but often will unless each is bracketed by set-up and tear-down functionality -- does loom large.

Yes, naming things well (including tests) is not a trivial problem, but it must not be taken as an excuse to avoid proper granularity. A useful naming hint: each test checks for a given, specific behavior -- e.g., something like "Easter in 2008 falls on March 23" -- not for generic "functionality", such as "compute the Easter date correctly".
When a test function performs only one test, it is much easier to identify which case failed.
You also isolate the tests, so one test failing doesn't affect the execution of the other tests.
Yes, you should test one behavior per function in TDD. Here's why: each failure then points at exactly one behavior, and no test can depend on what another test did before it.
And, a final question: why not have one test per function? What is the benefit of bunching them up? I don't think there's a tax on function declarations.
It looks like you're asking "why is there only one assertion per test in most professional TDD code I have seen". That's probably to increase test isolation, as well as test coverage in the presence of failures. That's certainly the reason why I made my TDD library (for PHP) that way. Say you have a test with three assertions:
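(A minimal Python sketch of the situation; the add function and the expected values are made up for illustration.)

```python
import unittest

def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    def test_add(self):
        # Three assertions bundled into a single test function:
        self.assertEqual(2, add(1, 1))     # if this one fails...
        self.assertEqual(0, add(-1, 1))    # ...neither of these
        self.assertEqual(-2, add(-1, -1))  # ever gets to run.
```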
If the first assert fails, you don't get to see what would happen with the other two. That doesn't exactly help pinpoint the problem: is this something specific to the inputs, or is it systemic?
I'm assuming that you mean 'assert' when you say 'test'. In general, a test should only test a single 'use case' of a function. By 'use case' I mean a path that the code can flow through via control-flow statements (don't forget about handled exceptions, etc.). Essentially you are testing all of the 'requirements' of that function. For example, say you have a function such as:
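(A hypothetical Python function, just to make the branching concrete; the names mean nothing.)

```python
def describe(foo):
    # Two control-flow paths, selected by the input parameter.
    if foo:
        return "foo is set"
    else:
        return "foo is not set"
```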
In this case, there are 2 'use cases', or control flows, that the function can take. This function should have at minimum 2 tests for it: one that passes foo as true and branches down the first path, and one that passes foo as false and goes down the second branch. If you have more if statements or flows the code can go through, then it will require more tests. This is for several reasons -- the most important one to me is that without it, the tests would be too complicated and hard to read. There are other reasons too; for instance, in the case of the above function, the control flow is based on an input parameter, which means you must call the function twice to test all code paths. You should never call the function you are testing more than once in a single test, IMO.
Maybe you are over-thinking it?? Don't be scared of writing crazy, overly verbose names for your test functions. Whatever the test does, write it in English, use underscores, and come up with a set of naming standards so that someone else looking at the code (including yourself 6 months later) can easily figure out what it tests. Remember, you never actually have to call this function yourself (at least in most testing frameworks), so who cares if its name is 100 characters long. Go crazy. In the above example, my 2 tests would be named:
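For the hypothetical describe function above, something like this (verbose on purpose):

```python
import unittest

class DescribeTest(unittest.TestCase):
    def test_describe_when_foo_is_true_returns_foo_is_set(self):
        self.assertEqual("foo is set", describe(True))

    def test_describe_when_foo_is_false_returns_foo_is_not_set(self):
        self.assertEqual("foo is not set", describe(False))
```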
Also -- this is just a general guideline. There are definitely cases where you will have multiple asserts in the same unit test. This happens when you are testing the same control flow, but multiple fields need to be checked by your assert statements. Take this for example -- a test for a function which parses a CSV file into a business object that has a Header, a Body, and a Footer field:
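(A sketch of what that might look like; parse_csv, CsvDocument, and the file layout are all hypothetical.)

```python
import unittest
from dataclasses import dataclass

@dataclass
class CsvDocument:
    header: str
    body: str
    footer: str

def parse_csv(text):
    # Hypothetical parser: first line is the header, last line is
    # the footer, and everything in between is the body.
    lines = text.splitlines()
    return CsvDocument(lines[0], "\n".join(lines[1:-1]), lines[-1])

class CsvParserTest(unittest.TestCase):
    def test_parse_csv_fills_header_body_and_footer(self):
        document = parse_csv("name,age\nalice,30\nbob,25\ntotal,2")
        # Same use case, but three fields need checking, so this
        # one test legitimately carries three asserts.
        self.assertEqual("name,age", document.header)
        self.assertEqual("alice,30\nbob,25", document.body)
        self.assertEqual("total,2", document.footer)
```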
Here, we are really testing the same use case, but we needed multiple asserts to check all our data and make sure our code actually worked.
-Drew
I think the right way is not to think in terms of the number of tests per function, but in terms of code coverage: make sure that every path the code can take is exercised by at least one test.
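For instance (a hypothetical function; the point is counting paths, not tests):

```python
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Three paths through the function, so full coverage needs at least
# three tests -- regardless of how many "tests per function" you
# were planning to write.
```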
EDIT: I reread what I wrote and I found it kind of "scary" ... which reminds me of a good thought I heard a few weeks ago about code coverage: