So, regarding logging, the best advice from SO and other sites on the Internet seems to be:
    void DoSomething() {
        Logger.Log("Doing something!");
        // Code...
    }
Now, generally you'd avoid static methods, but in the case of logging (a special case) this is the easiest and cleanest route. Within the static class you can easily inject an instance via a config file/framework, which gives you the same effect as DI.
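For example, the static class might just delegate to a swappable instance (a sketch only; ILogger and NullLogger are illustrative names, not any particular framework):

    public static class Logger
    {
        // The real implementation is plugged in at startup, e.g. from a
        // config file, giving the same seam as constructor injection.
        public static ILogger Instance { get; set; } = new NullLogger();

        public static void Log(string message) => Instance.Log(message);
    }

    public interface ILogger { void Log(string message); }

    // Safe no-op default so Logger.Log never throws when unconfigured.
    public class NullLogger : ILogger { public void Log(string message) { } }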
My problem comes from a unit testing perspective.
In the example code above, imagine the point of DoSomething() was to add two numbers together. I'd write my unit tests for that just fine. What about the logging?
Would I write a unit test for the logging (using a mock instance for the logger itself)? I know that if I did, I'd also have to write an integration test to prove the logger actually writes to a log file, but I'm not sure.
Following Test-Driven Development (which I do), the unit test would be required to dictate the interface, no?
Any advice?
Personally, I practice TDD/BDD pretty religiously and I almost never test logging. With some exceptions, logging is either a developer convenience or a usability factor, not part of the method's core specification. It also tends to have a MUCH higher rate of change than the actual semantics of the method, so you wind up breaking tests just because you added some more informational logging.
It's probably worthwhile to have some tests that simply exercise the logging subsystem, but for most apps I wouldn't test that each class uses the log in a particular way.
I would probably have a separate body of unit tests for the logger itself, to test its various functions separately from everything else. In methods that use the logger, I would just test that the logger was invoked (i.e. set an expectation that it was called) with the right parameters. For example, if I have a method:
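Say, something like this (ReportGenerator, Generate, and the ILogger interface are made up for illustration; the shape is what matters):

    public interface ILogger { void Fatal(string message); }

    public class ReportGenerator
    {
        private readonly ILogger logger;
        public bool FatalErrorOccurred { get; set; }

        public ReportGenerator(ILogger logger) { this.logger = logger; }

        public void Generate()
        {
            if (FatalErrorOccurred)
            {
                // The test asserts this call happens, not the exact text.
                logger.Fatal("Report generation hit an unrecoverable error.");
                return;
            }
            // ... normal report generation ...
        }
    }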
I would write a test that shows the logger logged a fatal error message when FatalErrorOccurred was true. I would not, of course, test the contents of the error message itself, as that is very susceptible to change.
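That test might look like this (using Moq and NUnit as one way to set the expectation; the types are the illustrative ones from above):

    [Test]
    public void Generate_LogsFatal_WhenFatalErrorOccurred()
    {
        var logger = new Mock<ILogger>();
        var generator = new ReportGenerator(logger.Object) { FatalErrorOccurred = true };

        generator.Generate();

        // Verify the fatal log happened, without pinning the message text.
        logger.Verify(l => l.Fatal(It.IsAny<string>()), Times.Once());
    }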
Most logging frameworks allow you to provide custom implementations of their components, and you can use that configuration mechanism to plug in your own.
For instance, Java's Log4J allows you to declare custom appenders, which are the components responsible for 'delivering' a LoggingEvent.
A logger can be easily mocked and injected using:
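For example, with log4j 1.x you can attach a mock Appender and expect it to receive an event (the EasyMock and log4j calls below are real API; the code under test, new Service().doWork(), is made up):

    import org.apache.log4j.Appender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.spi.LoggingEvent;
    import org.junit.Test;
    import static org.easymock.EasyMock.*;

    public class ServiceLoggingTest {
        @Test
        public void doWorkLogsAnEvent() {
            Appender mockAppender = createMock(Appender.class);
            // Expect at least one event to be delivered to the appender.
            mockAppender.doAppend(isA(LoggingEvent.class));
            expectLastCall().atLeastOnce();
            replay(mockAppender);

            Logger.getRootLogger().addAppender(mockAppender);
            try {
                new Service().doWork(); // hypothetical code under test
            } finally {
                Logger.getRootLogger().removeAppender(mockAppender);
            }

            verify(mockAppender);
        }
    }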
This test only verifies that a logging event is sent, but you can refine it much more using EasyMock.
Although I agree with the others that I wouldn't apply TDD to logging, I would try to ensure that the unit tests cover every code path that contains a logging statement. Importantly, configure the highest verbosity level while running the unit tests, so that all logging statements are actually executed.
For example, code like the following has a bug that will throw a FormatException only if Debug-level tracing is enabled:
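Something like this, in log4net style (log and processedCount are illustrative; the {1} placeholder has no matching argument, and log4net only formats the string when Debug is enabled, so the exception surfaces only then):

    // BUG: the format string references {1} but only one argument is passed.
    // At Info verbosity this line never formats the string, so the
    // FormatException appears only when Debug-level tracing is turned on.
    log.DebugFormat("Processed {0} of {1} records", processedCount);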
I would divide the logging into three categories:
1) A requirement. Some systems require logging for audit purposes, or to fill some other requirement of the project (such as a logging standard in an app server). Then it is indeed a requirement and deserves unit tests and acceptance tests to the point where you can be confident the requirement is met. So in this case the exact string of the log may be tested for.
2) Problem solving. In case you start seeing weird state in QA or production, you want to be able to trace what is going on. In general, if this is important (say, in a heavily threaded application where state can get complicated but can't be reproduced via known steps), then testing that the given state values end up logged can be valuable (you aren't testing the overall readability of the log, just that certain facts get in there). Even if the class changes later, that state is still likely to be logged (along with additional state), so the coupling between the test and the logging is reasonable. So in this case, only parts of the logging are tested for, with a "contains" test (see the sketch after this list).
3) A development aid. In many cases I use logging as a more robust form of commenting. You can write a statement like:
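(An illustrative example; the message is made up:)

    // Documents intent like a comment, but also shows up at runtime:
    log.Debug("Falling back to the default configuration because no user override was found");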
So that you can document the code and at the same time have a useful artifact if you ever do need to debug what is going on. In that case I would not unit test it at all, as the existence (or not) of a given statement is not important on its own.
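For the category-2 "contains" test mentioned above, a sketch (TestLogger and OrderProcessor are hypothetical; TestLogger is an in-memory logger that records each message it receives):

    [Test]
    public void Retry_LogsTheOrderId()
    {
        var log = new TestLogger();               // hypothetical in-memory logger
        var processor = new OrderProcessor(log);  // illustrative class under test

        processor.Retry(orderId: 42);

        // Assert only that the fact we care about ended up in the log,
        // not the full wording of the message.
        Assert.IsTrue(log.Messages.Any(m => m.Contains("42")));
    }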
As for the view that you have to test 100% of everything, see Kent Beck's answer here. I think "test everything" is good advice for beginners, because when you start with TDD the temptation is to skip anything that is hard to test, or that pushes you to rethink the design to make it testable, and to rationalize it as unimportant. But once you know what you are doing and appreciate the value of the tests, it is important to weigh what you are doing against what is actually worth testing.
I usually do not unit test logging statements by asserting on what gets logged, but I do check that the code paths taken by my unit tests cover the logging statements, just to make sure that I don't get an exception while logging an exception!