Since I'm a TDD newbie, I'm currently developing a tiny C# console application in order to practice (because practice makes perfect, right?). I started by making a simple sketch of how the application could be organized (class-wise) and then developed all the domain classes I could identify, one by one (test first, of course).
In the end, the classes have to be integrated to make the application runnable, i.e. the Main method has to be filled in with code that calls the appropriate logic. However, I don't see how I can do this last integration step in a "test first" manner.
I suppose I wouldn't be having these issues had I used a "top down" approach. The question is: how would I do that? Should I have started by testing the Main() method?
If anyone could give me some pointers, it would be much appreciated.
You can test your console application as well. I found it quite hard, but it can be done; look at this example:
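The core trick is to redirect Console.Out to a StringWriter and assert on what was written. Something along these lines (a minimal sketch; the Program class and the NUnit test are illustrative, not the exact code from the post):

    using System;
    using System.IO;
    using NUnit.Framework;

    // A hypothetical program whose entry point writes to the console.
    public static class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }

    [TestFixture]
    public class ConsoleOutputTests
    {
        [Test]
        public void Main_WritesHelloWorld()
        {
            // Redirect Console.Out to a StringWriter so the test can
            // inspect everything the program writes.
            var output = new StringWriter();
            Console.SetOut(output);

            Program.Main(new string[0]);

            Assert.That(output.ToString(),
                Is.EqualTo("Hello World!" + Environment.NewLine));
        }
    }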
You can look at the full post here.
"bottom up" can save you a lot of work cos you already have low-level classes (tested) and you can use them in higher level tests. in opposite case you'd have to write a lot of (probably complicated) mockups. also "top down" often doesn't work as it's supposed: you think out some nice high-level model, test and implement it, and then moving down you realize that some of your assumptions were wrong (it's quite typical situation even for experienced programmers). of course patterns help here but it's not silver bullet too.
to have complete test coverage, you'll need to test your Main in any case. I'm not sure it's worth the effort in all cases. Just move all logic from Main to separate function (as Marcus Lindblom proposed), test it and let Main to just pass command-line arguments to your function.
console output is a simplest possible testing ability, and if you use TDD it can be used just to output the final result w/o any diagnosis and debug messages. so make your top-level function to return solid result, test it and just output it in Main
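For example (a sketch with made-up names, not code from the thread):

    using System;

    public static class App
    {
        // All of the real logic lives here and returns a plain value,
        // so tests can assert on the result without touching the console.
        public static string Run(string[] args)
        {
            if (args.Length == 0)
                return "Usage: myapp <name>";
            return "Hello, " + args[0] + "!";
        }
    }

    public static class Program
    {
        // Main does nothing but forward the arguments and print the result.
        public static void Main(string[] args)
        {
            Console.WriteLine(App.Run(args));
        }
    }

A test then calls App.Run directly with whatever argument combinations you care about, and Main itself becomes too trivial to need a test.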
I recommend "top down" (I call that high-level testing). I wrote about that:
http://www.hardcoded.net/articles/high-level-testing.htm
So yes, you should test your console output directly. Sure, it's initially a pain to set up a test so that console output is easy to check, but if you create appropriate helper code, your next "high level" tests will be much easier to write. By doing that, you give yourself unlimited refactoring potential. With "bottom up", your initial class relationships are very rigid, because changing a relationship means changing the tests (lots of work, and dangerous).
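As an illustration of the kind of helper that makes this cheap, here is one possible shape (a sketch; the ConsoleCapture name is made up, not from the linked article):

    using System;
    using System.IO;

    // Disposable helper: captures everything written to Console.Out
    // for the lifetime of the capture, then restores the original writer.
    public sealed class ConsoleCapture : IDisposable
    {
        private readonly TextWriter _original = Console.Out;
        private readonly StringWriter _buffer = new StringWriter();

        public ConsoleCapture()
        {
            Console.SetOut(_buffer);
        }

        public string Output
        {
            get { return _buffer.ToString(); }
        }

        public void Dispose()
        {
            Console.SetOut(_original);
        }
    }

A test then wraps the code under test in a using block and asserts on Output, so every subsequent high-level test gets console capture for one line of code.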
"Top-down" is already used in computing to describe an analysis technique. I suggest using the term "outside-in" instead.
Outside-in is a term from BDD, in which we recognise that there are often multiple user interfaces to a system. Users can be other systems as well as people. A BDD approach is similar to a TDD approach; it might help you so I will describe it briefly.
In BDD we start with a scenario - usually a simple example of a user using the system. Conversations around the scenarios help us work out what the system should really do. We write a user interface, and we can automate the scenarios against that user interface if we want.
When we write the user interface, we keep it as thin as possible. The user interface will use another class - a controller, viewmodel, etc. - for which we can define an API.
At this stage, the API will be either an empty class or a (programming-language) interface. Now we can write examples of how the user interface might use this controller, and show how the controller delivers value.
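For instance, the console "user interface" might be no more than this (all names here are hypothetical):

    using System;

    // The controller API we have sketched but not yet implemented.
    public interface IGreetingController
    {
        string Greet(string name);
    }

    public static class Program
    {
        // The user interface stays as thin as possible: read the input,
        // delegate to the controller, print whatever comes back.
        public static void Main(string[] args)
        {
            IGreetingController controller = CreateController();
            string name = args.Length > 0 ? args[0] : "world";
            Console.WriteLine(controller.Greet(name));
        }

        // Wiring happens in exactly one place; at this stage there is
        // nothing to wire up yet.
        private static IGreetingController CreateController()
        {
            throw new NotImplementedException();
        }
    }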
The examples also show the scope of the controller's responsibility, and how it delegates its responsibilities to other classes like repositories, services, etc. We can express that delegation using mocks. We then write that class to make the examples (unit tests) work. We write just enough examples to make our system-level scenarios pass.
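A sketch of such an example, using Moq for the mock and continuing the hypothetical names from above:

    using Moq;
    using NUnit.Framework;

    public interface IGreetingRepository
    {
        string GreetingFor(string name);
    }

    // A first cut of the controller, written to make the example pass.
    public class GreetingController
    {
        private readonly IGreetingRepository _repository;

        public GreetingController(IGreetingRepository repository)
        {
            _repository = repository;
        }

        public string Greet(string name)
        {
            return _repository.GreetingFor(name);
        }
    }

    [TestFixture]
    public class GreetingControllerTests
    {
        [Test]
        public void Greet_DelegatesToTheRepository()
        {
            // The mock pins down exactly what the controller needs
            // from its collaborator, and nothing more.
            var repository = new Mock<IGreetingRepository>();
            repository.Setup(r => r.GreetingFor("Alice")).Returns("Hello, Alice!");

            var controller = new GreetingController(repository.Object);

            Assert.That(controller.Greet("Alice"), Is.EqualTo("Hello, Alice!"));
            repository.Verify(r => r.GreetingFor("Alice"), Times.Once());
        }
    }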
I find it's common to rework mocked examples, since the mocks' interfaces are only guessed at first and emerge more fully as the class is written. That helps us define the next layer of interfaces or APIs, for which we describe more examples, and so on until no more mocks are needed and the first scenario passes.
As we describe more scenarios, we create different behaviour in the classes and we can refactor to remove duplication where different scenarios and user interfaces require similar behaviour.
By working outside-in we gather as much information as possible about what the APIs should be, and rework those APIs as early as we can. This fits with the principles of Real Options (never commit early unless you know why). We don't create anything we don't use, and the APIs themselves are designed for usability rather than for ease of writing. The code tends to be written in a more natural style, using domain language more than programming language, which makes it more readable. Since code is read about ten times more often than it's written, this also helps make it maintainable.
For that reason, I'd use an outside-in approach, rather than the intelligent guesswork of bottom-up. My experience is that it results in simpler, more strongly decoupled, more readable and more maintainable code.
If you moved the stuff out of Main(), could you not test that function?
It makes sense to do that, since you might want to run with different command-line arguments, and you'd want to test those.
There was an interesting piece in the December issue of MSDN Magazine, which describes "how the BDD cycle wraps the traditional Test-Driven Development (TDD) cycle with feature-level tests that drive unit-level implementation". The details may be too technology-specific, but the ideas and process outline sound relevant to your question.