Design - Where should objects be registered when using Windsor?

Published 2019-01-01 03:17

Question:

I will have the following components in my application

  • DataAccess
  • DataAccess.Test
  • Business
  • Business.Test
  • Application

I was hoping to use Castle Windsor as an IoC container to glue the layers together, but I am a bit uncertain about how to design the gluing.

My question is: who should be responsible for registering the objects in Windsor? I have a few ideas:

  1. Each layer can register its own objects. To test the BL, the test bench could register mock classes for the DAL.
  2. Each layer can register the objects of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL objects and register the mock objects.
  3. The application (or test app) registers all objects of the dependencies.

Can someone help me with some ideas and the pros/cons of the different approaches? Links to example projects using Castle Windsor in this way would be very helpful.

Answer 1:

In general, all components in an application should be composed as late as possible, because that ensures maximum modularity, and that modules are as loosely coupled as possible.

In practice, this means that you should configure the container at the root of your application.

  • In a desktop app, that would be in the Main method (or very close to it)
  • In an ASP.NET (including MVC) application, that would be in Global.asax
  • In WCF, that would be in a ServiceHostFactory
  • etc.
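In Windsor, that composition root can be as small as the following sketch. The IApplicationRoot interface is a placeholder for whatever your application's root object is; FromAssembly.This() is Windsor's own helper for running the installers described further below.

using Castle.Windsor;
using Castle.Windsor.Installer;

public static class Program
{
    public static void Main()
    {
        // Compose the whole object graph once, at the entry point.
        using (var container = new WindsorContainer())
        {
            // Runs every IWindsorInstaller found in this assembly.
            container.Install(FromAssembly.This());

            // Resolve the single root object; Windsor builds the rest.
            var app = container.Resolve<IApplicationRoot>(); // placeholder interface
            app.Run();
        }
    }
}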

The container is simply the engine that composes modules into a working application. In principle, you could write the code by hand (this is called Poor Man's DI), but it is just so much easier to use a DI Container like Windsor.
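For contrast, hand-wiring the same graph without a container might look like the sketch below; every type name here is a hypothetical stand-in for your own layers:

// Poor Man's DI: the composition root written by hand.
// All type names are placeholders for your own components.
var connectionString = "...";                               // from config
var repository = new SqlOrderRepository(connectionString);  // DataAccess
var service = new OrderService(repository);                 // Business
var app = new ApplicationRoot(service);                     // Application
app.Run();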

Such a Composition Root will ideally be the only piece of code in the application's root, making the application a so-called Humble Executable (a term from the excellent xUnit Test Patterns) that doesn't need unit testing in itself.

Your tests should not need the container at all, as your objects and modules should be composable, and you can directly supply Test Doubles to them from the unit tests. It is best if you can design all of your modules to be container-agnostic.
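As a sketch of what that looks like, assuming NUnit, a hand-rolled fake, and placeholder OrderService/repository types:

using NUnit.Framework;

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void PlaceOrder_saves_the_order_through_the_repository()
    {
        // No container: hand the class under test its Test Double directly.
        var fakeRepository = new FakeOrderRepository(); // hand-rolled fake
        var sut = new OrderService(fakeRepository);     // plain constructor injection

        sut.PlaceOrder(new Order());

        Assert.That(fakeRepository.SavedOrders, Has.Count.EqualTo(1));
    }
}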

Also, specifically in Windsor, you should encapsulate your component registration logic in installers (types implementing IWindsorInstaller). See the documentation for more details.
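A minimal installer for the data access layer might look like this; the IWindsorInstaller signature is Windsor's own, while the repository types are placeholders:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// One installer per layer keeps each layer's registrations in one place.
public class DataAccessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<IOrderRepository>()          // placeholder interface
                     .ImplementedBy<SqlOrderRepository>()
                     .LifestyleTransient());
    }
}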



Answer 2:

While Mark's answer is great for web scenarios, the key flaw in applying it to all architectures (namely rich-client: WPF, WinForms, iOS, etc.) is the assumption that all components needed for an operation can/should be created at once.

For web servers this makes sense, since every request is extremely short-lived and an ASP.NET MVC controller gets created by the underlying framework (no user code) for every request that comes in. Thus the controller and all its dependencies can easily be composed by a DI framework, and there is very little maintenance cost in doing so. Note that the web framework is responsible for managing the lifetime of the controller and, for all intents and purposes, the lifetime of all its dependencies (which the DI framework will create/inject for you upon the controller's creation). It is totally fine that the dependencies live for the duration of the request, and your user code does not need to manage the lifetime of components and sub-components itself. Also note that web servers are stateless across requests (except for session state, but that's irrelevant for this discussion) and that you never have multiple controller/child-controller instances that need to live at the same time to service a single request.
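A sketch of that single composition point in ASP.NET MVC, with a placeholder service dependency:

using System.Web.Mvc;

// The framework news up this controller for every request, so the
// container can inject all dependencies at that one point.
public class OrdersController : Controller
{
    private readonly IOrderService orderService; // placeholder dependency

    public OrdersController(IOrderService orderService)
    {
        this.orderService = orderService; // lives only as long as the request
    }

    public ActionResult Index()
    {
        return View(orderService.GetOpenOrders());
    }
}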

In rich-client apps, however, this is very much not the case. If you use an MVC/MVVM architecture (which you should!), a user's session is long-lived, and controllers create sub-controllers / sibling controllers as the user navigates through the app (see the note about MVVM at the bottom). The analogy to the web world is that every user input (button click, operation performed) in a rich-client app is the equivalent of a request being received by the web framework. The big difference, however, is that you want the controllers in a rich-client app to stay alive between operations (it is very possible that the user performs multiple operations on the same screen, which is governed by a particular controller), and also that sub-controllers get created and destroyed as the user performs different actions (think of a tab control that lazily creates the tab if the user navigates to it, or a piece of UI that only needs to be loaded if the user performs particular actions on a screen).

Both these characteristics mean that it is the user code that needs to manage the lifetime of controllers/sub-controllers, and that the controllers' dependencies should NOT all be created upfront (ie: sub-controllers, view-models, other presentation components, etc.). If you use a DI framework to perform these responsibilities, you will end up not only with a lot more code where it doesn't belong (see: Constructor Over-injection anti-pattern) but you will also need to pass a dependency container along throughout most of your presentation layer so that your components can use it to create their sub-components when needed.
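To make that concrete, this is the shape being warned against, sketched with placeholder controller names:

using Castle.Windsor;

// Anti-pattern sketch: presentation code takes the container itself
// so it can build sub-controllers on demand.
public class ShellController
{
    private readonly IWindsorContainer container; // the container leaks into user code

    public ShellController(IWindsorContainer container)
    {
        this.container = container;
    }

    public void OpenSettingsTab()
    {
        // Every navigation path now depends on the container, and any
        // component holding it can also register arbitrary new state.
        var settings = container.Resolve<SettingsController>(); // placeholder type
        settings.Show();
    }
}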

Why is it bad that my user-code has access to the DI container?

1) The dependency container holds references to a lot of components in your app. Passing this bad boy around to every component that needs to create/manage another sub-component is the equivalent of using globals in your architecture. Worse yet, any sub-component can also register new components into the container, so soon enough it will become global storage as well. Developers will throw objects into the container just to pass data around between components (either between sibling controllers or between deep controller hierarchies, ie: a deeply nested controller needs to grab data from a grandparent controller). Note that in the web world, where the container is not passed around to user code, this is never a problem.

2) The other problem with dependency containers versus service locators / factories / direct object instantiation is that resolving from a container makes it completely ambiguous whether you are CREATING a component or simply REUSING an existing one. Instead, it is left up to a centralized configuration (ie: bootstrapper / Composition Root) to figure out what the lifetime of the component is. In certain cases this is okay (ie: web controllers, where it is not user code that needs to manage a component's lifetime but the runtime request-processing framework itself). This is extremely problematic, however, when the design of your components should INDICATE whether it is their responsibility to manage a component and what its lifetime should be. (Example: a phone app pops up a sheet that asks the user for some info. This is achieved by a controller creating a sub-controller which governs the overlaying sheet. Once the user enters some info, the sheet is dismissed and control returns to the initial controller, which still maintains the state from what the user was doing before.) If DI is used to resolve the sheet sub-controller, it is ambiguous what its lifetime should be or who should be responsible for managing it (the initiating controller). Compare this to the explicit responsibility dictated by the use of other mechanisms.

Scenario A:

// Not sure whether I'm responsible for creating the thing or not.
DependencyContainer.GimmeA<Thing>();

Scenario B:

// Responsibility is clear: this component is responsible for creation.
Factory.CreateMeA<Thing>();
// or simply
new Thing();

Scenario C:

// Responsibility is clear: this component is not responsible for creation, only consumption.
ServiceLocator.GetMeTheExisting<Thing>();
// or simply
ServiceLocator.Thing;

As you can see, DI makes it unclear who is responsible for the lifetime management of the sub-component.

NOTE: Technically speaking, many DI frameworks do have some way of creating components lazily (see: How not to do dependency injection - the static or singleton container), which is a lot better than passing the container around, but you are still paying the cost of mutating your code to pass creation functions around everywhere, you lack first-class support for passing valid constructor parameters in at creation time, and at the end of the day you are still using an indirection mechanism unnecessarily in places where the only benefit is testability, which can be achieved in better, simpler ways (see below).
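One such lazy-creation mechanism, sketched with placeholder names: injecting a factory delegate instead of the container itself (Windsor can wire up such delegates through its Typed Factory Facility):

using System;

// Lazy creation without passing the container around: the component
// receives only a creation function for the one thing it may need.
public class NavigationController
{
    private readonly Func<SettingsController> createSettings; // wired by the container

    public NavigationController(Func<SettingsController> createSettings)
    {
        this.createSettings = createSettings;
    }

    public void OpenSettingsTab()
    {
        var settings = createSettings(); // created only when actually needed
        settings.Show();
    }
}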

What does all this mean?

It means DI is appropriate for certain scenarios and inappropriate for others. In rich-client applications it happens to carry a lot of the downsides of DI with very few of the upsides. The further your app scales out in complexity, the bigger the maintenance costs grow. It also carries a grave potential for misuse, which, depending on how tight your team's communication and code-review processes are, can be anywhere from a non-issue to a severe tech-debt cost. There is a myth going around that Service Locators, Factories, or good old instantiation are somehow bad and outdated mechanisms simply because they may not be the optimal mechanism in the web-app world, where admittedly a lot of developers work. We should not over-generalize these lessons to all scenarios and view everything as a nail just because we've learned to wield a particular hammer.

My recommendation FOR RICH-CLIENT APPS is to use the minimal mechanism that meets the requirements of each component at hand. 80% of the time this should be direct instantiation. Service locators can be used to house your main business-layer components (ie: application services, which are generally singleton in nature), and of course Factories and even the Singleton pattern also have their place. There is nothing to say you can't use a DI framework hidden behind your service locator to create your business-layer dependencies and everything they depend on in one go, if that ends up making your life easier in that layer, and that layer doesn't exhibit the lazy loading which rich-client presentation layers overwhelmingly do. Just make sure to shield your user code from access to that container, so that you can prevent the mess that passing a DI container around can create.
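A sketch of that arrangement, reusing the naming from Scenario C above; the container is initialized once in the bootstrapper and never handed to user code:

using Castle.Windsor;

// Service locator for long-lived business services. A DI container may
// sit behind it, but user code never touches the container directly.
public static class ServiceLocator
{
    private static IWindsorContainer container; // set once at startup

    public static void Initialize(IWindsorContainer configuredContainer)
    {
        container = configuredContainer; // called from the bootstrapper only
    }

    public static T GetMeTheExisting<T>()
    {
        return container.Resolve<T>(); // singleton services registered at startup
    }
}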

What about testability?

Testability can absolutely be achieved without a DI framework. I recommend using an interception framework such as UnitBox (free) or TypeMock (pricey). These frameworks give you the tools you need to get around the problem at hand (how do you mock out instantiation and static calls in C#?) and do not require you to change your whole architecture to get around them (which unfortunately is where the trend has gone in the .NET/Java world). It is wiser to find a solution to the problem at hand and use the natural language mechanisms and patterns optimal for the underlying component than to try to fit every square peg into the round DI hole. Once you start using these simpler, more specific mechanisms, you will notice there is very little need for DI in your codebase, if any at all.

NOTE: For MVVM architectures

In basic MVVM architectures, view-models effectively take on the responsibility of controllers, so for all intents and purposes consider the 'controller' wording above to apply to 'view-model'. Basic MVVM works fine for small apps, but as the complexity of an app grows you may want to use an MVCVM approach: view-models become mostly dumb DTOs that facilitate data-binding to the view, while interaction with the business layer and between groups of view-models representing screens/sub-screens gets encapsulated into explicit controller/sub-controller components. In either architecture the responsibility of controllers exists and exhibits the same characteristics discussed above.
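A miniature sketch of that split, with placeholder types: the view-model is a dumb binding target, and the controller owns the behaviour:

// MVCVM in miniature: the view-model only carries bindable data.
public class OrderScreenViewModel
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// The controller talks to the business layer and populates the view-model.
public class OrderScreenController
{
    public OrderScreenViewModel ViewModel { get; } = new OrderScreenViewModel();

    public void Load(IOrderService orders) // placeholder business service
    {
        var order = orders.GetCurrentOrder();
        ViewModel.CustomerName = order.CustomerName;
        ViewModel.Total = order.Total;
    }
}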