Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs; I put everything in one big executable.
Having one big executable has certain advantages:
- Installing my application at a customer site is really just: copy and run.
- Upgrades can be easily zipped and sent to the customer
- There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL)
The big disadvantage of the big EXE is that linking times seem to grow exponentially.
An additional problem is that a part of the code (say about 40%) is shared with another application. Again, the advantages are:
- There is no risk of having a mix of incorrect DLL versions
- Every developer can make changes to the common code, which speeds up development.
But again, this has a serious impact on compilation times (every developer compiles the common code again on their own PC) and on linking times.
The question Grouping DLL's for use in Executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to bind every function manually in your application (using LoadLibrary, GetProcAddress, ...).
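For illustration, this is the kind of boilerplate that manual run-time binding implies for every function. A minimal sketch, assuming a hypothetical Common.dll that exports a Calculate function (both names are made up):

```cpp
#include <windows.h>
#include <iostream>

// Hypothetical signature of the exported function.
typedef int (*CalculateFn)(int, int);

int main() {
    // Load the DLL at run time instead of linking against an import library.
    HMODULE lib = LoadLibraryA("Common.dll");
    if (!lib) {
        std::cerr << "Failed to load Common.dll\n";
        return 1;
    }

    // Every function must be resolved by name, one by one.
    CalculateFn calculate =
        reinterpret_cast<CalculateFn>(GetProcAddress(lib, "Calculate"));
    if (!calculate) {
        std::cerr << "Calculate not found\n";
        FreeLibrary(lib);
        return 1;
    }

    std::cout << calculate(2, 3) << "\n";
    FreeLibrary(lib);
    return 0;
}
```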
What is your opinion on executable sizes, the use of DLLs, and the best balance between easy deployment and easy/fast development?
A single executable has a huge positive impact on maintainability. It is easier to debug, deploy (size issues aside) and diagnose in the field. As you point out, it completely sidesteps DLL hell.
The most straightforward solution to your problem is to have two compilation modes: one that builds a single EXE for production and one that builds lots of little DLLs for development.
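One way to set this up on Windows is an export macro that expands to __declspec(dllexport)/__declspec(dllimport) in the DLL configuration and to nothing in the single-EXE configuration. A minimal sketch, where COMMON_AS_DLL, COMMON_BUILDING, and Calculate are hypothetical names:

```cpp
// common_api.h -- one header that works in both compilation modes.
// COMMON_AS_DLL and COMMON_BUILDING are macros you would define in the
// respective project configurations.
#pragma once

#if defined(COMMON_AS_DLL)            // development mode: many small DLLs
  #if defined(COMMON_BUILDING)        // set when compiling the DLL itself
    #define COMMON_API __declspec(dllexport)
  #else                               // set for code consuming the DLL
    #define COMMON_API __declspec(dllimport)
  #endif
#else                                 // production mode: static link, one EXE
  #define COMMON_API
#endif

// Example declaration; client code looks identical in both modes.
COMMON_API int Calculate(int a, int b);
```

Client code includes the same header and calls Calculate the same way in both modes; only the project configuration changes.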
The guiding principle is: reduce the number of your .NET assemblies to the strict minimum; a single assembly is the ideal. This is, for example, the case for Reflector and NHibernate, which both ship as very few assemblies. My company published two free white books on the topic:
- Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
- Defining .NET Components with Namespaces (7 pages)
The arguments developed in these white books come with valid and invalid reasons to create an assembly, plus a case study on the code base of the tool NDepend.
The problem is that Microsoft fostered (and is still fostering) the idea that assemblies are components, while assemblies are just physical artifacts for packaging code. A component is a logical artifact, and typically an assembly should contain several components. It is a good idea to partition components with namespaces, although it is not always practicable (especially in the case of a framework with a public API, where namespaces are used to partition the API and not necessarily the components).
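To illustrate the idea of namespaces as component boundaries (shown here in C++, since the question concerns a C++ code base, but the same applies to .NET namespaces; all names are made up):

```cpp
// One physical binary containing several logical components,
// partitioned by namespace rather than by DLL/assembly boundaries.
#include <iostream>

namespace MyApp::Persistence {     // component 1: storage
    void SaveDocument() { std::cout << "saving\n"; }
}

namespace MyApp::Reporting {       // component 2: reports
    // Depends on Persistence, but the dependency is between
    // namespaces inside one binary, not between binaries.
    void RenderAndSave() {
        std::cout << "rendering\n";
        MyApp::Persistence::SaveDocument();
    }
}

int main() { MyApp::Reporting::RenderAndSave(); }
```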
One big executable is definitely beneficial: you can use whole-program optimization, there is less overhead, and maintenance is much simpler.
As for the link time: you can have both the "many DLLs" and the "one big executable" setups at the same time. For each DLL, have a project configuration that builds a static library instead. When you debug, you compile the "DLL" configurations of the projects; when you need to ship, you compile the "static library" configurations. Sometimes you will get different behavior in the two configurations, but that will have to be addressed case by case.
An easier way to maintain large programs is to decompose them into smaller, manageable parts. A program can be composed of a shell and modules that add features to the shell. Large programs like Visual Studio and Outlook all use the same concept. Try this approach to make your programs more maintainable and robust.
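A minimal sketch of that decomposition, assuming a hypothetical IModule interface (the module names are made up; a real system would add discovery, versioning, and so on):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// The contract every feature module implements.
class IModule {
public:
    virtual ~IModule() = default;
    virtual std::string Name() const = 0;
    virtual void Run() = 0;
};

// A hypothetical module adding one feature to the shell.
class SpellCheckModule : public IModule {
public:
    std::string Name() const override { return "SpellCheck"; }
    void Run() override { std::cout << "Checking spelling...\n"; }
};

// The shell knows only the IModule interface, never concrete modules.
class Shell {
public:
    void Register(std::unique_ptr<IModule> m) {
        modules_.push_back(std::move(m));
    }
    void RunAll() {
        for (auto& m : modules_) {
            std::cout << "[" << m->Name() << "] ";
            m->Run();
        }
    }
private:
    std::vector<std::unique_ptr<IModule>> modules_;
};

int main() {
    Shell shell;
    shell.Register(std::make_unique<SpellCheckModule>());
    shell.RunAll();
}
```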