I love to organize my code, so ideally I want one class per file or, when I have non-member functions, one function per file.
The reasons are:
1. When I read the code I will always know in what file I should find a certain function or class.
2. If it's one class or one non-member function per header file, then I won't include a whole mess when I include a header file.
3. If I make a small change in a function then only that function will have to be recompiled.
However, splitting everything up into many header and many implementation files can considerably slow down compilation. In my project, most functions use a number of templated library functions, so that code gets compiled over and over, once for each implementation file. Compiling my whole project currently takes about 45 minutes on one machine. There are about 50 object files, and each one uses the same expensive-to-compile headers.
Would it perhaps be acceptable to have one class (or non-member function) per header file, but to put the implementations of many or all of these functions into one implementation file, as in the following example?
// foo.h
void foo(int n);
// bar.h
void bar(double d);
// foobar.cpp
#include "foo.h"
#include "bar.h"
#include <vector>
void foo(int n) { std::vector<int> v; ... }
void bar(double d) { std::vector<int> w; ... }
Again, the advantage would be that I can include just the foo function or just the bar function, and compilation of the whole project will be faster because foobar.cpp is one file, so the std::vector<int> (which is just an example here for some other expensive-to-compile templated construction) has to be compiled only once, as opposed to twice if I compiled a foo.cpp and bar.cpp separately. Of course, my reason (3) above is not valid for this scenario: after just changing foo(){...} I have to recompile the whole, potentially big, file foobar.cpp.
I'm curious what your opinions are!
An old programming professor of mine suggested breaking up modules every several hundred lines of code for maintainability. I don't develop in C++ anymore, but in C# I restrict myself to one class per file, and the size of the file doesn't matter as long as there's nothing unrelated to my object. You can make use of #pragma regions to gracefully reduce editor space; I'm not sure if C++ compilers have them, but if they do, then definitely make use of them.
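For what it's worth, MSVC does support #pragma region / #pragma endregion in C++, and some other compilers accept it too; compilers that don't will typically just warn about an unknown pragma. A minimal sketch, with the file and function names made up purely for illustration:
// shapes.cpp -- hypothetical file, names invented for illustration
#pragma region Area helpers
double rectangleArea(double w, double h) { return w * h; }
double circleArea(double r) { return 3.14159265358979 * r * r; }
#pragma endregion

#pragma region Perimeter helpers
double rectanglePerimeter(double w, double h) { return 2.0 * (w + h); }
double circlePerimeter(double r) { return 2.0 * 3.14159265358979 * r; }
#pragma endregion
The regions have no effect on compilation; they only let the editor fold related groups of functions out of view.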
If I were still programming in C++, I would group functions by usage, with multiple functions per file. So I might have a file called 'Service.cpp' with a few functions that define that "service" (see the sketch below). Having only one function per file will, in turn, cause regret to find its way back into your project somehow, someway.
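A minimal sketch of that kind of grouping, assuming a hypothetical logging "service" (all names are made up for illustration):
// LogService.h
#ifndef LOGSERVICE_H
#define LOGSERVICE_H
#include <string>
void logOpen(const std::string& path);
void logWrite(const std::string& message);
void logClose();
#endif

// LogService.cpp -- one implementation file for the whole "service"
#include "LogService.h"
#include <fstream>
namespace { std::ofstream out; }  // state shared by the service's functions
void logOpen(const std::string& path) { out.open(path); }
void logWrite(const std::string& message) { out << message << '\n'; }
void logClose() { out.close(); }
The caller then includes one header per service rather than one header per function.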
That said, files don't usually need to run to several thousand lines of code. Functions themselves should never be much more than a few hundred lines of code at most. Always remember that a function should only do one thing and be kept minimal. If a function does more than one thing, it should be refactored into helper methods.
It never hurts to have multiple source files that together define a single entity either, e.g. 'ServiceConnection.cpp', 'ServiceSettings.cpp', and so on and so forth.
Sometimes, if I make a single object and it owns other objects, I will combine multiple classes into a single file. For example, for a button control that contains 'ButtonLink' objects, I might put 'ButtonLink' in the same file as the Button class. Sometimes I don't, but that's a "preference of the moment" decision.
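As a rough sketch of that kind of combination (the class layout here is invented purely for illustration):
// Button.h
#ifndef BUTTON_H
#define BUTTON_H
#include <string>
#include <utility>
#include <vector>

// Small helper type owned only by Button, so it lives in the same file.
class ButtonLink {
public:
    explicit ButtonLink(std::string url) : url_(std::move(url)) {}
    const std::string& url() const { return url_; }
private:
    std::string url_;
};

class Button {
public:
    void addLink(ButtonLink link) { links_.push_back(std::move(link)); }
private:
    std::vector<ButtonLink> links_;
};
#endif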
Do what works best for you. Experimenting a little with different styles on smaller projects can help. Hope this helps you out a bit.
I can see some advantages to your approach, but there are several disadvantages:
1) Including a package is a nightmare. You can end up with 10-20 includes to get the functions you need. For example, imagine if stdio or stdlib were implemented this way (see the sketch after this list).
2) Browsing the code will be a bit of a pain, since in general it is easier to scroll through a file than to switch between files. Obviously a file that is too big is hard to work with, but even there, with modern IDEs it is pretty easy to collapse the file down to what you need, and a lot of them have function shortcut lists.
3) Makefile maintenance is a pain.
4) I am a huge fan of small functions and refactoring. When you add overhead (making a new file, adding it to source control, ...), it encourages people to write longer functions: instead of breaking one function into three parts, you just make one big one.
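To make point 1 concrete, here is a sketch of the caller side if a library were split one function per header, compared to a grouped header (all file names here are invented for illustration):
// client.cpp -- one-function-per-header style: an include per function used
#include "connect.h"
#include "disconnect.h"
#include "send.h"
#include "receive.h"
#include "set_timeout.h"
// ... one more include for every additional function ...

// client.cpp -- grouped-header style: one include covers the package
#include "network.h"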
One function per file has a technical advantage if you're making a static library (which I guess is one of the reasons why projects like the Musl-libc project follow this pattern).
Static libraries are linked with object-file granularity. So suppose you have a static library libfoobar.a composed of foo.o (which defines the functions foo1 and foo2) and bar.o (which defines the function bar). If you link against the lib for the bar function, the bar.o archive member will get linked in but not the foo.o member. If you link for foo1, then the foo.o member will get linked in, bringing in the possibly unnecessary foo2 function.
There are possibly other ways of preventing unneeded functions from being linked in (-ffunction-sections -fdata-sections and --gc-sections), but one function per file is probably the most reliable.
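A minimal sketch of that setup, with the file layout and build commands assumed for illustration (g++ and binutils ar):
// foo.cpp -- becomes the archive member foo.o
void foo1() { /* ... */ }
void foo2() { /* ... */ }

// bar.cpp -- becomes the archive member bar.o
void bar() { /* ... */ }

// main.cpp -- uses only bar(), so the linker pulls bar.o but not foo.o out of the archive
void bar();
int main() { bar(); }

// Build commands:
//   g++ -c foo.cpp bar.cpp
//   ar rcs libfoobar.a foo.o bar.o
//   g++ main.cpp -L. -lfoobar -o main
// (The -ffunction-sections / -Wl,--gc-sections route mentioned above works at
//  section granularity instead, at the cost of extra flags on every build.)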