First off, I am not looking for a way to force the compiler to inline the implementation of every function.
To reduce the level of misguided answers, make sure you understand what the `inline` keyword actually means. Here is a good description: inline vs static vs extern.
So my question: why not mark every function definition `inline`? That is, ideally the only compilation unit would be main.cpp, or possibly a few more for the functions that cannot be defined in a header file (pimpl idiom, etc.).
The theory behind this odd request is that it would give the optimizer maximum information to work with. It could inline function implementations, of course, but it could also do "cross-module" optimization, as there is only one module. Are there other advantages?
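For concreteness, here is a minimal sketch of what is being proposed (the file and function names are invented for illustration): every definition lives in a header and is marked `inline`, so every translation unit that includes it sees the full body.

```cpp
// math_utils.h -- hypothetical header in which every definition is marked inline
#pragma once

inline int square(int x) { return x * x; }

inline int sum_of_squares(int a, int b)
{
    // Any translation unit that includes this header sees both bodies,
    // so the optimizer is free to inline or otherwise combine them.
    return square(a) + square(b);
}
```

Ideally main.cpp would then just `#include` headers like this one and be the only file handed to the compiler.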
Has anyone tried this with a real application? Did the performance increase? Decrease?
What are the disadvantages of marking all function definitions `inline`?
- Compilation might be slower and will consume much more memory.
- Iterative builds are broken; the entire application will need to be rebuilt after every change.
- Link times might be astronomical.
All of these disadvantages only affect the developer. What are the runtime disadvantages?
That's pretty much the philosophy behind Whole Program Optimization and Link Time Code Generation (LTCG): optimization opportunities are best with global knowledge.
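For reference, a hedged sketch of the usual invocations (exact spellings vary by toolchain version): `/GL` at compile time plus `/LTCG` at link time on MSVC, and `-flto` on GCC and Clang.

```cpp
// Assumed whole-program / link-time-optimization command lines
// (check your toolchain's documentation for the exact options):
//
//   MSVC:       cl /O2 /GL a.cpp b.cpp main.cpp /link /LTCG
//   GCC/Clang:  g++ -O2 -flto a.cpp b.cpp main.cpp -o app
```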
From a practical point of view it's sort of a pain because now every single change you make will require a recompilation of your entire source tree. Generally speaking you need an optimized build less frequently than you need to make arbitrary changes.
I tried this back in the Metrowerks era (it's pretty easy to set up with a "Unity" style build) and the compilation never finished. I mention it only to point out that it's a workflow setup that's likely to tax the toolchain in ways its authors weren't anticipating.
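For anyone unfamiliar with the term, a "Unity" style build in its simplest form is just one translation unit that textually includes all the others; a minimal sketch (the file names are made up):

```cpp
// unity.cpp -- the only file handed to the compiler; every other .cpp is
// pulled in textually, so the optimizer sees the whole program at once.
#include "audio.cpp"
#include "renderer.cpp"
#include "physics.cpp"
#include "main.cpp"
```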
The assumption here is that the compiler cannot optimize across functions. That is a limitation of specific compilers and not a general problem. Using this as a general solution to a specific problem might be bad. The compiler may very well just bloat your program: what could have been one reusable function at a single memory address (and therefore likely to stay warm in the cache) instead gets compiled into many copies scattered through the binary, losing performance because of the cache.
Big functions in general cost the optimizer: there is a balance between the overhead of local variables and the amount of code in the function. Keeping the number of variables used in a function (passed in, local, and global) within the number of scratch registers the platform offers means most everything can stay in registers and never be evicted to RAM; a stack frame may not even be required (depends on the target), so function-calling overhead is noticeably reduced. That is hard to achieve all the time in real-world applications, but with the alternative, a small number of big functions with lots of local variables, the code is going to spend a significant amount of time spilling and reloading registers to and from RAM (again, depends on the target). A rough sketch of the contrast follows.
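This is an illustration only (whether anything actually stays in registers depends entirely on the target ABI and the compiler's register allocator); it contrasts a small leaf function with a big function that keeps many values live at once:

```cpp
// Small leaf function: few live values, so on most targets it needs no
// stack frame and calling it is cheap even when it is not inlined.
inline int scale(int x, int factor)
{
    return x * factor;
}

// Big function with many simultaneously live locals: once the live values
// outnumber the registers the ABI provides, the compiler may have to spill
// some of them to the stack and reload them later inside the loop.
int combine(const int* a, const int* b, int n)
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0;
    int s6 = 0, s7 = 0, s8 = 0, s9 = 0, s10 = 0, s11 = 0;
    for (int i = 0; i < n; ++i) {
        s0  += scale(a[i], 1);   s1  += scale(b[i], 2);
        s2  += scale(a[i], 3);   s3  += scale(b[i], 4);
        s4  += scale(a[i], 5);   s5  += scale(b[i], 6);
        s6  += scale(a[i], 7);   s7  += scale(b[i], 8);
        s8  += scale(a[i], 9);   s9  += scale(b[i], 10);
        s10 += scale(a[i], 11);  s11 += scale(b[i], 12);
    }
    return s0 + s1 + s2 + s3 + s4 + s5
         + s6 + s7 + s8 + s9 + s10 + s11;
}
```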
Try LLVM; it can optimize across the entire program, not just function by function. Release 2.7 had caught up to GCC's optimizer, at least for a test or two (I didn't do exhaustive performance testing), and 2.8 is out now, so I assume it is better. Even with a few files, the number of tuning-knob combinations is too large to mess with. I find it best not to optimize at all until you have the whole program in one file, then perform your optimization, giving the optimizer the whole program to work with: basically what you are trying to do with inlining, but without the baggage.
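A hedged sketch of one way to do that with LLVM's tools (tool names and flags vary by release; treat this as an assumed workflow rather than an exact recipe):

```cpp
// Assumed LLVM whole-program workflow: compile to bitcode without optimizing,
// merge everything into one module, then optimize and code-gen the result.
//
//   clang++ -O0 -emit-llvm -c a.cpp    -o a.bc
//   clang++ -O0 -emit-llvm -c main.cpp -o main.bc
//   llvm-link a.bc main.bc -o whole.bc
//   opt -O3 whole.bc -o whole_opt.bc
//   clang++ whole_opt.bc -o app
```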
We (and some other game companies) did try it, via making one uber-.CPP that #included all the others; it's a known technique. In our case, it didn't seem to affect runtime much, but the compile-time disadvantages you mention turned out to be utterly crippling. With a half-hour compile after every single change, it becomes impossible to iterate effectively. (And this is with the app divvied up into over a dozen different libraries.)

We tried making a different configuration such that we would have multiple .objs while debugging and have the uber-CPP only in release-opt builds, but then ran into the problem of the compiler simply running out of memory. For a sufficiently large app, the tools simply are not up to compiling a multimillion-line .cpp file.
We tried LTCG as well, and that provided a small but nice runtime boost, in the rare cases where it didn't simply crash during the link phase.
Did you really mean `#include` everything? That would give you only a single module and let the optimizer see the entire program at once.

Actually, Microsoft's Visual C++ does exactly this when you use the `/GL` (Whole Program Optimization) switch; it doesn't actually compile anything until the linker runs and has access to all the code. Other compilers have similar options.

The problem with inlining is that you want high-performance functions to fit in cache. You might think function-call overhead is the big performance hit, but on many architectures a cache miss will blow the couple of pushes and pops out of the water. For example, if you have a large (maybe deep) function that needs to be called very rarely from your main high-performance path, it could cause your main high-performance loop to grow to the point where it doesn't fit in the L1 icache. That will slow your code down way, way more than the occasional function call.
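To make the icache point concrete, here is a hedged sketch (names invented): the rarely taken error path stays out of line, so the hot loop remains small; force-inlining it into every call site would only grow the loop body.

```cpp
#include <cstdio>

// Rarely executed and relatively bulky; leaving it as an ordinary out-of-line
// function keeps its code out of the hot loop's instruction-cache footprint.
void report_bad_sample(int index, double value)
{
    std::fprintf(stderr, "bad sample %d: %f\n", index, value);
    // ...imagine heavier logging or recovery code here...
}

double sum_samples(const double* samples, int count)
{
    double total = 0.0;
    for (int i = 0; i < count; ++i) {
        if (samples[i] < 0.0) {              // cold, rarely taken branch
            report_bad_sample(i, samples[i]);
            continue;
        }
        total += samples[i];                 // the hot path stays compact
    }
    return total;
}
```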
This is semi-related, but note that Visual C++ does have the ability to do cross-module optimization, including inlining across modules. See http://msdn.microsoft.com/en-us/library/0zza0de8%28VS.80%29.aspx for info.
To add an answer to your original question: I don't think there would be a downside at run time, assuming the optimizer was smart enough (which is presumably why it was added as an optimization option in Visual Studio). Just use a compiler smart enough to do it automatically, without creating all the problems you mention. :)