Will the C++ linker automatically inline functions (without the inline keyword, without definitions in the header)?

Posted 2020-06-01 05:12

Will the C++ linker automatically inline "pass-through" functions, which are NOT defined in the header, and NOT explicitly requested to be "inlined" through the inline keyword?

For example, the following pattern happens so often, and should almost always benefit from inlining, that it seems every compiler vendor should handle it automatically through the linker (in those cases where it is possible):

//FILE: MyA.hpp
class MyA
{
  public:
    int foo(void) const;
};

//FILE: MyB.hpp
class MyB
{
  private:
    MyA my_a_;
  public:
    int foo(void) const;
};

//FILE: MyB.cpp
// PLEASE SAY THIS FUNCTION IS "INLINED" BY THE LINKER, EVEN THOUGH
// IT WAS NOT IMPLICITLY/EXPLICITLY REQUESTED TO BE "INLINED"?
int MyB::foo(void) const
{
  return my_a_.foo();
}

I'm aware the MSVC linker will perform some inlining through its Link Time Code Generation (LTCG), and that the GCC toolchain also supports Link Time Optimization (LTO) (see: Can the linker inline functions?).

Further, I'm aware that there are cases where this cannot be "inlined", such as when the implementation is not "available" to the linker (e.g., across shared library boundaries, where separate linking occurs).

However, if this code is linked into a single executable that does not cross DLL/shared-lib boundaries, I'd expect the compiler/linker to inline the function automatically, as a simple and obvious optimization (benefiting both performance and size).
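For concreteness, the kind of build I have in mind is a plain multi-file compile and link with no special flags, something along these lines (hypothetical MyA.cpp and main.cpp translation units):

g++ -O2 -c MyA.cpp MyB.cpp main.cpp   # each translation unit compiled in isolation
g++ -O2 -o app MyA.o MyB.o main.o     # ordinary link step, no LTO explicitly requested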

Are my hopes too naive?

8 answers
够拽才男人
#2 · 2020-06-01 05:48

Is it common? Yes, for mainstream compilers.

Is it automatic? Generally not. MSVC requires the /GL switch; gcc and clang need the -flto flag.
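A minimal sketch of what that looks like on each toolchain (hypothetical file names; flags as documented by each vendor):

g++ -O2 -flto -c MyA.cpp MyB.cpp main.cpp   # gcc/clang: -flto at compile time...
g++ -O2 -flto -o app MyA.o MyB.o main.o     # ...and again at link time

cl /O2 /GL /c MyA.cpp MyB.cpp main.cpp             # MSVC: /GL when compiling...
link /LTCG /OUT:app.exe MyA.obj MyB.obj main.obj   # ...and /LTCG when linking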

How does it work? (gcc only)

The traditional linker used in the gcc toolchain is ld, and it's kind of dumb. Therefore, perhaps surprisingly, link-time optimization is not performed by the linker itself in the gcc toolchain.

GCC has a language-agnostic intermediate representation on which its optimizations are performed: GIMPLE. When compiling a source file with -flto (which activates LTO), it saves this intermediate representation in a dedicated section of the object file.

When invoking the linker driver (note: NOT the linker directly) with -flto, the driver reads those sections, bundles them together into one big chunk, and feeds this bundle to the compiler. The compiler reapplies the optimizations it usually performs during a regular compilation (constant propagation, inlining, which may expose new opportunities for dead-code elimination, loop transformations, etc.) and produces a single big object file.

This big object file is finally fed to the regular linker of the toolchain (probably ld, unless you're experimenting with gold), which performs its linker magic.
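A small sketch of that flow in practice (hypothetical file names; the .gnu.lto section-name prefix is an assumption based on current GCC releases):

g++ -O2 -flto -c MyA.cpp MyB.cpp            # each .o carries GIMPLE in extra object-file sections
objdump -h MyB.o | grep -i lto              # inspect the object to see those sections (assumed names)
g++ -O2 -flto -o app MyA.o MyB.o main.o     # the g++ driver (not ld) re-runs the optimizers over the combined GIMPLE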

Clang works similarly, and I surmise that MSVC uses a similar trick.

我只想做你的唯一
#3 · 2020-06-01 05:49

It depends. Most compilers (linkers, really) support this kind of optimization. But in order for it to be done, the entire code-generation phase pretty much has to be deferred to link time. MSVC calls the option link-time code generation (LTCG), and it is enabled by default in release builds, IIRC.

GCC has a similar option, under a different name, but I can't remember which -O levels, if any, enable it, or whether it has to be enabled explicitly.

However, "traditionally", C++ compilers have compiled a single translation unit in isolation, after which the linker has merely tied up the loose ends, ensuring that when translation unit A calls a function defined in translation unit B, the correct function address is looked up and inserted into the calling code.

If you follow this model, it is impossible to inline functions defined in another translation unit.

It is not just some "simple" optimization that can be done "on the fly", like, say, loop unrolling. It requires the linker and compiler to cooperate, because the linker will have to take over some of the work normally done by the compiler.

Note that the compiler will gladly inline functions that are not marked with the inline keyword, but only if it can see how the function is defined at the site where it is called. If it can't see the definition, it can't inline the call. That is why you normally define such small, trivial, intended-to-be-inlined functions in headers, making their definitions visible to all callers.
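For the original example, one way to make the call inlinable without any link-time machinery (a sketch, assuming MyA::foo() just returns a constant) is to put the definitions in the headers:

// FILE: MyA.hpp
class MyA
{
  public:
    int foo(void) const { return 42; }   // defined in the class body, so implicitly inline
};

// FILE: MyB.hpp
#include "MyA.hpp"

class MyB
{
  private:
    MyA my_a_;
  public:
    // the definition is visible to every caller that includes MyB.hpp,
    // so the compiler can inline it without any linker cooperation
    int foo(void) const { return my_a_.foo(); }
};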

倾城 Initia
#4 · 2020-06-01 05:52

The inline keyword acts only as guidance for the compiler to inline functions when optimizing. In g++, the optimization levels -O2 and -O3 enable different levels of inlining. The g++ documentation specifies the following: (i) if -O2 is specified, -finline-small-functions is turned on; (ii) if -O3 is specified, -finline-functions is turned on, along with all the -O2 options; (iii) there is also the option -fno-default-inline, which makes member functions defined inside the class body inline only if the inline keyword is added.

Typically, the size of the function (the number of instructions in the generated assembly) and whether it makes recursive calls determine whether inlining happens. There are plenty more relevant options described in the g++ documentation linked below:

http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

Please take a look and see which ones you are using, because ultimately the options you use determine whether your function is inlined.
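One quick way to check what a given set of options is doing (a sketch; -Winline reports only functions declared inline that could not be inlined):

g++ -O3 -Winline -c MyB.cpp            # warn when an inline-declared function cannot be inlined
g++ -O2 -finline-functions -c MyB.cpp  # make -O2 consider non-inline functions too, as -O3 does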

贼婆χ
#5 · 2020-06-01 05:57

Here is my understanding of what the compiler will do with functions:

If the function definition is inside the class definition, and assuming nothing prevents inlining (such as recursion), the function will be inlined.

If the function definition is outside the class definition, the function will not be inlined unless the definition explicitly includes the inline keyword.

Here is an excerpt from Ivor Horton's Beginning Visual C++ 2010:

Inline Functions

With an inline function, the compiler tries to expand the code in the body of the function in place of a call to the function. This avoids much of the overhead of calling the function and, therefore, speeds up your code.

The compiler may not always be able to insert the code for a function inline (such as with recursive functions or functions for which you have obtained an address), but generally, it will work. It's best used for very short, simple functions, such as our Volume() in the CBox class, because such functions execute faster and inserting the body code does not significantly increase the size of the executable module.

With function definitions outside of the class definition, the compiler treats the functions as a normal function, and a call of the function will work in the usual way; however, it's also possible to tell the compiler that, if possible, you would like the function to be considered as inline. This is done by simply placing the keyword inline at the beginning of the function header. So, for this function, the definition would be as follows:

inline double CBox::Volume()
{
    return l * w * h;
}
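To illustrate the two cases described above, here is a minimal sketch (a hypothetical CBox with members l, w, and h):

// Case 1: defined inside the class definition, so implicitly treated as inline
class CBox
{
  public:
    double Volume() const { return l * w * h; }
  private:
    double l {1.0}, w {1.0}, h {1.0};
};

// Case 2: declared in the class, defined outside it; the inline keyword on the
// definition asks the compiler to consider expanding calls to it in place
class CBoxOutOfLine
{
  public:
    double Volume() const;
  private:
    double l {1.0}, w {1.0}, h {1.0};
};

inline double CBoxOutOfLine::Volume() const
{
    return l * w * h;
}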
smile是对你的礼貌
#6 · 2020-06-01 06:00

Here's a quick test of your example (with a MyA::foo() implementation that simply returns 42). All these tests were with 32-bit targets - it's possible that different results might be seen with 64-bit targets. It's also worth noting that using the -flto option (GCC) or the /GL option (MSVC) results in full optimization - wherever MyB::foo() is called, it's simply replaced with 42.

With GCC (MinGW 4.5.1):

gcc -g -O3 -o test.exe myb.cpp mya.cpp test.cpp

the call to MyB::foo() was not optimized away. MyB::foo() itself was slightly optimized to:

Dump of assembler code for function MyB::foo() const:
   0x00401350 <+0>:     push   %ebp
   0x00401351 <+1>:     mov    %esp,%ebp
   0x00401353 <+3>:     sub    $0x8,%esp
=> 0x00401356 <+6>:     leave
   0x00401357 <+7>:     jmp    0x401360 <MyA::foo() const>

That is, the entry prologue is left in place, but immediately undone (the leave instruction), and the code jumps to MyA::foo() to do the real work. However, this is an optimization that the compiler (not the linker) is doing, since it realizes that MyB::foo() simply returns whatever MyA::foo() returns. I'm not sure why the prologue is left in.

MSVC 16 (from VS 2010) handled things a little differently:

MyB::foo() ended up as two jumps - one to a 'thunk' of some sort:

0:000> u myb!MyB::foo
myb!MyB::foo:
001a1030 e9d0ffffff      jmp     myb!ILT+0(?fooMyAQBEHXZ) (001a1005)

And the thunk simply jumped to MyA::foo():

myb!ILT+0(?fooMyAQBEHXZ):
001a1005 e936000000      jmp     myb!MyA::foo (001a1040)

Again - this was largely (entirely?) performed by the compiler, since if you look at the object code produced before linking, MyB::foo() is compiled to a plain jump to MyA::foo().

So to boil all this down - it looks like without explicitly invoking LTO/LTCG, linkers today are unwilling/unable to perform the optimization of removing the call to MyB::foo() altogether, even if MyB::foo() is a simple jump to MyA::foo().

So I guess if you want link time optimization, use the -flto (for GCC) or /GL (for the MSVC compiler) and /LTCG (for the MSVC linker) options.
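For reference, the LTO-enabled variants of the test build above would look roughly like this (a sketch, using the same file names as the test):

g++ -g -O3 -flto -o test.exe myb.cpp mya.cpp test.cpp            # GCC: LTO at compile and link
cl /O2 /GL myb.cpp mya.cpp test.cpp /link /LTCG /OUT:test.exe    # MSVC: /GL for the compiler, /LTCG for the linker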

Lonely孤独者°
#7 · 2020-06-01 06:06

The compiler must be able to see the body of a function for it to have a chance of being inlined. You can increase the chance of that happening through the use of unity files and LTCG.
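A unity file is just a translation unit that #includes other .cpp files so all of their definitions are visible to the compiler at once; a minimal sketch with hypothetical file names:

// FILE: unity.cpp -- compile this instead of the individual .cpp files
// e.g.  g++ -O2 -o app unity.cpp
#include "MyA.cpp"
#include "MyB.cpp"
#include "main.cpp"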
