How does gcc's -pg flag work?

Posted 2019-01-16 20:13

Question:

I'm trying to understand how the -pg (or -p) flag works when compiling C code with gcc.

The official gcc documentation only states:

-pg
Generate extra code to write profile information suitable for the analysis program gprof. You must use this option when compiling the source files you want data about, and you must also use it when linking.

This really interests me, as I'm doing some small-scale research on profilers, trying to pick the best tool for the job.

Answer 1:

Compiling with -pg instruments your code so that gprof can report detailed information; see gprof's manual, section 9.1, Implementation of Profiling:

Profiling works by changing how every function in your program is compiled so that when it is called, it will stash away some information about where it was called from. From this, the profiler can figure out what function called it, and can count how many times it was called. This change is made by the compiler when your program is compiled with the -pg option, which causes every function to call mcount (or _mcount, or __mcount, depending on the OS and compiler) as one of its first operations.

The mcount routine, included in the profiling library, is responsible for recording in an in-memory call graph table both its parent routine (the child) and its parent's parent. This is typically done by examining the stack frame to find both the address of the child, and the return address in the original parent. Since this is a very machine-dependent operation, mcount itself is typically a short assembly-language stub routine that extracts the required information, and then calls __mcount_internal (a normal C function) with two arguments—frompc and selfpc. __mcount_internal is responsible for maintaining the in-memory call graph, which records frompc, selfpc, and the number of times each of these call arcs was traversed.

...
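
To make the mechanism above concrete, here is a minimal sketch you can try yourself (my own example, not taken from the manual; the file and function names are arbitrary, and the exact prologue symbol, mcount, _mcount, or __fentry__, depends on your GCC version and target):

    /* demo.c - toy workload for experimenting with -pg on a Linux/glibc toolchain.
     *
     * Build and profile:
     *   gcc -pg -O0 demo.c -o demo    (instrument every function)
     *   ./demo                        (writes gmon.out on normal exit)
     *   gprof demo gmon.out           (prints flat profile and call graph)
     *
     * To see the call the compiler inserted into each prologue:
     *   gcc -pg -S demo.c   and look for mcount/__fentry__ in demo.s
     */
    #include <stdio.h>

    static double burn(long n)             /* deliberately slow function */
    {
        double acc = 0.0;
        for (long i = 1; i <= n; i++)
            acc += 1.0 / (double)i;
        return acc;
    }

    int main(void)
    {
        double total = 0.0;
        for (int i = 0; i < 200; i++)
            total += burn(1000000);
        printf("%f\n", total);             /* keep the work observable */
        return 0;                          /* gmon.out is written at exit */
    }

And here is a toy illustration (again my own sketch, not gprof's actual code, which hashes arcs into per-bucket lists) of the kind of call-arc table that __mcount_internal maintains, keyed by frompc and selfpc with a traversal count per arc:

    #include <stdint.h>
    #include <stdio.h>

    struct arc {
        uintptr_t     frompc;   /* return address in the calling function  */
        uintptr_t     selfpc;   /* entry address of the called function    */
        unsigned long count;    /* times this caller->callee arc was taken */
    };

    #define MAX_ARCS 4096
    static struct arc arcs[MAX_ARCS];
    static int narcs;

    /* what a (much simplified) __mcount_internal would do on every call */
    static void record_arc(uintptr_t frompc, uintptr_t selfpc)
    {
        for (int i = 0; i < narcs; i++) {
            if (arcs[i].frompc == frompc && arcs[i].selfpc == selfpc) {
                arcs[i].count++;
                return;
            }
        }
        if (narcs < MAX_ARCS) {
            arcs[narcs].frompc = frompc;
            arcs[narcs].selfpc = selfpc;
            arcs[narcs].count  = 1;
            narcs++;
        }
    }

    int main(void)
    {
        /* simulate three calls along two different arcs */
        record_arc(0x1000, 0x2000);
        record_arc(0x1000, 0x2000);
        record_arc(0x3000, 0x2000);
        for (int i = 0; i < narcs; i++)
            printf("from %#lx -> self %#lx : %lu calls\n",
                   (unsigned long)arcs[i].frompc,
                   (unsigned long)arcs[i].selfpc,
                   arcs[i].count);
        return 0;
    }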

Please note that with such an instrumenting profiler, you're not profiling exactly the same code you would compile for release without profiling instrumentation: there is an overhead associated with the instrumentation code itself, and the instrumentation may also alter instruction and data cache usage.

In contrast to an instrumenting profiler, a sampling profiler such as Intel VTune works on non-instrumented code by looking at the target program's program counter at regular intervals using operating-system interrupts. It can also query special CPU registers to give you even more insight into what's going on.
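
To make the sampling idea concrete, here is a rough sketch (my own, Linux/x86-64 specific, and not how VTune itself is implemented): ask the kernel for a periodic SIGPROF measured in CPU time, and record the interrupted program counter each time the signal fires. Aggregating those addresses by function gives a statistical picture of where time is spent.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <ucontext.h>

    #define MAX_SAMPLES 100000
    static void *samples[MAX_SAMPLES];     /* interrupted program counters */
    static volatile int nsamples;

    static void on_prof(int sig, siginfo_t *si, void *uc_void)
    {
        (void)sig; (void)si;
        ucontext_t *uc = uc_void;
        if (nsamples < MAX_SAMPLES)        /* record where we were interrupted */
            samples[nsamples++] = (void *)uc->uc_mcontext.gregs[REG_RIP];
    }

    static double burn(long n)             /* workload to be sampled */
    {
        double acc = 0.0;
        for (long i = 1; i <= n; i++)
            acc += 1.0 / (double)i;
        return acc;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_prof;
        sa.sa_flags = SA_SIGINFO | SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGPROF, &sa, NULL);

        /* deliver SIGPROF every 10 ms of CPU time used by this process */
        struct itimerval it;
        it.it_interval.tv_sec  = 0;
        it.it_interval.tv_usec = 10000;
        it.it_value = it.it_interval;
        setitimer(ITIMER_PROF, &it, NULL);

        volatile double sink = 0.0;
        for (int i = 0; i < 200; i++)
            sink += burn(1000000);
        (void)sink;

        printf("collected %d PC samples\n", nsamples);
        return 0;
    }

Note that burn() needs no recompilation here; that is the key practical difference from -pg-style instrumentation.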

See also Profilers: Instrumenting vs. Sampling.



Answer 2:

This link gives a brief explanation of how gprof works.

This link gives an extensive critique of it. (Check my answer to the archived question.)



Answer 3:

From this source: https://elinux.org/images/0/0c/Bird-LS-2009-Measuring-function-duration-with-ftrace.pdf:

" Instrumentation comes in two main forms—explicitly declared tracepoints, and implicit tracepoints. Explicit tracepoints consist of developer defined declarations which specify the location of the tracepoint, and additional information about what data should be collected at a particular trace site. Implicit tracepoints are placed into the code automatically by the compiler, either due to compiler flags or by developer redefinition of commonly used macros.

To instrument functions implicitly, when the kernel is configured to support function tracing, the kernel build system adds -pg to the flags used with the compiler. This causes the compiler to add code to the prologue of each function, which calls a special assembly routine called mcount. This compiler option is specifically intended to be used for profiling and tracing purposes. "
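
The mcount call described above is inserted by the compiler itself, so the easiest way to see it is to compile any function with gcc -pg -S and look at the generated prologue. A closely related, documented GCC mechanism, -finstrument-functions, lets you experiment with compiler-inserted prologue/epilogue hooks from plain C; the sketch below is my own illustration of that idea, not of -pg itself:

    /* hooks_demo.c
     * Build without optimization so nothing is inlined:
     *   gcc -finstrument-functions -O0 hooks_demo.c -o hooks_demo
     */
    #include <stdio.h>

    /* The hooks must not themselves be instrumented, or they would recurse. */
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site)
    {
        fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site)
    {
        fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
    }

    static int square(int x) { return x * x; }

    int main(void)
    {
        printf("%d\n", square(7));
        return 0;
    }

The kernel's ftrace goes a step further: at boot it rewrites the compiler-inserted mcount/__fentry__ call sites into NOPs and only patches them back in when tracing is enabled, so the -pg instrumentation can stay in the kernel at negligible cost while tracing is off.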