I have a C++ application, running on Linux, which I'm in the process of optimizing. How can I pinpoint which areas of my code are running slowly?
At work we have a really nice tool that helps us monitor what we want in terms of scheduling. This has been useful numerous times.

It's in C++ and must be customized to your needs. Unfortunately I can't share code, just concepts. You use a "large" `volatile` buffer containing timestamps and an event ID that you can dump post mortem or after stopping the logging system (and dump this into a file, for example).

You retrieve the so-called large buffer with all the data, and a small interface parses it and shows events with name (up/down + value) like an oscilloscope does with colors (configured in a `.hpp` file).

You customize the amount of events generated to focus solely on what you desire. It helped us a lot for scheduling issues while consuming only the amount of CPU we wanted, based on the amount of logged events per second.
You need 3 files. The concept is to define the events in `tool_events_id.hpp`, define a few functions in `toolname.hpp`, and then call the probe wherever you want in your code (a rough sketch of the idea is shown below).
The `probe` function uses a few assembly lines to retrieve the clock timestamp ASAP and then sets an entry in the buffer. We also have an atomic increment to safely find an index at which to store the log event. Of course, the buffer is circular.

Hope the idea is not obfuscated by the lack of sample code.
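As a rough sketch of the concept only (not the actual tool; the names, buffer size, and the x86 `rdtsc` timestamp are my own assumptions, and it presumes C++17), the pieces could look something like this:

```cpp
// Single-file sketch; in the real tool this would be split across the headers described above.
#include <atomic>
#include <cstddef>
#include <cstdint>

// --- tool_events_id.hpp : the events you want to trace (hypothetical names) ---
enum ToolEventId : uint16_t { EVT_TASK_A_UP, EVT_TASK_A_DOWN, EVT_IRQ_HANDLER };

// --- toolname.hpp : a probe writing into a large circular buffer ---
struct LogEntry {
    uint64_t timestamp;   // raw clock value
    uint16_t event_id;    // one of ToolEventId
    int32_t  value;       // optional payload (e.g. up/down, level)
};

constexpr std::size_t kBufSize = 1 << 20;            // "large" buffer, power of two
inline volatile LogEntry g_buffer[kBufSize];          // dumped post mortem or on demand
inline std::atomic<uint64_t> g_next{0};

inline uint64_t read_timestamp() {
    // x86-only example of "a few assembly lines" to grab the clock ASAP
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return (uint64_t(hi) << 32) | lo;
}

inline void probe(uint16_t event_id, int32_t value = 0) {
    // the atomic increment hands each caller its own slot; the buffer wraps around (circular)
    const uint64_t slot = g_next.fetch_add(1, std::memory_order_relaxed) & (kBufSize - 1);
    volatile LogEntry& e = g_buffer[slot];
    e.timestamp = read_timestamp();
    e.event_id  = event_id;
    e.value     = value;
}

// --- anywhere in your code ---
// probe(EVT_TASK_A_UP);  ... do the work ...  probe(EVT_TASK_A_DOWN);
```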
You can use a logging framework like `loguru`, since it includes timestamps and total uptime, which can be used nicely for profiling.
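A minimal sketch of what that might look like, assuming loguru's `LOG_F`/`LOG_SCOPE_F` macros (the function being timed is invented for illustration):

```cpp
#include <loguru.hpp>   // https://github.com/emilk/loguru (compile/link loguru.cpp as well)

void expensive_step() {
    // logs when the scope is entered and, on exit, how long it took
    LOG_SCOPE_F(INFO, "expensive_step");
    // ... the work you suspect is slow ...
}

int main(int argc, char* argv[]) {
    loguru::init(argc, argv);   // every log line gets a timestamp and total uptime

    LOG_F(INFO, "starting work");
    expensive_step();
    LOG_F(INFO, "done");
}
```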
The answer to run `valgrind --tool=callgrind` is not quite complete without some options. We usually do not want to profile 10 minutes of slow startup time under Valgrind; we want to profile our program while it is doing some task.

So this is what I recommend. Run the program first.
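For example, with instrumentation switched off at start (here `./binary` stands in for your executable):

```
valgrind --tool=callgrind --instr-atstart=no ./binary
```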
Now, when it is up and running and we want to start profiling, we should run a command in another window.
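`callgrind_control`, which ships with Valgrind, can toggle instrumentation on the running process:

```
callgrind_control -i on    # switch instrumentation on for the already-running program
```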
This turns profiling on. To turn it off and stop the whole task, we might use `callgrind_control` again.
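For example:

```
callgrind_control -k    # kill the profiled program; the collected data is written out
```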
Now we have some files named callgrind.out.* in the current directory. To see the profiling results, open them with KCachegrind.
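For instance:

```
kcachegrind callgrind.out.*    # or pass a specific callgrind.out.<pid> file
```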
In that window, I recommend clicking on the "Self" column header; otherwise it shows "main()" as the most time-consuming task. "Self" shows how much time each function took by itself, not together with its dependents.
- Use Valgrind, callgrind and kcachegrind: running your binary under `valgrind --tool=callgrind` generates `callgrind.out.x`; read it using kcachegrind.
- Use gprof (add `-pg` when compiling and linking); not so good for multi-threaded code or function pointers.
- Use google-perftools; it uses time sampling, so I/O and CPU bottlenecks are revealed.
- Intel VTune is the best (free for educational purposes).
- Others: AMD CodeAnalyst (since replaced with AMD CodeXL), OProfile, 'perf' tools (`apt-get install linux-tools`).

(Example command lines for some of these are sketched below.)
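For reference, the commands might look roughly like this (`./myprog` and `myprog.cpp` are placeholders for your own program):

```
# Valgrind + callgrind + kcachegrind
valgrind --tool=callgrind ./myprog        # writes callgrind.out.<pid>
kcachegrind callgrind.out.*

# gprof: build with -pg, run, then read gmon.out
g++ -pg -O2 -o myprog myprog.cpp
./myprog
gprof ./myprog gmon.out > analysis.txt

# perf
perf record -g ./myprog
perf report
```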
You can use Valgrind with its callgrind tool (`valgrind --tool=callgrind`, followed by your executable). It will generate a file called `callgrind.out.x`. You can then use the `kcachegrind` tool to read this file. It will give you a graphical analysis of things, with results like which lines cost how much.

If your goal is to use a profiler, use one of the suggested ones.
However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.
Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.
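On Linux, one low-tech way to take such samples (the program name is hypothetical) is to attach gdb to the running process and interrupt it by hand:

```
gdb -p $(pidof myprog)     # attach to the running (slow) program; it stops
(gdb) bt                   # the call stack: this is one sample
(gdb) continue             # let it run again
# press Ctrl+C after a moment, type 'bt' again, and repeat a handful of times
```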
You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.
Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because they don't summarize at the instruction level, and they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.
P.S. This can also be done on multi-threaded programs if there is a way to collect call-stack samples of the thread pool at a point in time, as there is in Java.

P.P.S. As a rough generality, the more layers of abstraction you have in your software, the more likely you are to find that that is the cause of performance problems (and the opportunity to get speedup).
Added: It might not be obvious, but the stack sampling technique works equally well in the presence of recursion. The reason is that the time that would be saved by removal of an instruction is approximated by the fraction of samples containing it, regardless of the number of times it may occur within a sample.
Another objection I often hear is: "It will stop someplace random, and it will miss the real problem". This comes from having a prior concept of what the real problem is. A key property of performance problems is that they defy expectations. Sampling tells you something is a problem, and your first reaction is disbelief. That is natural, but you can be sure if it finds a problem it is real, and vice-versa.
ADDED: Let me make a Bayesian explanation of how it works. Suppose there is some instruction `I` (call or otherwise) which is on the call stack some fraction `f` of the time (and thus costs that much). For simplicity, suppose we don't know what `f` is, but assume it is either 0.1, 0.2, 0.3, ... 0.9, 1.0, and the prior probability of each of these possibilities is 0.1, so all of these costs are equally likely a priori.

Then suppose we take just 2 stack samples, and we see instruction `I` on both samples, designated observation `o=2/2`. This gives us new estimates of the frequency `f` of `I`, according to Bayes' rule (the update is spelled out below). It says, for example, that the probability that `f` >= 0.5 is now 92%, up from the prior assumption of 60%.

Suppose the prior assumptions are different. Suppose we assume P(f=0.1) is .991 (nearly certain), and all the other possibilities are almost impossible (0.001). In other words, our prior certainty is that `I` is cheap. Then the same update says P(f >= 0.5) is 26%, up from the prior assumption of 0.6%. So Bayes allows us to update our estimate of the probable cost of `I`. If the amount of data is small, it doesn't tell us accurately what the cost is, only that it is big enough to be worth fixing.
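The tables from the original aren't reproduced here, but the arithmetic behind them is just Bayes' rule with likelihood `f^2` (the chance of landing on `I` in both of two independent samples). A sketch:

```
P(f = x | o = 2/2) = x^2 * P(f = x) / sum_y [ y^2 * P(f = y) ]

Uniform prior (0.1 each):
  P(f >= 0.5 | o = 2/2) = (0.25 + 0.36 + 0.49 + 0.64 + 0.81 + 1.00) / 3.85 ≈ 0.92

Skeptical prior (P(f = 0.1) = 0.991, the rest 0.001 each):
  P(f >= 0.5 | o = 2/2) = 0.001 * 3.55 / (0.991 * 0.01 + 0.001 * 3.84) ≈ 0.26
```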
Yet another way to look at it is called the Rule of Succession. If you flip a coin 2 times, and it comes up heads both times, what does that tell you about the probable weighting of the coin? The respected way to answer is to say that it's a Beta distribution, with average value (number of hits + 1) / (number of tries + 2) = (2+1)/(2+2) = 75%.

(The key is that we see `I` more than once. If we only see it once, that doesn't tell us much except that `f` > 0.)

So, even a very small number of samples can tell us a lot about the cost of instructions that it sees. (And it will see them with a frequency, on average, proportional to their cost. If `n` samples are taken, and `f` is the cost, then `I` will appear on `nf +/- sqrt(nf(1-f))` samples. Example: `n=10`, `f=0.3`, that is `3 +/- 1.4` samples.)

ADDED, to give an intuitive feel for the difference between measuring and random stack sampling:
There are profilers now that sample the stack, even on wall-clock time, but what comes out is measurements (or hot path, or hot spot, from which a "bottleneck" can easily hide). What they don't show you (and they easily could) is the actual samples themselves. And if your goal is to find the bottleneck, the number of them you need to see is, on average, 2 divided by the fraction of time it takes. So if it takes 30% of time, 2/.3 = 6.7 samples, on average, will show it, and the chance that 20 samples will show it is 99.2%.
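As a sanity check of that 99.2% figure (reading "show it" as the bottleneck appearing on at least 2 of the 20 samples):

```
P(at least 2 of 20) = 1 - 0.7^20 - 20 * 0.3 * 0.7^19 ≈ 0.992
```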
Here is an off-the-cuff illustration of the difference between examining measurements and examining stack samples. The bottleneck could be one big blob like this, or numerous small ones; it makes no difference.
Measurement is horizontal; it tells you what fraction of time specific routines take. Sampling is vertical. If there is any way to avoid what the whole program is doing at that moment, and if you see it on a second sample, you've found the bottleneck. That's what makes the difference - seeing the whole reason for the time being spent, not just how much.