What is FLOP/s and is it a good measure of performance?

Posted 2020-01-27 10:31

I've been asked to measure the performance of a Fortran program that solves differential equations on a multi-CPU system. My employer insists that I measure FLOP/s (floating point operations per second) and compare the results with benchmarks (LINPACK), but I am not convinced that it's the way to go, simply because no one can explain to me what a FLOP is.

I did some research on what exactly a FLOP is and I got some pretty contradictory answers. One of the most popular answers was '1 FLOP = an addition and a multiplication operation'. Is that true? If so, again, physically, what exactly does that mean?

Whatever method I end up using, it has to be scalable. Some versions of the code solve systems with millions of unknowns and take days to execute.

What would be some other, effective ways of measuring performance in my case (a summary of my case being 'Fortran code that does a whole lot of arithmetic calculations over and over again for days on several hundred CPUs')?

8 Answers
祖国的老花朵 · 2020-01-27 10:37

It's a pretty decent measure of performance, as long as you understand exactly what it measures.

FLOPS is, as the name implies, FLoating point OPerations per Second. Exactly what constitutes a FLOP might vary by CPU (some CPUs can perform addition and multiplication as one operation, for example, and others can't). That means that as a performance measure, it is fairly close to the hardware, which means that 1) you have to know your hardware to compute the ideal FLOPS on the given architecture, and 2) you have to know your algorithm and implementation to figure out how many floating point ops it actually consists of.

In any case, it's a useful tool for examining how well you utilize the CPU. If you know the CPU's theoretical peak performance in FLOPS, you can work out how efficiently you use the CPU's floating point units, which are often among the hardest resources to utilize efficiently. A program which achieves 30% of the FLOPS the CPU is capable of has room for optimization. One which runs at 70% is probably not going to get much more efficient unless you change the basic algorithm. For math-heavy algorithms like yours, that is pretty much the standard way to measure performance. You could simply measure how long a program takes to run, but that varies wildly depending on the CPU. But if your program has a 50% CPU utilization (relative to the peak FLOPS count), that is a somewhat more constant value (it'll still vary between radically different CPU architectures, but it's a lot more consistent than execution time).

But knowing that "My CPU is capable of X GFLOPS, and I'm only actually achieving a throughput of, say, 20% of that" is very valuable information in high-performance software. It means that something other than the floating point ops is holding you back, and preventing the FP units from working efficiently. And since the FP units constitute the bulk of the work, that means your software has a problem.
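
As a rough illustration of this kind of measurement, here is a minimal Fortran sketch (the peak_gflops value is an assumption; substitute your own CPU's theoretical peak): it times a loop with a known flop count and reports the achieved rate and efficiency.

    program flops_demo
      use iso_fortran_env, only: int64
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      integer, parameter :: n = 10000000
      real(dp), parameter :: peak_gflops = 50.0_dp  ! assumed peak; replace with your CPU's number
      real(dp), allocatable :: x(:), y(:)
      real(dp) :: a, seconds, gflops
      integer(int64) :: c0, c1, rate
      integer :: i

      allocate(x(n), y(n))
      x = 1.0_dp
      y = 2.0_dp
      a = 1.5_dp

      call system_clock(c0, rate)
      do i = 1, n
         y(i) = y(i) + a*x(i)       ! 1 multiply + 1 add = 2 flops per iteration
      end do
      call system_clock(c1)

      seconds = real(c1 - c0, dp) / real(rate, dp)
      gflops  = 2.0_dp * real(n, dp) / seconds / 1.0e9_dp
      print *, 'checksum:', sum(y)   ! keeps the compiler from discarding the loop
      print '(a,f8.2,a)', ' achieved:   ', gflops, ' GFLOP/s'
      print '(a,f6.1,a)', ' efficiency: ', 100.0_dp*gflops/peak_gflops, ' % of assumed peak'
    end program flops_demo

Note that a streaming loop like this one is usually limited by memory bandwidth rather than the FP units, so a low percentage here says as much about the memory system as about the arithmetic; for your real code you would count the flops in your own inner loops instead.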

It's easy to measure "My program runs in X minutes", and if you feel that is unacceptable then sure, you can go "I wonder if I can chop 30% off that", but you don't know if that is possible unless you work out exactly how much work is being done, and exactly what the CPU is capable of at peak. How much time do you want to spend optimizing this, if you don't even know whether the CPU is fundamentally capable of running any more instructions per second?

It's very easy to prevent the CPU's FP unit from being utilized efficiently, by having too many dependencies between FP ops, or by having too many branches or similar preventing efficient scheduling. And if that is what is holding your implementation back, you need to know that. You need to know that "I'm not getting the FP throughput that should be possible, so clearly other parts of my code are preventing FP instructions from being available when the CPU is ready to issue one".
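
To make the dependency point concrete (a sketch of the general idea, not something from the original answer): a straight summation chains every add onto the previous result, while splitting the sum across independent accumulators lets the FP pipeline overlap them.

    program dep_chains
      implicit none
      integer, parameter :: n = 4000000   ! a multiple of 4, for brevity
      real(8), allocatable :: x(:)
      real(8) :: s, s1, s2, s3, s4
      integer :: i

      allocate(x(n))
      call random_number(x)

      ! Chained version: each add depends on the previous result,
      ! so the FP pipeline mostly waits on add latency.
      s = 0.0d0
      do i = 1, n
         s = s + x(i)
      end do
      print *, 'chained sum: ', s

      ! Split version: four independent dependency chains can be in
      ! flight at once (good compilers may do this transformation for you).
      s1 = 0.0d0; s2 = 0.0d0; s3 = 0.0d0; s4 = 0.0d0
      do i = 1, n, 4
         s1 = s1 + x(i); s2 = s2 + x(i+1); s3 = s3 + x(i+2); s4 = s4 + x(i+3)
      end do
      print *, 'split sum:   ', s1 + s2 + s3 + s4
    end program dep_chains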

Why do you need other ways to measure performance? What's wrong with just working out the FLOPS count as your boss asked you to? ;)

Viruses. · 2020-01-27 10:43

A FLOPS is, as you said, a floating-point operation per second. As an example, if you take exactly one second for an operation (such as adding, subtracting, multiplying or dividing two values and returning the result), your performance is simply 1 FLOPS. A recent CPU will easily achieve several GigaFLOPS, i.e. several billion floating-point operations per second.
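
To make that concrete with a worked example (the numbers are illustrative, not any specific CPU): a hypothetical 4-core machine clocked at 3 GHz, whose cores can each retire 8 double-precision floating-point operations per cycle, has a theoretical peak of 4 × (3×10^9) × 8 = 96×10^9 FLOPS, i.e. 96 gigaFLOPS. Real programs typically achieve only a fraction of that peak.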

爱情/是我丢掉的垃圾 · 2020-01-27 10:43

I don't think measuring FLOPS will be very useful.

The number of FLOPS achieved will tell you how busy your algorithm is keeping the CPU, but won't tell you how well your algorithm itself is performing.

You may find two different algorithms which cause the processor to perform the same number of FLOPS but one provides you with the desired result in half the time.

I think you'd be better off looking at a much 'higher level' statistic such as the number of differential equations solved per unit of time (that is, after all, the purpose of your algorithm).
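
As a minimal sketch of that kind of measurement (solve_system here is a hypothetical placeholder standing in for one complete solve of your problem), you could time repeated solves and report the rate directly:

    program solve_rate
      use iso_fortran_env, only: int64
      implicit none
      integer, parameter :: nruns = 10
      integer(int64) :: c0, c1, rate
      real(8) :: seconds
      integer :: i

      call system_clock(c0, rate)
      do i = 1, nruns
         call solve_system()         ! placeholder for one complete solve
      end do
      call system_clock(c1)

      seconds = real(c1 - c0, 8) / real(rate, 8)
      print '(a,f10.3)', ' solves per second: ', real(nruns, 8) / seconds

    contains

      subroutine solve_system()
        ! Stub so the sketch compiles; replace the body with a call to your solver.
        real(8) :: s
        integer :: j
        s = 0.0d0
        do j = 1, 1000000
           s = s + sqrt(real(j, 8))
        end do
        if (s < 0.0d0) print *, s    ! defeats dead-code elimination
      end subroutine solve_system

    end program solve_rate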

On the other hand, measuring the number of FLOPS achieved may help you to improve your algorithm as it will tell you how busy you are keeping the CPU.

放我归山 · 2020-01-27 10:44

I would just try to make it go as fast as possible, and that requires finding out where it is spending time, especially if there are function calls that could be avoided.

I do this by the simple method of just interrupting it a few times while it is running, and seeing what it is doing. Here are the kinds of things I find:

  • Much of the time it is in the process of computing the derivative and/or Jacobian. Much of this time can go into math function calls such as exp(), log(), and sqrt(). Often these are repeated with identical arguments and can be memoized (see the sketch after this list). (Massive speedup.)

  • Much of the time is spent calculating derivatives too many times because the integration tolerances are tighter than necessary. (Faster)

  • If an implicit integration algorithm (such as DLSODE, which uses Gear's method) is being used because the equations are thought to be stiff, chances are they are not, and something like a Runge-Kutta method (DVERK) could be used instead. (Faster still)

  • Possibly a matrix-exponential algorithm (such as DGPADM) could be used if the model is linear. This is a big win both for performance and precision, and is immune to stiffness. (Way faster)

  • Higher up the call-stack, it could be that the same integrations are being performed repeatedly with slightly different parameters, so as to determine a forward or central-difference gradient of the solution with respect to those parameters. If the differential equations are themselves differentiable, it may be possible to get those gradients analytically, or by augmenting the equations with sensitivity equations. This is not only much faster, but much more precise, which can speed things up still higher up the stack.
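
For the memoization point in the first bullet, here is a minimal sketch of the idea, assuming the simplest case where consecutive calls repeat the same argument (the module and function names are invented for illustration):

    module memo_exp_mod
      implicit none
      real(8), save :: last_arg = -huge(1.0d0)   ! sentinel: nothing cached yet
      real(8), save :: last_val = 0.0d0
    contains
      function memo_exp(x) result(y)
        real(8), intent(in) :: x
        real(8) :: y
        if (x /= last_arg) then      ! recompute only when the argument changes
           last_arg = x
           last_val = exp(x)
        end if
        y = last_val
      end function memo_exp
    end module memo_exp_mod

A one-entry cache like this only pays off when the interruption method really did show repeated identical arguments, and the saved state is not thread-safe as written; a keyed (hash-table) cache is the heavier-weight generalization.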

You can look at each level of the stack as an opportunity to find things to optimize, and the speedups will compound. Then when you go to multi-CPU, assuming it is parallelizable, that should provide its own multiplicative factor.

So back to FLOPs. You could try to maximize FLOPs per second, but it can also be much more useful to minimize FLOPs per run, by optimizing at all levels of the stack. In any case, just measuring them tells you almost nothing.

劳资没心,怎么记你 · 2020-01-27 10:49

Your employer is right.
The only way to measure the effectiveness of your Fortran program (or of any other program, by the way) is to test it against standard benchmarks, if they exist.

And, about FLOPS: it stands for "floating point operations per second"; see the definition on Wikipedia.

聊天终结者 · 2020-01-27 10:53

I'd just like to add a couple of finer points:

  • Division is special. Since most processors can do an addition, comparison, or multiplication in a single cycle, those are all counted as one flop. But division always takes longer. How much longer depends on the processor, but there's sort of a de facto standard in the HPC community to count one division as 4 flops (see the worked example after this list).

  • If a processor has a fused multiply-add instruction that does a multiplication and an addition in a single instruction -- generally A += B * C -- that counts as 2 operations.

  • Always be careful in distinguishing between single-precision flops and double-precision flops. A processor that is capable of so many single-precision gigaflops may only be capable of a small fraction of that many double-precision gigaflops. The AMD Athlon and Phenom processors can generally do half as many double-precision flops as single precision. The ATI Firestream processors can generally do 1/5th as many double-precision flops as single precision. If someone is trying to sell you a processor or a software package and they just quote flops without saying which, you should call them on it.

  • The terms megaflop, gigaflop, teraflop, etc. are in common use. These refer to factors of 1000, not 1024. E.g., 1 megaflop = 1,000,000 flop/sec not 1,048,576. Just as with disk drive sizes, there is some confusion on this.
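
As a small worked example of those counting conventions (using the informal 4-flops-per-division weight from above): a loop body y(i) = (a*x(i) + b) / c costs 1 multiply, 1 add, and 1 divide per iteration. The a*x(i) + b part may execute as a single fused multiply-add instruction but still counts as 2 flops, and the divide counts as 4, so the loop performs 6 flops per iteration, or 6n flops over n iterations.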
