What is FLOP/s and is it a good measure of performance?

Posted 2020-01-27 10:31

I've been asked to measure the performance of a Fortran program that solves differential equations on a multi-CPU system. My employer insists that I measure FLOP/s (floating-point operations per second) and compare the results with benchmarks (LINPACK), but I am not convinced that it's the way to go, simply because no one can explain to me what a FLOP is.

I did some research on what exactly a FLOP is and I got some pretty contradictory answers. One of the most popular answers I got was '1 FLOP = an addition and a multiplication operation'. Is that true? If so, again, physically, what exactly does that mean?

Whatever method I end up using, it has to be scalable. Some versions of the code solve systems with millions of unknowns and take days to execute.

What would be some other, effective ways of measuring performance in my case (a summary of my case being 'Fortran code that does a whole lot of arithmetic calculations over and over again for days on several hundred CPUs')?

8 Answers
Summer. 凉城
Answer #2 · 2020-01-27 10:53

"compare the results with benchmarks" and do what?

FLOPS is a rate, so you need two things:

1) FLOPs per some unit of work.

2) time for that unit of work.

Let's say you have some input file that drives 1,000 iterations through some loop. The loop is a handy unit of work. It gets executed 1,000 times. It takes an hour.

The loop has some adds and multiplies and a few divides and a square root. You can count adds, multiplies and divides. You can count this in the source, looking for +, * and /. You can find the assembler-language output from the compiler, and count them there, too. You may get different numbers. Which one is right? Ask your boss.

You can count the square roots, but you don't know what it really does in terms of multiplies and adds. So, you'll have to do something like benchmark multiply vs. square root to get a sense of how long a square root takes.

Now you know the FLOP count for your loop. And you know the time to run it 1,000 times. So you know FLOPs per second.
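To make that concrete, here's a rough sketch of turning a hand count plus a timing into a FLOP/s number. The loop body is invented, and the square root is arbitrarily counted as one FLOP:

    ! Rough sketch: hand-count the FLOPs in a loop body, time the loop,
    ! and divide. The loop body is invented, and the square root is
    ! arbitrarily counted as one FLOP.
    program flop_rate_sketch
      implicit none
      integer, parameter :: n = 1000000
      integer :: i
      real(8), allocatable :: x(:), y(:)
      real(8) :: t0, t1, flops_per_iter
      allocate(x(n), y(n))
      call random_number(x)
      call cpu_time(t0)
      do i = 1, n
         ! 2 multiplies + 2 adds + 1 square root per iteration
         y(i) = 2.0d0*x(i) + sqrt(x(i)*x(i) + 1.0d0)
      end do
      call cpu_time(t1)
      ! Counting the sqrt as one FLOP gives 5 FLOPs per iteration;
      ! which convention to use is exactly the "ask your boss" question above.
      flops_per_iter = 5.0d0
      print *, 'approx FLOP/s:', flops_per_iter * real(n, 8) / (t1 - t0)
      print *, sum(y)   ! keeps the compiler from discarding the loop
    end program flop_rate_sketch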

Then you look at LINPACK and find you're slower. Now what? Your program isn't LINPACK, and it's slower than LINPACK. Odds are really good that your code will be slower. Unless your code was written and optimized over the same number of years as LINPACK, you'll be slower.

Here's the other part. Your processor has some defined FLOPS rating against various benchmarks. Your algorithm is not one of those benchmarks, so you fall short of the benchmarks. Is this bad? Or is this the obvious consequence of not being a benchmark?

What's the actionable outcome going to be?

Measurement against some benchmark code base is only going to tell you that your algorithm isn't the benchmark algorithm. It's a foregone conclusion that you'll be different; usually slower.

Obviously, the result of measuring against LINPACK will be (a) you're different and therefore (b) you need to optimize.

Measurement is only really valuable when done against yourself. Not some hypothetical instruction mix, but your own instruction mix. Measure your own performance. Make a change. See if your performance -- compared with yourself -- gets better or worse.
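Something like this is all you need (do_unit_of_work is a hypothetical stand-in for one iteration or one solve of your own code); run it before and after a change, and compare seconds with seconds:

    program time_unit_of_work
      implicit none
      integer(8) :: count0, count1, count_rate
      call system_clock(count0, count_rate)
      call do_unit_of_work()
      call system_clock(count1)
      print *, 'seconds per unit of work:', &
               real(count1 - count0, 8) / real(count_rate, 8)
    contains
      subroutine do_unit_of_work()
        ! hypothetical stand-in for one iteration / one solve of the real code
      end subroutine do_unit_of_work
    end program time_unit_of_work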

FLOPS don't matter. What matters is time per unit of work. You'll never match the design parameters of your hardware because you're not running the benchmark that your hardware designers expected.

LINPACK doesn't matter. What matters is your code base and the changes you're making to change performance.

Rolldiameter
Answer #3 · 2020-01-27 10:59

Old question with old, if popular, answers that are not exactly great, IMO.

A “FLOP” is a floating-point math operation. “FLOPS” can mean either of two things:

  • The simple plural of “FLOP” (i.e. “operation X takes 50 FLOPs”)
  • The rate of FLOPs in the first sense (i.e. floating-point math operations per second)

Where it is not clear from context, the two are often disambiguated by writing the former as “FLOPs” and the latter as “FLOP/s”.

FLOPs are so-called to distinguish them from other kinds of CPU operations, such as integer math operations, logical operations, bitwise operations, memory operations, and branching operations, which have different costs (read “take different lengths of time”) associated with them.

The practice of “FLOP counting” dates back to the very early days of scientific computing, when FLOPs were, relatively speaking, extremely expensive, taking many CPU cycles each. An 80387 math coprocessor, for example, took something like 300 cycles for a single multiplication. This was at a time before pipelining and before the gulf between CPU clock speeds and memory speeds had really opened up: memory operations took just a cycle or two, and branching (“decision making”) was similarly cheap. Back then, if you could eliminate a single FLOP in favor of a dozen memory accesses, you made a gain. If you could eliminate a single FLOP in favor of a dozen branches, you made a gain. So, in the past, it made sense to count FLOPs and not worry much about memory references and branches: FLOPs, being individually very expensive relative to other kinds of operation, strongly dominated execution time.

More recently, the situation has reversed. FLOPs have become very cheap — any modern Intel core can perform about two FLOPs per cycle (although division remains relatively expensive) — and memory accesses and branches are comparatively much more expensive: an L1 cache hit costs maybe 3 or 4 cycles, a fetch from main memory costs 150–200 cycles. Given this inversion, it is no longer the case that eliminating a FLOP in favor of a memory access will result in a gain; in fact, that's unlikely. Similarly, it is often cheaper to “just do” a FLOP, even if it's redundant, rather than decide whether to do it or not. This is pretty much the complete opposite of the situation 25 years ago.
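To illustrate that last point with a made-up example: the two loops below compute the same thing, but the second always pays for the multiply-add and picks the result with merge() instead of branching. On current hardware the branch-free form often vectorizes and runs at least as fast, despite doing “wasted” FLOPs:

    program branch_vs_flop
      implicit none
      integer, parameter :: n = 1000000
      integer :: i
      real(8), allocatable :: x(:), y(:)
      real(8) :: a, b, t0, t1
      a = 1.5d0
      b = 0.25d0
      allocate(x(n), y(n))
      call random_number(x)
      x = x - 0.5d0                 ! mix of positive and negative values
      ! Branchy: decide whether the multiply-add is needed.
      call cpu_time(t0)
      do i = 1, n
         if (x(i) > 0.0d0) then
            y(i) = a*x(i) + b
         else
            y(i) = b
         end if
      end do
      call cpu_time(t1)
      print *, 'branchy:     ', t1 - t0, sum(y)
      ! Branch-free: always do the FLOPs, select the result afterwards.
      call cpu_time(t0)
      do i = 1, n
         y(i) = merge(a*x(i) + b, b, x(i) > 0.0d0)
      end do
      call cpu_time(t1)
      print *, 'branch-free: ', t1 - t0, sum(y)
    end program branch_vs_flop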

Unfortunately, the practice of blind FLOP-counting as an absolute metric of algorithmic merit has persisted well past its sell-by date. Modern scientific computing is much more about memory bandwidth management — trying to keep the execution units that do the FLOPs constantly fed with data — than it is about reducing the number of FLOPs. The reference to LINPACK (which was essentially obsoleted by LAPACK 20 years ago) leads me to suspect that your employer is probably of a very old school that hasn't internalized the fact that establishing performance expectations is not just a matter of FLOP counting any more. A solver that does twice as many FLOPs could still be twenty times faster than another if it has a much more favorable memory access pattern and data layout.
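A made-up but concrete example of that: the two loop nests below do exactly the same floating-point work, but the first walks the array in Fortran's column-major storage order while the second strides across it; on large arrays the first is typically several times faster.

    program access_pattern
      implicit none
      integer, parameter :: n = 4000
      integer :: i, j
      real(8), allocatable :: a(:,:)
      real(8) :: t0, t1, s
      allocate(a(n, n))
      call random_number(a)
      ! Column-major friendly: the innermost index varies fastest,
      ! so memory is read contiguously.
      s = 0.0d0
      call cpu_time(t0)
      do j = 1, n
         do i = 1, n
            s = s + 2.0d0*a(i, j)
         end do
      end do
      call cpu_time(t1)
      print *, 'contiguous:', t1 - t0, s
      ! Exactly the same FLOPs, but the innermost loop now strides
      ! across columns and thrashes the cache.
      s = 0.0d0
      call cpu_time(t0)
      do i = 1, n
         do j = 1, n
            s = s + 2.0d0*a(i, j)
         end do
      end do
      call cpu_time(t1)
      print *, 'strided:   ', t1 - t0, s
    end program access_pattern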

The upshot of all this is that performance assessment of computationally intensive software has become a lot more complex than it used to be. The fact that FLOPs have become cheap is hugely complicated by the massive variability in the costs of memory operations and branches. When it comes to assessing algorithms, simple FLOP counting simply doesn't inform overall performance expectations any more.

Perhaps a better way of thinking about performance expectations and assessment is provided by the so-called roofline model, which is far from perfect, but has the advantage of making you think about the trade-off between floating-point and memory bandwidth issues at the same time, providing a more informative and insightful “2D picture” that enables the comparison of performance measurements and performance expectations.

It's worth a look.
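As a back-of-the-envelope sketch of the roofline idea (the peak FLOP rate and memory bandwidth below are invented placeholders; substitute measured numbers for your own machine): attainable performance is capped by min(peak FLOP/s, arithmetic intensity × bandwidth), where arithmetic intensity is FLOPs per byte of data moved.

    program roofline_sketch
      implicit none
      ! Invented machine parameters -- replace with measured values for your system.
      real(8), parameter :: peak_gflops   = 500.0d0   ! peak floating-point rate, GFLOP/s
      real(8), parameter :: bandwidth_gbs = 50.0d0    ! sustained memory bandwidth, GB/s
      real(8) :: flops, bytes, intensity
      ! Example kernel: y(i) = a*x(i) + y(i), i.e. a daxpy-like update.
      ! 2 FLOPs per element; reads x and y, writes y: 3 * 8 bytes per element.
      flops = 2.0d0
      bytes = 24.0d0
      intensity = flops / bytes
      print *, 'arithmetic intensity (FLOP/byte):', intensity
      print *, 'attainable GFLOP/s (roofline cap):', &
               min(peak_gflops, intensity*bandwidth_gbs)
    end program roofline_sketch

With those assumed numbers the cap works out to about 4 GFLOP/s for this kernel, far below the FLOP peak, which is the usual story for memory-bound solvers.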
