What is the effect of ordering if…else if statements by probability?

Posted 2019-01-16 06:19

Specifically, if I have a series of if...else if statements, and I somehow know beforehand the relative probability that each statement will evaluate to true, how much difference in execution time does it make to sort them in order of probability? For example, should I prefer this:

if (highly_likely)
  //do something
else if (somewhat_likely)
  //do something
else if (unlikely)
  //do something

to this?:

if (unlikely)
  //do something
else if (somewhat_likely)
  //do something
else if (highly_likely)
  //do something

It seems obvious that the sorted version would be faster; however, for readability or because of side effects, we might want to order them non-optimally. It's also hard to tell how well the CPU will do with branch prediction until you actually run the code.

So, in the course of experimenting with this, I ended up answering my own question for a specific case, however I'd like to hear other opinions/insights as well.

Important: this question assumes that the if statements can be arbitrarily reordered without having any other effects on the behavior of the program. In my answer, the three conditional tests are mutually exclusive and produce no side effects. Certainly, if the statements must be evaluated in a certain order to achieve some desired behavior, then the issue of efficiency is moot.

10 Answers
趁早两清 · 2019-01-16 06:36

No, you should not, unless you are really sure that the target system is affected. By default, go by readability.

I highly doubt your results. I've modified your example a bit so that reversing the execution order is easier. Ideone rather consistently shows that reverse order is faster, though not by much; on certain runs even this occasionally flipped. I'd say the results are inconclusive. Coliru reports no real difference either. I can check the Exynos 5422 CPU on my Odroid XU4 later on.

The thing is that modern CPUs have branch predictors. A great deal of logic is dedicated to prefetching both data and instructions, and modern x86 CPUs are rather smart when it comes to this. Slimmer architectures like some ARM cores or GPUs might be more sensitive to this. But it really depends heavily on both the compiler and the target system.

I would say that branch-ordering optimization is pretty fragile and ephemeral. Do it only as a final fine-tuning step.

Code:

#include <chrono>
#include <iostream>
#include <random>
#include <algorithm>
#include <iterator>
#include <functional>
#include <vector>

using namespace std;

int main()
{
    //Generate a vector of random integers from 1 to 100
    random_device rnd_device;
    mt19937 rnd_engine(rnd_device());
    uniform_int_distribution<int> rnd_dist(1, 100);
    auto gen = std::bind(rnd_dist, rnd_engine);
    vector<int> rand_vec(5000);
    generate(begin(rand_vec), end(rand_vec), gen);
    volatile int nLow, nMid, nHigh;

    //Count the number of values in each of three different ranges
    //Run the test a few times
    for (int n = 0; n != 10; ++n) {

        //First, test with the conditional statements sorted in reverse order of likelihood
        {
          nLow = nMid = nHigh = 0;
          auto start = chrono::high_resolution_clock::now();
          for (int& i : rand_vec) {
              if (i >= 95) ++nHigh;               //Least likely branch
              else if (i < 20) ++nLow;
              else if (i >= 20 && i < 95) ++nMid; //Most likely branch
          }
          auto end = chrono::high_resolution_clock::now();
          cout << "Reverse-sorted: \t" << chrono::duration_cast<chrono::nanoseconds>(end-start).count() << "ns" << endl;
        }

        {
          //Sort the conditional statements in order of likelihood
          nLow = nMid = nHigh = 0;
          auto start = chrono::high_resolution_clock::now();
          for (int& i : rand_vec) {
              if (i >= 20 && i < 95) ++nMid;  //Most likely branch
              else if (i < 20) ++nLow;
              else if (i >= 95) ++nHigh;      //Least likely branch
          }
          auto end = chrono::high_resolution_clock::now();
          cout << "Sorted:\t\t\t" << chrono::duration_cast<chrono::nanoseconds>(end-start).count() << "ns" << endl;
        }
        cout << endl;
    }
}
兄弟一词,经得起流年. · 2019-01-16 06:38

It also depends on your compiler and the platform you're compiling for.

In theory, the most likely condition should make control jump as little as possible.

Typically the most likely condition should be first:

if (most_likely) {
     // most likely instructions
} else …

Most popular assembly languages are based on conditional branches that jump when the condition is true. That C code will likely be translated into pseudo-assembly like this:

jump to ELSE if not(most_likely)
// most likely instructions
jump to end
ELSE:
…

This is because jumps make the CPU flush its execution pipeline and stall, since the program counter has changed (on architectures that support pipelining, which are very common). After that it's up to the compiler, which may or may not apply sophisticated optimizations to place the statistically most probable condition so that control makes fewer jumps.

成全新的幸福 · 2019-01-16 06:39

The way I usually see this solved for high-performance code is by keeping the most readable order but providing hints to the compiler. Here is one example from the Linux kernel:

unsigned long res = n;  /* declared and initialized earlier in the same function */
if (likely(access_ok(VERIFY_READ, from, n))) {
    kasan_check_write(to, n);
    res = raw_copy_from_user(to, from, n);
}
if (unlikely(res))
    memset(to + (n - res), 0, res);

Here the assumption is that the access check will pass and that no error is returned in res. Trying to reorder either of these if clauses would just confuse the code, but the likely() and unlikely() macros actually help readability by pointing out what is the normal case and what is the exception.

The Linux implementation of those macros uses GCC-specific features. Clang and the Intel C compiler appear to support the same syntax, but MSVC has no such feature.
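
For reference, those macros are built on GCC's __builtin_expect; a minimal sketch of how they are commonly defined (the kernel's real definitions add further indirection) looks like this:

#define likely(x)   __builtin_expect(!!(x), 1)   /* hint: x is almost always true  */
#define unlikely(x) __builtin_expect(!!(x), 0)   /* hint: x is almost always false */

Note that __builtin_expect only influences the generated code layout (which path falls through and which one jumps); it does not change the program's semantics.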

Root(大扎) · 2019-01-16 06:40

Just my 5 cents. It seems the effect of ordering if statements should depend on:

  1. Probability of each if statement.

  2. Number of iterations, so the branch predictor could kick in.

  3. Likely/unlikely compiler hints, i.e. code layout.

To explore those factors, I benchmarked the following functions:

ordered_ifs()

for (i = 0; i < data_sz * 1024; i++) {
    if (data[i] < check_point) // highly likely
        s += 3;
    else if (data[i] > check_point) // somewhat likely
        s += 2;
    else if (data[i] == check_point) // very unlikely
        s += 1;
}

reversed_ifs()

for (i = 0; i < data_sz * 1024; i++) {
    if (data[i] == check_point) // very unlikely
        s += 1;
    else if (data[i] > check_point) // somewhat likely
        s += 2;
    else if (data[i] < check_point) // highly likely
        s += 3;
}

ordered_ifs_with_hints()

for (i = 0; i < data_sz * 1024; i++) {
    if (likely(data[i] < check_point)) // highly likely
        s += 3;
    else if (data[i] > check_point) // somewhat likely
        s += 2;
    else if (unlikely(data[i] == check_point)) // very unlikely
        s += 1;
}

reversed_ifs_with_hints()

for (i = 0; i < data_sz * 1024; i++) {
    if (unlikely(data[i] == check_point)) // very unlikely
        s += 1;
    else if (data[i] > check_point) // somewhat likely
        s += 2;
    else if (likely(data[i] < check_point)) // highly likely
        s += 3;
}

data

The data array contains random numbers from 0 to 99:

const int RANGE_MAX = 100;
uint8_t data[DATA_MAX * 1024];

static void data_init(int data_sz)
{
    int i;
    srand(0);
    for (i = 0; i < data_sz * 1024; i++)
        data[i] = rand() % RANGE_MAX;
}
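
The result table below is in Google Benchmark's output format; assuming that framework was used, a minimal harness for one of the functions above might look like the following sketch (the wiring of check_point and data_sz through Args() is illustrative):

#include <benchmark/benchmark.h>
#include <cstdint>
#include <cstdlib>

static const int RANGE_MAX = 100;
static const int DATA_MAX = 8;
static uint8_t data[DATA_MAX * 1024];

static void data_init(int data_sz)
{
    srand(0);
    for (int i = 0; i < data_sz * 1024; i++)
        data[i] = rand() % RANGE_MAX;
}

// First argument: check_point, second argument: data_sz (in KiB).
static void ordered_ifs(benchmark::State& state)
{
    const int check_point = static_cast<int>(state.range(0));
    const int data_sz = static_cast<int>(state.range(1));
    data_init(data_sz);
    for (auto _ : state) {
        int s = 0;
        for (int i = 0; i < data_sz * 1024; i++) {
            if (data[i] < check_point)       // highly likely
                s += 3;
            else if (data[i] > check_point)  // somewhat likely
                s += 2;
            else if (data[i] == check_point) // very unlikely
                s += 1;
        }
        benchmark::DoNotOptimize(s);         // keep the loop from being optimized away
    }
}
BENCHMARK(ordered_ifs)->Args({50, 4})->Args({75, 4})->Args({100, 4})
                      ->Args({50, 8})->Args({75, 8})->Args({100, 8});

BENCHMARK_MAIN();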

The Results

The following results are for an Intel i5 @ 3.2 GHz and g++ 6.3.0. The first argument is check_point (i.e. the probability in % of the highly likely if statement), and the second argument is data_sz (the data size in KiB, i.e. data_sz * 1024 iterations).

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/50/4                    4660 ns       4658 ns     150948
ordered_ifs/50/8                   25636 ns      25635 ns      27852
ordered_ifs/75/4                    4326 ns       4325 ns     162613
ordered_ifs/75/8                   18242 ns      18242 ns      37931
ordered_ifs/100/4                   1673 ns       1673 ns     417073
ordered_ifs/100/8                   3381 ns       3381 ns     207612
reversed_ifs/50/4                   5342 ns       5341 ns     126800
reversed_ifs/50/8                  26050 ns      26050 ns      26894
reversed_ifs/75/4                   3616 ns       3616 ns     193130
reversed_ifs/75/8                  15697 ns      15696 ns      44618
reversed_ifs/100/4                  3738 ns       3738 ns     188087
reversed_ifs/100/8                  7476 ns       7476 ns      93752
ordered_ifs_with_hints/50/4         5551 ns       5551 ns     125160
ordered_ifs_with_hints/50/8        23191 ns      23190 ns      30028
ordered_ifs_with_hints/75/4         3165 ns       3165 ns     218492
ordered_ifs_with_hints/75/8        13785 ns      13785 ns      50574
ordered_ifs_with_hints/100/4        1575 ns       1575 ns     437687
ordered_ifs_with_hints/100/8        3130 ns       3130 ns     221205
reversed_ifs_with_hints/50/4        6573 ns       6572 ns     105629
reversed_ifs_with_hints/50/8       27351 ns      27351 ns      25568
reversed_ifs_with_hints/75/4        3537 ns       3537 ns     197470
reversed_ifs_with_hints/75/8       16130 ns      16130 ns      43279
reversed_ifs_with_hints/100/4       3737 ns       3737 ns     187583
reversed_ifs_with_hints/100/8       7446 ns       7446 ns      93782

Analysis

1. The Ordering Does Matter

For 4K iterations and an (almost) 100% probability for the highly likely statement, the difference is a huge 223%:

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/100/4                   1673 ns       1673 ns     417073
reversed_ifs/100/4                  3738 ns       3738 ns     188087

For 4K iterations and a 50% probability for the highly likely statement, the difference is about 14%:

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/50/4                    4660 ns       4658 ns     150948
reversed_ifs/50/4                   5342 ns       5341 ns     126800

2. Number of Iterations Does Matter

The difference between 4K and 8K iterations for an (almost) 100% probability of the highly likely statement is about a factor of two (as expected):

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/100/4                   1673 ns       1673 ns     417073
ordered_ifs/100/8                   3381 ns       3381 ns     207612

But the difference between 4K and 8K iterations for a 50% probability of the highly likely statement is about 5.5 times:

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/50/4                    4660 ns       4658 ns     150948
ordered_ifs/50/8                   25636 ns      25635 ns      27852

Why is that? Because of branch-predictor misses. Here are the branch misses for each of the cases mentioned above:

ordered_ifs/100/4    0.01% of branch-misses
ordered_ifs/100/8    0.01% of branch-misses
ordered_ifs/50/4     3.18% of branch-misses
ordered_ifs/50/8     15.22% of branch-misses

So on my i5 the branch predictor fails spectacularly for not-so-likely branches and large data sets.
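
(These rates are the kind of numbers Linux perf reports; assuming that tool was used, they could be reproduced with something like the following, where the binary name is illustrative:)

perf stat -e branches,branch-misses ./ordered_ifs_bench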

3. Hints Help a Bit

For 4K iterations, the results are somewhat worse for the 50% probability and somewhat better for the close-to-100% probability:

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/50/4                    4660 ns       4658 ns     150948
ordered_ifs/100/4                   1673 ns       1673 ns     417073
ordered_ifs_with_hints/50/4         5551 ns       5551 ns     125160
ordered_ifs_with_hints/100/4        1575 ns       1575 ns     437687

But for 8K iterations the results are always a bit better:

---------------------------------------------------------------------
Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
ordered_ifs/50/8                   25636 ns      25635 ns      27852
ordered_ifs/100/8                   3381 ns       3381 ns     207612
ordered_ifs_with_hints/50/8        23191 ns      23190 ns      30028
ordered_ifs_with_hints/100/8        3130 ns       3130 ns     221205

So, the hints also help, but just a tiny bit.

The overall conclusion is: always benchmark the code, because the results may surprise you.

Hope that helps.

Evening l夕情丶 · 2019-01-16 06:42

I made up the following test to time the execution of two different if...else if blocks, one sorted in order of probability, the other sorted in reverse order:

#include <chrono>
#include <iostream>
#include <random>
#include <algorithm>
#include <iterator>
#include <functional>
#include <vector>

using namespace std;

int main()
{
    long long sortedTime = 0;
    long long reverseTime = 0;

    for (int n = 0; n != 500; ++n)
    {
        //Generate a vector of 5000 random integers from 1 to 100
        random_device rnd_device;
        mt19937 rnd_engine(rnd_device());
        uniform_int_distribution<int> rnd_dist(1, 100);
        auto gen = std::bind(rnd_dist, rnd_engine);
        vector<int> rand_vec(5000);
        generate(begin(rand_vec), end(rand_vec), gen);

        volatile int nLow, nMid, nHigh;
        chrono::time_point<chrono::high_resolution_clock> start, end;

        //Sort the conditional statements in order of increasing likelihood
        nLow = nMid = nHigh = 0;
        start = chrono::high_resolution_clock::now();
        for (int& i : rand_vec) {
            if (i >= 95) ++nHigh;               //Least likely branch
            else if (i < 20) ++nLow;
            else if (i >= 20 && i < 95) ++nMid; //Most likely branch
        }
        end = chrono::high_resolution_clock::now();
        reverseTime += chrono::duration_cast<chrono::nanoseconds>(end-start).count();

        //Sort the conditional statements in order of decreasing likelihood
        nLow = nMid = nHigh = 0;
        start = chrono::high_resolution_clock::now();
        for (int& i : rand_vec) {
            if (i >= 20 && i < 95) ++nMid;  //Most likely branch
            else if (i < 20) ++nLow;
            else if (i >= 95) ++nHigh;      //Least likely branch
        }
        end = chrono::high_resolution_clock::now();
        sortedTime += chrono::duration_cast<chrono::nanoseconds>(end-start).count();

    }

    cout << "Percentage difference: " << 100 * (double(reverseTime) - double(sortedTime)) / double(sortedTime) << endl << endl;
}

Using MSVC 2017 with /O2, the results show that the sorted version is consistently about 28% faster than the reverse-sorted version. Per luk32's comment, I also switched the order of the two tests, which makes a noticeable difference (22% vs. 28%). The code was run under Windows 7 on an Intel Xeon E5-2697 v2. This is, of course, very problem-specific and should not be interpreted as a conclusive answer.

何必那么认真 · 2019-01-16 06:43

I decided to rerun the test on my own machine using luk32's code. I had to change it because on my Windows setup the high-resolution clock appears to have only 1 ms resolution. I compiled with

mingw32-g++.exe -O3 -Wall -std=c++11 -fexceptions -g

and enlarged the vector:

vector<int> rand_vec(10000000);

GCC made the same transformation on both versions of the original code.

Note that only the first two conditions are tested, since the third must then always be true; GCC is a kind of Sherlock here.

Reverse

.L233:
        mov     DWORD PTR [rsp+104], 0
        mov     DWORD PTR [rsp+100], 0
        mov     DWORD PTR [rsp+96], 0
        call    std::chrono::_V2::system_clock::now()
        mov     rbp, rax
        mov     rax, QWORD PTR [rsp+8]
        jmp     .L219
.L293:
        mov     edx, DWORD PTR [rsp+104]
        add     edx, 1
        mov     DWORD PTR [rsp+104], edx
.L217:
        add     rax, 4
        cmp     r14, rax
        je      .L292
.L219:
        mov     edx, DWORD PTR [rax]
        cmp     edx, 94
        jg      .L293 // >= 95
        cmp     edx, 19
        jg      .L218 // >= 20
        mov     edx, DWORD PTR [rsp+96]
        add     rax, 4
        add     edx, 1 // < 20 Sherlock
        mov     DWORD PTR [rsp+96], edx
        cmp     r14, rax
        jne     .L219
.L292:
        call    std::chrono::_V2::system_clock::now()

.L218: // further down
        mov     edx, DWORD PTR [rsp+100]
        add     edx, 1
        mov     DWORD PTR [rsp+100], edx
        jmp     .L217

And sorted

        mov     DWORD PTR [rsp+104], 0
        mov     DWORD PTR [rsp+100], 0
        mov     DWORD PTR [rsp+96], 0
        call    std::chrono::_V2::system_clock::now()
        mov     rbp, rax
        mov     rax, QWORD PTR [rsp+8]
        jmp     .L226
.L296:
        mov     edx, DWORD PTR [rsp+100]
        add     edx, 1
        mov     DWORD PTR [rsp+100], edx
.L224:
        add     rax, 4
        cmp     r14, rax
        je      .L295
.L226:
        mov     edx, DWORD PTR [rax]
        lea     ecx, [rdx-20]
        cmp     ecx, 74
        jbe     .L296
        cmp     edx, 19
        jle     .L297
        mov     edx, DWORD PTR [rsp+104]
        add     rax, 4
        add     edx, 1
        mov     DWORD PTR [rsp+104], edx
        cmp     r14, rax
        jne     .L226
.L295:
        call    std::chrono::_V2::system_clock::now()

.L297: // further down
        mov     edx, DWORD PTR [rsp+96]
        add     edx, 1
        mov     DWORD PTR [rsp+96], edx
        jmp     .L224

So this doesn't tell us much, except that the last case doesn't need branch prediction.

Next I tried all 6 orderings of the ifs; the top two are the original reverse and sorted versions. Here high is >= 95, low is < 20, and mid is 20-94, with 10,000,000 iterations each.

high, low, mid: 43000000ns
mid, low, high: 46000000ns
high, mid, low: 45000000ns
low, mid, high: 44000000ns
mid, high, low: 46000000ns
low, high, mid: 44000000ns

high, low, mid: 44000000ns
mid, low, high: 47000000ns
high, mid, low: 44000000ns
low, mid, high: 45000000ns
mid, high, low: 46000000ns
low, high, mid: 45000000ns

high, low, mid: 43000000ns
mid, low, high: 47000000ns
high, mid, low: 44000000ns
low, mid, high: 45000000ns
mid, high, low: 46000000ns
low, high, mid: 44000000ns

high, low, mid: 42000000ns
mid, low, high: 46000000ns
high, mid, low: 46000000ns
low, mid, high: 45000000ns
mid, high, low: 46000000ns
low, high, mid: 43000000ns

high, low, mid: 43000000ns
mid, low, high: 47000000ns
high, mid, low: 44000000ns
low, mid, high: 44000000ns
mid, high, low: 46000000ns
low, high, mid: 44000000ns

high, low, mid: 43000000ns
mid, low, high: 48000000ns
high, mid, low: 44000000ns
low, mid, high: 44000000ns
mid, high, low: 45000000ns
low, high, mid: 45000000ns

high, low, mid: 43000000ns
mid, low, high: 47000000ns
high, mid, low: 45000000ns
low, mid, high: 45000000ns
mid, high, low: 46000000ns
low, high, mid: 44000000ns

high, low, mid: 43000000ns
mid, low, high: 47000000ns
high, mid, low: 45000000ns
low, mid, high: 45000000ns
mid, high, low: 46000000ns
low, high, mid: 44000000ns

high, low, mid: 43000000ns
mid, low, high: 46000000ns
high, mid, low: 45000000ns
low, mid, high: 45000000ns
mid, high, low: 45000000ns
low, high, mid: 44000000ns

high, low, mid: 42000000ns
mid, low, high: 46000000ns
high, mid, low: 44000000ns
low, mid, high: 45000000ns
mid, high, low: 45000000ns
low, high, mid: 44000000ns

1900020, 7498968, 601012

Process returned 0 (0x0)   execution time : 2.899 s
Press any key to continue.

So why is the order high, low, mid (marginally) faster?

Because the most unpredictable check is last and therefore never goes through the branch predictor.

          if (i >= 95) ++nHigh;               // most predictable with 94% taken
          else if (i < 20) ++nLow; // (94-19)/94% taken ~80% taken
          else if (i >= 20 && i < 95) ++nMid; // never tested, as it is just the remainder of the other cases

So the branches will be predicted taken, taken, and remainder, with

6%+(0.94*)20% mispredicts.

"Sorted"

          if (i >= 20 && i < 95) ++nMid;  // 75% not taken
          else if (i < 20) ++nLow;        // 19/25 76% not taken
          else if (i >= 95) ++nHigh;      //Least likely branch

The branches will be predicted not taken, not taken, and Sherlock.

25%+(0.75*)24% mispredicts

Giving an 18-23% difference (measured difference of ~9%), but we need to count cycles instead of misprediction percentages.

Let's assume a 17-cycle misprediction penalty on my Nehalem CPU, that each check takes 1 cycle to issue (4-5 instructions), and that the loop overhead takes one cycle as well. The data dependencies are the counters and the loop variables, but once the mispredictions are out of the way, they shouldn't influence the timing.

So for "reverse" we get these timings (this should be the formula used in Computer Architecture: A Quantitative Approach, IIRC):

mispredict*penalty + count + loop
0.06*17+1+1+             (= 3.02)
(probability)*(first check + mispredict*penalty + count + loop)
(0.19)*(1+0.20*17+1+1)+  (= 0.19*6.4 = 1.22)
(probability)*(first check + second check + count + loop)
(0.75)*(1+1+1+1)         (= 3)
= 7.24 cycles per iteration

and the same for "sorted"

0.25*17+1+1+ (=6.25)
(1-0.75)*(1+0.24*17+1+1)+ (=.25*7.08=1.77)
(1-0.75-0.19)*(1+1+1+1)  (= 0.06*4=0.24)
= 8.26

(8.26-7.24)/8.26 = 12.3% vs. the ~9% measured (reasonably close to the measured value!?!).

So what seemed obvious to the OP is not so obvious.

Other tests with more complicated code or more data dependencies will certainly give different results, so measure your own case.

Changing the order of the tests changed the results, but that could be because of different alignment of the loop start, which ideally should be 16-byte aligned on all newer Intel CPUs but isn't in this case.
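
(One way to test that hypothesis, assuming the GCC/MinGW setup used above, is to force loop alignment with GCC's -falign-loops option and compare the timings again, e.g.:

mingw32-g++.exe -O3 -falign-loops=16 -Wall -std=c++11 -fexceptions -g
)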
