Code sample that shows casting to uint is more efficient than a range check

Posted 2020-02-13 17:36

So I am looking at this question, and the general consensus is that the uint-cast version is more efficient than a range check against 0. Since the code is also in MS's implementation of List, I assume it is a real optimization. However, I have failed to produce a code sample where the uint version performs better. I have tried different tests, but either something is missing or some other part of my code is dwarfing the time for the checks. My last attempt looks like this:

using System;
using System.Diagnostics;

class TestType
{
    public TestType(int size)
    {
        MaxSize = size;
        Random rand = new Random(100);
        for (int i = 0; i < MaxIterations; i++)
        {
            indexes[i] = rand.Next(0, MaxSize);
        }
    }

    public const int MaxIterations = 10000000;
    private int MaxSize;
    private int[] indexes = new int[MaxIterations];

    public void Test()
    {
        var timer = new Stopwatch();

        int inRange = 0;
        int outOfRange = 0;

        timer.Start();
        for (int i = 0; i < MaxIterations; i++)
        {
            int x = indexes[i];
            if (x < 0 || x > MaxSize)
            {
                throw new Exception();
            }

            inRange += indexes[x];
        }
        timer.Stop();

        Console.WriteLine("Comparision 1: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");

        inRange = 0;
        outOfRange = 0;

        timer.Reset();
        timer.Start();

        for (int i = 0; i < MaxIterations; i++)
        {
            int x = indexes[i];
            if ((uint)x > (uint)MaxSize)
            {
                throw new Exception();
            }

            inRange += indexes[x];
        }

        timer.Stop();

        Console.WriteLine("Comparision 2: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");

    }
}

class Program
{
    static void Main()
    {
        TestType t = new TestType(TestType.MaxIterations);
        t.Test();
        TestType t2 = new TestType(TestType.MaxIterations);
        t2.Test();
        TestType t3 = new TestType(TestType.MaxIterations);
        t3.Test();
    }
}

The code is a bit of a mess because I tried many things to make uint check perform faster like moving the compared variable into a field of a class, generating random index access and so on but in every case the result seems to be the same for both versions. So is this change applicable on modern x86 processors and can someone demonstrate it somehow?

Note that I am not asking for someone to fix my sample or explain what is wrong with it. I just want to see the case where the optimization does work.

Tags: c# clr jit
3 Answers
来,给爷笑一个 (#2) · 2020-02-13 18:11

You aren't comparing like with like.

The code you were talking about not only saved one branch by using the optimisation, but also 4 bytes of CIL in a small method.

In a small method, 4 bytes can be the difference between being inlined and not being inlined.

And if the method calling that method is also written to be small, then that can mean two (or more) method calls are jitted as one piece of inline code.

And because that inlined code is then available for analysis by the jitter, some of it may be optimised further still.

The real difference is not between index < 0 || index >= _size and (uint)index >= (uint)_size, but between code that makes repeated efforts to minimise the method body size and code that does not. Look, for example, at how another method is used to throw the exception if necessary, further shaving a couple of bytes of CIL off the method.

(And no, that's not to say that I think all methods should be written like that, but there certainly can be performance differences when they are.)
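To make the pattern concrete, here is a minimal sketch of that style (illustrative names, not the actual List<T> source): the unsigned compare covers both the negative and the too-large case in a single branch, and the throw lives in a separate helper so the indexer body stays small enough for the JIT's inlining budget.

using System;

class SmallList<T>
{
    private T[] _items = new T[16];
    private int _size;

    public void Add(T item)
    {
        if (_size == _items.Length)
            Array.Resize(ref _items, _size * 2);
        _items[_size++] = item;
    }

    public T this[int index]
    {
        get
        {
            // One unsigned compare handles both index < 0 and index >= _size.
            if ((uint)index >= (uint)_size)
                ThrowIndexOutOfRange();   // a call instead of an inline throw keeps the CIL small

            return _items[index];
        }
    }

    private static void ThrowIndexOutOfRange() =>
        throw new ArgumentOutOfRangeException("index");
}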

smile是对你的礼貌 (#3) · 2020-02-13 18:25
       if (x < 0 || x > MaxSize)

The comparison is performed by the CMP processor instruction (Compare). You'll want to take a look at Agner Fog's instruction tables document (PDF); it lists the cost of instructions. Find your processor in the list, then locate the CMP instruction.

For mine, Haswell, CMP has a latency of 1 cycle and a reciprocal throughput of 0.25 cycles.

A fractional cost like that needs an explanation: Haswell has 4 integer execution units that can execute instructions at the same time. When a program contains enough integer operations, like CMP, without interdependencies, they can all execute simultaneously, in effect making the program 4 times faster. You don't always manage to keep all 4 of them busy at the same time with your code; that is actually pretty rare. But you do keep 2 of them busy in this case. In other words, two comparisons take just as long as a single one: 1 cycle.

There are other factors at play that make the execution times identical. One thing that helps is that the processor can predict the branch very well, so it can speculatively execute x > MaxSize in spite of the short-circuit evaluation. And it will in fact end up using the result, since the branch is never taken.

And the true bottleneck in this code is the array indexing; accessing memory is one of the slowest things a processor does. So the "fast" version of the code isn't faster, even though it gives the processor more opportunity to execute instructions concurrently. That isn't much of an opportunity today anyway: a processor has too many execution units to keep busy, which is incidentally the same slack that makes HyperThreading work. In both cases the processor bogs down at the same rate.
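One rough way to see that effect would be to strip the dependent indexes[x] load out of the question's timing loop, so the loop body is essentially just the sequential read and the range check. A sketch along those lines (it reuses the question's fields, and I'm not claiming any particular numbers):

        // Variant of the question's timing loop with the random indirect load removed,
        // so there is no cache-missing memory access for the compare to hide behind.
        int hits = 0;
        for (int i = 0; i < MaxIterations; i++)
        {
            int x = indexes[i];
            if ((uint)x > (uint)MaxSize)   // the single unsigned compare under test
            {
                throw new Exception();
            }
            hits++;
        }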

On my machine, I have to write code that keeps more than 4 integer engines busy to make it slower. Silly code like this:

        if (x < 0 || x > MaxSize || x > 10000000 || x > 20000000 || x > 3000000) {
            outOfRange++;
        }
        else {
            inRange++;
        }

Using 5 compares, now I can see a difference: 61 vs 47 msec. Or in other words, this is a way to count the number of integer engines in the processor. Hehe :)

So this is a micro-optimization that probably used to pay off a decade ago. It doesn't anymore. Scratch it off your list of things to worry about :)

等我变得足够好 (#4) · 2020-02-13 18:33

I would suggest writing code that does not throw an exception when the index is out of range. Exceptions are incredibly expensive and can completely throw off your benchmark results.

The code below runs a timed benchmark, averaging 1,000 iterations of a loop over 1,000,000 values.

using System;
using System.Diagnostics;

namespace BenchTest
{
    class Program
    {
        const int LoopCount = 1000000;
        const int AverageCount = 1000;

        static void Main(string[] args)
        {
            Console.WriteLine("Starting Benchmark");
            RunTest();
            Console.WriteLine("Finished Benchmark");

            Console.Write("Press any key to exit...");
            Console.ReadKey();
        }

        static void RunTest()
        {
            int cursorRow = Console.CursorTop; int cursorCol = Console.CursorLeft;

            long totalTime1 = 0; long totalTime2 = 0;
            long invalidOperationCount1 = 0; long invalidOperationCount2 = 0;

            for (int i = 0; i < AverageCount; i++)
            {
                Console.SetCursorPosition(cursorCol, cursorRow);
                Console.WriteLine("Running iteration: {0}/{1}", i + 1, AverageCount);

                int[] indexArgs = RandomFill(LoopCount, int.MinValue, int.MaxValue);
                int[] sizeArgs = RandomFill(LoopCount, 0, int.MaxValue);

                totalTime1 += RunLoop(TestMethod1, indexArgs, sizeArgs, ref invalidOperationCount1);
                totalTime2 += RunLoop(TestMethod2, indexArgs, sizeArgs, ref invalidOperationCount2);
            }

            PrintResult("Test 1", TimeSpan.FromTicks(totalTime1 / AverageCount), invalidOperationCount1);
            PrintResult("Test 2", TimeSpan.FromTicks(totalTime2 / AverageCount), invalidOperationCount2);
        }

        static void PrintResult(string testName, TimeSpan averageTime, long invalidOperationCount)
        {
            Console.WriteLine(testName);
            Console.WriteLine("    Average Time: {0}", averageTime);
            Console.WriteLine("    Invalid Operations: {0} ({1})", invalidOperationCount, (invalidOperationCount / (double)(AverageCount * LoopCount)).ToString("P3"));
        }

        static long RunLoop(Func<int, int, int> testMethod, int[] indexArgs, int[] sizeArgs, ref long invalidOperationCount)
        {
            Stopwatch sw = new Stopwatch();

            Console.Write("Running {0} sub-iterations", LoopCount);
            sw.Start();
            long startTickCount = sw.ElapsedTicks;

            for (int i = 0; i < LoopCount; i++)
            {
                invalidOperationCount += testMethod(indexArgs[i], sizeArgs[i]);
            }

            sw.Stop();
            long stopTickCount = sw.ElapsedTicks;

            long elapsedTickCount = stopTickCount - startTickCount;
            Console.WriteLine(" - Time Taken: {0}", new TimeSpan(elapsedTickCount));
            return elapsedTickCount;
        }

        static int[] RandomFill(int size, int minValue, int maxValue)
        {
            int[] randomArray = new int[size];

            Random rng = new Random();

            for (int i = 0; i < size; i++)
            {
                randomArray[i] = rng.Next(minValue, maxValue);
            }

            return randomArray;
        }

        static int TestMethod1(int index, int size)
        {
            return (index < 0 || index >= size) ? 1 : 0;
        }

        static int TestMethod2(int index, int size)
        {
            return ((uint)(index) >= (uint)(size)) ? 1 : 0;
        }
    }
}