How to effectively apply bitwise operation to (large) packed bit vectors?

Posted 2019-08-12 23:03

Question:

I want to implement

void bitwise_and(
    char*       __restrict__  result,
    const char* __restrict__  lhs,
    const char* __restrict__  rhs,
    size_t                    length);

or maybe a bitwise_or(), bitwise_xor() or any other bitwise operation. Obviously it's not about the algorithm, just the implementation details: alignment, loading the largest possible element from memory, cache-awareness, using SIMD instructions etc.

I'm sure this has (more than one) fast existing implementation, but I would guess most library implementations would require some fancy container, e.g. std::bitset or boost::dynamic_bitset - but I don't want to spend the time constructing one of those.

So do I... copy-paste from an existing library? Find a library which can 'wrap' a raw packed-bits array in memory with a nice object? Or roll my own implementation anyway?

Notes:

  • I'm mostly interested in C++ code, but I certainly don't mind a plain C approach.
  • Obviously, making copies of the input arrays is out of the question - that would probably nearly double the execution time.
  • I intentionally did not template the bitwise operator, in case there's some specific optimization for OR, or for AND etc.
  • Bonus points for discussing operations on multiple vectors at once, e.g. V_out = V_1 bitwise-and V_2 bitwise-and V_3 etc.
  • I noted this article comparing library implementations, but it's from 5 years ago. I can't ask which library to use since that would violate SO policy I guess...
  • If it helps you any, assume it's uint64_ts rather than chars (that doesn't really matter - if the char array is unaligned we can just treat the leading and trailing chars separately; see the sketch below).
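
For concreteness, this is roughly what I mean by treating the stray bytes separately - a minimal sketch, with the bulk handled as uint64_t via memcpy (which stays alignment-safe and compiles to plain 8-byte moves on mainstream compilers) and the leftovers handled one char at a time. The name bitwise_and_u64 is just illustrative:

#include <cstring>
#include <cstdint>
#include <cstddef>

void bitwise_and_u64(
    char*       result,
    const char* lhs,
    const char* rhs,
    size_t      length)
{
    size_t i = 0;

    // bulk: 8 bytes at a time; memcpy avoids any alignment assumptions
    for (; i + 8 <= length; i += 8)
    {
        uint64_t a, b;
        std::memcpy(&a, lhs + i, 8);
        std::memcpy(&b, rhs + i, 8);
        uint64_t r = a & b;
        std::memcpy(result + i, &r, 8);
    }

    // tail: at most 7 stray chars
    for (; i < length; ++i)
        result[i] = lhs[i] & rhs[i];
}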

Answer 1:

This answer is going to assume you want the fastest possible way and are happy to use platform-specific things. Your optimising compiler may be able to produce similar code to the below from normal C, but in my experience across a few compilers, something as specific as this is still best hand-written.

Obviously, as with all optimisation tasks, never assume anything is better/worse - measure, measure, measure.

If you could lock down your architecture to x86 with at least SSE2 (all the intrinsics below are SSE2) you would do:

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstddef>

void bitwise_and(
    char*       result,
    const char* lhs,
    const char* rhs,
    size_t      length)
{
    while (length >= 16)
    {
        // load 16 bytes into SSE registers (unaligned loads)
        auto lhsReg = _mm_loadu_si128((const __m128i*)lhs);
        auto rhsReg = _mm_loadu_si128((const __m128i*)rhs);

        // do the op
        auto res = _mm_and_si128(lhsReg, rhsReg);

        // save off again
        _mm_storeu_si128((__m128i*)result, res);

        // book keeping
        length -= 16;
        result += 16;
        lhs += 16;
        rhs += 16;
    }

    // Do the tail end. Assuming the array is large, the following
    // loop runs at most 15 times, so I'm not bothering to optimise it.
    // You could do it in 64-bit, then 32-bit, then 16-bit, then char
    // chunks if you wanted...
    while (length)
    {
        *result = *lhs & *rhs;
        length -= 1;
        result += 1;
        lhs += 1;
        rhs += 1;
    }
}

This compiles to ~10 asm instructions per 16 bytes (plus change for the leftover bytes and a little loop overhead).

The great thing about using intrinsics like this (over hand-rolled asm) is that the compiler is still free to do additional optimisations (such as loop unrolling) on top of what you write. It also handles register allocation for you.

If you could guarantee aligned data you could save an asm instruction (use _mm_load_si128 instead, and the compiler will be clever enough to avoid a second load and use the memory directly as an operand to the 'pand').
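
For illustration, the loop body of the aligned variant would look like this - a sketch assuming all three pointers are 16-byte aligned (_mm_store_si128 being the matching aligned store):

// aligned loads let the compiler fold one of them into pand's
// memory operand; all three pointers must be 16-byte aligned here
auto lhsReg = _mm_load_si128((const __m128i*)lhs);
auto rhsReg = _mm_load_si128((const __m128i*)rhs);
_mm_store_si128((__m128i*)result, _mm_and_si128(lhsReg, rhsReg));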

If you could guarantee AVX2+, then you could use the 256-bit version and handle 32 bytes with the same ~10 asm instructions.
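
A sketch of that 256-bit variant, assuming AVX2 is available (same structure as above; the function name is just illustrative):

#include <immintrin.h>  // AVX2 intrinsics
#include <cstddef>

void bitwise_and_avx2(
    char*       result,
    const char* lhs,
    const char* rhs,
    size_t      length)
{
    while (length >= 32)
    {
        // 32 bytes per iteration instead of 16
        auto a = _mm256_loadu_si256((const __m256i*)lhs);
        auto b = _mm256_loadu_si256((const __m256i*)rhs);
        _mm256_storeu_si256((__m256i*)result, _mm256_and_si256(a, b));

        length -= 32;
        result += 32;
        lhs += 32;
        rhs += 32;
    }

    // tail, as in the SSE version
    while (length)
    {
        *result++ = *lhs++ & *rhs++;
        --length;
    }
}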

On ARM there are similar NEON instructions.
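
For example, a hypothetical NEON equivalent (vld1q_u8/vandq_u8/vst1q_u8 being the analogous load/AND/store intrinsics):

#include <arm_neon.h>
#include <cstddef>

void bitwise_and_neon(
    char*       result,
    const char* lhs,
    const char* rhs,
    size_t      length)
{
    while (length >= 16)
    {
        // 16 bytes per iteration, unaligned access is fine on NEON
        uint8x16_t a = vld1q_u8((const uint8_t*)lhs);
        uint8x16_t b = vld1q_u8((const uint8_t*)rhs);
        vst1q_u8((uint8_t*)result, vandq_u8(a, b));

        length -= 16;
        result += 16;
        lhs += 16;
        rhs += 16;
    }

    while (length)
    {
        *result++ = *lhs++ & *rhs++;
        --length;
    }
}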

If you wanted to do multiple ops, just add the relevant intrinsic in the middle and it'll add one asm instruction per op per 16 bytes.
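
For example, for the question's V_out = V_1 & V_2 & V_3 case, only the middle of the 16-byte loop changes - a sketch, where `mid` is a hypothetical third input pointer that gets advanced alongside the others:

// inside the while (length >= 16) loop:
auto lhsReg = _mm_loadu_si128((const __m128i*)lhs);
auto midReg = _mm_loadu_si128((const __m128i*)mid);
auto rhsReg = _mm_loadu_si128((const __m128i*)rhs);

// one extra pand (plus its load) per additional input vector
auto res = _mm_and_si128(_mm_and_si128(lhsReg, midReg), rhsReg);

_mm_storeu_si128((__m128i*)result, res);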

I'm pretty sure that on any decent processor you don't need any additional cache control.
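
If you did want to experiment anyway, the usual knob for arrays much larger than the cache is a non-temporal store, which writes around the cache instead of through it. A sketch (note _mm_stream_si128 requires a 16-byte-aligned destination, and measure before assuming it helps):

// inside the loop, assuming `result` is 16-byte aligned:
_mm_stream_si128((__m128i*)result, res);  // store bypassing the cache

// once, after the loop:
_mm_sfence();  // make the streaming stores visible in order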



Answer 2:

Don't do it this way. The individual operations will look great - sleek asm, nice performance - but a composition of them will be terrible. You cannot make this abstraction, nice as it looks. The arithmetic intensity of those kernels is almost the worst possible (the only thing worse is doing no arithmetic at all, such as a straight-up copy), and composing them at a high level will retain that awful property. In a sequence of operations each using the result of the previous one, the results are written to memory and read back again much later (in the next kernel), even though the high-level flow could be transposed so that the result the "next operation" needs is right there in a register. Also, if the same argument appears twice in an expression tree (and not as both operands to one operation), it will be streamed in twice, instead of reusing the data for two operations.
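
To make that concrete, here's a hypothetical composition for V_out = (V_1 & V_2) | V_3 built from per-operation kernels (bitwise_or is assumed to have the same shape as the question's bitwise_and). The intermediate `tmp` is written out to memory by the first pass and read straight back by the second:

// two full sweeps over the data; `tmp` is `length` bytes of scratch
void and_then_or_composed(char* result, char* tmp,
                          const char* v1, const char* v2,
                          const char* v3, size_t length)
{
    bitwise_and(tmp,    v1,  v2, length);  // pass 1: AND result goes out to memory
    bitwise_or (result, tmp, v3, length);  // pass 2: immediately re-reads tmp
}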

It doesn't have that nice warm fuzzy feeling of "look at all this lovely abstraction" about it, but what you should do is work out, at a high level, how you're combining your vectors, and then try to chop that into pieces that make sense from a performance perspective. In some cases that may mean writing big ugly messy loops that will make people get an extra coffee before diving in; that's just too bad then. If you want performance, you often have to sacrifice something else. Usually it's not so bad - it probably just means you have a loop containing an expression of intrinsics, instead of an expression of vector-operations that each individually contain a loop.
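
As a sketch of what that looks like in practice, here is the same V_out = (V_1 & V_2) | V_3 fused into a single pass, with the intermediate kept in a register (names hypothetical):

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstddef>

// one sweep over the data: the AND result never touches memory
void and_then_or_fused(char* result,
                       const char* v1, const char* v2,
                       const char* v3, size_t length)
{
    while (length >= 16)
    {
        auto a = _mm_loadu_si128((const __m128i*)v1);
        auto b = _mm_loadu_si128((const __m128i*)v2);
        auto c = _mm_loadu_si128((const __m128i*)v3);

        // (a & b) | c computed entirely in registers
        _mm_storeu_si128((__m128i*)result,
                         _mm_or_si128(_mm_and_si128(a, b), c));

        length -= 16;
        result += 16;
        v1 += 16;
        v2 += 16;
        v3 += 16;
    }

    // tail bytes
    while (length)
    {
        *result++ = (*v1++ & *v2++) | *v3++;
        --length;
    }
}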