As part of a compression algorithm, I am looking for the optimal way to achieve the following:
I have a simple bitmap in a uint8_t, for example 01010011.
What I want is a __m256i of the form: (0, maxint, 0, maxint, 0, 0, maxint, maxint).
One way to achieve this is by shuffling a vector of 8 x maxint into a vector of zeros. But that first requires me to expand my uint8_t into the right shuffle bitmap.
I am wondering if there is a better way?
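For clarity, here is the expansion I mean written as plain scalar code (the function name and the LSB-first element order are just for illustration):

#include <stdint.h>

// Expand each bit of x into a full 32-bit element:
// a set bit becomes 0xFFFFFFFF ("maxint"), a clear bit becomes 0.
static void expand_bitmap(uint8_t x, uint32_t out[8]) {
    for (int i = 0; i < 8; i++)
        out[i] = ((x >> i) & 1) ? 0xFFFFFFFFu : 0u;
}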
Here is a solution (PaulR improved it; see the end of my answer or his answer), based on a variation of the technique from the question fastest-way-to-broadcast-32-bits-in-32-bytes.
__m256i t1 = _mm256_set1_epi8(x);                            // broadcast the byte to all 32 lanes
__m256i t2 = _mm256_and_si256(t1, mask);                     // isolate one bit per 32-bit element
__m256i t4 = _mm256_cmpeq_epi32(t2, _mm256_setzero_si256()); // clear bits -> all-ones
t4 = _mm256_xor_si256(t4, _mm256_set1_epi32(-1));            // invert: set bits -> all-ones, clear bits -> 0
I don't have AVX2 hardware to test this on right now, but here is an SSE2 version that shows it works and also shows how to define the mask.
#include <x86intrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // One bit of the bitmap per 32-bit element (little endian),
    // split across two 128-bit mask vectors.
    char mask[32] = {
        0x01, 0x00, 0x00, 0x00,
        0x02, 0x00, 0x00, 0x00,
        0x04, 0x00, 0x00, 0x00,
        0x08, 0x00, 0x00, 0x00,
        0x10, 0x00, 0x00, 0x00,
        0x20, 0x00, 0x00, 0x00,
        0x40, 0x00, 0x00, 0x00,
        0x80, 0x00, 0x00, 0x00,
    };
    __m128i mask1 = _mm_loadu_si128((__m128i*)&mask[ 0]);
    __m128i mask2 = _mm_loadu_si128((__m128i*)&mask[16]);

    uint8_t x = 0x53; // 0101 0011
    __m128i t1 = _mm_set1_epi8(x);                          // broadcast the byte
    __m128i t2 = _mm_and_si128(t1, mask1);                  // isolate bits 0..3
    __m128i t3 = _mm_and_si128(t1, mask2);                  // isolate bits 4..7
    __m128i t4 = _mm_cmpeq_epi32(t2, _mm_setzero_si128());  // clear bits -> all-ones
    __m128i t5 = _mm_cmpeq_epi32(t3, _mm_setzero_si128());
    t4 = _mm_xor_si128(t4, _mm_set1_epi32(-1));             // invert: set bits -> all-ones
    t5 = _mm_xor_si128(t5, _mm_set1_epi32(-1));

    int o1[4], o2[4];
    _mm_storeu_si128((__m128i*)o1, t4);  // unaligned stores: o1/o2 are not guaranteed 16-byte aligned
    _mm_storeu_si128((__m128i*)o2, t5);
    // For x = 0x53 this prints -1 -1 0 0 -1 0 -1 0 (one value per line).
    for (int i = 0; i < 4; i++) printf("%d\n", o1[i]);
    for (int i = 0; i < 4; i++) printf("%d\n", o2[i]);
}
Edit: PaulR improved my solution:
__m256i v = _mm256_set1_epi8(u);                       // broadcast the byte
v = _mm256_and_si256(v, mask);                         // keep one replicated bit per 32-bit element
v = _mm256_xor_si256(v, mask);                         // set bits -> 0, clear bits -> nonzero
return _mm256_cmpeq_epi32(v, _mm256_setzero_si256());  // 0 -> all-ones
with the mask defined as
int mask[8] = {
0x01010101, 0x02020202, 0x04040404, 0x08080808,
0x10101010, 0x20202020, 0x40404040, 0x80808080,
};
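Putting the snippet and the mask together, a complete function might look like this sketch (the function name is mine, and it is untested since I don't have AVX2 hardware to run it on):

#include <immintrin.h>
#include <stdint.h>

// Sketch: expand the 8 bits of u into eight 32-bit elements,
// -1 where the bit is set and 0 where it is clear.
static inline __m256i bitmask_to_m256i(uint8_t u) {
    static const uint32_t maskvals[8] = {
        0x01010101, 0x02020202, 0x04040404, 0x08080808,
        0x10101010, 0x20202020, 0x40404040, 0x80808080,
    };
    const __m256i mask = _mm256_loadu_si256((const __m256i*)maskvals);
    __m256i v = _mm256_set1_epi8(u);                       // broadcast the byte
    v = _mm256_and_si256(v, mask);                         // keep one replicated bit per element
    v = _mm256_xor_si256(v, mask);                         // set bits -> 0, clear bits -> nonzero
    return _mm256_cmpeq_epi32(v, _mm256_setzero_si256());  // 0 -> all-ones
}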
See his answer, which includes performance testing, for more details.
I think I'd probably go for the "brute force and ignorance" approach initially, maybe something like this:
uint8_t u = 0x53; // 01010011

// 16-entry LUT: entry i has -1 in 32-bit lane j iff bit j of i is set
const union {
    uint32_t a[4];
    __m128i v;
} kLUT[16] = { { {  0,  0,  0,  0 } },
               { { -1,  0,  0,  0 } },
               { {  0, -1,  0,  0 } },
               { { -1, -1,  0,  0 } },
               { {  0,  0, -1,  0 } },
               { { -1,  0, -1,  0 } },
               { {  0, -1, -1,  0 } },
               { { -1, -1, -1,  0 } },
               { {  0,  0,  0, -1 } },
               { { -1,  0,  0, -1 } },
               { {  0, -1,  0, -1 } },
               { { -1, -1,  0, -1 } },
               { {  0,  0, -1, -1 } },
               { { -1,  0, -1, -1 } },
               { {  0, -1, -1, -1 } },
               { { -1, -1, -1, -1 } } };

__m256i v = _mm256_set_m128i(kLUT[u >> 4].v, kLUT[u & 15].v);
Using clang -O3 this compiles to:
movl %ebx, %eax ;; eax = ebx = u
andl $15, %eax ;; get low offset = (u & 15) * 16
shlq $4, %rax
leaq _main.kLUT(%rip), %rcx ;; rcx = kLUT
vmovaps (%rax,%rcx), %xmm0 ;; load low half of ymm0 from kLUT
andl $240, %ebx ;; get high offset = (u >> 4) * 16
vinsertf128 $1, (%rbx,%rcx), %ymm0, %ymm0 ;; load high half of ymm0 from kLUT
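As an aside, if writing the 16-entry table out by hand seems error-prone, the same values could be generated at startup from the bit pattern of the index (a hypothetical helper operating on a non-const copy of kLUT, not part of the code measured below):

// Entry i gets -1 in 32-bit lane j iff bit j of i is set.
for (int i = 0; i < 16; i++)
    for (int j = 0; j < 4; j++)
        kLUT[i].a[j] = ((i >> j) & 1) ? 0xFFFFFFFFu : 0u;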
FWIW I threw together a simple test harness for five implementations: (i) a simple scalar reference implementation, (ii) the above code, (iii) an implementation based on @Zboson's answer, (iv) a slightly improved version of (iii), and (v) a further improvement on (iv) using a suggestion from @MarcGlisse. I got the following results with a 2.6 GHz Haswell CPU (compiled with clang -O3):
scalar code: 7.55336 ns / vector
Paul R: 1.36016 ns / vector
Z boson: 1.24863 ns / vector
Z boson (improved): 1.07590 ns / vector
Z boson (improved + @MarcGlisse suggestion): 1.08195 ns / vector
So @Zboson's solution(s) win by around 10% - 20%, presumably because they need only one load, versus two for mine.
If we get any other implementations I'll add these to the test harness and update the results.
Slightly improved version of @Zboson's implementation:
__m256i v = _mm256_set1_epi8(u);
v = _mm256_and_si256(v, mask);
v = _mm256_xor_si256(v, mask);
return _mm256_cmpeq_epi32(v, _mm256_setzero_si256());
Further improved version of @Zboson's implementation incorporating suggestion from @MarcGlisse:
__m256i v = _mm256_set1_epi8(u);
v = _mm256_and_si256(v, mask);
return _mm256_cmpeq_epi32(v, mask);
(Note that mask needs to contain replicated 8-bit values in each 32-bit element, i.e. 0x01010101, 0x02020202, ..., 0x80808080.)
Based on all the answers, I hacked up a solution using Agner Fog's excellent vector class library (which handles AVX2, AVX, and SSE with a common abstraction). Figured I would share it as an alternative answer:
// Used to generate 32 bit vector bitmasks from 8 bit ints
static const Vec8ui VecBitMask8(
    0x01010101, 0x02020202, 0x04040404, 0x08080808,
    0x10101010, 0x20202020, 0x40404040, 0x80808080);

// As above, but for 64 bit vectors and 4 bit ints
static const Vec4uq VecBitMask4(
    0x0101010101010101, 0x0202020202020202,
    0x0404040404040404, 0x0808080808080808);

template <typename V>
inline static Vec32c getBitmapMask();

template <> inline Vec32c getBitmapMask<Vec8ui>() { return VecBitMask8; }
template <> inline Vec32c getBitmapMask<Vec8i>()  { return VecBitMask8; }
template <> inline Vec32c getBitmapMask<Vec4uq>() { return VecBitMask4; }
template <> inline Vec32c getBitmapMask<Vec4q>()  { return VecBitMask4; }

// Returns a bool vector representing the bitmask passed.
template <typename V>
static inline V getBitmap(const uint8_t bitMask) {
    Vec32c mask = getBitmapMask<V>();
    Vec32c v1(bitMask);          // broadcast the byte to all 32 lanes
    v1 = v1 & mask;              // keep one (replicated) bit per element
    return ((V)v1 == (V)mask);   // all-ones where the bit was set
}
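A usage sketch (hypothetical, assuming the vector class library header is included and one of the specialized types is requested):

// -1 in the 32-bit lanes whose bit is set in 0x53, 0 in the others.
Vec8i expanded = getBitmap<Vec8i>(0x53);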