gcc 5.3 with -O3 -mavx -mtune=haswell for x86-64 makes surprisingly bulky code to handle potentially-misaligned inputs for code like:
// convenient simple example of compiler input
// I'm not actually interested in this for any real program
void floatmul(float *a) {
    for (int i=0; i<1024 ; i++)
        a[i] *= 2;
}
clang uses unaligned load/store instructions, but gcc does a scalar intro/outro and an aligned vector loop: it peels off the first up-to-7 unaligned iterations, fully unrolling them into a sequence of
vmovss xmm0, DWORD PTR [rdi]
vaddss xmm0, xmm0, xmm0 ; multiply by two
vmovss DWORD PTR [rdi], xmm0
cmp eax, 1
je .L13
vmovss xmm0, DWORD PTR [rdi+4]
vaddss xmm0, xmm0, xmm0
vmovss DWORD PTR [rdi+4], xmm0
cmp eax, 2
je .L14
...
This seems pretty terrible, esp. for CPUs with a uop cache. I reported a gcc bug about this, with a suggestion for smaller/better code that gcc could use when peeling unaligned iterations. It's probably still not optimal, though.
This question is about what actually would be optimal with AVX. I'm asking about general-case solutions that gcc and other compilers could/should use. (I didn't find any gcc mailing list hits with discussion about this, but didn't spend long looking.)
There will probably be multiple answers, since what's optimal for -mtune=haswell will probably be different from what's optimal for -mtune=bdver3 (Steamroller). And then there's the question of what's optimal when allowing instruction set extensions (e.g. AVX2 for 256b integer stuff, BMI1 for turning a count into a bitmask in fewer instructions).
I'm aware of Agner Fog's Optimizing Assembly guide, Section 13.5 Accessing unaligned data and partial vectors. He suggests either using unaligned accesses, doing an overlapping write at the start and/or end, or shuffling data from aligned accesses (but PALIGNR only takes an imm8 count, so 2x pshufb / por). He discounts VMASKMOVPS as not useful, probably because of how badly it performs on AMD. I suspect that if tuning for Intel, it's worth considering. It's not obvious how to generate the correct mask, hence the question title.
It might turn out that it's better to simply use unaligned accesses, like clang does. For short buffers, the overhead of aligning might kill any benefit from avoiding cacheline splits for the main loop. For big buffers, main memory or L3 as the bottleneck may hide the penalty for cacheline splits. If anyone has experimental data to back this up for any real code they've tuned, that's useful information too.
VMASKMOVPS does look usable for Intel targets. (The SSE version is horrible, with an implicit non-temporal hint, but the AVX version doesn't have that. There's even a new intrinsic to make sure you don't get the SSE version for 128b operands: _mm_maskstore_ps.) The AVX version is only a little bit slow on Haswell:
- 3 uops / 4c latency / 1-per-2c throughput as a load.
- 4 uops / 14c latency / 1-per-2c throughput as a 256b store.
- 4 uops / 13c latency / 1-per-1c throughput as a 128b store.
The store form is still unusably slow on AMD CPUs, both Jaguar (1 per 22c tput) and Bulldozer-family: 1 per 16c on Steamroller (similar on Bulldozer), or 1 per ~180c throughput on Piledriver.
But if we do want to use VMASKMOVPS, we need a vector with the high bit set in each element that should actually be loaded/stored. PALIGNR and PSRLDQ (for use on a vector of all-ones) only take compile-time-constant counts.
Notice that the other bits don't matter: it doesn't have to be all-ones, so scattering some set bits out to the high bits of the elements is a possibility.
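To make the requirement concrete, here's a trivial intrinsics sketch with a hard-coded mask (purely illustrative; the whole problem is producing that mask cheaply from a runtime misalignment count):

#include <immintrin.h>

/* With this hard-coded mask, element 0 is skipped and elements 1..7 are
   loaded/stored.  Only the high bit of each 32b element matters. */
void masked_double_skip_first(float *p)
{
    __m256i mask = _mm256_setr_epi32(0, -1, -1, -1, -1, -1, -1, -1);
    __m256  v    = _mm256_maskload_ps(p, mask);        /* vmaskmovps ymm, ymm, mem */
    _mm256_maskstore_ps(p, mask, _mm256_add_ps(v, v)); /* vmaskmovps mem, ymm, ymm */
}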
AVX-only: Unaligned accesses at the start/end, pipelining loads to avoid problems when rewriting in place.
Thanks to @StephenCanon for pointing out that this is better than VMASKMOVPS for anything that VMASKMOVPS could do to help with looping over unaligned buffers.

This is maybe a bit much to expect a compiler to do as a loop transformation, esp. since the obvious way can make Valgrind unhappy (see below).
Doing a load from the end of the array at the start of the loop seems a little weird, but hopefully it doesn't confuse the hardware prefetchers, or slow down getting the beginning of the array streaming from memory.
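Roughly, in intrinsics (my own sketch of the ordering, not tuned asm; the function name and the runtime element count n, assumed to be at least one vector wide, are just for illustration since the question's example has a fixed 1024):

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Handles any misalignment of a; requires n >= 8. */
void floatmul_unaligned_ends(float *a, size_t n)
{
    /* Pipelined loads: grab the (possibly overlapping) unaligned first and
       last vectors before any store can clobber them. */
    __m256 first = _mm256_loadu_ps(a);
    __m256 last  = _mm256_loadu_ps(a + n - 8);

    /* Aligned interior: round the start up and the end down to 32B. */
    float *p   = (float *)(((uintptr_t)a + 31) & ~(uintptr_t)31);
    float *end = (float *)((uintptr_t)(a + n) & ~(uintptr_t)31);
    for (; p < end; p += 8) {
        __m256 v = _mm256_load_ps(p);
        _mm256_store_ps(p, _mm256_add_ps(v, v));   /* aligned load/store of 2*v */
    }

    /* Overlapping unaligned stores of the peeled first/last vectors.  They
       rewrite some interior elements, but with the same values the aligned
       loop already stored (both were computed from the original data). */
    _mm256_storeu_ps(a,         _mm256_add_ps(first, first));
    _mm256_storeu_ps(a + n - 8, _mm256_add_ps(last, last));
}

This particular arrangement keeps every access inside the array; an asm version tuned along the lines described below doesn't necessarily, which is what the Valgrind discussion is about.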
Overhead:
- 2 extra integer uops total (to set up the aligned-start). We're already using the end pointer for the normal loop structure, so that's free.
- 2 extra copies of the loop body (load/calc/store): first and last iteration peeled.
Compilers probably won't be happy about emitting code like this, when auto-vectorizing. Valgrind will report accesses outside of array bounds, and does so by single-stepping and decoding instructions to see what they're accessing. So merely staying within the same page (and cache line) as the last element in the array isn't sufficient. Also note that if the input pointer isn't 4B-aligned, we can potentially read into another page and segfault.
To keep Valgrind happy, we could stop the loop two vector widths early, to do the special-case load of the unaligned last vector-width of the array. That would require duplicating the loop body an extra time (insignificant in this example, but it's trivial on purpose.) Or maybe avoid pipelining by having the intro code jump into the middle of the loop. (That may be sub-optimal for the uop-cache, though: (parts of) the loop body may end up in the uop cache twice.)
TODO: write a version that jumps into the loop mid-way.
Load a mask for VMASKMOVPS from a window into a table. AVX2, or AVX1 with a few extra instructions or a larger table.
The mask can also be used for ANDPS in registers in a reduction that needs to count each element exactly once. As Stephen Canon points out in comments on the OP, pipelining loads can allow overlapping unaligned stores to work even for a rewrite-in-place function like the example I picked, so VMASKMOVPS is NOT the best choice here.

This should be good on Intel CPUs, esp. Haswell and later for AVX2.
Agner Fog's method for getting a pshufb mask actually provided an idea that is very efficient: do an unaligned load taking a window of data from a table. Instead of a giant table of masks, use an index as a way of doing a byte-shift on data in memory.
Masks are in LSB-first byte order (as they're stored in memory), not the usual notation for {X3,X2,X1,X0} elements in a vector. As written, they line up with an aligned window including the start/end of the input array in memory.

- start misalign count = 1: mask = {0,-1,-1,-1,-1,-1,-1,-1} (skip one in the first 32B)
- start misalign count = 7: mask = {0, 0, 0, 0, 0, 0, 0,-1} (skip all but one in the first 32B)
- end misalign count = 0: no trailing elements. mask = all-ones (aligned case). This is the odd case, not similar to count=1: a couple of extra instructions for this special case are worth avoiding an extra loop iteration and a cleanup with a mask of all-zeros.
- end misalign count = 1: one trailing element. mask = {-1, 0, 0, 0, 0, 0, 0, 0}
- end misalign count = 7: seven trailing elements. mask = {-1,-1,-1,-1,-1,-1,-1, 0}
Untested code, assume there are bugs
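Here's the idea as a C-with-intrinsics sketch (names like masktable and floatmul_masked are just for illustration; it assumes a 4B-aligned pointer and the question's fixed 1024-element count, and the exact table layout is one possibility, not the only one):

#include <immintrin.h>
#include <stdint.h>

/* 24B of byte masks.  An 8-byte window into this table, sign-extended to
   8 dwords with vpmovsxbd, gives the intro or outro mask directly. */
static const int8_t masktable[24] = {
     0,  0,  0,  0,  0,  0,  0,  0,
    -1, -1, -1, -1, -1, -1, -1, -1,
     0,  0,  0,  0,  0,  0,  0,  0,
};

/* Mask with the first `skip` (0..7) elements cleared, the rest set. */
static inline __m256i mask_skip_first(unsigned skip)
{
    __m128i bytes = _mm_loadl_epi64((const __m128i *)(masktable + 8 - skip));
    return _mm256_cvtepi8_epi32(bytes);            /* vpmovsxbd ymm, qword */
}

/* Mask with only the first `keep` (1..7) elements set.  keep == 0 is the
   odd case: skip the cleanup entirely instead of using an all-zero mask. */
static inline __m256i mask_keep_first(unsigned keep)
{
    __m128i bytes = _mm_loadl_epi64((const __m128i *)(masktable + 16 - keep));
    return _mm256_cvtepi8_epi32(bytes);
}

void floatmul_masked(float *a)       /* 1024 elements; a assumed 4B-aligned */
{
    float *end    = a + 1024;
    float *astart = (float *)((uintptr_t)a   & ~(uintptr_t)31);
    float *aend   = (float *)((uintptr_t)end & ~(uintptr_t)31);
    unsigned skip = (unsigned)(a - astart);      /* leading elements that aren't ours */
    unsigned keep = (unsigned)(end - aend);      /* trailing elements that are ours  */

    /* Peeled first vector: aligned address, mask skips elements before a. */
    __m256i m = mask_skip_first(skip);
    __m256  v = _mm256_maskload_ps(astart, m);
    _mm256_maskstore_ps(astart, m, _mm256_add_ps(v, v));

    /* Main aligned loop. */
    for (float *p = astart + 8; p < aend; p += 8) {
        __m256 x = _mm256_load_ps(p);
        _mm256_store_ps(p, _mm256_add_ps(x, x));
    }

    /* Peeled last vector, unless the end was already aligned. */
    if (keep) {
        __m256i me = mask_keep_first(keep);
        __m256  w  = _mm256_maskload_ps(aend, me);
        _mm256_maskstore_ps(aend, me, _mm256_add_ps(w, w));
    }
}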
This does require a load from a table, which can miss in L1 cache, and 15B of table data. (Or 24B if the loop count is also variable, and we have to generate the end mask separately).
Either way, after the 4 instructions to generate the misalignment-count and the aligned start address, getting the mask only takes a single vpmovsxbd instruction. (The ymm, mem form can't micro-fuse, so it's 2 uops). This requires AVX2.
Without AVX2, the options are:

- Two 128b VPMOVSXBD loads ([masktable_intro + rax] and [masktable_intro + rax + 4]), combined into 256b with VINSERTF128.
- Or: (more insns, and more shuffle-port pressure, but less load-port pressure) one 8B load from the table, then do the widening and combining in registers.
- Or: a larger table of masks stored as Dwords (DD) instead of Bytes (DB). This would actually save an insn relative to AVX2: address & 0x1c is the index, without needing a right-shift by two. The whole table still fits in a cache line, but without room for other constants the algo might use.
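A sketch of what those options could look like in intrinsics (again my code, untested; masktable is the byte table from the AVX2 sketch above, and masktable_dw is the hypothetical dword variant):

#include <immintrin.h>
#include <stdint.h>

/* AVX1 + SSE4.1: one 8B window load, then 2x pmovsxbd + vinsertf128.
   (Trades the second load for a byte-shift: more shuffle-port pressure,
   less load-port pressure.) */
static inline __m256i mask_from_window_avx1(const int8_t *window)
{
    __m128i bytes = _mm_loadl_epi64((const __m128i *)window);
    __m128i lo = _mm_cvtepi8_epi32(bytes);                    /* bytes 0..3 */
    __m128i hi = _mm_cvtepi8_epi32(_mm_srli_si128(bytes, 4)); /* bytes 4..7 */
    return _mm256_insertf128_si256(_mm256_castsi128_si256(lo), hi, 1);
}

/* AVX1 with a dword-granularity table: 16 dwords = 64B = one cache line.
   The window is an unaligned 32B load; (addr & 0x1c) indexes it directly,
   no shift needed to turn the byte misalignment into a dword index. */
static const int32_t masktable_dw[16] = {
    0, 0, 0, 0, 0, 0, 0, 0,  -1, -1, -1, -1, -1, -1, -1, -1,
};

static inline __m256i intro_mask_dwtable(uintptr_t addr)   /* addr of a[0] */
{
    const char *base = (const char *)masktable_dw;
    return _mm256_loadu_si256((const __m256i *)(base + 32 - (addr & 0x1c)));
}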
Overhead:
- Integer ops: 5 uops at the start to get an index and align the start pointer. 7 uops to get the index for the end mask. Total of 12 GP-register uops beyond simply using unaligned, if the loop element count is a multiple of the vector width.
- AVX2: Two 2-fused-domain-uop vector insns to go from a [0..7] index in a GP reg to a mask in a YMM reg (one for the start mask, one for the end mask). Uses a 24B table, accessed in an 8B window with byte granularity.
- AVX: Six 1-fused-domain-uop vector insns (three at the start, three at the end). With RIP-relative addressing for the table, four of those instructions will be [base+index] and won't micro-fuse, so an extra two integer insns might be better.

The code inside the loop is replicated 3 times.
TODO: write another answer generating the mask on the fly, maybe as bytes in a 64b reg, then unpacking it to 256b. Maybe with a bit-shift, or BMI2's BZHI(-1, count)?
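For what it's worth, one table-free direction (a rough sketch of the idea, not tuned or tested): BZHI(-1, count) gives a count-bit mask in a GP register, but that still has to be unpacked to dword lanes, so with AVX2 a compare against an iota constant may be simpler:

#include <immintrin.h>

/* Mask with the first n (0..8) elements set: lane i is all-ones when i < n. */
static inline __m256i keep_first_n_mask(int n)
{
    __m256i iota = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    return _mm256_cmpgt_epi32(_mm256_set1_epi32(n), iota);   /* vpcmpgtd */
}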