Is there any difference between logical SSE intrinsics for different types? For example, if we take the OR operation, there are three intrinsics: _mm_or_ps, _mm_or_pd and _mm_or_si128, all of which do the same thing: compute the bitwise OR of their operands. My questions:
1. Is there any difference between using one intrinsic or another (with appropriate type casting)? Won't there be any hidden costs, like longer execution time in some specific situation?
2. These intrinsics map to three different x86 instructions (por, orps, orpd). Does anyone have any idea why Intel is wasting precious opcode space on several instructions that do the same thing?
I think all three are effectively the same, i.e. 128-bit bitwise operations. The reason different forms exist is probably historical, but I'm not certain. I guess it's possible that there may be some additional behaviour in the floating point versions, e.g. when there are NaNs, but this is pure guesswork. For normal inputs the instructions seem to be interchangeable.
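For example, here is a minimal sketch (my own illustration, not from the original answer) that computes the same OR through all three intrinsics and checks that the resulting bit patterns match:

    #include <emmintrin.h>   /* SSE2: _mm_or_pd, _mm_or_si128 and the cast intrinsics */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        __m128d a = _mm_set1_pd(1.5);
        __m128d b = _mm_set1_pd(-2.25);

        /* The same bitwise OR expressed through all three intrinsics,
           with (free, no-op) casts to satisfy the type system. */
        __m128d r_pd = _mm_or_pd(a, b);
        __m128d r_ps = _mm_castps_pd(_mm_or_ps(_mm_castpd_ps(a), _mm_castpd_ps(b)));
        __m128d r_si = _mm_castsi128_pd(_mm_or_si128(_mm_castpd_si128(a),
                                                     _mm_castpd_si128(b)));

        /* All three results should hold identical bit patterns. */
        printf("ps==pd: %d, si128==pd: %d\n",
               memcmp(&r_pd, &r_ps, sizeof r_pd) == 0,
               memcmp(&r_pd, &r_si, sizeof r_pd) == 0);
        return 0;
    }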
Yes, there can be performance reasons to choose one vs. the other.
1: Sometimes there is an extra cycle or two of latency (forwarding delay) if the output of an integer execution unit needs to be routed to the input of an FP execution unit, or vice versa. It takes a LOT of wires to move 128b of data to any of many possible destinations, so CPU designers have to make tradeoffs, like only having a direct path from every FP output to every FP input, not to ALL possible inputs.
See this answer, or Agner Fog's microarchitecture doc for bypass-delays. Search for "Data bypass delays on Nehalem" in Agner's doc; it has some good practical examples and discussion. He has a section on it for every microarch he has analysed.
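As a hypothetical illustration (my own sketch, not from the answer): an FP dependency chain that routes through an integer-domain logical op can pick up that forwarding delay on such CPUs, whereas the ...ps form of the same operation keeps the chain in the FP domain:

    #include <emmintrin.h>   /* SSE2 */

    /* Hypothetical example: take the absolute value of a sum in the middle of an
       FP dependency chain. On CPUs where pand runs only in the integer domain,
       the addps result must cross into the ivec domain and back, adding bypass
       latency to the chain; using _mm_and_ps instead would stay in the FP domain. */
    static inline __m128 chain_step(__m128 x, __m128 y)
    {
        const __m128i abs_mask = _mm_set1_epi32(0x7fffffff);   /* clear sign bits */
        __m128 sum = _mm_add_ps(x, y);
        __m128 abs = _mm_castsi128_ps(_mm_and_si128(_mm_castps_si128(sum), abs_mask));
        return _mm_mul_ps(abs, y);                             /* back in the FP domain */
    }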
Remember that latency doesn't matter if it isn't on the critical path of your code. Using pshufd instead of movaps + shufps can be a win if uop throughput is your bottleneck, rather than the latency of your critical path.

2: The ...ps version takes 1 fewer byte of code than the other two. This will align the following instructions differently, which can matter for the decoders and/or uop cache lines.

3: Recent Intel CPUs can only run the FP versions on port5.
Merom (Core2) and Penryn: orps can run on p0/p1/p5, but integer-domain only. Presumably all 3 versions decoded into the exact same uop, so the cross-domain forwarding delay happens. (AMD CPUs do this too: FP bitwise instructions run in the ivec domain.)

Nehalem / Sandybridge / IvB / Haswell / Broadwell: por can run on p0/p1/p5, but orps can run only on port5. p5 is also needed by shuffles, but the FMA, FP add, and FP mul units are on ports 0/1.

Skylake: por and orps both have 3-per-cycle throughput. Information about forwarding delays isn't available yet.

Note that on SnB/IvB (AVX but not AVX2), only p5 needs to handle 256b logical ops, as vpor ymm, ymm requires AVX2. This was probably not the reason for the change, since Nehalem did this.

How to choose wisely:
If logical op throughput on port5 could be a bottleneck, then use the integer versions, even on FP data. This is especially true if you want to use integer shuffles or other data-movement instructions.
AMD CPUs always use the integer domain for logicals, so if you have multiple integer-domain things to do, do them all at once to minimize round-trips between domains. Shorter latencies will get things cleared out of the reorder buffer faster, even if a dep chain isn't the bottleneck for your code.
If you just want to set/clear/flip a bit in FP vectors between FP add and mul instructions, use the ...ps logicals, even on double-precision data, because single and double FP are the same domain on every CPU in existence, and the ...ps versions are one byte shorter. (See the sketch after this list.)

There are practical / human-factor reasons for using the ...pd versions, though, which will often outweigh saving 1 byte of code. Readability of your code by other humans is a factor: they'll wonder why you're treating your data as singles when it's actually doubles. Especially with C/C++ intrinsics, littering your code with casts between __m256 and __m256d is not worth it. If tuning at the level of instruction alignment matters, write in asm directly, not intrinsics! (Having the instruction one byte longer might align things better for uop cache line density and/or decoders.)

For integer data, use the integer versions. Saving one instruction byte isn't worth the bypass delay, and integer code often keeps port5 fully occupied with shuffles. On Haswell, many shuffle / insert / extract / pack / unpack instructions became p5 only, instead of p1/p5 as on SnB/IvB.
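As a concrete example of the ...ps-on-double advice above (a minimal sketch; the helper name and SSE2 baseline are my own choices, not from the answer): flipping the sign bits of a double vector with the single-precision XOR costs nothing extra, because the casts between __m128d and __m128 are a type-system formality and emit no instructions.

    #include <emmintrin.h>   /* SSE2 */

    /* Negate both doubles in v by XORing their sign bits with xorps.
       Only the bit pattern matters here, so the casts between __m128d and
       __m128 are free (they generate no instructions). */
    static inline __m128d negate_pd(__m128d v)
    {
        const __m128d signbits = _mm_set1_pd(-0.0);   /* 0x8000000000000000 per lane */
        return _mm_castps_pd(_mm_xor_ps(_mm_castpd_ps(v),
                                        _mm_castpd_ps(signbits)));
    }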
If you look at the history of these instruction sets, you can kind of see how we got here.
MMX existed before SSE, so it looks like opcodes for SSE (...ps) instructions were chosen out of the same 0F xx space. Then for SSE2, the ...pd version added a 66 operand-size prefix to the ...ps opcode, and the integer version added a 66 prefix to the MMX version.
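For concreteness, the encodings line up like this (opcode bytes as listed in Intel's instruction-set reference; the /r byte encodes the operands):

    orps xmm, xmm/m128      0F 56 /r       ; SSE
    orpd xmm, xmm/m128      66 0F 56 /r    ; SSE2: 66 prefix on the orps opcode
    por  mm,  mm/m64        0F EB /r       ; MMX
    por  xmm, xmm/m128      66 0F EB /r    ; SSE2: 66 prefix on the MMX opcode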
They could have left out orpd and/or por, but they didn't. Perhaps they thought that future CPU designs might have longer forwarding paths between different domains, and so using the matching instruction for your data would be a bigger deal. Even though there are separate opcodes, AMD and early Intel treated them all the same, as int-vector.

According to the Intel and AMD optimization guidelines, mixing op types with data types (e.g. an integer op on FP data) produces a performance hit, because the CPU internally tags the 64-bit halves of the register with a particular data type. This seems to mostly affect pipelining, as the instruction is decoded and the uops are scheduled. Functionally they produce the same result. The newer versions for the integer data types have larger encodings and take up more space in the code segment. So if code size is a problem, use the old ops, as these have smaller encodings.