Why do some SSE “mov” instructions specify that they move floating-point values?

Published 2019-02-11 20:05

Question:

Many SSE "mov" instructions specify that they are moving floating-point values. For example:

  • MOVHLPS—Move Packed Single-Precision Floating-Point Values High to Low
  • MOVSD—Move Scalar Double-Precision Floating-Point Value
  • MOVUPD—Move Unaligned Packed Double-Precision Floating-Point Values

Why don't these instructions simply say that they move 32-bit or 64-bit values? If they're just moving bits around, why do the instructions specify that they are for floating-point values? Surely they would work whether you interpret those bits as floating-point or not?

Answer 1:

I think I've found the answer: some microarchitectures execute floating-point instructions on different execution units than integer instructions. You get better overall latency when a stream of instructions stays within the same "domain" (integer or floating point). This is covered in pretty good detail in Agner Fog's optimization manual, in the section titled "Data Bypass Delays": http://www.agner.org/optimize/microarchitecture.pdf

I found this explanation in this similar SO question: Difference between MOVDQA and MOVAPS x86 instructions?
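To make the domain issue concrete, here is a minimal C++/intrinsics sketch (not from the original answer) that clears the sign bits of four floats in two bit-identical ways: the _mm_and_ps version compiles to ANDPS and stays in the floating-point domain, while the _mm_and_si128 version compiles to PAND in the integer domain and can pay the bypass delay described above when its inputs come from, or its result feeds, FP instructions.

    // Minimal sketch: two bit-identical ways to clear the sign bits of four
    // floats. ANDPS stays in the FP domain; PAND runs in the integer domain,
    // so data produced/consumed by FP instructions may incur a bypass delay.
    #include <immintrin.h>
    #include <cstdio>

    int main() {
        __m128 x    = _mm_set_ps(-4.0f, 3.0f, -2.0f, 1.0f);
        __m128 mask = _mm_castsi128_ps(_mm_set1_epi32(0x7FFFFFFF)); // clear sign bit

        __m128 abs_fp  = _mm_and_ps(x, mask);                        // ANDPS (FP domain)
        __m128 abs_int = _mm_castsi128_ps(                           // PAND (integer domain)
            _mm_and_si128(_mm_castps_si128(x), _mm_castps_si128(mask)));

        float a[4], b[4];
        _mm_storeu_ps(a, abs_fp);
        _mm_storeu_ps(b, abs_int);
        std::printf("%g %g %g %g | %g %g %g %g\n",
                    a[0], a[1], a[2], a[3], b[0], b[1], b[2], b[3]);
        return 0;
    }

Both forms produce exactly the same bits; the choice only affects which execution domain the operation runs in, which is why the mnemonics distinguish floating-point from integer data.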



Answer 2:

In case anyone cares, this is exactly why Agner Fog's vectorclass library has separate boolean vector classes for use with floating-point vectors (Vec4fb) and with integer vectors (Vec4ib): http://www.agner.org/optimize/#vectorclass

In his manual he writes: "The reason why we have defined a separate Boolean vector class for use with floating point vectors is that it enables us to produce faster code. (Many modern CPUs have separate execution units for integer vectors and floating point vectors. It is sometimes possible to do the Boolean operations in the floating point unit and thereby avoid the delay from moving data between the two units)."
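For example, here is a short sketch assuming Agner Fog's vectorclass headers are on the include path as "vectorclass.h": a comparison on a Vec4f yields a Vec4fb, and the subsequent select() blends with that mask, so the whole operation stays in the floating-point domain.

    // Sketch assuming the vectorclass library ("vectorclass.h") is available.
    // The comparison produces a float-boolean vector (Vec4fb) and select()
    // blends with it, keeping everything in the floating-point domain.
    #include "vectorclass.h"

    Vec4f clamp_negatives_to_zero(Vec4f x) {
        Vec4fb neg = x < Vec4f(0.0f);           // float-boolean mask
        return select(neg, Vec4f(0.0f), x);     // FP-domain blend
    }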

Most questions about SSE and AVX can be answered by reading his manual and, more importantly, by looking at the code in his vectorclass library.



Tags: assembly x86 sse