memcpy performance differences between 32-bit and 64-bit processes

Posted 2019-01-22 22:42

We have Core2 machines (Dell T5400) with XP64.

We observe that when running 32-bit processes, memcpy performance is on the order of 1.2 GByte/s; however, memcpy in a 64-bit process achieves about 2.2 GByte/s (or 2.4 GByte/s with the Intel compiler CRT's memcpy). While the initial reaction might be to explain this away as due to the wider registers available in 64-bit code, we observe that our own memcpy-like SSE assembly code (which should be using 128-bit loads and stores regardless of the 32/64-bitness of the process) hits a similar upper limit on the copy bandwidth it achieves.

My question is: what's this difference actually due to? Do 32-bit processes have to jump through some extra WOW64 hoops to get at the RAM? Is it something to do with TLBs, or prefetchers, or... what?

Thanks for any insight.

Also raised on Intel forums.

7 Answers
何必那么认真
#2 · 2019-01-22 23:10

Here's an example of a memcpy routine geared specifically toward 64-bit architectures.

#include <stddef.h>
#include <stdint.h>

void uint8copy(void *dest, void *src, size_t n){
    uint64_t *ss = (uint64_t *)src;
    uint64_t *dd = (uint64_t *)dest;
    /* convert the byte count into a count of 64-bit words
       (assumes n is a multiple of 8) */
    n = n * sizeof(uint8_t) / sizeof(uint64_t);

    while(n--)
        *dd++ = *ss++;
}//end uint8copy()

The full article is here: http://www.godlikemouse.com/2008/03/04/optimizing-memcpy-routines/

何必那么认真
#3 · 2019-01-22 23:11

Of course, you really need to look at the actual machine instructions that are being executed inside the innermost loop of the memcpy, by stepping into the machine code with a debugger. Anything else is just speculation.

My guess is that it probably doesn't have anything to do with 32-bit versus 64-bit per se; rather, the faster library routine was probably written using SSE non-temporal stores.

If the inner loop contains any variation of conventional load-store instructions, then the destination memory must be read into the machine's cache, modified, and written back out. Since that read is totally unnecessary (the bits being read are overwritten immediately), you can save half the memory bandwidth by using the "non-temporal" write instructions, which bypass the caches. That way, the destination memory is just written, making a one-way trip to memory instead of a round trip.

I don't know the Intel compiler's CRT library, so this is just a guess. There's no particular reason why the 32-bit libCRT can't do the same thing, but the speedup you quote is in the ballpark of what I would expect just by converting the movdqa instructions to movnt...

Since memcpy is not doing any calculations, it's always bound by how fast you can read and write memory.
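
For a concrete picture, here's a minimal sketch of such a copy loop written with SSE2 intrinsics (the stream_copy name and the alignment/size assumptions are mine, not anything from the Intel CRT):

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>

    /* Illustrative sketch only: assumes dst/src are 16-byte aligned
       and n is a multiple of 64 bytes. */
    void stream_copy(void *dst, const void *src, size_t n)
    {
        __m128i *d = (__m128i *)dst;
        const __m128i *s = (const __m128i *)src;
        for (size_t i = 0; i < n / 16; i += 4) {
            __m128i x0 = _mm_load_si128(s + i);      /* movdqa loads */
            __m128i x1 = _mm_load_si128(s + i + 1);
            __m128i x2 = _mm_load_si128(s + i + 2);
            __m128i x3 = _mm_load_si128(s + i + 3);
            _mm_stream_si128(d + i,     x0);         /* movntdq stores: */
            _mm_stream_si128(d + i + 1, x1);         /* write directly  */
            _mm_stream_si128(d + i + 2, x2);         /* to memory,      */
            _mm_stream_si128(d + i + 3, x3);         /* bypass cache    */
        }
        _mm_sfence();  /* order the streaming stores before later accesses */
    }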

戒情不戒烟
#4 · 2019-01-22 23:12

I don't have a reference in front of me, so I'm not absolutely positive about the timings/instructions, but I can still give the theory. If you're doing a memory move in 32-bit mode, you'll get something like a "rep movsd", which moves a single 32-bit value each clock cycle. In 64-bit mode, you can use "rep movsq", which moves a single 64-bit value each clock cycle. That instruction is not available to 32-bit code, so you'd need twice as many movsd iterations (at 1 cycle apiece), for half the throughput.

VERY much simplified, ignoring all the memory bandwidth/alignment issues, etc, but this is where it all begins...
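
For what it's worth, MSVC exposes these string-move instructions as compiler intrinsics, so a sketch of the two variants might look like this (MSVC-specific; __movsq is only available when compiling for x64):

    #include <intrin.h>
    #include <stddef.h>

    /* rep movsd: moves one 32-bit element per iteration */
    void copy_dwords(unsigned long *dst, const unsigned long *src, size_t count)
    {
        __movsd(dst, src, count);
    }

    #if defined(_M_X64)
    /* rep movsq: moves one 64-bit element per iteration (x64 builds only) */
    void copy_qwords(unsigned __int64 *dst, const unsigned __int64 *src, size_t count)
    {
        __movsq(dst, src, count);
    }
    #endif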

不美不萌又怎样
#5 · 2019-01-22 23:16

I finally got to the bottom of this (and Die in Sente's answer was on the right lines, thanks).

In the below, dst and src are 512-MByte std::vectors. I'm using the Intel 10.1.029 compiler and CRT.

On 64-bit, both

memcpy(&dst[0],&src[0],dst.size())

and

memcpy(&dst[0],&src[0],N)

where N was previously declared as const size_t N=512*(1<<20); call

__intel_fast_memcpy

the bulk of which consists of:

  000000014004ED80  lea         rcx,[rcx+40h] 
  000000014004ED84  lea         rdx,[rdx+40h] 
  000000014004ED88  lea         r8,[r8-40h] 
  000000014004ED8C  prefetchnta [rdx+180h] 
  000000014004ED93  movdqu      xmm0,xmmword ptr [rdx-40h] 
  000000014004ED98  movdqu      xmm1,xmmword ptr [rdx-30h] 
  000000014004ED9D  cmp         r8,40h 
  000000014004EDA1  movntdq     xmmword ptr [rcx-40h],xmm0 
  000000014004EDA6  movntdq     xmmword ptr [rcx-30h],xmm1 
  000000014004EDAB  movdqu      xmm2,xmmword ptr [rdx-20h] 
  000000014004EDB0  movdqu      xmm3,xmmword ptr [rdx-10h] 
  000000014004EDB5  movntdq     xmmword ptr [rcx-20h],xmm2 
  000000014004EDBA  movntdq     xmmword ptr [rcx-10h],xmm3 
  000000014004EDBF  jge         000000014004ED80 

and runs at ~2200 MByte/s.

But on 32-bit,

memcpy(&dst[0],&src[0],dst.size())

calls

__intel_fast_memcpy

the bulk of which consists of:

  004447A0  sub         ecx,80h 
  004447A6  movdqa      xmm0,xmmword ptr [esi] 
  004447AA  movdqa      xmm1,xmmword ptr [esi+10h] 
  004447AF  movdqa      xmmword ptr [edx],xmm0 
  004447B3  movdqa      xmmword ptr [edx+10h],xmm1 
  004447B8  movdqa      xmm2,xmmword ptr [esi+20h] 
  004447BD  movdqa      xmm3,xmmword ptr [esi+30h] 
  004447C2  movdqa      xmmword ptr [edx+20h],xmm2 
  004447C7  movdqa      xmmword ptr [edx+30h],xmm3 
  004447CC  movdqa      xmm4,xmmword ptr [esi+40h] 
  004447D1  movdqa      xmm5,xmmword ptr [esi+50h] 
  004447D6  movdqa      xmmword ptr [edx+40h],xmm4 
  004447DB  movdqa      xmmword ptr [edx+50h],xmm5 
  004447E0  movdqa      xmm6,xmmword ptr [esi+60h] 
  004447E5  movdqa      xmm7,xmmword ptr [esi+70h] 
  004447EA  add         esi,80h 
  004447F0  movdqa      xmmword ptr [edx+60h],xmm6 
  004447F5  movdqa      xmmword ptr [edx+70h],xmm7 
  004447FA  add         edx,80h 
  00444800  cmp         ecx,80h 
  00444806  jge         004447A0

and runs at only ~1350 MByte/s.

HOWEVER

memcpy(&dst[0],&src[0],N)

where N was previously declared as const size_t N=512*(1<<20); compiles (on 32-bit) to a direct call to

__intel_VEC_memcpy

the bulk of which consists of:

  0043FF40  movdqa      xmm0,xmmword ptr [esi] 
  0043FF44  movdqa      xmm1,xmmword ptr [esi+10h] 
  0043FF49  movdqa      xmm2,xmmword ptr [esi+20h] 
  0043FF4E  movdqa      xmm3,xmmword ptr [esi+30h] 
  0043FF53  movntdq     xmmword ptr [edi],xmm0 
  0043FF57  movntdq     xmmword ptr [edi+10h],xmm1 
  0043FF5C  movntdq     xmmword ptr [edi+20h],xmm2 
  0043FF61  movntdq     xmmword ptr [edi+30h],xmm3 
  0043FF66  movdqa      xmm4,xmmword ptr [esi+40h] 
  0043FF6B  movdqa      xmm5,xmmword ptr [esi+50h] 
  0043FF70  movdqa      xmm6,xmmword ptr [esi+60h] 
  0043FF75  movdqa      xmm7,xmmword ptr [esi+70h] 
  0043FF7A  movntdq     xmmword ptr [edi+40h],xmm4 
  0043FF7F  movntdq     xmmword ptr [edi+50h],xmm5 
  0043FF84  movntdq     xmmword ptr [edi+60h],xmm6 
  0043FF89  movntdq     xmmword ptr [edi+70h],xmm7 
  0043FF8E  lea         esi,[esi+80h] 
  0043FF94  lea         edi,[edi+80h] 
  0043FF9A  dec         ecx  
  0043FF9B  jne         ___intel_VEC_memcpy+244h (43FF40h) 

and runs at ~2100 MByte/s (proving that 32-bit isn't somehow bandwidth-limited).

I withdraw my claim that my own memcpy-like SSE code suffers from a similar ~1300 MByte/s limit in 32-bit builds; I now don't have any problems getting >2 GByte/s on 32- or 64-bit. The trick (as the above results hint) is to use non-temporal ("streaming") stores (e.g. the _mm_stream_ps intrinsic).

It seems a bit strange that the 32-bit "dst.size()" memcpy doesn't eventually call the faster "movnt" version (if you step into memcpy, there is an incredible amount of CPUID checking and heuristic logic, e.g. comparing the number of bytes to be copied against the cache size, before it goes anywhere near your actual data), but at least I understand the observed behaviour now (and it's not SysWow64- or hardware-related).

我欲成王,谁敢阻挡
#6 · 2019-01-22 23:23

My off-the-cuff guess is that the 64-bit processes are using the processor's native 64-bit word size, which makes better use of the memory bus.

甜甜的少女心
#7 · 2019-01-22 23:24

Thanks for the positive feedback! I think I can partly explain what's going on here.

Using non-temporal stores in memcpy is definitely the fastest approach if you're only timing the memcpy call itself.

On the other hand, if you're benchmarking an application, the movdqa stores have the benefit that they leave the destination memory in cache. Or at least the part of it that fits into cache.

So if you're designing a runtime library, and you can assume that the application that called memcpy is going to use the destination buffer immediately after the call, then you'll want to provide the movdqa version. This effectively optimizes away the trip from memory back into the CPU that would follow the movntdq version, so all of the instructions following the call run faster.

But if the destination buffer is large compared to the processor's cache, that optimization doesn't work, and the movntdq version would give you faster application benchmarks.

So the ideal memcpy would have multiple versions under the hood: when the destination buffer is small compared to the processor's cache, use movdqa; when it's large, use movntdq. It sounds like this is what's happening in the 32-bit library. A sketch of that dispatch idea follows.
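
Here's a minimal sketch of such a dispatch (the smart_memcpy name and the NT_THRESHOLD constant are assumptions of mine; a real library would derive the threshold from CPUID cache information and handle alignment and tails):

    #include <emmintrin.h>
    #include <stddef.h>
    #include <string.h>

    /* Assumed tuning constant, not a documented library value. */
    #define NT_THRESHOLD ((size_t)2 * 1024 * 1024)

    void *smart_memcpy(void *dst, const void *src, size_t n)
    {
        if (n < NT_THRESHOLD) {
            /* Small copy: plain (cached) stores leave the destination
               hot in cache for whatever the caller does next. */
            return memcpy(dst, src, n);
        }
        /* Large copy: streaming stores skip the read-for-ownership and
           avoid evicting useful lines. Alignment and tail handling are
           omitted; assumes 16-byte alignment and n % 16 == 0. */
        __m128i *d = (__m128i *)dst;
        const __m128i *s = (const __m128i *)src;
        for (size_t i = 0; i < n / 16; ++i)
            _mm_stream_si128(d + i, _mm_load_si128(s + i));
        _mm_sfence();
        return dst;
    }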

Of course, none of this has anything to do with the differences between 32-bit and 64-bit.

My conjecture is that the 64-bit library just isn't as mature yet. The developers just haven't gotten around to providing both routines in that version of the library.
