When executing a series of `_mm_stream_load_si128()` calls (`MOVNTDQA`) from consecutive memory locations, will the hardware prefetcher still kick in, or should I use explicit software prefetching (with the NTA hint) in order to obtain the benefits of prefetching while still avoiding cache pollution?
I ask because their objectives seem contradictory to me: a streaming load fetches data bypassing the cache, while the prefetcher attempts to proactively fetch data into the cache.
When sequentially iterating over a large data structure (the processed data won't be retouched for a long while), it would make sense to me to avoid polluting the cache hierarchy, but I do not want to incur frequent ~100-cycle penalties because the prefetcher is idle.
The target architecture is Intel Sandy Bridge.
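For concreteness, here is a minimal sketch of the kind of loop in question (the function name, processing step, and prefetch distance are all illustrative, not recommendations):

```c
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical sketch: stream over a large 16-byte-aligned buffer,
 * software-prefetching ahead with the NTA hint. PF_DIST is a tuning
 * parameter, not a recommendation. Compile with SSE4.1 enabled. */
void consume_stream(const __m128i *buf, size_t n_vecs)
{
    enum { PF_DIST = 16 };                /* ~4 cache lines ahead */
    __m128i acc = _mm_setzero_si128();

    for (size_t i = 0; i < n_vecs; i++) {
        if (i + PF_DIST < n_vecs)
            _mm_prefetch((const char *)&buf[i + PF_DIST], _MM_HINT_NTA);
        /* NT load; on WB memory this acts like a normal load plus a hint */
        __m128i v = _mm_stream_load_si128((__m128i *)&buf[i]);
        acc = _mm_add_epi32(acc, v);      /* stand-in for real processing */
    }
    (void)acc;
}
```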
I recently ran some tests of the various `prefetch` flavors while answering another question, and my findings were:

The results from using `prefetchnta` were consistent with the following implementation on Skylake client: `prefetchnta` loads values into the L1 and L3 but not the L2 (in fact, it seems the line may be evicted from the L2 if it is already there). `prefetchnta`, like all other prefetch instructions, uses an LFB entry, so prefetching doesn't really help you get additional parallelism; but the NTA hint can be useful here to avoid L2 and L3 pollution.

The current optimization manual (248966-038) claims in a few places that `prefetchnta` does bring data into the L2, but only into one way out of the set (e.g., in 7.6.2.1 Video Encoder). This isn't consistent with my test results on Skylake, where striding over a 64 KiB region with `prefetchnta` shows performance almost exactly consistent with fetching data from the L3 (~4 cycles per load, with an MLP factor of 10 and an L3 latency of about 40 cycles). Since the L2 in Skylake is 4-way, if the data were loaded into one way it should just barely stay in the L2 cache (one way of which covers 64 KiB), but the results indicate that it doesn't.
You can run these tests on your own hardware on Linux using my uarch-bench program. Results for old systems would be particularly interesting.
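Not the actual uarch-bench code, but a rough sketch of the strided test described above, assuming a 64 KiB region and 64-byte lines:

```c
#include <immintrin.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc (GCC/Clang) */

/* Rough sketch, not the actual uarch-bench code: prefetchnta every line
 * of a 64 KiB region, then time the demand loads that follow. Cycles per
 * load hints at which level the lines landed (the ~4 cycles/load above
 * matched L3 on Skylake client). */
uint64_t probe_64k(const char *buf)
{
    const size_t region = 64 * 1024, line = 64;

    /* touch every line of the region with an NTA prefetch */
    for (size_t off = 0; off < region; off += line)
        _mm_prefetch(buf + off, _MM_HINT_NTA);

    /* crude settling delay: prefetches are hints and are not ordered by
     * fences, so a real benchmark needs more care here */
    for (volatile int spin = 0; spin < 100000; spin++) { }

    unsigned sum = 0;
    uint64_t t0 = __rdtsc();
    for (size_t off = 0; off < region; off += line)
        sum += ((const volatile char *)buf)[off];
    uint64_t t1 = __rdtsc();
    (void)sum;               /* volatile reads can't be elided anyway */
    return t1 - t0;
}
```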
Skylake Server (SKLX)
The reported behavior of `prefetchnta` on Skylake Server, which has a different L3 cache architecture, is significantly different from Skylake client. In particular, user Mysticial reports that lines fetched using `prefetchnta` are not available in any cache level and must be re-read from DRAM once they are evicted from L1.

The most likely explanation is that they never entered the L3 at all as a result of the `prefetchnta` - this is plausible since in Skylake Server the L3 is a non-inclusive shared victim cache for the private L2 caches, so lines that bypass the L2 cache using `prefetchnta` are likely never to get a chance to enter the L3. This makes `prefetchnta` purer in function (fewer cache levels are polluted by `prefetchnta` requests) but also more brittle: any failure to read an `nta` line from L1 before it is evicted means another full round trip to memory; the initial request triggered by the `prefetchnta` is totally wasted.

According to Patrick Fay (Intel)'s Nov 2011 post: "On recent Intel processors, prefetchnta brings a line from memory into the L1 data cache (and not into the other cache levels)." He also says you need to make sure you don't prefetch too late (HW prefetch will already have pulled the line into all levels) or too early (evicted by the time you get there).
As discussed in comments on the OP, current Intel CPUs have a large shared L3 which is inclusive of all the per-core caches. This means cache-coherency traffic only has to check L3 tags to see if a cache line might be modified somewhere in a per-core L1/L2.
IDK how to reconcile Pat Fay's explanation with my understanding of cache coherency / the cache hierarchy. I thought if a line goes into L1, it would also have to go into L3. Maybe the L1 tags have some kind of flag to say this line is weakly-ordered? My best guess is he was simplifying, and saying L1 when it actually only goes into fill buffers.
This Intel guide about working with video RAM talks about non-temporal moves using load/store buffers, rather than cache lines. (Note that this may only be the case for uncacheable memory.) It doesn't mention prefetch, and it's also old, predating Sandy Bridge, but it does say that typical CPUs have 8 to 10 fill buffers; SnB/Haswell still have 10 per core. Again, note that this may only apply to uncacheable memory regions.
`movntdqa` on WB (write-back) memory is not weakly-ordered (see the NT loads section of the linked answer), so it's not allowed to be "stale". Unlike NT stores, neither `movntdqa` nor `prefetchnta` changes the memory-ordering semantics of Write-Back memory.

I have not tested this guess, but `prefetchnta`/`movntdqa` on a modern Intel CPU could load a cache line into L3 and L1, but could skip L2 (because L2 isn't inclusive or exclusive of L1). The NT hint could have an effect by placing the cache line in the LRU position of its set, where it's the next line to be evicted. (Normal cache policy inserts new lines at the MRU position, farthest from being evicted. See this article about IvB's adaptive L3 policy for more about cache insertion policy.)

Prefetch throughput on IvyBridge is only one per 43 cycles, so be careful not to prefetch too much if you don't want prefetches to slow down your code on IvB. Source: Agner Fog's insn tables and microarch guide. This is a performance bug specific to IvB. On other designs, too much prefetch will just take up uop throughput that could have gone to useful instructions (besides the harm of prefetching useless addresses).
About SW prefetching in general (not the `nt` kind): Linus Torvalds posted about how prefetches rarely help in the Linux kernel, and often do more harm than good. Apparently prefetching a NULL pointer at the end of a linked list can cause a slowdown, because it attempts a TLB fill.
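A sketch of that problematic pattern (hypothetical node type), where the prefetch at the tail of the list targets NULL:

```c
#include <immintrin.h>

struct node { struct node *next; int payload; };  /* hypothetical */

/* Illustration of the pattern Linus criticized: prefetching node->next
 * before checking it. At the end of the list this prefetches address 0,
 * which misses the TLB, so the core may burn a page walk on it. */
int sum_list(const struct node *n)
{
    int sum = 0;
    while (n) {
        _mm_prefetch((const char *)n->next, _MM_HINT_T0); /* may be NULL! */
        sum += n->payload;
        n = n->next;
    }
    return sum;
}
```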
This question got me to do some reading... Looking at the Intel manual for MOVNTDQA (using a Sep '14 edition), there's an interesting statement, and later on a second one.
So there appears to be no guarantee that the non-temporal hint will do anything unless your memory type is WC. I don't really know what the WB memory-type comment means; maybe some Intel processors do allow you to use it for the benefit of reducing cache pollution, or maybe they wanted to keep this option open for the future (so you don't start using MOVNTDQA on WB memory and assume it will always behave the same), but it's quite clear that WC memory is the real use case here. You want this instruction to provide some short-term buffering for stuff that would otherwise be completely uncacheable.
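For illustration, here is roughly what that WC use case looks like - a sketch assuming a 64-byte-aligned, WC-mapped source set up elsewhere by the OS/driver (the function and parameter names are made up):

```c
#include <immintrin.h>
#include <stddef.h>

/* Sketch of the WC use case: copy from a WC-mapped region (e.g. video
 * RAM, mapping set up elsewhere) through registers into ordinary WB
 * memory. Assumes 64-byte-aligned pointers and a size that is a
 * multiple of 64. */
void copy_from_wc(void *dst, const void *wc_src, size_t bytes)
{
    __m128i       *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)wc_src;
    for (size_t i = 0; i < bytes / 16; i += 4) {
        /* four streaming loads cover one 64 B line / streaming buffer */
        __m128i a = _mm_stream_load_si128((__m128i *)&s[i + 0]);
        __m128i b = _mm_stream_load_si128((__m128i *)&s[i + 1]);
        __m128i c = _mm_stream_load_si128((__m128i *)&s[i + 2]);
        __m128i e = _mm_stream_load_si128((__m128i *)&s[i + 3]);
        _mm_store_si128(&d[i + 0], a);
        _mm_store_si128(&d[i + 1], b);
        _mm_store_si128(&d[i + 2], c);
        _mm_store_si128(&d[i + 3], e);
    }
}
```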
Now, on the other hand, the description for prefetch* makes it clear that these instructions are only hints, whose implementation is processor-dependent.
So that pretty much closes the story: your thinking is absolutely correct. These two are probably not meant to work together, and chances are that one of them will be ignored.
OK, but is there a chance these two would actually work together (if the processor implements NT loads for WB memory)? Well, reading the MOVNTDQA description again, something else catches the eye.
Ouch. So if you somehow do manage to prefetch into your cache, you're actually likely to degrade the performance of any consecutive streaming load, since it would have to flush the line out first. Not a pretty thought.
Both `MOVNTDQA` (on WC memory) and `PREFETCHNTA` do not affect or trigger any of the cache hardware prefetchers. The whole idea of the non-temporal hint is to completely avoid cache pollution, or at least to minimize it as much as possible.

There is only a very small (undocumented) number of buffers called streaming load buffers (these are separate from the line fill buffers and from the L1 cache) to hold cache lines fetched using `MOVNTDQA`, so basically you need to use what you fetch almost immediately. In addition, `MOVNTDQA` only works on WC memory.

The `PREFETCHNTA` instruction is perfect for your scenario, but you have to figure out how to use it properly in your code. From the Intel optimization manual, Section 7.1, the `PREFETCHNTA` instruction offers the following benefits:

- When lines are prefetched using `PREFETCHNTA`, later prefetched cache lines can be placed in the same block as those lines that were also prefetched using `PREFETCHNTA`. So even if the total amount of data being fetched is massive, only one way of the whole cache will be affected. The data that resides in the other ways will remain cached and will be available after the algorithm terminates. But this is a double-edged sword: if two `PREFETCHNTA` instructions are too close to each other and the specified addresses map to the same cache set, then only one will survive.
- Cache lines prefetched using `PREFETCHNTA` are kept coherent, like any other cached lines, using the same hardware coherence mechanism.

The thread that executes `PREFETCHNTA` may not be able to effectively benefit from it, depending on the behavior of any other running threads on the same physical core, on other physical cores of the same processor, or on cores of other processors that share the same coherence domain. Techniques such as pinning, priority boosting, CAT-based cache partitioning, and disabling hyperthreading may help that thread run efficiently. Note also that `PREFETCHNTA` is classified as a speculative load, and so it is concurrent with the three fence instructions.
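To make the set-aliasing caveat from the list above concrete, here is a toy calculation. The geometry is purely illustrative (a Skylake-like L1d: 32 KiB, 8-way, 64-byte lines, hence 64 sets), not a probe of real hardware:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative geometry only: 64 B lines, 64 sets (32 KiB, 8-way L1d).
 * Two addresses land in the same set when bits 6..11 match. */
#define LINE 64u
#define SETS 64u

static unsigned set_index(uintptr_t addr) { return (addr / LINE) % SETS; }

int main(void)
{
    uintptr_t a = 0x100040;
    uintptr_t b = a + LINE * SETS;   /* 4 KiB apart -> same set */
    /* If NTA lines are restricted to one way, prefetches to a and b
     * would evict each other; only one survives. */
    printf("set(a)=%u set(b)=%u\n", set_index(a), set_index(b));
    return 0;
}
```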