Is prefetching triggered by the stream of exact addresses or cache-line addresses?

Posted 2019-04-07 05:21

Question:

On modern x86 CPUs, hardware prefetching is an important technique to bring cache lines into various levels of the cache hierarchy before they are explicitly requested by the user code.

The basic idea is that when the processor detects a series of accesses to sequential or strided-sequential[1] locations, it will go ahead and fetch further memory locations in the sequence, even before executing the instructions that (may) actually access those locations.

My question is whether the detection of a prefetch sequence is based on the full addresses (the actual addresses requested by user code) or on the cache-line addresses, which are essentially the full addresses with the bottom 6 bits[2] stripped off.

For example, on a system with a 64-byte cache line, accesses to full addresses 1, 2, 3, 65, 150 would access cache lines 0, 0, 0, 1, 2.
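For concreteness, here is a minimal sketch of that mapping (assuming 64-byte lines, so the line index is just the byte address shifted right by 6):

    #include <stdio.h>
    #include <stdint.h>

    /* Cache-line index = byte address with the low 6 bits stripped
       (64-byte lines assumed, as in the question). */
    static inline uint64_t line_of(uint64_t addr) { return addr >> 6; }

    int main(void) {
        const uint64_t addrs[] = {1, 2, 3, 65, 150};
        for (int i = 0; i < 5; i++)
            printf("address %3llu -> cache line %llu\n",
                   (unsigned long long)addrs[i],
                   (unsigned long long)line_of(addrs[i]));
        return 0;   /* prints lines 0, 0, 0, 1, 2 */
    }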

The difference could be relevant when a series of accesses is more regular at the cache-line level than at the full-address level. For example, a series of full addresses like:

32, 24, 8, 0, 64 + 32, 64 + 24, 64 + 8, 64 + 0, ..., N*64 + 32, N*64 + 24, N*64 + 8, N*64 + 0

might not look like a strided sequence at the full-address level (indeed, it might incorrectly trigger backwards prefetching, since each subsequence of 4 accesses looks like a short decreasing sequence), but at the cache-line level it looks like it's going forwards one cache line at a time (just like the simple sequence 0, 8, 16, 24, ...).
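A small sketch of what such an access stream could look like in code (the buffer size and number of iterations are arbitrary illustrative choices); the printed line indices advance by one per group of four even though the full addresses move backwards within each group:

    #include <stdio.h>
    #include <stdint.h>

    #define LINES 4   /* how many 64-byte lines to walk; arbitrary */

    int main(void) {
        static volatile uint8_t buf[(LINES + 1) * 64];
        static const int offs[4] = {32, 24, 8, 0};   /* within-line offsets, visited in this order */

        for (int line = 0; line <= LINES; line++) {
            for (int i = 0; i < 4; i++) {
                uint64_t full = (uint64_t)line * 64 + offs[i];
                (void)buf[full];   /* the access itself */
                printf("full address %4llu -> cache line %llu\n",
                       (unsigned long long)full, (unsigned long long)(full >> 6));
            }
        }
        return 0;
    }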

Which of these, if either, is used on modern hardware?


Note: One could also imagine that the detection wouldn't be based on every access, but only on accesses that miss in whatever cache level the prefetcher is observing; but then the same question still applies to that filtered stream of "miss accesses".


[1] Strided-sequential just means a series of accesses with the same stride (delta) between them, even if that delta isn't 1. For example, a series of accesses to locations 100, 200, 300, ... could be detected as strided access with a stride of 100, and in principle the CPU would fetch based on this pattern (which means some cache lines might be "skippedped" in the prefetch pattern).
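To make the "skipped lines" point concrete (again assuming 64-byte lines): with a stride of 100 bytes, the demand accesses touch lines 1, 3, 4, 6, 7, ..., so lines such as 2 and 5 are never touched by the accesses themselves:

    #include <stdio.h>

    int main(void) {
        /* Stride-100 accesses and the 64-byte lines they fall into. */
        for (int addr = 100; addr <= 800; addr += 100)
            printf("address %3d -> cache line %d\n", addr, addr / 64);
        return 0;
    }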

[2] Here assuming a 64-byte cache line.

Answer 1:

The cache line offsets can be useful, but they can also be misleading, as your example shows. I will discuss how line offsets affect the data prefetchers on modern Intel processors, based on my experiments on Haswell.

The method I followed is simple. First, I disable all the data prefetchers except the one I want to test. Second, I design a sequence of accesses that exhibits a particular pattern of interest; the target prefetcher will observe this sequence and learn from it. Then I access a particular line and determine whether the prefetcher has already brought it in by accurately measuring the load latency. The measurement loop doesn't contain any other loads; it does contain one store, though, used to record the latency measurement in a buffer.
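A minimal sketch of the kind of timed probe described above (the function name and structure are mine, not the answerer's code): it brackets a single load with rdtscp and fences, and the caller decides, by comparing the returned cycle count against a calibrated threshold, whether the probed line was already in the cache:

    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtscp, _mm_lfence */

    /* Time one load from p, in TSC cycles.  A low result (roughly an L1/L2
       hit) suggests the line was prefetched; a high result suggests it had
       to come from farther away.  The hit/miss threshold is machine-specific
       and must be calibrated separately. */
    uint64_t time_load(const volatile void *p)
    {
        unsigned aux;
        _mm_lfence();                     /* keep earlier work out of the timed region */
        uint64_t t0 = __rdtscp(&aux);
        *(const volatile char *)p;        /* the probed load */
        uint64_t t1 = __rdtscp(&aux);     /* rdtscp waits for the load above to complete */
        _mm_lfence();
        return t1 - t0;
    }

The single store the answer mentions would simply write this returned cycle count into a results buffer, keeping the timed region free of any other loads.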

There are four hardware data prefetchers. The behavior of the DCU prefetcher and the L2 adjacent line prefetcher is not affected by the pattern of line offsets, only by the pattern of 64-byte-aligned addresses.
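Since the first step of the method is to disable all but one of these four prefetchers, a note for reference: on these Intel cores that is commonly done through the low four bits of MSR 0x1A4 (a bit set to 1 disables the corresponding prefetcher). The sketch below uses the Linux msr driver; the bit assignments reflect Intel's published description as I understand it, and the program needs the msr module loaded and root privileges.

    /* Sketch: read-modify-write MSR 0x1A4 on CPU 0 via /dev/cpu/0/msr.
       Documented bits (Haswell-class cores):
         bit 0: L2 hardware (streamer) prefetcher
         bit 1: L2 adjacent cache line prefetcher
         bit 2: DCU (next-line) prefetcher
         bit 3: DCU IP prefetcher
       Setting a bit disables that prefetcher. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/cpu/0/msr", O_RDWR);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t val;
        if (pread(fd, &val, sizeof val, 0x1a4) != sizeof val) { perror("rdmsr"); return 1; }

        val |= 0x7;   /* example: disable everything except the DCU IP prefetcher */
        if (pwrite(fd, &val, sizeof val, 0x1a4) != sizeof val) { perror("wrmsr"); return 1; }

        close(fd);
        return 0;
    }

In practice one would either write the MSR on every core or pin the measurement thread to the core whose MSR was changed.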

My experiments don't show any evidence that the L2 streaming prefetcher even receives the cache line offset. It seems that it only gets the line-aligned address. For example, when the same line is accessed multiple times, the offset pattern by itself does not seem to have any impact on the prefetcher's behavior.

The DCU IP prefetcher shows interesting behavior. I've tested two cases (a sketch of the corresponding experiment follows the list):

  • If a load has decreasing offsets, the prefetcher will prefetch one or more lines in both the forward and backward directions.
  • If a load has increasing offsets, the prefetcher will prefetch one or more lines, but only in the forward direction.
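A hedged sketch of an experiment along these lines, reusing time_load() from the earlier sketch: train with a single load whose within-line offsets either increase or decrease (mirroring the question's offset pattern), then probe the line just past and the line just before the trained region. The buffer size, offsets, and number of trained lines are illustrative choices, not the answerer's exact parameters.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    uint64_t time_load(const volatile void *p);   /* from the earlier sketch */

    #define TRAINED_LINES 32

    static void experiment(int increasing)
    {
        static const int up[4]   = {0, 8, 24, 32};
        static const int down[4] = {32, 24, 8, 0};
        const int *offs = increasing ? up : down;

        /* One page: line 0 and line TRAINED_LINES+1 act as backward/forward probes. */
        volatile uint8_t *buf = (volatile uint8_t *)aligned_alloc(4096, 4096);

        for (int line = 1; line <= TRAINED_LINES; line++)
            for (int i = 0; i < 4; i++)
                (void)buf[line * 64 + offs[i]];            /* training accesses */

        uint64_t fwd  = time_load(&buf[(TRAINED_LINES + 1) * 64]);  /* line after the region  */
        uint64_t back = time_load(&buf[0]);                         /* line before the region */
        printf("%s offsets: forward probe %llu cycles, backward probe %llu cycles\n",
               increasing ? "increasing" : "decreasing",
               (unsigned long long)fwd, (unsigned long long)back);
        free((void *)buf);
    }

    int main(void)
    {
        experiment(0);   /* decreasing within-line offsets */
        experiment(1);   /* increasing within-line offsets */
        return 0;
    }

For a cleaner measurement one would also flush the probe lines (e.g. with _mm_clflush) before training and repeat the experiment many times.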