Can a lower level cache have higher associativity and still hold inclusion?
Suppose we have a 2-level cache hierarchy (L1 being nearest to the CPU and L2 nearest to main memory).
The L1 cache is 2-way set associative with 4 sets, and let's say the L2 cache is direct mapped with 16 cache lines; assume both caches have the same block size. Then I think it can still satisfy the inclusion property, even though L1 (the lower level) has higher associativity than L2 (the upper level).
As per my understanding, a lower-level cache can have higher associativity and still hold inclusion. This would only change the number of tag bits (as seen in the physical address at each level) and the number of comparators and MUXes to be used. Please let me know if this is correct.
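To make the tag/index/offset point concrete, here is a small sketch computing the address-bit breakdown for the two caches described above. The 64-byte block size and 32-bit physical address are assumptions I've added for illustration; the question doesn't specify them.

```python
# Address-bit breakdown for the question's two caches, assuming
# (not stated in the question) 64-byte blocks and 32-bit addresses.
BLOCK = 64          # bytes per line (assumed)
ADDR = 32           # physical address width in bits (assumed)

def bits(n):        # log2 for exact powers of two
    return n.bit_length() - 1

offset = bits(BLOCK)                  # 6 offset bits

# L1: 2-way set associative, 4 sets (8 lines total)
l1_index = bits(4)                    # 2 index bits
l1_tag = ADDR - l1_index - offset     # 24 tag bits

# L2: direct mapped, 16 lines (16 sets of 1 way)
l2_index = bits(16)                   # 4 index bits
l2_tag = ADDR - l2_index - offset     # 22 tag bits

print(l1_tag, l1_index, offset)       # 24 2 6
print(l2_tag, l2_index, offset)       # 22 4 6
```

Note that the less-associative L2 actually has *fewer* tag bits here, since more of the address goes into the index.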
Yes, but conflict evictions in the outer cache may force eviction from the inner cache to maintain inclusivity.
(I think if both caches use simple indexing, you wouldn't have that with an outer cache that's larger and at least as associative, because aliasing in the outer cache would only happen for lines that also alias in the inner.)
With the outer cache being larger you don't necessarily get aliasing in it for lines that would alias in L1, so it's not useless.
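The index half of that aliasing claim is easy to check by brute force, under the assumption of simple power-of-two modulo indexing (set = block number mod set count): when the outer set count is a multiple of the inner set count, two blocks that collide in an outer set necessarily collide in an inner set too.

```python
# Brute-force check of the aliasing claim, assuming simple modulo indexing.
# Using the question's geometry: inner (L1) has 4 sets, outer (L2) has 16
# sets (direct mapped = 16 sets of 1 way).
INNER_SETS = 4
OUTER_SETS = 16

for a in range(256):
    for b in range(256):
        if a % OUTER_SETS == b % OUTER_SETS:
            # same outer set -> a - b is a multiple of 16, hence of 4
            assert a % INNER_SETS == b % INNER_SETS

print("every outer-set collision is also an inner-set collision")
```

So the evictions in the question's configuration come from the outer cache's lower associativity (1 way vs. 2), not from the index mapping itself.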
But it is unusual: usually outer caches are larger and more associative than inner caches because they don't have to be as fast, and high hit rate is more valuable.
If you were going to make the outer cache less associative than an inner cache, making it NINE (non-inclusive non-exclusive) might be a better idea. But you only asked if it was possible.
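The conflict-eviction point can be sketched with the question's exact geometry. In this toy model (addresses are block numbers, offset bits already stripped), blocks 0 and 16 map to different L1 ways of the same set, so L1 could hold both, but they collide in the direct-mapped L2, and inclusion forces a back-invalidation.

```python
# Toy sketch of inclusive-cache back-invalidation with the question's
# geometry: L1 = 2-way, 4 sets; L2 = direct mapped, 16 lines.
L1_SETS, L1_WAYS = 4, 2
L2_LINES = 16

l1 = {s: [] for s in range(L1_SETS)}   # per-set resident blocks, LRU order
l2 = {}                                # L2 line -> resident block
back_invalidations = []

def access(block):
    # L2 lookup (direct mapped): a conflict evicts the old occupant,
    # and inclusion forces it out of L1 too (back-invalidation).
    line = block % L2_LINES
    victim = l2.get(line)
    if victim is not None and victim != block:
        s = victim % L1_SETS
        if victim in l1[s]:
            l1[s].remove(victim)
            back_invalidations.append(victim)
    l2[line] = block

    # L1 fill with LRU replacement within the set
    s = block % L1_SETS
    if block in l1[s]:
        l1[s].remove(block)
    l1[s].append(block)
    if len(l1[s]) > L1_WAYS:
        l1[s].pop(0)

access(0)
access(16)   # collides with block 0 in L2 line 0; L1 set 0 had a free way
print(back_invalidations)   # [0]: evicted from L1 only to preserve inclusion
```

Block 0 is thrown out of L1 even though L1 had room for it, which is exactly the cost of inclusion when the outer cache is less associative.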
Inclusion is a property that is enforced on the contents of the included cache, and is independent of the associativity of the caches. Inclusion provides the benefit of eliminating most snoops of the included cache, which allows an implementation to get away with reduced tag bandwidth, and may also result in reduced snoop latency.
Intuitively, it makes sense that when the enclosing cache has more associativity than the included cache, the contents of the included cache should always "fit" into the enclosing cache. This "static" view is an inappropriate oversimplification. Differences in replacement policy and access patterns will almost always generate cases in which lines are chosen as victim in the enclosing cache before being chosen as victim in the included cache. The inclusion policy requires that such lines be evicted from the included cache -- independent of associativity.
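One way this divergence arises is that L1 hits are invisible to L2, so a line that is hot in L1 ages into L2's LRU victim slot. A toy sketch (my construction, not from the answer): both caches are fully associative with 2 lines and LRU, so associativity is equal, yet the hot line still gets back-invalidated.

```python
# Toy sketch of replacement-policy divergence under inclusion: L1 hits are
# invisible to L2, so L2's LRU state for a hot line goes stale. Both caches
# are fully associative, 2 lines, LRU (equal associativity on purpose).
L1_CAP = L2_CAP = 2
l1, l2 = [], []          # LRU order: index 0 = least recently used
back_invalidated = []

def access(block):
    if block in l1:      # L1 hit: refresh L1 recency only; L2 sees nothing
        l1.remove(block)
        l1.append(block)
        return
    # L1 miss: the access reaches L2
    if block in l2:
        l2.remove(block)
    l2.append(block)
    if len(l2) > L2_CAP:
        victim = l2.pop(0)            # L2 picks *its* LRU line
        if victim in l1:              # inclusion: back-invalidate from L1
            l1.remove(victim)
            back_invalidated.append(victim)
    l1.append(block)
    if len(l1) > L1_CAP:
        l1.pop(0)

for b in ['A', 'B', 'A', 'C']:        # 'A' is hot in L1, but L2 can't tell
    access(b)
print(back_invalidated)               # ['A']: L1's own LRU would have kept A
```

L1's own replacement policy would have victimized 'B', but L2's stale LRU state chose 'A', and inclusion made that choice binding on L1.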
The case that is intuitively more problematic occurs when the enclosing cache has less associativity than the included cache. In this case it is clear that associativity conflicts in the enclosing cache will force evictions from the included cache.
In either case, judging whether the additional evictions from the included cache outweigh the benefits of inclusion is multi-dimensional. The performance impact will depend on the specific sizes, associativities, and indexing of the caches, as well as on the application access patterns. The importance of the performance impact depends on the application characteristics -- e.g., tightly-coupled parallel applications typically show throughput proportional to the worst-case performance of the participating processors, while independent applications typically show throughput proportional to the average performance of the participating processors.