C volatile variables and Cache Memory

Posted 2019-01-13 01:43

Question:

The cache is controlled by the cache hardware transparently to the processor, so if we use volatile variables in a C program, how is it guaranteed that my program reads data each time from the actual memory address specified, and not from the cache?

My understanding is that,

  1. The volatile keyword tells the compiler that accesses to the variable shouldn't be optimized away and should be performed exactly as written in the code.

  2. The cache is controlled transparently by the cache hardware, so when the processor issues an address, it doesn't know whether the data is coming from the cache or from memory.

So, if I have a requirement to read a memory address every time it is needed, how can I make sure the value comes from the required address and not from the cache?

Somehow, these two concepts don't fit together well. Please clarify how this is done.

(Assume a write-back cache policy, if that is relevant for analyzing the problem.)

Thank you, Microkernel :)

Answer 1:

Firmware developer here. This is a standard problem in embedded programming, and one that trips up many (even very experienced) developers.

My assumption is that you are attempting to access a hardware register, and that register value can change over time (be it interrupt status, timer, GPIO indications, etc.).

The volatile keyword is only part of the solution, and in many cases may not be necessary. This causes the variable to be re-read from memory each time it is used (as opposed to being optimized out by the compiler or stored in a processor register across multiple uses), but whether the "memory" being read is an actual hardware register versus a cached location is unknown to your code and unaffected by the volatile keyword. If your function only reads the register once then you can probably leave off volatile, but as a general rule I will suggest that most hardware registers should be defined as volatile.
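To make that concrete, here is a minimal sketch of what such a definition typically looks like; the register address and bit mask below are made-up values for illustration, not taken from any real device:

#include <stdint.h>

/* Hypothetical status register address for illustration only --
 * a real address would come from the device's datasheet / memory map. */
#define STATUS_REG  (*(volatile uint32_t *)0x40021000u)

void wait_for_ready(void)
{
    /* volatile forces a fresh load on every iteration.  Without it the
     * compiler could legally read the register once, keep the value in a
     * CPU register, and spin forever on that stale copy. */
    while ((STATUS_REG & 0x1u) == 0) {
        /* busy-wait until the (hypothetical) READY bit is set */
    }
}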

The bigger issue is caching and cache coherency. The easiest approach here is to make sure your register is in uncached address space. That means every time you access the register you are guaranteed to read/write the actual hardware register and not cache memory. A more complex but potentially better performing approach is to use cached address space and have your code manually force cache updates for specific situations like this. For both approaches, how this is accomplished is architecture-dependent and beyond the scope of the question. It could involve MTRRs (for x86), MMU, page table modifications, etc.
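As one architecture- and OS-specific illustration of the "uncached address space" approach: on embedded Linux it is common to mmap /dev/mem with O_SYNC, which on most platforms yields an uncached (device) mapping of a physical register. This is only a sketch under that assumption, and the physical address below is made up:

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PERIPH_PHYS_ADDR 0x40021000ul   /* hypothetical register address */
#define MAP_SIZE         4096ul         /* one page */

volatile uint32_t *map_uncached_register(void)
{
    /* O_SYNC requests an uncached mapping of /dev/mem on most platforms;
     * whether it truly bypasses the cache is platform-dependent. */
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return NULL;

    void *base = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, PERIPH_PHYS_ADDR & ~(MAP_SIZE - 1));
    close(fd);                       /* the mapping remains valid after close */
    if (base == MAP_FAILED)
        return NULL;

    return (volatile uint32_t *)((uintptr_t)base +
                                 (PERIPH_PHYS_ADDR & (MAP_SIZE - 1)));
}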

Hope that helps. If I've missed something, let me know and I'll expand my answer.



Answer 2:

My suggestion is to mark the page as non-cached via the virtual memory manager.
On Windows, this is done by setting PAGE_NOCACHE when calling VirtualProtect.
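A minimal sketch of what that looks like; here the PAGE_NOCACHE attribute is applied at allocation time with VirtualAlloc (the same protection constant the answer mentions for VirtualProtect), and the stored value is just an illustration:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Commit one page of non-cached memory: PAGE_NOCACHE is combined
     * with the ordinary access protection. */
    volatile int *p = (volatile int *)VirtualAlloc(
        NULL, si.dwPageSize, MEM_COMMIT | MEM_RESERVE,
        PAGE_READWRITE | PAGE_NOCACHE);
    if (p == NULL) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    p[0] = 42;             /* the store goes to memory, not into the cache */
    printf("%d\n", p[0]);  /* the load also bypasses the cache */

    VirtualFree((LPVOID)p, 0, MEM_RELEASE);
    return 0;
}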

For a somewhat different purpose, SSE2 provides the _mm_stream_xyz intrinsics to prevent cache pollution, although I don't think they apply to your case here.
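For completeness, a small sketch of what those non-temporal stores look like (they write through write-combining buffers instead of pulling the destination lines into the cache):

#include <emmintrin.h>   /* SSE2: _mm_stream_si128, _mm_sfence */
#include <stdio.h>

int main(void)
{
    __m128i buf[4];                        /* __m128i is 16-byte aligned */
    __m128i value = _mm_set1_epi32(42);

    /* Non-temporal stores: go (more or less) straight to memory and
     * avoid polluting the cache with data we won't re-read soon. */
    for (int i = 0; i < 4; ++i)
        _mm_stream_si128(&buf[i], value);

    _mm_sfence();                          /* make the streamed stores visible */

    printf("%d\n", ((int *)buf)[0]);
    return 0;
}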

In either case, there is no portable way of doing what you want in C; you have to use OS functionality.



Answer 3:

From your question, there is a misconception on your part: the volatile keyword is not related to the cache in the way you describe.

When the keyword volatile is specified for a variable, it tells the compiler not to perform certain optimizations, because the variable can change unexpectedly from other parts of the program.

What is meant here is that the compiler must not reuse a value already loaded into a register, but must access memory again, because the value in the register is not guaranteed to be the same as the value stored in memory.
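A small illustration of that point, with a hypothetical flag that an interrupt handler or another thread might set:

#include <stdbool.h>

volatile bool data_ready;   /* hypothetical flag set by an ISR or another thread */

void wait_for_data(void)
{
    /* Because data_ready is volatile, the compiler must reload it from
     * memory on every iteration.  Without volatile it could keep the
     * first value in a register and never notice the change. */
    while (!data_ready) {
        /* wait */
    }
}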

The rest, concerning the cache memory, is not directly the programmer's concern.

The synchronization of the CPU's cache memory with RAM is an entirely different subject.



Answer 4:

Wikipedia has a pretty good article about MTRR (Memory Type Range Registers) which apply to the x86 family of CPUs.

To summarize: starting with the Pentium Pro, Intel (and AMD, which copied the feature) had these MTRR registers, which could set uncached, write-through, write-combining, write-protect, or write-back attributes on ranges of memory.

Starting with the Pentium III (though, as far as I know, only really useful on the 64-bit processors), the CPUs still honor the MTRRs, but they can be overridden by the Page Attribute Table (PAT), which lets the CPU set a memory type for each individual page of memory.

A major use of the MTRRs that I know of is graphics RAM. It is much more efficient to mark it as write-combining: this lets the CPU combine the writes in its write-combining buffers and relaxes the memory write-ordering rules, allowing very high-speed burst writes to the graphics card.

But for your purposes, you would want either an MTRR or a PAT setting of uncached or write-through.



Answer 5:

Using the _Uncached keyword may help in an embedded OS like MQX:

/* Read/write a memory-mapped location through an uncached, volatile pointer.
 * (_Uncached is a compiler/BSP-specific qualifier, e.g. in MQX toolchains.) */
#define MEM_READ(addr)        (*((volatile _Uncached unsigned int *)(addr)))
#define MEM_WRITE(addr, data) (*((volatile _Uncached unsigned int *)(addr)) = (data))
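A hypothetical use of these macros (the register address is made up for illustration):

#define GPIO_DATA_REG  0x400FF000u   /* hypothetical GPIO data register */

void toggle_pin0(void)
{
    unsigned int val = MEM_READ(GPIO_DATA_REG);   /* uncached, non-optimized read */
    MEM_WRITE(GPIO_DATA_REG, val ^ 0x1u);         /* write back with bit 0 flipped */
}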


Answer 6:

As you say, the cache is transparent to the programmer: the system guarantees that you always see the value that was last written when you access an object through its address. The "only" thing you may incur if an obsolete value is in your cache is a runtime penalty.



Answer 7:

volatile makes sure that the data is read every time it is needed, but it does not concern itself with any cache sitting between the CPU and memory. If you need to read the actual data from memory and not cached data, you have two options:

  • Make a board where the data in question is not cached. This may already be the case if you are addressing some I/O device.
  • Use specific CPU instructions that bypass the cache. This is used, for example, when you need to scrub memory to detect and correct possible SEU (single event upset) errors.

The details of the second option depend on the OS and/or CPU.
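As an x86-specific sketch of that second option (assuming SSE2 is available), you can evict the relevant cache line first so that the following load is served from memory:

#include <emmintrin.h>   /* SSE2: _mm_clflush, _mm_mfence */
#include <stdint.h>

/* Read *addr after flushing its cache line, forcing the load to go to
 * memory.  x86-specific; other CPUs have their own cache-maintenance ops. */
uint32_t read_bypassing_cache(const volatile uint32_t *addr)
{
    _mm_clflush((const void *)addr);  /* evict the containing cache line */
    _mm_mfence();                     /* order the flush before the load */
    return *addr;
}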