I have a question regarding memory-mapped I/O. Suppose there is a memory-mapped I/O peripheral whose value is read by the CPU. Once read, the value is stored in the cache. But the value in memory is then updated by the external I/O peripheral. In such a case, how will the CPU determine that its cached copy is stale, and what is the workaround?
That's strongly platform dependent, and there are actually two different cases.
Case #1. Memory-mapped peripheral. This means that accesses to some range of physical memory addresses are routed to the peripheral device; there is no actual RAM involved. To control caching, x86, for example, has MTRRs ("memory type range registers") and the PAT ("page attribute table"). They allow setting the caching mode for a particular range of physical memory. Under normal circumstances, ranges of memory mapped to RAM are write-back cacheable, while ranges mapped to peripheral devices are uncacheable. The different caching policies are described in Intel's system programming guide, section 11.3, "Methods of caching available". So when you issue a read or write request to a memory-mapped peripheral, the CPU cache is bypassed and the request goes directly to the device.
Case #2. DMA. It allows peripheral devices to access RAM asynchronously. In this case, the DMA controller is no different from a CPU and participates equally in the cache coherency protocol. A write request from the peripheral is seen by the caches of the CPUs, and the affected cache lines are either invalidated or updated with the new data. A read request is likewise seen by the caches, and data is returned from cache rather than from main RAM. (This is only an example: the actual implementation is platform dependent. For instance, SoCs typically do not guarantee strong cache coherency between peripherals and CPUs.)
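On platforms without hardware coherency between the DMA master and the CPU caches, the operating system must manage the caches explicitly. As one illustration, a minimal sketch using the Linux kernel's streaming DMA API (kernel-only driver code, not runnable standalone; `dev`, `buf`, `LEN`, and the device kick-off/wait helpers are assumed to exist in the surrounding driver):

```c
/* Driver fragment (sketch): receive LEN bytes from a device via DMA.
 * On non-coherent platforms, dma_map_single() flushes/invalidates the
 * CPU cache lines covering the buffer as needed, and dma_unmap_single()
 * (or dma_sync_single_for_cpu()) makes the device's writes visible to
 * the CPU before software reads the data. */
dma_addr_t handle = dma_map_single(dev, buf, LEN, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, handle))
        return -ENOMEM;

start_device_dma(dev, handle, LEN);  /* hypothetical device-specific kick-off */
wait_for_dma_completion(dev);        /* hypothetical: interrupt + completion  */

dma_unmap_single(dev, handle, LEN, DMA_FROM_DEVICE);
/* Now the CPU may safely read buf[] through its cache. */
```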
In both cases, the problem of caching also exists at the compiler level: the compiler may cache data values in registers. That's why programming languages have means of prohibiting such optimizations: for example, the `volatile` keyword in C.