I understand the part of the paper where they trick the CPU to speculatively load the part of the victim memory into the CPU cache. Part I do not understand is how they retrieve it from cache.
Basically, the secret retrieved speculatively is immediately used as an index to read from another array, called `side_effects`. All we need is to "touch" an index in the `side_effects` array, so the corresponding element gets loaded from memory into the CPU cache. Then the latency to access each element of the `side_effects` array is measured and compared to the memory access time. If the latency is lower than the minimum memory access time, the element is in the cache, so the secret was the current index. If the latency is high, the element is not in the cache, so we continue our measurements.
So, basically we do not retrieve any information directly, rather we touch some memory during the speculative execution and then observe the side effects.
Here is a Spectre-based Meltdown proof of concept in 99 lines of code, which you might find easier to understand than the other PoCs: https://github.com/berestovskyy/spectre-meltdown
In general, this technique is called a side-channel attack, and more information can be found on Wikipedia: https://en.wikipedia.org/wiki/Side-channel_attack
They don't retrieve it directly: out-of-bounds reads are never "retired" by the CPU, so the read bytes themselves are never visible to the attacker.
One vector of attack is to do the "retrieval" one bit at a time. First the CPU cache is prepared (flushed where it has to be), and the branch predictor is "taught" that an `if` branch is taken while its condition relies on non-cached data. The CPU then speculatively executes the few lines inside the `if` scope, including an out-of-bounds access that yields a byte B (which the attacker will never see directly), immediately followed by an access to some authorized, non-cached array at an index that depends on one bit of B. Finally, the attacker reads that same authorized array at the index corresponding to the bit being zero: if the read is fast, the data was still in the cache, meaning that bit of B is zero; if the read is (relatively) slow, the CPU had to load the data into its cache, meaning it wasn't there earlier, meaning that bit of B is one.
For instance, suppose `Cond` and all of `ValidArray` are not cached, and `LargeEnough` is big enough to ensure the CPU will not load both `ValidArray[valid_index + 0]` and `ValidArray[valid_index + LargeEnough]` into its cache in one shot. The mask `bit` is tried successively: first `0x01`, then `0x02`, ... up to `0x80`. By measuring the "time" (number of CPU cycles) the probing access takes for each mask, the value of B is revealed:
- if `ValidArray[valid_index + 0]` is in the cache, `B & bit` is `0`;
- otherwise, `B & bit` is `bit`.
This takes time: each bit requires preparing the CPU L1 cache, trying the same bit several times to minimize timing errors, and so on.
Then the correct attack "offset" has to be determined to read an interesting area.
Clever attack, but not so easy to implement.