If you have multiple assignments to shared variables inside one lock block, does that necessarily mean all of these changes are immediately visible to other threads, potentially running on other processors, once they enter a lock statement on the same object - or is there no such guarantee?
Most of the examples out there show a single "set" or "get" of a common variable and go into detail about memory barriers, but what happens if a more complicated set of statements is inside? Potentially even function calls that do other things?
Something like this:
lock (sharedObject)
{
    x = 10;
    y = 20;
    z = a + 10;
}
If the following code runs on another thread, possibly executing on another processor, are there any guarantees about the "visibility" of those changes?
lock (sharedObject)
{
    if (y == 10)
    {
        // Do something.
    }
}
If the answer is no - perhaps an explanation of when these changes might become visible?
A lock block includes a memory fence at the start and at the end of the block. This ensures that any changes to memory are visible to other cores (e.g. to threads running on other cores). In your example, the changes to x, y and z in your first lock block will be visible to any other thread. "Visible" means that any values cached in a register will be flushed to memory, and any memory cached in the CPU's cache will be flushed to physical memory.

ECMA 334 details that a lock block is a block surrounded by Monitor.Enter and Monitor.Exit. Further, ECMA 335 details that Monitor.Enter "shall implicitly perform a volatile read operation" and Monitor.Exit shall "implicitly perform a volatile write operation".

This does mean that the modifications won't be visible to other cores/threads until the end of the lock block (after the Monitor.Exit); but if all access to these variables is guarded by a lock, there can be no simultaneous access to them from different cores/threads anyway.
This effectively means that any variables guarded by a lock statement do not need to be declared as volatile in order to have their modifications visible to other threads.
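As a minimal sketch of this point (the fields and values are illustrative, not from your code): none of the fields below are declared volatile, yet once the reader thread observes y == 20 under the lock, it is guaranteed to see the writes to x and z as well, because all three writes were published by the Monitor.Exit at the end of the writer's lock block.

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object sharedObject = new object();
    static int x, y, z;   // note: not declared volatile

    static void Main()
    {
        var reader = new Thread(() =>
        {
            while (true)
            {
                lock (sharedObject)        // Monitor.Enter: acquire fence
                {
                    if (y == 20)           // once we see y's new value, we
                    {                      // must also see x and z, which were
                        Console.WriteLine($"{x} {y} {z}");   // written before Monitor.Exit
                        return;
                    }
                }
            }
        });
        reader.Start();

        lock (sharedObject)
        {
            x = 10;
            y = 20;
            z = x + 10;
        }                                  // Monitor.Exit: release fence,
                                           // all three writes published together
        reader.Join();
    }
}
```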
Since the example code only relies on a single atomic operation on shared state (the read and write of a single value, y), you could get the same results with:

    x = 10;
    y = 20;
    Thread.VolatileWrite(ref z, a + 10);

and

    if (Thread.VolatileRead(ref y) == 10)
    {
        // Do something.
    }
The first block guarantees that the write to x is visible before the write to y and the write to y is visible before the write to z. It also guarantees that if the writes to x or y were cached in the CPUs cache that that cache would be flushed to physical memory (and thus visible to any other thread) immediately after the call to VolatileWrite.
If within the if (y == 10) block you do something with x and y, you should return to using the lock keyword.

Further, the following would be identical:

    lock (sharedObject)
    {
        x = 10;
        y = 20;
        z = a + 10;
    }

and

    Monitor.Enter(sharedObject);
    try
    {
        x = 10;
        y = 20;
        z = a + 10;
    }
    finally
    {
        Monitor.Exit(sharedObject);
    }
Forgive me if I'm misunderstanding your question (very possible); but I think you're operating on a confused blend of the concepts of synchronization and visibility.
The whole point of a mutex ("mutual exclusion") is to ensure that two blocks of code will not run simultaneously. So in your example, the first block:

    lock (sharedObject)
    {
        x = 10;
        y = 20;
        z = a + 10;
    }

...and the second block:

    lock (sharedObject)
    {
        if (y == 10)
        {
            // Do something.
        }
    }

...will never execute at the same time. This is what the lock keyword guarantees for you.

Therefore, any time your code has entered the second block, the variables x, y, and z should be in a state that is consistent with a full execution of the first. (This is assuming that everywhere you access these variables, you lock on sharedObject in the same way you have in these snippets.)

What this means is that the "visibility" of intermediate changes within the first block is irrelevant from the perspective of the second, since there will never be a time when, e.g., the change to the value of x has occurred but not the changes to y or z.
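A sketch of why this matters (the invariant and loop counts here are illustrative, not from the question): the mutator briefly breaks an invariant between two shared variables inside its lock block, but because the two lock blocks are mutually exclusive, the observer can never witness that intermediate state.

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object sharedObject = new object();
    static int x = 10, y = 20;   // invariant: x + y == 30

    static void Main()
    {
        var mutator = new Thread(() =>
        {
            for (int i = 0; i < 100000; i++)
            {
                lock (sharedObject)
                {
                    x--;         // intermediate state: invariant broken
                    y++;         // invariant restored before Monitor.Exit
                }
            }
        });
        mutator.Start();

        for (int i = 0; i < 100000; i++)
        {
            lock (sharedObject)
            {
                // The blocks never run simultaneously, so the state where
                // x was decremented but y not yet incremented is unobservable.
                if (x + y != 30) Console.WriteLine("torn state!");
            }
        }

        mutator.Join();          // "torn state!" is never printed
    }
}
```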