What guarantees that different unrelated objects in different threads cannot cause a data race?

Asked 2019-09-20 07:52

When different threads use only unrelated objects and literally do not share anything, they cannot have a race condition, right? Obviously.

Actually, all threads share something: the address space. There is no guarantee that a memory location used by one thread won't later be allocated to another thread. This can be true of memory for dynamically allocated objects or even for automatic objects: there is no prescription that the memory for the "stacks" (the local objects of functions) of multiple threads is pre-allocated (even lazily), disjoint, and laid out as the usual linear "stack"; it could be anything with stack (FILO) behavior. So the memory location used to store an automatic object in one thread can later be reused for another automatic object in another thread.

That in itself seems pretty innocuous and uninteresting, as how room is made for automatic objects only matters when room is lacking (very large automatic arrays or deep recursion).

What about synchronisation? Unrelated, disjoint threads obviously cannot use any C++ synchronisation primitive to ensure correct synchronisation, as by definition there is nothing to synchronize on, so no happens-before relation is going to be created between the threads.

What if the implementation, after the destruction of the local variables and the exit of foo() in thread 1, reuses the memory range of foo()'s stack (including the location of i) to store the variables of bar() in thread 2?

void foo() { // in thread 1
   int i;
   i = 1;
}

void bar() { // in thread 2
   int i;
   i = 2;
}
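
For concreteness, here is a minimal sketch of one way the two functions could be run in separate threads (the std::thread setup is only illustrative; nothing in the question depends on how the threads are created). Even with this setup, each thread only ever touches its own i.

#include <thread>

void foo() { int i; i = 1; } // as above, thread 1
void bar() { int i; i = 2; } // as above, thread 2

int main() {
   std::thread t1(foo);
   std::thread t2(bar); // no synchronization of any kind between t1 and t2
   t1.join();
   t2.join();
}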

There is no happens-before relation between i = 1 and i = 2.

Would that cause a data race and undefined behavior?

In other words, do all multithreaded programs have the potential for undefined behavior, based on implementation choices the user has no control over and cannot foresee, with races he can do nothing about?

2 Answers
趁早两清 · 2019-09-20 08:13

The C++ memory model doesn't behave like you might intuitively expect. For example, it has memory locations, but quoting the N4713 draft section 6.6.1, paragraph 3:

A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having nonzero width. [ Note: Various features of the language, such as references and virtual functions, might involve additional memory locations that are not accessible to programs but are managed by the implementation. — end note ] Two or more threads of execution (6.8.2) can access separate memory locations without interfering with each other.

So by the C++ memory model, two distinct objects in different threads are never considered to have the same memory location, even if at the physical machine level, one is allocated in the same RAM after the other is deallocated.

By the C++ memory model, the situation you ask about is not a data race. The implementation must take whatever steps are necessary to ensure this is safe, regardless of the hardware's memory model.
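
For contrast, a minimal sketch of what would be a data race under this definition: two threads making conflicting, unsynchronized accesses to the same object, i.e. the same memory location (the names here are purely illustrative).

#include <thread>

int shared_value = 0; // a single object, hence a single memory location

void writer_a() { shared_value = 1; } // conflicting write, not synchronized
void writer_b() { shared_value = 2; } // conflicting write, not synchronized

int main() {
   std::thread t1(writer_a);
   std::thread t2(writer_b); // no happens-before between the writes: data race, undefined behavior
   t1.join();
   t2.join();
}

In the question's code there is no such shared object; each thread's i is a distinct object, so no two conflicting accesses to one memory location ever exist in the abstract machine.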

可以哭但决不认输i · 2019-09-20 08:30

The physical machine's "same address" is irrelevant to the C++ memory model. The C++ memory model talks about the behaviour of the abstract machine. Addresses in the abstract machine can be incomparable in a fundamental way, even if they correspond to the same machine address at different times.

Race conditions in the C++ abstract machine are about operations in the abstract machine, not on the physical machine. It is the compiler's job to ensure that the physical-machine implementation of the abstract-machine behaviour of the C++ code is conformant.

If it does strange things like reusing stack address space between threads, then it must do whatever is necessary to preserve the absence of race conditions that accesses to unrelated variables have in the abstract machine. None of this happens at the C++ code level; there is no C++ code (other than possibly in namespace std) involved.
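
As a rough analogy (only a sketch, not something the implementation literally does in C++ code): if your own code deliberately reused the same storage from two threads, it would have to establish a happens-before edge itself, e.g. via std::thread::join; the implementation has to guarantee an equivalent ordering internally whenever it reuses stack addresses between threads.

#include <new>
#include <thread>

alignas(int) unsigned char storage[sizeof(int)]; // raw storage deliberately reused by both threads

void first_user()  { new (storage) int(1); } // thread 1 constructs an int in the storage
void second_user() { new (storage) int(2); } // thread 2 reuses the same storage

int main() {
   std::thread t1(first_user);
   t1.join();                   // join() creates the happens-before edge that makes the reuse safe
   std::thread t2(second_user); // started only after t1 has completed
   t2.join();
}

Without the join() ordering, the two placement-new writes to storage would themselves constitute a data race, which is exactly the situation the implementation must avoid when it recycles stack memory.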
