Preferring synchronized to volatile

Published 2019-04-20 06:16

I've read this answer, at the end of which the following is written:

Anything that you can do with volatile can be done with synchronized, but not vice versa.

It's not clear. JLS 8.3.1.4 defines volatile fields as follows:

A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable (§17.4).

So, volatile fields are about memory visibility. Also, as far as I understood from the answer I cited, reads and writes of volatile fields are synchronized.

Synchronization, in turn, guarantees that only one thread has access to a synchronized block. As I understand it, that has nothing to do with memory visibility. What did I miss?

4 answers
Summer. ? 凉城
#2 · 2019-04-20 06:24

synchronized and volatile are different, but both are usually used to solve the same kind of problem.

synchronized makes sure that only one thread accesses the shared resource at a given point in time.

Those shared resources are often also declared volatile, because if one thread changes the value, the change must become visible to the other threads as well. Without volatile, the runtime may optimize the code and let a thread read the value from its local cache. What volatile does is force every access to that field to go to main memory instead of the cache, so a thread always works with the most recent value.


I was going through the log4j code and this is what I found:

/**
 * Config should be consistent across threads.
 */
protected volatile PrivateConfig config;
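
To make the visibility point concrete, here is a minimal sketch of a volatile stop flag (the class and field names are my own, not from log4j):

public class StopFlagExample {
    // Without volatile, the worker thread could keep reading a stale
    // cached value of 'running' and never observe the update.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each volatile read fetches a fresh value
            }
            System.out.println("Worker observed running == false and stopped.");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // volatile write is guaranteed to become visible to the worker
        worker.join();
    }
}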
男人必须洒脱
#3 · 2019-04-20 06:28

In fact synchronization is also related to memory visibility, because the JVM adds a memory barrier at the exit of a synchronized block. This ensures that the results of writes made by a thread inside the synchronized block are visible to reads by other threads once the first thread has exited the block.

Note: following @PaŭloEbermann's comment, if the other threads do not themselves go through a read memory barrier (for example by entering a synchronized block), their local caches are not invalidated, and therefore they might still read an old value.

The exit of a synchronized block establishes a happens-before relationship; see this doc: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html#MemoryVisibility

Look for these extracts:

The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation.

and

An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
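
To make the quoted rule concrete, here is a minimal sketch (class and field names are my own): because both methods lock the same monitor, the unlock at the end of writer() happens-before a later lock in reader(), so the write to the non-volatile field is visible.

public class MonitorVisibilityExample {
    private final Object lock = new Object();
    private int value;           // deliberately not volatile

    public void writer() {
        synchronized (lock) {    // exit of this block = unlock: prior writes are published
            value = 42;
        }
    }

    public int reader() {
        synchronized (lock) {    // lock on the same monitor: sees writes made before the unlock
            return value;
        }
    }
}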

The star\"
4楼-- · 2019-04-20 06:33

That's wrong: synchronization does have to do with memory visibility. Every thread has its own cache. When you acquire a lock, the cache is refreshed; when you release a lock, the cache is flushed to main memory.

Reading a volatile field also triggers a refresh, and writing a volatile field triggers a flush.

走好不送
#5 · 2019-04-20 06:35

If multiple threads write to a shared volatile variable and also need to use its previous value, that can create a race condition. At that point you need to use synchronization.

... if two threads are both reading and writing to a shared variable, then using the volatile keyword for that is not enough. You need to use a synchronized in that case to guarantee that the reading and writing of the variable is atomic. Reading or writing a volatile variable does not block threads reading or writing. For this to happen you must use the synchronized keyword around critical sections.
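To illustrate the race described above, here is a minimal sketch (the class and field names are my own):

public class CounterExample {
    private volatile int volatileCount = 0;
    private int syncedCount = 0;

    // NOT safe: count++ is a read, an increment and a write; two threads
    // can read the same old value and one update is lost, even though
    // the field is volatile.
    public void unsafeIncrement() {
        volatileCount++;
    }

    // Safe: synchronized makes the read-modify-write atomic and also
    // guarantees the result is visible to the next thread that locks.
    public synchronized void safeIncrement() {
        syncedCount++;
    }
}

java.util.concurrent.atomic.AtomicInteger is another common way to make the increment atomic without a synchronized block.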

For a detailed tutorial about volatile, see 'volatile is not always enough'.
