If I have an unsynchronized java collection in a multithreaded environment, and I don't want to force readers of the collection to synchronize[1], is a solution where I synchronize the writers and use the atomicity of reference assignment feasible? Something like:
private Collection global = new HashSet(); // start threading after this
void allUpdatesGoThroughHere(Object exampleOperand) {
    // My hypothesis is that this prevents operations in the block being re-ordered
    synchronized(global) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        // Given my hypothesis, we should have a fully constructed object here. So a
        // reader will either get the old or the new Collection, but never an
        // inconsistent one.
        global = copy;
    }
}
// Do multithreaded reads here. All reads are done through a reference copy like:
// Collection copy = global;
// for (Object elm: copy) {...
// so the global reference being updated half way through should have no impact
Rolling your own solution seems to often fail in these types of situations, so I'd be interested in knowing of other patterns, collections or libraries I could use to avoid object creation and blocking for my data consumers.
[1] The reasons being a large proportion of time spent in reads compared to writes, combined with the risk of introducing deadlocks.
Edit: A lot of good information in several of the answers and comments, some important points:
- A bug was present in the code I posted. Synchronizing on global (a badly named variable) can fail to protect the synchronized block after a swap.
- You could fix this by synchronizing on something that is never reassigned -- for example by moving the synchronized keyword onto the method -- but there may be other bugs. A safer and more maintainable solution is to use something from java.util.concurrent.
- There is no "eventual consistency guarantee" in the code I posted; one way to make sure that readers do get to see the writers' updates is to use the volatile keyword.
- On reflection, the general problem that motivated this question was trying to implement lock-free reads with locked writes in Java; however, my (solved) problem was with a collection, which may be unnecessarily confusing for future readers. So in case it is not obvious: the code I posted works by allowing one writer at a time to perform edits to "some object" that is being read unprotected by multiple reader threads. Commits of the edit are done through an atomic operation, so readers can only get the pre-edit or post-edit "object". When/if the reader thread gets the update, it cannot occur in the middle of a read, as the read is occurring on the old copy of the "object". A simple solution that had probably been discovered and proven broken in some way prior to the availability of better concurrency support in Java.
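In case it helps future readers, here is a sketch of the posted pattern with those fixes applied. The dedicated lock object, the generic element type, and the method/class names are additions for illustration, not part of the original code:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class CopyOnWriteSetHolder {
    private final Object lock = new Object(); // dedicated lock: never reassigned, unlike global
    private volatile Set<Object> global = Collections.unmodifiableSet(new HashSet<>());

    void add(Object o) {
        synchronized (lock) {                 // one writer at a time
            Set<Object> copy = new HashSet<>(global);
            copy.add(o);
            global = Collections.unmodifiableSet(copy); // atomic publish of a finished copy
        }
    }

    void remove(Object o) {
        synchronized (lock) {
            Set<Object> copy = new HashSet<>(global);
            copy.remove(o);
            global = Collections.unmodifiableSet(copy);
        }
    }

    Set<Object> snapshot() {                  // readers: no lock, just read the reference once
        return global;
    }
}
```

Readers iterate over `snapshot()`; the reference they grabbed never changes underneath them, and `volatile` gives the happens-before edge so they eventually see new copies.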
I think your original idea was sound, and DaoWen did a good job getting the bugs out. Unless you can find something that does everything for you, it's better to understand these things than hope some magical class will do it for you. Magical classes can make your life easier and reduce the number of mistakes, but you do want to understand what they are doing.
ConcurrentSkipListSet might do a better job for you here. It could get rid of all your multithreading problems.
However, it is slower than a HashSet (usually -- HashSets and SkipLists/Trees are hard to compare). If you are doing a lot of reads for every write, what you've got will be faster. More importantly, if you update more than one entry at a time, your reads could see inconsistent results. If you expect that whenever there is an entry A there is an entry B, and vice versa, the skip list could give you one without the other.
With your current solution, to the readers, the contents of the map are always internally consistent. A read can be sure there's an A for every B. It can be sure that the `size()` method gives the precise number of elements that will be returned by the iterator. Two iterations will return the same elements in the same order. In other words, `allUpdatesGoThroughHere` and `ConcurrentSkipListSet` are two good solutions to two different problems.
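For comparison, a minimal sketch of the `ConcurrentSkipListSet` alternative mentioned above (the element type is chosen for illustration); the trade-off is that its iterators are only weakly consistent, so a reader may observe some but not all of a multi-element update:

```java
import java.util.concurrent.ConcurrentSkipListSet;

class SkipListExample {
    public static void main(String[] args) {
        // Elements must be Comparable (or you supply a Comparator): the set is sorted.
        ConcurrentSkipListSet<String> set = new ConcurrentSkipListSet<>();
        set.add("B");
        set.add("A");
        set.remove("B");
        // Iteration never throws ConcurrentModificationException, but a concurrent
        // multi-element update may be seen only partially (weakly consistent).
        for (String s : set) {
            System.out.println(s); // prints "A"
        }
    }
}
```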
According to the relevant Java Tutorial, reads and writes are atomic for reference variables (and for most primitive variables -- all types except long and double). This is reaffirmed by §17.7 of the Java Language Specification, which guarantees that writes to and reads of references are always atomic.
It appears that you can indeed rely on reference access being atomic; however, recognize that this does not ensure that all readers will read an updated value for `global` after this write -- i.e. there is no memory ordering guarantee here. If you use an implicit lock via `synchronized` on all access to `global`, then you can force some memory consistency here... but it might be better to use an alternative approach.

You also appear to want the collection in `global` to remain immutable... luckily, there is `Collections.unmodifiableSet` which you can use to enforce this. As an example, you should likely do something like the following... that, or use an `AtomicReference`. You would then use `Collections.unmodifiableSet` for your modified copies as well.

You should know that making a copy here is redundant, as internally `for (Object elm : global)` creates an `Iterator` over whatever collection `global` referenced at that moment. There is therefore no chance of switching to an entirely different value for `global` in the midst of reading.

All that aside, I agree with the sentiment expressed by DaoWen... is there any reason you're rolling your own data structure here when there may be an alternative available in `java.util.concurrent`? I figured maybe you're dealing with an older Java, since you use raw types, but it won't hurt to ask.

You can find copy-on-write collection semantics provided by `CopyOnWriteArrayList`, or its cousin `CopyOnWriteArraySet` (which implements a `Set` using the former).

Also suggested by DaoWen, have you considered using a `ConcurrentHashMap`? They guarantee that using a `for` loop as you've done in your example will be consistent. Internally, an `Iterator` is used for the enhanced `for` over an `Iterable`. You can craft a `Set` from this by utilizing `Collections.newSetFromMap`.

Replace the `synchronized` by making `global` `volatile` and you'll be alright as far as the copy-on-write goes. Although the assignment is atomic, in other threads it is not ordered with the writes to the object referenced. There needs to be a happens-before relationship, which you get with `volatile` or by synchronising both reads and writes.

The problem of multiple updates happening at once is separate -- use a single thread or whatever you want to do there.
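The code samples from the answer above did not survive; the following is a hedged reconstruction of what it likely showed -- publishing unmodifiable copies through a `volatile` field or an `AtomicReference`, building a concurrent `Set` via `Collections.newSetFromMap`, and the enhanced-for desugaring. All class, field, and method names here are illustrative:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

class PublishExamples {
    // Variant 1: a volatile reference to an unmodifiable snapshot.
    private volatile Set<Object> global =
            Collections.unmodifiableSet(new HashSet<Object>());

    // Variant 2: the same idea via AtomicReference instead of volatile.
    private final AtomicReference<Set<Object>> globalRef =
            new AtomicReference<>(Collections.unmodifiableSet(new HashSet<Object>()));

    // Variant 3: a thread-safe Set backed by a ConcurrentHashMap.
    static <E> Set<E> concurrentSet() {
        return Collections.newSetFromMap(new ConcurrentHashMap<E, Boolean>());
    }

    // The enhanced for-loop `for (Object elm : global)` desugars to roughly this:
    void readAll() {
        for (Iterator<Object> i = global.iterator(); i.hasNext(); ) {
            Object elm = i.next();
            // `global` was read exactly once, to obtain the iterator, so a
            // concurrent reassignment of `global` cannot affect this loop.
        }
    }
}
```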
If you used `synchronized` for both reads and writes then it'd be correct, but the performance may not be great with reads needing to hand off. A `ReadWriteLock` may be appropriate, but you'd still have writes blocking reads.

Another approach to the publication issue is to use final field semantics to create an object that is (in theory) safe to be published unsafely.

Of course, there are also concurrent collections available.
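A minimal sketch of the `ReadWriteLock` approach mentioned above, using a `ReentrantReadWriteLock` to guard a plain `HashSet` (class and method names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwLockedSet {
    private final Set<Object> set = new HashSet<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    void add(Object o) {
        rw.writeLock().lock();      // writers exclude both readers and other writers
        try {
            set.add(o);
        } finally {
            rw.writeLock().unlock();
        }
    }

    boolean contains(Object o) {
        rw.readLock().lock();       // many readers may hold the read lock at once
        try {
            return set.contains(o);
        } finally {
            rw.readLock().unlock();
        }
    }
}
```

Reads run in parallel with each other, but, as the answer notes, a write still blocks all readers for its duration.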
Rather than trying to roll out your own solution, why not use a `ConcurrentHashMap` as your set and just set all the values to some standard value? (A constant like `Boolean.TRUE` would work well.)

I think this implementation works well with the many-readers-few-writers scenario. There's even a constructor that lets you set the expected "concurrency level".
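For example, using the three-argument constructor (the initial capacity, load factor, and concurrency level of 4 below are arbitrary illustrative choices):

```java
import java.util.concurrent.ConcurrentHashMap;

class ChmAsSet {
    public static void main(String[] args) {
        // initial capacity 16, load factor 0.75, ~4 concurrently updating threads;
        // the Boolean values are a dummy constant, only the keys matter.
        ConcurrentHashMap<String, Boolean> set = new ConcurrentHashMap<>(16, 0.75f, 4);
        set.put("member", Boolean.TRUE);              // "add"
        boolean present = set.containsKey("member");  // "contains"
        set.remove("member");                         // "remove"
        System.out.println(present); // prints "true"
    }
}
```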
Update: Veer has suggested using the `Collections.newSetFromMap` utility method to turn the `ConcurrentHashMap` into a `Set`. Since the method takes a `Map<E,Boolean>`, my guess is that it does the same thing with setting all the values to `Boolean.TRUE` behind the scenes.

Update: Addressing the poster's example
Your minimalist solution would work just fine with a bit of tweaking. My worry is that, although it's minimal now, it might get more complicated in the future. It's hard to remember all of the conditions you assume when making something thread-safe—especially if you're coming back to the code weeks/months/years later to make a seemingly insignificant tweak. If the ConcurrentHashMap does everything you need with sufficient performance then why not use that instead? All the nasty concurrency details are encapsulated away and even 6-months-from-now you will have a hard time messing it up!
You do need at least one tweak before your current solution will work. As has already been pointed out, you should probably add the `volatile` modifier to `global`'s declaration. I don't know if you have a C/C++ background, but I was very surprised when I learned that the semantics of `volatile` in Java are actually much more complicated than in C. If you're planning on doing a lot of concurrent programming in Java then it'd be a good idea to familiarize yourself with the basics of the Java memory model. If you don't make the reference to `global` a `volatile` reference then it's possible that no thread will ever see any changes to the value of `global` until they try to update it, at which point entering the `synchronized` block will flush the local cache and get the updated reference value.

However, even with the addition of `volatile` there's still a huge problem. Here's a problem scenario with two threads:

1. `global={}`. Threads `A` and `B` both have this value in their thread-local cached memory.
2. Thread `A` obtains the `synchronized` lock on `global` and starts the update by making a copy of `global` and adding the new key to the set.
3. While Thread `A` is still inside the `synchronized` block, Thread `B` reads its local value of `global` onto the stack and tries to enter the `synchronized` block. Since Thread `A` is currently inside the monitor, Thread `B` blocks.
4. Thread `A` completes the update by setting the reference and exiting the monitor, resulting in `global={1}`.
5. Thread `B` is now able to enter the monitor and makes a copy of the `global={1}` set.
6. Thread `A` decides to make another update, reads in its local `global` reference and tries to enter the `synchronized` block. Since Thread `B` currently holds the lock on `{}`, there is no lock on `{1}` and Thread `A` successfully enters the monitor!
7. Thread `A` also makes a copy of `{1}` for purposes of updating.

Now Threads `A` and `B` are both inside the `synchronized` block and they have identical copies of the `global={1}` set. This means that one of their updates will be lost! This situation is caused by the fact that you're synchronizing on an object stored in a reference that you're updating inside your `synchronized` block. You should always be very careful which objects you use to synchronize. You can fix this problem by adding a new variable to act as the lock.

This bug was insidious enough that none of the other answers have addressed it yet. It's these kinds of crazy concurrency details that cause me to recommend using something from the already-debugged java.util.concurrent library rather than trying to put something together yourself. I think the above solution would work -- but how easy would it be to screw it up again? This would be so much easier:
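The two snippets this answer originally referred to were not preserved; they presumably looked something like the following sketch -- first the dedicated-lock fix, then the `ConcurrentHashMap`-backed replacement. Names and the extra `add`/`contains` helpers are illustrative additions:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class FixedCopyOnWrite {
    private final Object lock = new Object();     // dedicated lock: never reassigned
    private volatile Set<Object> global = new HashSet<>();

    void add(Object o) {
        synchronized (lock) {                     // NOT synchronized(global)!
            Set<Object> copy = new HashSet<>(global);
            copy.add(o);
            global = copy;                        // atomic publish of the new set
        }
    }

    void allUpdatesGoThroughHere(Object exampleOperand) {
        synchronized (lock) {
            Set<Object> copy = new HashSet<>(global);
            copy.remove(exampleOperand);
            global = copy;
        }
    }

    boolean contains(Object o) {                  // readers: no lock needed
        return global.contains(o);
    }
}

class MuchEasier {
    // final reference + internally thread-safe set: nothing left to get wrong.
    private final Set<Object> global =
            Collections.newSetFromMap(new ConcurrentHashMap<Object, Boolean>());

    void add(Object o) {
        global.add(o);                            // already thread-safe
    }

    void allUpdatesGoThroughHere(Object exampleOperand) {
        global.remove(exampleOperand);
    }

    boolean contains(Object o) {
        return global.contains(o);
    }
}
```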
Since the reference is `final` you don't need to worry about threads using stale references, and since the `ConcurrentHashMap` handles all the nasty memory model issues internally, you don't have to worry about all the nasty details of monitors and memory barriers!

Can you use the `Collections.synchronizedSet` method? From the HashSet Javadoc: http://docs.oracle.com/javase/6/docs/api/java/util/HashSet.html
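For what it's worth, a minimal example of that wrapper (element type illustrative), together with the caveat from the same Javadoc: the wrapper makes individual operations thread-safe, but iteration must still be synchronized manually on the wrapper object:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class SynchronizedSetExample {
    public static void main(String[] args) {
        Set<String> set = Collections.synchronizedSet(new HashSet<String>());
        set.add("a");                  // individual operations are synchronized for you
        synchronized (set) {           // ...but iteration must be locked manually
            for (String s : set) {
                System.out.println(s); // prints "a"
            }
        }
    }
}
```

Note that this serializes readers on a single monitor, which is exactly the read-side blocking the asker was trying to avoid.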