Question:
I've got a ByteBuffer in Java, and want to read a byte and then conditionally modify it, e.g. with a method like:

public void updateByte(int index) {
    byte b = this.buffer.get(index);
    if (b == someByteValue) {
        this.buffer.put(index, someNewByte);
    }
}
How can I ensure that reading and then modifying a byte happens atomically?

I don't want to synchronize the entire ByteBuffer or the updateByte method, since I want multiple threads to be able to read/write different bytes of the buffer at the same time (i.e. updateByte can be called simultaneously by many threads as long as index is different).

The ByteBuffer I'm using isn't backed by a byte[], so bb.hasArray() == false in the above example.
Answer 1:
Short answer: you can't, without resorting to JNI.
Longer answer: There are no atomic updates in the ByteBuffer API. Moreover, the interaction of a ByteBuffer with memory is not rigorously defined. And in the Sun implementation, the methods used to access raw memory do not attempt to flush the cache, so you may see stale results on a multicore processor.
Also, be aware that Buffer (and its subclasses such as ByteBuffer) is explicitly documented as not thread-safe. If you have multiple threads accessing the same buffer, you're (1) relying on implementation behavior for absolute access, or (2) writing broken code for relative access.
Answer 2:
How about providing a set of explicit lock objects for portions of the ByteBuffer (portions could be very small, e.g. one word, or quite large, e.g. four quarter-buffers)?
When a thread wants to check and modify a byte, it must first acquire the lock for the appropriate portion, perform its work, then release the lock.
This would allow access to different portions of the data by multiple threads, without requiring global synchronization.
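A minimal sketch of that striped-lock idea: the class name, stripe count, and buffer size below are illustrative assumptions, and the placeholder byte values are borrowed from the question rather than from this answer.

import java.nio.ByteBuffer;

public class StripedBuffer {
    private static final int STRIPES = 64;                  // number of independently locked portions
    private final ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
    private final Object[] locks = new Object[STRIPES];

    public StripedBuffer() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    public void updateByte(int index, byte someByteValue, byte someNewByte) {
        Object portionLock = locks[index % STRIPES];         // lock covering this index's portion
        synchronized (portionLock) {                         // only this portion is serialized
            byte b = buffer.get(index);
            if (b == someByteValue) {
                buffer.put(index, someNewByte);
            }
        }
    }
}

Two threads updating indexes that map to different stripes never contend; only threads hitting the same stripe serialize against each other.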
Answer 3:
I don't believe you can access a single byte atomically in Java. The best you can do is modify int values, which lets you simulate modifying a single byte. You can use Unsafe (on many JVMs) to do a compare-and-swap against array() (heap ByteBuffer) or address() (direct ByteBuffer).
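A minimal sketch of that compare-and-swap idea for a direct ByteBuffer, assuming a HotSpot-style JVM where sun.misc.Unsafe and sun.nio.ch.DirectBuffer are accessible (newer JDKs need --add-exports flags); the class and method names are made up for illustration, and the byte-within-word arithmetic assumes little-endian hardware.

import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

public class ByteCas {
    private static final Unsafe UNSAFE = loadUnsafe();

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }

    // Atomically replace the byte at 'index' with 'newValue' if it currently equals 'expected'.
    public static boolean compareAndSetByte(ByteBuffer buffer, int index,
                                            byte expected, byte newValue) {
        long base = ((sun.nio.ch.DirectBuffer) buffer).address();
        long wordAddr = (base + index) & ~3L;           // 4-byte-aligned word containing the byte
        int shift = 8 * (int) ((base + index) & 3L);    // byte's position within that word (little-endian)
        while (true) {
            int oldWord = UNSAFE.getIntVolatile(null, wordAddr);
            if ((byte) (oldWord >>> shift) != expected) {
                return false;                           // byte has some other value: no swap
            }
            int newWord = (oldWord & ~(0xFF << shift)) | ((newValue & 0xFF) << shift);
            if (UNSAFE.compareAndSwapInt(null, wordAddr, oldWord, newWord)) {
                return true;                            // CAS succeeded
            }
            // CAS failed because another thread changed the word; re-read and retry
        }
    }
}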
Answer 4:
Personally, I would lock on a mutex just until I have figured out the offset to write the data to, and then release the mutex. That way the lock is held for a very short time.
Answer 5:
There is a long, long thread about concurrent DirectByteBuffer use on this list; the answer there should be "YES".

Another big example is NIO.2: write/read operations submit byte buffers, and the buffer is safe to touch again once the CompletionHandler is invoked. Of course, in the NIO.2 case this only holds for a DirectByteBuffer; a non-direct ByteBuffer is copied ("cloned") into a DirectByteBuffer, so it is not the buffer passed to the real lower-level operation.
Answer 6:
It should be possible to lock the ByteBuffer. Methods:
- You could create a list of lock objects and lock only one area per access of the ByteBuffer, like DNA is suggesting. This should be the fastest solution.
- Or you could use memory mapping and then FileChannel.lock, which would also lock an area of the ByteBuffer, but at a lower level (see the sketch after this list). Edit: this only protects against access from external programs, IMO.
- Or you could use several smaller, but synchronized, ByteBuffers and exchange information between them. It is interesting to note that threads should see each other's changes immediately (this is where I got the mmap idea).
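A minimal sketch of the memory-mapping variant from the second point, using a hypothetical file data.bin; as the edit above notes, FileChannel.lock only excludes other processes, so threads inside the same JVM still need one of the other schemes.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedExample {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("data.bin"),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the file so the ByteBuffer is backed by the file's pages.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, channel.size());
            // Lock a byte range of the underlying file; this guards against
            // other processes, not against other threads in this JVM.
            try (FileLock lock = channel.lock(0, 16, false)) {
                byte b = buffer.get(0);
                if (b == 0x01) {
                    buffer.put(0, (byte) 0x02);
                }
            }
        }
    }
}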
Answer 7:
I think putting the critical section of code under lock control would be the clean solution. However, do not use synchronization directly if your use case has a high number of reads compared to writes. I would suggest you make use of ReentrantReadWriteLock as part of your solution. In the method where you modify the ByteBuffer, take writeLock().lock() before your code, and while reading, use readLock().lock(). You can read more about read-write locks at the mentioned link. Basically, it allows concurrent reads but not concurrent writes, and while a write is happening, reading threads wait.
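A minimal sketch of that read/write-lock layout; the class name, buffer size, and method signatures below are illustrative assumptions, not from the original answer.

import java.nio.ByteBuffer;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedBuffer {
    private final ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public byte readByte(int index) {
        lock.readLock().lock();            // many readers may hold this at once
        try {
            return buffer.get(index);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void updateByte(int index, byte expected, byte newValue) {
        lock.writeLock().lock();           // exclusive: blocks readers and other writers
        try {
            if (buffer.get(index) == expected) {
                buffer.put(index, newValue);
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}

Note that the write lock serializes all writers, even ones touching different indexes, so this trades the asker's per-index concurrency goal for simplicity.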