Consider the following scenario:
Requirements:
- Intel x64 Server (multiple CPU-sockets => NUMA)
- Ubuntu 12, GCC 4.6
- Two processes sharing large amounts of data over (named) shared-memory
- Classical producer-consumer scenario
- Memory is arranged in a circular buffer (with M elements)
Program sequence (pseudo code):
Process A (Producer):
    int bufferPos = 0;
    while( true )
    {
        if( isBufferEmpty( bufferPos ) )
        {
            writeData( bufferPos );
            setBufferFull( bufferPos );
            bufferPos = ( bufferPos + 1 ) % M;
        }
    }
Process B (Consumer):
    int bufferPos = 0;
    while( true )
    {
        if( isBufferFull( bufferPos ) )
        {
            readData( bufferPos );
            setBufferEmpty( bufferPos );
            bufferPos = ( bufferPos + 1 ) % M;
        }
    }
Now the age-old question: How to synchronize them effectively!?
- Protect every read/write access with mutexes
- Introduce a "grace period" to allow writes to complete: read data in buffer N only once buffer N+3 has been marked as full (dangerous, but seems to work...)
- ?!?
Ideally I would like something along the lines of a memory barrier that guarantees all previous reads/writes are visible across all CPUs:

    writeData( i );
    MemoryBarrier();
    // All data written and visible, set flag
    setBufferFull( i );
This way, I would only have to monitor the buffer flags and then could read the large data chunks safely.
Generally I'm looking for something along the lines of acquire/release fences as described by Preshing here:
http://preshing.com/20130922/acquire-and-release-fences/
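Roughly, the shape of that pattern (sketched with C++11 fences purely for illustration; for this to be formally correct, the full/empty flag accesses themselves would also need to be atomic):

    // Producer side: publish buffer i
    writeData( i );
    std::atomic_thread_fence( std::memory_order_release ); // everything above becomes visible before the flag flips
    setBufferFull( i );

    // Consumer side: consume buffer i
    if( isBufferFull( i ) )
    {
        std::atomic_thread_fence( std::memory_order_acquire ); // see everything written before the release fence
        readData( i );
    }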
(If I understand it correctly, the C++11 atomics only work for threads of a single process, not across multiple processes.)
However, GCC's own memory barriers (__sync_synchronize, combined with the compiler barrier asm volatile( "" ::: "memory" ) to be sure) don't seem to work as expected: writes become visible after the barrier, when I expected them to be complete before it.
Any help would be appreciated...
BTW: Under Windows this just works fine using volatile variables (a Microsoft-specific behaviour)...
Boost Interprocess has support for Shared Memory.
Boost Lockfree has a Single-Producer Single-Consumer queue type (spsc_queue). This is basically what you refer to as a circular buffer.

Here's a demonstration that passes IPC messages (in this case, of type string) using this queue, in a lock-free fashion.

Defining the types

First, let's define our types:
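A sketch of those definitions (the shm namespace and the typedef names are just for this demo; the full code is in the gist linked at the end):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/containers/string.hpp>
    #include <boost/lockfree/spsc_queue.hpp>

    namespace bip = boost::interprocess;

    namespace shm
    {
        // allocators that carve their storage out of the managed segment
        typedef bip::allocator<char, bip::managed_shared_memory::segment_manager> char_alloc;

        // a string whose characters live inside the shared memory segment
        typedef bip::basic_string<char, std::char_traits<char>, char_alloc> shared_string;

        // SPSC queue of shared_strings, with its ring storage also allocated
        // from the shared segment (runtime-sized variant)
        typedef bip::allocator<shared_string, bip::managed_shared_memory::segment_manager> string_alloc;
        typedef boost::lockfree::spsc_queue<
            shared_string,
            boost::lockfree::allocator<string_alloc>
        > ring_buffer;
    }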
For simplicity I chose to demo the runtime-sized spsc_queue implementation, randomly requesting a capacity of 200 elements.

The shared_string typedef defines a string that will transparently allocate from the shared memory segment, so such strings are also "magically" shared with the other process.

The consumer side
This is the simplest, so:
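A sketch of the setup (the segment name "MySharedMemory", its 65536-byte size, the queue name "queue", and the 200-element capacity are all arbitrary picks for this demo):

    // open (or create) the shared memory segment
    bip::managed_shared_memory segment(bip::open_or_create, "MySharedMemory", 65536);

    // an allocator handle into the segment
    shm::char_alloc ca(segment.get_segment_manager());

    // find the queue in the segment, or construct it there with room for 200 elements
    shm::ring_buffer *queue = segment.find_or_construct<shm::ring_buffer>("queue")(200, ca);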
This opens the shared memory area and locates the shared queue if it exists. NOTE: This should be synchronized in real life.
Now for the actual demonstration:
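A minimal consume loop along those lines (needs <iostream> and, for usleep, <unistd.h>):

    while (true)
    {
        shm::shared_string v(ca); // element to pop into, allocating from the segment

        if (queue->pop(v))
            std::cout << "Processed: '" << v << "'\n";
        else
            usleep(10 * 1000); // nothing pending; check again in ~10ms
    }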
The consumer just monitors the queue indefinitely for pending jobs and processes one every ~10ms.
The Producer side
The producer side is very similar:
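Sketched with the same placeholder names as the consumer:

    bip::managed_shared_memory segment(bip::open_or_create, "MySharedMemory", 65536);

    shm::char_alloc ca(segment.get_segment_manager());
    shm::ring_buffer *queue = segment.find_or_construct<shm::ring_buffer>("queue")(200, ca);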
Again, add proper synchronization to the initialization phase. Also, you would probably make the producer responsible for freeing the shared memory segment in due time. In this demonstration I just "let it hang", which is nice for testing; see below.
So, what does the producer do?
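Something along these lines (the message texts are invented for the demo):

    const char *demo_messages[] = { "hello world", "the answer is 42", "where is your towel" };

    for (unsigned i = 0; i < 3; ++i)
    {
        usleep(250 * 1000); // one message every ~250ms, so 3 messages in ~750ms
        queue->push(shm::shared_string(demo_messages[i], ca));
    }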
Right, the producer produces precisely 3 messages in ~750ms and then exits.
Note that consequently if we do (assume a POSIX shell with job control):
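(producer and consumer here stand for however the two demo binaries are named)

    ./producer & ./producer & ./producer &
    wait
    ./consumer &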
this will print 3x3 messages "immediately", while leaving the consumer running. Doing
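    ./producer & ./producer & ./producer &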
again after this will show the messages "trickle in" in real time (in bursts of 3 at ~250ms intervals), because the consumer is still running in the background.
See the full code online in this gist: https://gist.github.com/sehe/9376856