POSIX pipe as a work queue

Published 2020-06-16 05:47

Question:

The normal implementations of a work queue I have seen involve mutexes and condition variables.

Consumer:

A) Acquires Lock
B) While Queue empty
      Wait on Condition Variable (thus suspending thread and releasing lock)
C) Work object retrieved from queue
D) Lock is released
E) Do Work
F) GOTO A

Producer:

A) Acquires Lock
B) Work is added to queue
C) condition variable is signaled (potentially releasing worker)
D) Lock is released
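
For concreteness, here is a minimal sketch of that design using C++11's std::mutex and std::condition_variable. The WorkQueue and Job names are placeholders of mine, not from any particular codebase:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    struct Job { /* work payload */ };

    class WorkQueue {
    public:
        // Producer steps A-D.
        void push(Job job) {
            std::lock_guard<std::mutex> lock(mutex_);  // A) acquire lock
            queue_.push(std::move(job));               // B) add work to queue
            cond_.notify_one();                        // C) signal (may wake a worker)
        }                                              // D) lock released on scope exit

        // Consumer steps A-D; the caller does the work (E) and loops (F).
        Job pop() {
            std::unique_lock<std::mutex> lock(mutex_);             // A) acquire lock
            cond_.wait(lock, [this] { return !queue_.empty(); });  // B) wait while empty
            Job job = std::move(queue_.front());                   // C) retrieve work
            queue_.pop();
            return job;                                            // D) lock released on return
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<Job> queue_;
    };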

I have been browsing some code and I saw an implementation using POSIX pipes (I have not seen this technique before).

Consumer:

A) Do select on pipe (thus suspending thread while no work)
B) Get Job from pipe
C) Do Work
D) GOTO A

Producer:

A) Write Job to pipe.

Since the producer and consumer are threads inside the same application (so they share the same address space, and pointers are valid between them), the jobs are written to the pipe as the address of the work object (a C++ object). So all that has to be written to and read from the pipe is an 8-byte address.
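
A rough sketch of what that looks like in code. This is my reconstruction, not the original author's code: error handling is omitted, and it assumes pipe(fds) was called once at startup:

    #include <sys/select.h>
    #include <unistd.h>

    struct Job { void run(); };

    int fds[2];   // from pipe(fds): fds[0] is the read end, fds[1] the write end

    // Producer: A) write the job's address into the pipe.
    void produce(Job* job) {
        write(fds[1], &job, sizeof(job));   // 8 bytes on a 64-bit platform
    }

    // Consumer: A) select, B) read the address back, C) do work, D) loop.
    void consume() {
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(fds[0], &readable);
            select(fds[0] + 1, &readable, nullptr, nullptr, nullptr);  // blocks while no work
            Job* job;
            read(fds[0], &job, sizeof(job));  // retrieve the pointer
            job->run();                       // do the work
        }
    }

Writes of sizeof(Job*) bytes are well under PIPE_BUF, so they are atomic and pointers from multiple producers can't interleave.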

My question is:

  • Is this a common technique (have I been sheltered from this) and what are the advantages/disadvantages?

My curiosity was piqued because the pipe technique does not involve any visible locks or signals (they may be hidden inside the select). So I was wondering whether this would be more efficient.

Edit:

Based on comments on @Maxim Yegorushkin's answer:

Actually, the "Producer" in this scenario is involved in a lot of high-volume IO from lots of sources in parallel. So I suspect the original author thought it very desirable that this thread never block under any circumstances, but also did not want to do high-cost work in the "Producer" thread.

Answer 1:

As it's been mentioned here already, people use pipes as queues to avoid blocking on a condition variable in a non-blocking I/O thread (i.e. the thread that handles multiple sockets and blocks on select/epoll). If an I/O thread blocks on a condition variable or a mutex it can't do non-blocking I/O any more.

Some say that writing into a pipe involves a system call and may increase latency when the volume of inter-thread events is high. That is only true for naive pipe-based queue implementations.

Advanced implementations use lock-free linked lists of jobs/events; only when the first job is added to the list is the pipe written to, to wake the target I/O thread from its blocking epoll call (essentially using the pipe as an edge-triggered notification mechanism, not for passing pointers to jobs/events). Because it takes a few microseconds to wake up a thread, more jobs/events may be posted to that thread's event queue during this time, but each subsequent event doesn't require a write to the pipe, until the I/O thread later wakes up and consumes all the events in the queue. Also, on newer Linux kernels the faster eventfd can be used instead of a pipe to wake up an I/O thread.
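
A minimal sketch of that scheme, under some assumptions of mine: a Treiber-stack-style lock-free list, an eventfd registered with the I/O thread's epoll set, and a simplified LIFO drain (real implementations typically restore FIFO order and handle shutdown):

    #include <atomic>
    #include <cstdint>
    #include <sys/eventfd.h>
    #include <unistd.h>

    struct Job {
        void run();
        Job* next = nullptr;
    };

    std::atomic<Job*> head{nullptr};
    int efd = eventfd(0, 0);   // wake-up fd; assumed to be in the I/O thread's epoll set

    // Producer: push onto the lock-free list; write the eventfd only when
    // the list transitions from empty to non-empty.
    void post(Job* job) {
        Job* old = head.load(std::memory_order_relaxed);
        do {
            job->next = old;
        } while (!head.compare_exchange_weak(old, job,
                                             std::memory_order_release,
                                             std::memory_order_relaxed));
        if (old == nullptr) {               // list was empty: wake the I/O thread
            uint64_t one = 1;
            write(efd, &one, sizeof(one));
        }
    }

    // Consumer (the I/O thread, once epoll reports efd readable): reset the
    // eventfd counter, grab the whole list in one atomic exchange, run it all.
    void drain() {
        uint64_t count;
        read(efd, &count, sizeof(count));   // drains the eventfd counter
        Job* jobs = head.exchange(nullptr, std::memory_order_acquire);
        while (jobs) {
            Job* next = jobs->next;
            jobs->run();
            jobs = next;
        }
    }

Note that only the empty-to-non-empty transition pays for a system call; jobs posted while the list is already non-empty are picked up by the same drain() pass for free.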



Answer 2:

I have done this. It's old-school but it works.

The reason I did it this way was that I needed to wake up the same thread either for a job to do or to read input from another source, so select() was involved.
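
Sketched roughly (this is an assumed shape, not my actual code from back then), that is a single select() over both descriptors:

    #include <algorithm>
    #include <sys/select.h>
    #include <unistd.h>

    // Block until either the job pipe or the other input source is readable.
    void wait_for_job_or_input(int job_fd, int input_fd) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(job_fd, &readable);
        FD_SET(input_fd, &readable);
        select(std::max(job_fd, input_fd) + 1, &readable, nullptr, nullptr, nullptr);
        if (FD_ISSET(job_fd, &readable))   { /* read the job pointer, run it */ }
        if (FD_ISSET(input_fd, &readable)) { /* handle the other input source */ }
    }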



Answer 3:

It is because of select and how it is structured. As you can see in the man page:

select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible). A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking.

The key in the above is the 'waiting until one or more of the FDs become ready'. That is the synchronization point between the two threads.



Answer 4:

I think the answer is that the pipe technique does not perform as well, because it involves system calls, which are relatively expensive. But it does mean that all the tricky locking, sleeping, and waking is taken care of for you.

I've used both myself, but pipes only for occasional non-performance-critical applications.

EDIT: I suppose I might as well make the standard recommendation since nobody has come along with any clearly authoritative comments.

Standard recommendation being: Try both and benchmark them. It's the one true way to find out which performs better...