Everyone seems to say named pipes are faster than socket IPC. How much faster are they? I would prefer to use sockets because they can do two-way communication and are very flexible, but I will choose speed over flexibility if the difference is considerable.
I would suggest you take the easy path first, carefully isolating the IPC mechanism so that you can change from a socket to a pipe, but I would definitely go with a socket first. You should be sure IPC performance is a problem before optimizing preemptively.
And if you get into trouble because of IPC speed, I think you should consider switching to shared memory rather than going to pipes.
If you want to do some transfer speed testing, you should try socat, which is a very versatile program that allows you to create almost any kind of tunnel.
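For a rough comparison, here is what such a test might look like with socat, using dd to report the achieved rate (the port number and socket path are arbitrary choices):

```sh
# TCP loopback: terminal 1 discards whatever arrives on the port.
socat -u TCP4-LISTEN:9999,reuseaddr /dev/null
# Terminal 2: push 1 GiB through the tunnel; dd prints the rate when done.
dd if=/dev/zero bs=1M count=1024 | socat -u - TCP4:127.0.0.1:9999

# Same test over a UNIX-domain socket.
socat -u UNIX-LISTEN:/tmp/perf.sock /dev/null
dd if=/dev/zero bs=1M count=1024 | socat -u - UNIX-CONNECT:/tmp/perf.sock
```

Swapping the address types lets you compare transports without changing anything else in the test.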
You can use a lightweight solution like ZeroMQ [zmq/0mq]. It is very easy to use and dramatically faster than plain sockets.
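For a sense of how little code that takes, here is a minimal sketch of a REP (reply) endpoint in C over ZeroMQ's ipc:// transport; the endpoint path is an arbitrary choice, and error handling is trimmed:

```c
/* Minimal ZeroMQ REP sketch over the ipc:// transport.
 * Build with: gcc rep.c -lzmq  (assumes libzmq is installed). */
#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "ipc:///tmp/demo.ipc");       /* UNIX-socket-backed */

    char buf[256];
    int n = zmq_recv(rep, buf, sizeof buf - 1, 0);  /* blocks for a request */
    if (n >= 0) {
        if (n > (int)sizeof buf - 1)
            n = sizeof buf - 1;                 /* zmq_recv may truncate */
        buf[n] = '\0';
        printf("got: %s\n", buf);
        zmq_send(rep, "ack", 3, 0);             /* send the reply */
    }

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}
```

A matching client is the mirror image: a ZMQ_REQ socket that zmq_connect()s to the same endpoint, sends, then receives.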
Keep in mind that sockets do not necessarily mean IP (and TCP or UDP). You can also use UNIX-domain sockets (PF_UNIX), which offer a noticeable performance improvement over connecting to 127.0.0.1.
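For two cooperating processes on the same machine, a minimal sketch of the AF_UNIX route is socketpair(2), which hands you a connected, bidirectional pair without touching the IP stack at all:

```c
/* Minimal AF_UNIX sketch: a connected, bidirectional socket pair
 * between a parent and its forked child (no IP stack involved). */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                 /* child: use sv[1] */
        close(sv[0]);
        char buf[64];
        ssize_t n = read(sv[1], buf, sizeof buf);
        write(sv[1], buf, n);          /* echo it back */
        return 0;
    }

    close(sv[1]);                      /* parent: use sv[0] */
    write(sv[0], "ping", 4);
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof buf);
    printf("reply: %.*s\n", (int)n, buf);
    wait(NULL);
    return 0;
}
```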
If you do not need speed, sockets are the easiest way to go!
If what you are looking at is speed, the fastest solution is shared memory, not named pipes.
As so often, numbers say more than feelings; here are some data: Pipe vs Unix Socket Performance (opendmx.net).
That benchmark shows pipes to be roughly 12 to 15% faster than UNIX sockets.
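If you want to reproduce that kind of number on your own hardware, the core of such a benchmark is just pushing bytes through each descriptor pair and timing it; here is a rough sketch (chunk size and total volume are arbitrary choices, and error handling is trimmed):

```c
/* Rough one-way throughput comparison: pipe() vs. socketpair(AF_UNIX). */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHUNK (64 * 1024)
#define TOTAL (1L << 30)                 /* push 1 GiB through each channel */

static void measure(const char *name, int rfd, int wfd)
{
    static char buf[CHUNK];
    if (fork() == 0) {                   /* child: drain the read end */
        close(wfd);
        while (read(rfd, buf, CHUNK) > 0)
            ;
        _exit(0);
    }
    close(rfd);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (long sent = 0; sent < TOTAL; ) {
        ssize_t n = write(wfd, buf, CHUNK);
        if (n <= 0)
            break;
        sent += n;                       /* writes to sockets may be partial */
    }
    close(wfd);                          /* EOF lets the child exit */
    wait(NULL);
    gettimeofday(&t1, NULL);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%-10s %.2f GiB/s\n", name, (TOTAL / s) / (1 << 30));
}

int main(void)
{
    int p[2], sv[2];
    pipe(p);
    measure("pipe", p[0], p[1]);
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    measure("socketpair", sv[0], sv[1]);
    return 0;
}
```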
I'm going to agree with shodanex: it looks like you're prematurely trying to optimize something that isn't yet a problem. Unless you know sockets are going to be a bottleneck, I'd just use them.
A lot of people who swear by named pipes find a little savings (depending on how well everything else is written), but end up with code that spends more time blocking for an IPC reply than it does doing useful work. Sure, non-blocking schemes help, but those can be tricky. Having spent years bringing old code into the modern age, I can say the speedup is almost nil in the majority of cases I've seen.
If you really think that sockets are going to slow you down, then go out of the gate using shared memory, with careful attention to how you use locks. Again, in actuality, you might find a small speedup, but notice that you are wasting a portion of it waiting on mutual-exclusion locks. I'm not going to advocate a trip to futex hell (well, not quite hell anymore in 2015, depending upon your experience).
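For reference, here is a minimal sketch of that combination on POSIX systems: shm_open() plus mmap() for the region and a process-shared pthread mutex guarding it (the segment name is arbitrary; link with -lpthread, plus -lrt on older glibc):

```c
/* Sketch: POSIX shared memory guarded by a process-shared mutex.
 * Segment name and layout are arbitrary; error handling is trimmed. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared {
    pthread_mutex_t lock;   /* must be PTHREAD_PROCESS_SHARED */
    long counter;           /* the data the processes contend for */
};

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct shared));
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

    /* One-time init; a real program must ensure only one process does this. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &attr);

    pthread_mutex_lock(&s->lock);    /* this is where the waiting time hides */
    s->counter++;
    pthread_mutex_unlock(&s->lock);

    printf("counter = %ld\n", s->counter);
    munmap(s, sizeof *s);
    close(fd);
    shm_unlink("/demo_shm");         /* cleanup; omit to let others attach */
    return 0;
}
```

Note the one-time initialization problem: exactly one process must run pthread_mutex_init() before anyone takes the lock, which is itself a coordination problem that shared memory does not solve for you.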
Pound for pound, sockets are (almost) always the best way to go for user-space IPC under a monolithic kernel... and (usually) the easiest to debug and maintain.