Fastest technique to pass messages between processes

Posted 2019-01-21 03:29

What is the fastest technology for sending messages between C++ application processes on Linux? I am vaguely aware that the following techniques are on the table:

  • TCP
  • UDP
  • Sockets
  • Pipes
  • Named pipes
  • Memory-mapped files

Are there any other ways, and which is the fastest?

7 Answers
成全新的幸福 (#2) · 2019-01-21 03:51

As you tagged this question with C++, I'd recommend Boost.Interprocess:

Shared memory is the fastest interprocess communication mechanism. The operating system maps a memory segment in the address space of several processes, so that several processes can read and write in that memory segment without calling operating system functions. However, we need some kind of synchronization between processes that read and write shared memory.

Source

One caveat I've found is the portability limitations of the synchronization primitives. Neither OS X nor Windows has a native implementation of interprocess condition variables, for example, so Boost emulates them with spin locks.

If you use a *nix that supports POSIX process-shared primitives, there will be no problems.

Shared memory with synchronization is a good approach when considerable data is involved.
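
For illustration, here is a minimal sketch of that approach using Boost.Interprocess. The segment name "MySharedMemory", the object name "msg", and the fixed 256-byte message are placeholder choices of mine, not anything required by the library:

```cpp
// Minimal sketch only: segment name, object name and message size are placeholders.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstring>
#include <iostream>
#include <string>

namespace bip = boost::interprocess;

// A tiny message record protected by a process-shared mutex.
struct SharedMsg {
    bip::interprocess_mutex mutex;
    char text[256];
};

int main(int argc, char** argv) {
    if (argc > 1 && std::string(argv[1]) == "writer") {
        bip::shared_memory_object::remove("MySharedMemory");         // start clean
        bip::managed_shared_memory seg(bip::create_only, "MySharedMemory", 65536);
        SharedMsg* msg = seg.construct<SharedMsg>("msg")();
        bip::scoped_lock<bip::interprocess_mutex> lock(msg->mutex);  // synchronize access
        std::strcpy(msg->text, "hello from the writer process");
    } else {  // reader: open the existing segment and look up the object by name
        bip::managed_shared_memory seg(bip::open_only, "MySharedMemory");
        SharedMsg* msg = seg.find<SharedMsg>("msg").first;
        bip::scoped_lock<bip::interprocess_mutex> lock(msg->mutex);
        std::cout << msg->text << '\n';
    }
}
```

Run it once with the argument `writer` to create and fill the segment, then again with no argument to read the message back from a second process.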

劳资没心,怎么记你 (#3) · 2019-01-21 03:53

Whilst all the above answers are very good, I think we have to discuss what "fastest" means [and does it have to be "fastest", or just "fast enough" for the purpose at hand?].

For LARGE messages, there is no doubt that shared memory is a very good technique, and very useful in many ways.

However, if the messages are small, there are drawbacks of having to come up with your own message-passing protocol and method of informing the other process that there is a message.

Pipes and named pipes are much easier to use in this case - they behave pretty much like a file: you just write data on the sending side and read it on the receiving side. If the sender writes something, the receiver automatically wakes up. If the pipe is full, the sender blocks. If there is no more data from the sender, the receiver blocks. This means it can be implemented in fairly few lines of code with a pretty good guarantee that it will work every time.
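
As a minimal sketch of how little code that takes (error handling trimmed, message text made up):

```cpp
// Minimal sketch of an anonymous pipe between a parent and a forked child.
#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                              // child: the receiving side
        close(fds[1]);                              // close unused write end
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof(buf)); // blocks until data arrives
        if (n > 0) std::printf("child got: %.*s\n", (int)n, buf);
        close(fds[0]);
        return 0;
    }

    // parent: the sending side
    close(fds[0]);                                  // close unused read end
    const char msg[] = "hello through the pipe";
    write(fds[1], msg, sizeof(msg));                // blocks only if the pipe buffer is full
    close(fds[1]);
    wait(nullptr);
}
```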

Shared memory, on the other hand, relies on some other mechanism to inform the other process that "you have a packet of data to process". Yes, it's very fast if you have LARGE packets of data to copy - but I would be surprised if there is a huge difference compared to a pipe, really. The main benefit is that the other side doesn't have to copy the data out of the shared memory - but it also relies on there being enough memory to hold all "in flight" messages, or on the sender being able to hold things back.

I'm not saying "don't use shared memory", I'm just saying that there is no such thing as "one solution that solves all problems 'best'".

To clarify: I would start by implementing a simple method using a pipe or named pipe [depending on which suits the purpose] and measure its performance. If a significant amount of time is spent actually copying the data, then I would consider other methods.

Of course, another consideration is "are we ever going to use two separate machines [or two virtual machines on the same system] to solve this problem?" In that case, a network solution is a better choice - even if it's not THE fastest. I've run a local TCP stack on my machines at work for benchmark purposes and got some 20-30 Gbit/s (2-3 GB/s) with sustained traffic. A raw memcpy within the same process gets around 50-100 Gbit/s (5-10 GB/s), unless the block size is REALLY tiny and fits in the L1 cache. I haven't measured a standard pipe, but I expect it's somewhere roughly in the middle of those two numbers. [These numbers are about right for a range of fairly modern medium-sized PCs - obviously, on an ARM, MIPS or other embedded-style controller, expect lower numbers for all of these methods.]

我欲成王,谁敢阻挡 (#4) · 2019-01-21 03:53

POSIX message queues are pretty fast, but they have some limitations.
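
A minimal sketch of what that looks like, with a made-up queue name "/demo_queue". Note that the maximum message size and queue depth are capped by kernel settings (see /proc/sys/fs/mqueue), which is presumably part of the limitations mentioned:

```cpp
// Minimal sketch of a POSIX message queue (link with -lrt on older glibc).
#include <mqueue.h>
#include <fcntl.h>
#include <cstdio>

int main() {
    mq_attr attr{};
    attr.mq_maxmsg  = 10;      // queue depth; kernel limits apply
    attr.mq_msgsize = 256;     // maximum size of a single message

    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char msg[] = "hello via mq";
    mq_send(mq, msg, sizeof(msg), 0);                      // priority 0

    char buf[256];                                         // must be >= mq_msgsize
    unsigned prio;
    ssize_t n = mq_receive(mq, buf, sizeof(buf), &prio);
    if (n > 0) std::printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_queue");
}
```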

唯我独甜 (#5) · 2019-01-21 03:56

I would suggest looking at this also: How to use shared memory with Linux in C.

Basically, I'd drop network protocols such as TCP and UDP when doing IPC on a single machine. These have packet overhead and tie up additional resources (e.g. ports, the loopback interface).
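
If you still want a file-descriptor, stream-style API without going through the network stack, a Unix-domain socket avoids those ports and the loopback interface entirely. A minimal sketch of my own (using socketpair() between a parent and a forked child; unrelated processes would instead bind()/connect() a named AF_UNIX socket):

```cpp
// Minimal sketch of local IPC without TCP/UDP: an AF_UNIX socket pair.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) { perror("socketpair"); return 1; }

    if (fork() == 0) {                              // child
        close(sv[0]);
        char buf[128];
        ssize_t n = read(sv[1], buf, sizeof(buf));
        if (n > 0) std::printf("child got: %.*s\n", (int)n, buf);
        close(sv[1]);
        return 0;
    }

    close(sv[1]);                                   // parent
    const char msg[] = "no loopback, no ports";
    write(sv[0], msg, sizeof(msg));
    close(sv[0]);
    wait(nullptr);
}
```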

\"骚年 ilove
6楼-- · 2019-01-21 04:01

Check cross-memory attach (CMA) and kdbus: https://lwn.net/Articles/466304/

I think the fastest approaches these days are based on AIO. http://www.kegel.com/c10k.html
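
For the CMA route, the kernel interface is process_vm_readv()/process_vm_writev(). A minimal sketch, assuming you already know the peer's PID and the address of its buffer (these have to be exchanged over some other channel) and that you have ptrace-level permission on that process; to stay self-contained, the example just reads from its own address space:

```cpp
// Minimal sketch of cross-memory attach via process_vm_readv().
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>
#include <cstdio>

// Hypothetical helper: copy `len` bytes from `remote_addr` in process `pid`
// into `local_buf`. Returns bytes copied, or -1 on error.
ssize_t read_from_process(pid_t pid, void* remote_addr, void* local_buf, size_t len) {
    iovec local{ local_buf, len };
    iovec remote{ remote_addr, len };
    return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}

int main() {
    char src[] = "copied with process_vm_readv";
    char dst[64] = {};
    // Demonstrated against our own PID; in real use this would be the peer's PID
    // and an address that peer published beforehand.
    if (read_from_process(getpid(), src, dst, sizeof(src)) > 0)
        std::printf("%s\n", dst);
}
```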

一夜七次 (#7) · 2019-01-21 04:01

Well, you could simply have a shared memory segment between your processes, using Linux shared memory, a.k.a. SHM.

It's quite easy to use; look at the link for some examples.
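
A minimal sketch using the POSIX shm_open()/mmap() flavour (the SysV shmget()/shmat() API works along the same lines). The name "/demo_shm" and the 4096-byte size are placeholders, and synchronization is deliberately left out:

```cpp
// Minimal sketch of raw POSIX shared memory. A real design still needs a
// mutex/semaphore or another signalling mechanism between the processes.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    const size_t size = 4096;
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);  // link with -lrt on older glibc
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    // Any process that shm_open()s "/demo_shm" and mmap()s it sees this data.
    std::strcpy(static_cast<char*>(p), "hello via SHM");
    std::printf("%s\n", static_cast<char*>(p));

    munmap(p, size);
    close(fd);
    shm_unlink("/demo_shm");                                  // remove when done
}
```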
