Sharing data across processes on Linux

Posted 2019-06-06 16:44

In my application, a process forks off a child, say child1, and this child writes a huge binary file to disk and exits. The parent then forks off another child, child2, which reads that file back in to do further processing.
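
Roughly, the flow looks like this (the file path, the data size and the real work are placeholders):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define DUMP_FILE "/tmp/huge.bin"            /* placeholder path */

    int main(void)
    {
        if (fork() == 0) {                       /* child1: produce and dump */
            FILE *f = fopen(DUMP_FILE, "wb");
            if (!f) _exit(1);
            /* ... generate and fwrite() several GB of binary data ... */
            fclose(f);
            _exit(0);
        }
        wait(NULL);                              /* child1 is gone before child2 starts */

        if (fork() == 0) {                       /* child2: reload and process */
            FILE *f = fopen(DUMP_FILE, "rb");
            if (!f) _exit(1);
            /* ... fread() everything back and do the further processing ... */
            fclose(f);
            _exit(0);
        }
        wait(NULL);
        return 0;
    }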

Dumping and re-loading this file makes the application slow, so I am looking for ways to avoid the disk I/O entirely. The options I have identified so far are a RAM disk and tmpfs. Can I somehow set up a RAM disk or tmpfs from within my application? Or is there another way to avoid disk I/O completely and still pass the data between the processes reliably?
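
For example, would something along these lines count as using tmpfs from inside the program? My understanding is that a POSIX shared memory object created with shm_open() lives under /dev/shm, which is mounted as tmpfs on typical Linux systems (the object name and the data below are placeholders; link with -lrt on older glibc):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SHM_NAME "/mydata"                   /* placeholder object name */

    int main(void)
    {
        if (fork() == 0) {                       /* child1: producer */
            int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
            if (fd < 0) { perror("shm_open"); _exit(1); }
            const char msg[] = "stand-in for the huge binary blob";
            if (write(fd, msg, sizeof msg) < 0) _exit(1);
            close(fd);
            _exit(0);
        }
        wait(NULL);

        if (fork() == 0) {                       /* child2: consumer */
            int fd = shm_open(SHM_NAME, O_RDONLY, 0);
            if (fd < 0) { perror("shm_open"); _exit(1); }
            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf);
            printf("child2 read %zd bytes: %s\n", n, n > 0 ? buf : "");
            close(fd);
            shm_unlink(SHM_NAME);                /* remove the in-memory file */
            _exit(0);
        }
        wait(NULL);
        return 0;
    }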

Tags: c linux ipc fork
7 answers
爷、活的狠高调
#2 · 2019-06-06 17:06

You can use pipes or sockets, and take advantage of the Linux kernel's sendfile() or splice() facilities, which avoid copying the data through user space.
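
A rough sketch of the pipe variant might look like this: the parent creates the pipe before forking, child1 writes the data into it and child2 reads it back out, so nothing ever touches the disk (the chunk size and the dummy payload are arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int pfd[2];
        if (pipe(pfd) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* child1: producer */
            close(pfd[0]);                  /* not reading */
            char chunk[4096];
            memset(chunk, 'x', sizeof chunk);
            for (int i = 0; i < 1024; i++)  /* stand-in for the huge dump */
                if (write(pfd[1], chunk, sizeof chunk) < 0) _exit(1);
            close(pfd[1]);
            _exit(0);
        }

        if (fork() == 0) {                  /* child2: consumer */
            close(pfd[1]);                  /* not writing */
            char chunk[4096];
            ssize_t n;
            size_t total = 0;
            while ((n = read(pfd[0], chunk, sizeof chunk)) > 0)
                total += n;                 /* stand-in for further processing */
            printf("child2 consumed %zu bytes\n", total);
            close(pfd[0]);
            _exit(0);
        }

        close(pfd[0]);                      /* parent must close both ends, */
        close(pfd[1]);                      /* or child2 never sees EOF */
        wait(NULL);
        wait(NULL);
        return 0;
    }

Note that, unlike the dump-and-reload scheme, the two children have to run concurrently, because a pipe only buffers a limited amount (around 64 KiB by default); splice() can then be used to move data between the pipe and another file descriptor without copying it through user space.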
