In a multithreaded Linux/C++ program, I want to use fork() with a signal handler for SIGCHLD. In the child process I use open() to create two new file descriptors, then sendfile() and close(), and then the child exits.
I planned to use fork() to implement the following requirements. The threads in the parent process shall be able to:

1. detect the normal termination of the child process, and in that case create another fork() doing the open()/sendfile()/close() for a range of files
2. kill the sendfile() child process in case of a specific event, and detect the intentional termination in order to clean up

For requirement 1 I could just wait for the result of sendfile(). Requirement 2 is why I think I need to use fork() in the first place.
After reading the following posts I think that my solution might not be a good one. My questions are:
- Is there any other solution to implement requirement 2?
- Or how can I make sure that the library calls open(), close() and sendfile() will be okay?
Update:
- The program will run on Busybox Linux / ARM.
- I've assumed that I should use sendfile() to get the most efficient file transfer, based on several posts I've read on this topic. A safe way to implement my requirement could be using fork() and exec*() with cp, with the disadvantage that the file transfer might be less efficient.
Update 2:
- It's sufficient to fork() once in case of a specific event (instead of once per file), since I switched to exec*() with rsync in the child process. However, the program always needs to invoke that rsync in case of a specific event.
You can use threads, but forcefully terminating threads typically leads to memory leaks and other problems.
My Linux experience is somewhat limited, but I would probably try to fork the program early, before it becomes multithreaded. Now that you have two instances, the single-threaded instance can be safely used to manage the starting and stopping of additional instances.