I would like to make a section of my code more efficient. I'm thinking of having it fork off into multiple processes, so that 50 or 100 iterations execute at once instead of just one.
For example (pseudo):
for line in file; do
    foo
    foo2
    foo3
done
I would like multiple iterations of this for loop to run at once. I know this can be done with forking. Would it look something like this?
while (x <= 50)
    parent(child_pid)
    {
        fork child()
    }

child
{
    do
        foo; foo2; foo3
    done
    return child_pid()
}
Or am I thinking about this the wrong way?
Thanks!
Here's my thread control function:
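A minimal sketch of such a thread-control function, assuming its job is to cap how many background jobs run at once (throttle, MAX_JOBS, and the foo commands are illustrative):

MAX_JOBS=50    # illustrative cap on concurrent background jobs

throttle() {
    # Block until the number of running background jobs drops below the cap.
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
        sleep 1
    done
}

while read -r line; do
    throttle
    { foo "$line"; foo2 "$line"; foo3 "$line"; } &
done < file
wait    # let the last batch finish before exiting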
With GNU Parallel you can do:
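A sketch, assuming file holds one argument per line ({} is GNU Parallel's placeholder for the current input line):

cat file | parallel 'foo {}; foo2 {}; foo3 {}'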
This will run one job on each CPU core. To run 50 at a time, do:
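A sketch using the -j option, which sets the number of jobs to run simultaneously:

cat file | parallel -j 50 'foo {}; foo2 {}; foo3 {}'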
Watch the intro videos to learn more:
http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
I don't know of any explicit fork call in bash. What you probably want to do is append & to a command that you want to run in the background. You can also use & on functions that you define within a bash script:
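For instance (a sketch; my_function and its body are illustrative stand-ins for the commands in the question):

my_function() {
    foo "$1"
    foo2 "$1"
    foo3 "$1"
}

while read -r line; do
    my_function "$line" &    # the & runs each call in the background
done < file
wait    # block until all background jobs have finished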
EDIT: to put a limit on the number of simultaneous background processes, you could try something like this:
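One possible sketch, polling the job table before launching more work (the limit of 50 is illustrative):

max_jobs=50    # illustrative limit

while read -r line; do
    my_function "$line" &
    # Once the limit is reached, wait for a slot to free up.
    # (wait -n needs bash 4.3+; on older bash, replace it with: sleep 1)
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
done < file
wait    # let the last batch finish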
Let me try an example:
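A toy sketch, with sleep standing in for real work:

sleep 30 &
sleep 30 &
sleep 30 &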
And use jobs to see what's running.

Based on what you all shared I was able to put this together:
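A sketch of what that loop might look like, assuming ipaddress.txt holds one address per line (softwareupdate -l is the macOS command that lists available updates):

while read -r ip; do
    ssh "$ip" softwareupdate -l &    # one background ssh per machine
done < ipaddress.txt
wait    # wait for every machine to report back

# Installing everything later can reuse the same loop with:
#     ssh "$ip" sudo softwareupdate -i -a &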
This lists all the available updates on the Mac on three machines at once. Later on I used the same approach to perform a software update on every machine listed in my ipaddress.txt.
In bash scripts (non-interactive), job control is disabled by default, so you can't use the commands jobs, fg, and bg. You can turn it back on with set -m.
Here is what works well for me:
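A sketch of that pattern, with sleep standing in for the real work (the count of 30 is illustrative):

#!/bin/bash
set -m    # re-enable job control inside the script

for i in $(seq 30); do
    sleep 3 &    # 30 jobs run in parallel
done

# Bring background jobs to the foreground one at a time; fg returns 1
# once no background jobs remain, which breaks the loop.
while true; do fg 2> /dev/null; [ $? -eq 1 ] && break; done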
The last line uses fg to bring a background job into the foreground. It does this in a loop until fg returns 1 ($? == 1), which it does once there are no more background jobs.