I have written three shell scripts named s1.sh, s2.sh, and s3.sh, all with the same content:
#!/bin/ksh
echo $0 $$
and s.sh invokes them in order:
#!/bin/sh
echo $0 $$
exec ./s1.sh &
exec ./s2.sh &
exec ./s3.sh &
but the result is out of order:
victor@ThinkPad-Edge:~$ ./s.sh
./s.sh 3524
victor@ThinkPad-Edge:~$ ./s1.sh 3525
./s3.sh 3527
./s2.sh 3526
Why not s1, s2, then s3 in sequence?
If I remove the & in s.sh:
#!/bin/sh
echo $0 $$
exec ./s1.sh
exec ./s2.sh
exec ./s3.sh
the output:
$ ./s.sh
./s.sh 4022
./s1.sh 4022
Missing s2 and s3, why?
They have been executing in order, or at least starting in order; notice that the PIDs are incrementing. You start three separate processes for three separate programs, and one (for some reason) finishes faster than another. If you want them to run in sequence, take the execs and &s out of lines like exec ./s1.sh &.
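A sequential s.sh along those lines (a sketch, assuming the three scripts sit in the current directory and are executable) would be:

```shell
#!/bin/sh
# Run the scripts one after another: each ./sN.sh must finish
# before the next one starts, so the output order is fixed.
echo $0 $$
./s1.sh
./s2.sh
./s3.sh
```

Without & the shell waits for each script to exit before launching the next, so the s1, s2, s3 order is guaranteed.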
The process scheduler achieves apparent multitasking by running a snippet of each task at a time, then rapidly switching to another. Depending on system load, I/O wait, priority, scheduling algorithm, etc., two processes started at almost the same time may get radically different allotments of the available CPU. Thus there can be no guarantee as to which of your three processes reaches its echo statement first.
This is very basic Unix knowledge; perhaps you should read a book or online tutorial if you mean to use Unix seriously.
If you require parallel processes to execute in a particular order, use a locking mechanism (semaphore, shared memory, etc) to prevent one from executing a particular part of the code, called a "critical section", before another. (This isn't easy to do in shell script, though. Switch to Python or Perl if you don't want to go all the way to C. Or use a lock file if you can live with the I/O latency.)
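One shell-level locking trick is a lock directory, since mkdir either atomically creates the directory or fails if it already exists. This is a sketch, not the answer's own code; the LOCKDIR name is arbitrary and the busy-wait loop trades simplicity for latency:

```shell
#!/bin/sh
# Sketch: serialize a critical section between concurrently running
# shell scripts using a lock directory (mkdir is atomic).
LOCKDIR=/tmp/myscript.lock    # hypothetical lock path for this example

while ! mkdir "$LOCKDIR" 2>/dev/null; do
    sleep 1                   # another process holds the lock; retry
done
trap 'rmdir "$LOCKDIR"' EXIT  # release the lock when this script exits

# --- critical section: only one process runs this at a time ---
echo "$0 $$ entered the critical section"
```

Each of s1.sh, s2.sh, s3.sh could wrap its work in such a lock, though this alone gives mutual exclusion, not a specific order.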
In your second example, the exec command replaces the current process with another. Thus s1 takes over completely, and the commands to start s2 and s3 are never seen by the shell.
(This was not apparent in your first example because the & caused the shell to fork a background process first, basically rendering the exec useless anyway.)
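The "& forks first" point can be seen in a small sketch: the exec replaces only the forked child, so the parent shell survives, prints the same PID before and after, and can still wait for the child:

```shell
#!/bin/sh
# With &, the shell forks a child first; exec then replaces only
# that child, so the parent shell survives and keeps running.
echo "parent $$ before"
exec sh -c 'echo "child $$"' &
wait
echo "parent $$ after"
```

Both "parent" lines show the same PID, while the "child" line shows a different one, confirming that exec never touched the parent.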
The & operator places each exec in the background. Effectively, you are running all three of your scripts in parallel. They don't stay in order because the operating system executes a bit of each script whenever it gets a chance, but it is also executing a bunch of other stuff too. One process can be given more time to run than the others, causing it to finish sooner.
Missing s2 and s3, why?
In the first example you are not missing s2 or s3: they do run, but s.sh exits before they finish, so their output arrives after your next prompt and appears jumbled on the TTY. In the second example, exec ./s1.sh replaces s.sh entirely, so the lines that would start s2 and s3 are never reached at all.
Other answers have discussed that s1, s2, s3 are all executed within replacement shells (with exec) or subshells (without exec), and how removing exec and & will force sequential execution of s1, s2, s3. There are two cases to discuss: one where exec is present and one where it is not. Where exec is present, the current shell is replaced by the executed process (as pointed out in the comments, the parent shell is killed).
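A minimal illustration of that replacement, written as its own script since it replaces the shell running it:

```shell
#!/bin/sh
# exec replaces this shell process: the PID stays the same across
# the exec, and the final echo is never reached.
echo "before exec: PID $$"
exec sh -c 'echo "after exec: PID $$"'
echo "this line never runs"
```

The two printed PIDs are identical, and the last line never appears, which is exactly why s2 and s3 vanish after exec ./s1.sh.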
Where exec is not used, s1, s2, s3 are executed in subshells. You are not seeing the output of s2 and s3 next to your command because s.sh has finished and/or exited before they execute, removing their tie to your prompt (look closely and you will see an additional prompt shown, and then the output of the remaining s2.sh and s3.sh commands). But there is a way to require their completion before s.sh exits: use wait. wait tells s.sh not to exit until all of its child processes s1, s2, and s3 complete. This provides an output path to the console. Example:
#!/bin/bash
echo $0 $$
exec ./s1.sh &
exec ./s2.sh &
exec ./s3.sh &
wait
output:
$ ./s.sh
./s.sh 11151
/home/david/scr/tmp/stack/s1.sh 11153
/home/david/scr/tmp/stack/s3.sh 11155
/home/david/scr/tmp/stack/s2.sh 11154
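If you want to keep the scripts as background jobs but still force the order, wait also accepts a PID, and $! holds the PID of the most recent background job. A sketch combining the two serializes the scripts:

```shell
#!/bin/sh
# Start each script in the background, but wait for its PID ($!)
# before launching the next, forcing the s1 -> s2 -> s3 order.
echo $0 $$
./s1.sh & wait $!
./s2.sh & wait $!
./s3.sh & wait $!
```

This is functionally the same as running them in the foreground, but the pattern generalizes: you could start several jobs, collect their PIDs, and wait on each at the point where its result is actually needed.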