bash: start multiple chained commands in background

Posted 2019-01-16 18:08

I'm trying to run some commands in parallel, in the background, using bash. Here's what I'm trying to do:

forloop {
  # this part is actually written in Perl
  # call command sequence
  print `touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;`;
}

The part between backticks (``) spawns a new shell and executes the commands in succession. The problem is that control returns to the original program only after the last command has finished. I would like to execute the whole statement in the background (I'm not expecting any output or return values) and let the loop continue running.

The calling program (the one that contains the loop) should not end until all the spawned shells finish.

I could use threads in Perl to spawn different threads which call different shells, but that seems like overkill...

Can I start a shell, give it a set of commands and tell it to go to the background?

14 Answers
霸刀☆藐视天下
#2 · 2019-01-16 18:26

The facility in bash that you're looking for is called Compound Commands. See the man page for more info:

Compound Commands
   A compound command is one of the following:

   (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list.

   { list; }
          list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter.

There are others, but these are probably the two most common. The first, the parentheses, runs a list of commands in sequence in a subshell, while the second, the curly braces, runs a list of commands in sequence in the current shell.

parens

% ( date; sleep 5; date; )
Sat Jan 26 06:52:46 EST 2013
Sat Jan 26 06:52:51 EST 2013

curly braces

% { date; sleep 5; date; }
Sat Jan 26 06:52:13 EST 2013
Sat Jan 26 06:52:18 EST 2013
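
To get the background behavior the question asks for, append & to either form. A minimal sketch using the subshell form (same commands as above):

( date; sleep 5; date ) &
echo "control returns here immediately"
wait    # later: block until the background subshell finishes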
劳资没心,怎么记你
#3 · 2019-01-16 18:28

Run the commands in a subshell:

(command1 ; command2 ; command3) &
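
Applied to the question's Perl loop, that could look like the sketch below. Perl's system() is used instead of backticks here: a backtick capture would keep waiting until the background job closes its stdout, which defeats the purpose.

# inside the Perl loop from the question
system("(touch .file1.lock; cp bigfile1 /destination; rm .file1.lock) &");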
Root(大扎)
#4 · 2019-01-16 18:29

Run the commands using an at job:

# date
jue sep 13 12:43:21 CEST 2012
# at 12:45
warning: commands will be executed using /bin/sh
at> command1
at> command2
at> ...
at> <EOT>
job 20 at Thu Sep 13 12:45:00 2012

(Press Ctrl-D at the at> prompt to end the command list; at prints <EOT>.)

Any output will be sent to your account by mail.
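
at can also read the commands from standard input, so the question's chain can be queued non-interactively; a minimal sketch ("now" runs the job as soon as possible, detached from your shell):

echo "touch .file1.lock; cp bigfile1 /destination; rm .file1.lock" | at now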

对你真心纯属浪费
#5 · 2019-01-16 18:30

Try putting the commands in curly braces, each terminated with an &, like this:

{ command1 & command2 & command3 & }

The braces themselves do not create a subshell; each & starts its command in the background of the current shell. Note that & already acts as a command terminator, so it must not be followed by ; ("& ;" is a syntax error), and a space is required after the opening brace.
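
Note that this starts all three commands at once. If, as in the question, the commands must run one after another with the whole chain in the background, put the & after the group instead:

{ command1; command2; command3; } &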

HTH

Melony?
#6 · 2019-01-16 18:31

I stumbled upon this thread and decided to put together a code snippet that spawns chained statements as background jobs. I tested it with bash on Linux, ksh on IBM AIX and BusyBox ash on Android, so I think it's safe to say it works on any Bourne-like shell.

processes=0
for X in $(seq 0 10); do
   processes=$((processes + 1))
   { echo "Job $processes"; sleep 3; echo "End of job $processes"; } &
   if [ "$processes" -eq 5 ]; then
      wait
      processes=0
   fi
done

This code runs background jobs in batches, capped at five concurrent jobs. You can use this, for example, to recompress a lot of gzipped files with xz without having a huge bunch of xz processes eat your entire memory and make your computer throw up: in that case, use * as the for loop's list and make the batch job gzip -cd "$X" | xz -9c > "${X%.gz}.xz", as in the sketch below.
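
Spelled out, that xz variant could look like this sketch (it assumes the .gz files are in the current directory; the batch size of 5 is carried over from the snippet above):

processes=0
for X in *.gz; do
   processes=$((processes + 1))
   # one recompression per background job
   { gzip -cd "$X" | xz -9c > "${X%.gz}.xz"; } &
   if [ "$processes" -eq 5 ]; then
      wait
      processes=0
   fi
done
wait   # also wait for the final, partial batch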

闹够了就滚
#7 · 2019-01-16 18:34

GavinCattell got the closest (for bash, IMO), but as Mad_Ady pointed out, it would not handle the "lock" files. This should:

If there are other jobs pending, a bare wait would wait for those, too. If you need to wait for only the copies, accumulate their PIDs and wait for just those, as the code below does. Otherwise, you could delete the three lines involving "pids", but the PID version is more general.

In addition, I added a check to skip the copy altogether when the target is already up to date:

pids=
for file in bigfile*
do
    # Skip if file is not newer...
    targ=/destination/$(basename "${file}")
    [ "$targ" -nt "$file" ] && continue

    # Use a lock file:  ".fileN.lock" for each "bigfileN"
    lock=".${file##*/big}.lock"
    ( touch $lock; cp "$file" "$targ"; rm $lock ) &
    pids="$pids $!"
done
wait $pids

Incidentally, it looks like you're copying new files to an FTP repository (or similar). If so, you could consider a copy/rename strategy instead of the lock files (sketched below, though that's another topic).
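
A minimal sketch of that copy/rename idea, reusing the $file and $targ names from the code above (the .part suffix is an arbitrary choice): write to a temporary name, then rename, since a rename within the same filesystem is atomic and readers never see a half-written file.

( cp "$file" "$targ.part" && mv "$targ.part" "$targ" ) &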
