Today's CPUs typically comprise several physical cores. These may also be multi-threaded, so that the Linux kernel sees an even larger number of logical cores and runs a scheduler instance for each of them. When multiple tasks run on a Linux system, the scheduler normally achieves a good distribution of the total workload across all logical cores (some of which may share a physical core).
Now, say, I have a large number of files to process with the same executable. I usually do this with the "find" command:
find <path> <option> <exec>
However, this starts just one task at a time and waits for it to complete before starting the next. Thus only one core is in use at any given moment, leaving the majority of the cores idle (assuming this find command is the only workload on the system). It would be much better to launch N tasks at once, where N is the number of cores seen by the Linux kernel.
Is there a command that would do that?
Use `find` with the `-print0` option. Pipe it to `xargs` with the `-0` option. `xargs` also accepts the `-P` option to specify a number of processes. `-P` should be used in combination with `-n` or `-L`. Read `man xargs` for more information.

An example command:
find . -print0 | xargs -0 -P4 -n4 grep searchstring
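To match N to the number of cores the kernel sees, you can let `nproc` supply the value of `-P`. A minimal runnable sketch (the demo files and the search term are made up for illustration):

```shell
# Set up two throwaway files so the pipeline has something to chew on.
mkdir -p /tmp/xargs-demo
cd /tmp/xargs-demo
printf 'searchstring\n' > a.txt
printf 'nothing here\n' > b.txt

# One grep per file, with up to $(nproc) greps running concurrently.
# -print0 / -0 keep filenames with spaces or newlines intact.
find . -type f -print0 | xargs -0 -P"$(nproc)" -n1 grep -l searchstring
```

Note that `xargs` returns a non-zero exit status if any invocation fails, which includes `grep` finding no match in a given file, so check output rather than exit status in pipelines like this.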
GNU Parallel or `xargs -P` is probably a better way to handle this, but you can also write a sort-of multi-tasking framework in bash. It's a little messy and unreliable, however, due to the lack of certain facilities.

If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
You can install GNU Parallel simply by:
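On most distributions it is available from the package manager; the package name `parallel` below is the common one, but adjust for your distro:

```shell
# Debian/Ubuntu
sudo apt-get install parallel

# Fedora/RHEL:
#   sudo dnf install parallel
# macOS with Homebrew:
#   brew install parallel
```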
Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1