x=$(find . -name "*.txt")
echo $x
If I run the above piece of code in a Bash shell, what I get is a string containing several file names separated by blanks, not a list.
Of course, I can further split it on blanks to get a list, but I'm sure there is a better way to do it.

So what is the best way to loop through the results of a find command?
find <path> -xdev -type f -name '*.txt' -exec ls -l {} \;

This will list the matching files and show details about their attributes. Note that the pattern is quoted so the shell doesn't expand it before find sees it.
If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the readarray command:
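
readarray -t x < <(find . -name '*.txt')    # one file name per array element
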
Note:

- -t causes readarray to strip newlines.
- It won't work if readarray is in a pipe, hence the process substitution.
- readarray is available since Bash 4.
- readarray can also be invoked as mapfile with the same options.

Reference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream
You can put the filenames returned by find into an array along these lines (a sketch using a NUL-delimited read; the array name is arbitrary):
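
array=()
while IFS= read -r -d $'\0' file; do
    array+=("$file")                        # append each NUL-delimited name
done < <(find . -name '*.txt' -print0)
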
Now you can just loop through the array to access individual items and do whatever you want with them.

Note: it's white-space safe.
Whatever you do, don't use a for loop over the output of find (a sketch of this anti-pattern follows the list). Three reasons:

- For the loop to even start, find must run to completion.
- If a file name has whitespace in it, it will be split into separate words.
- You can overrun your command-line buffer: if it holds 32KB and your for loop returns 40KB of text, that last 8KB will be dropped right off your for loop and you'll never know it.
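The anti-pattern, sketched (don't use this):

for file in $(find . -name '*.txt'); do    # word-splits names, waits for find to finish
    process "$file"                        # 'process' stands in for your command
done
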
Always use a while read construct, for example:
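
find . -name '*.txt' -print0 | while IFS= read -r -d $'\0' file
do
    process "$file"    # 'process' stands in for whatever you want to run
done
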
The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command-line buffer.

The -print0 will use NUL as the file separator instead of a newline, and the -d $'\0' will use NUL as the separator while reading.

TL;DR: If you're just here for the most correct answer, you probably want my personal preference, find . -name '*.txt' -exec process {} \; (see the bottom of this post). If you have time, read through the rest to see several different ways and the problems with most of them.

The full answer:
The best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files:
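For instance, a sketch (with process standing in for whatever you want to run):

for i in $x; do      # not recommended: relies on word splitting, breaks on whitespace
    process "$i"
done
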
Marginally better, cut out the temporary variable x:
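
for i in $(find . -name '*.txt'); do    # still breaks on whitespace in names
    process "$i"
done
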
It is much better to glob when you can. White-space safe, for files in the current directory:
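
for i in *.txt; do    # white-space safe, but not recursive
    process "$i"
done
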
By enabling the globstar option, you can glob all matching files in this directory and all subdirectories:
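
shopt -s globstar        # Bash 4+: let ** match recursively
for i in **/*.txt; do    # white-space safe and recursive
    process "$i"
done
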
In some cases, e.g. if the file names are already in a file, you may need to use read (the sketch below assumes the names live in a placeholder file called filename):
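
# IFS= avoids trimming leading/trailing whitespace; -r prevents interpretation of \ escapes
while IFS= read -r line; do    # white-space safe, except for newlines in names
    process "$line"
done < filename                # 'filename' is a stand-in for your list of names
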
read can be used safely in combination with find by setting the delimiter appropriately:
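
while IFS= read -r -d $'\0' line; do    # NUL delimiter: safe for any file name
    process "$line"
done < <(find . -name '*.txt' -print0)
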
For more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0:
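
# run 'process' once per file
find . -name '*.txt' -exec process {} \;

# run 'process' once with as many file names as fit on the command line*
find . -name '*.txt' -exec process {} +

# the same batching, via xargs*
find . -name '*.txt' -print0 | xargs -0 process
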
find can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir).
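For example, to confirm each invocation before it runs (a small sketch, again with the placeholder process command):

find . -name '*.txt' -ok process {} \;    # prompts before running 'process' on each file
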
*: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter, and if you exceed the length but need them all on the same command line, you're SOL; find a different way.