How to loop through file names returned by find?

Posted 2019-01-01 09:58

x=$(find . -name "*.txt")
echo $x

If I run the above piece of code in the Bash shell, what I get is a string containing several file names separated by blanks, not a list.

Of course, I can further split it on blanks to get a list, but I'm sure there is a better way to do it.

So what is the best way to loop through the results of a find command?

Tags: bash find
14 Answers
冷夜・残月
#2 · 2019-01-01 10:32

find <path> -xdev -type f -name '*.txt' -exec ls -l {} \;

This will list the matching files and show their attributes (ls -l is run once per file).
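
If you want to run your own command on each file instead of ls -l, the same -exec pattern applies. A minimal sketch, where process_one is just a placeholder name for whatever program you actually want to run:

find <path> -xdev -type f -name '*.txt' -exec process_one {} \;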

时光乱了年华
#3 · 2019-01-01 10:35
# Doesn't handle whitespace
for x in `find . -name "*.txt" -print`; do
  process_one $x
done

or

# Handles whitespace and newlines
find . -name "*.txt" -print0 | xargs -0 -n 1 process_one
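
Note that xargs runs an executable, not a shell function. If process_one is a Bash function rather than a program on your PATH, one possible workaround (a sketch, assuming Bash) is to export the function and call it through bash -c:

# process_one must be exported so the child bash can see it
export -f process_one
find . -name "*.txt" -print0 | xargs -0 -n 1 bash -c 'process_one "$1"' _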
爱死公子算了
#4 · 2019-01-01 10:35

If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the readarray command:

readarray -t x < <(find . -name '*.txt')

Note:

  • -t causes readarray to strip the trailing newline from each line.
  • It won't work if readarray is on the receiving end of a pipe (it would run in a subshell and the array would be lost), hence the process substitution.
  • readarray is available since Bash 4.

readarray can also be invoked as mapfile with the same options.
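
Once the array is filled, you can loop over it with a quoted expansion. A minimal sketch, where process is a placeholder for your own command:

for file in "${x[@]}"; do
    process "$file"
done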

Reference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream

明月照影归
#5 · 2019-01-01 10:36

You can put the filenames returned by find into an array like this:

array=()
while IFS=  read -r -d $'\0'; do
    array+=("$REPLY")
done < <(find . -name '*.txt' -print0)

Now you can just loop through the array to access individual items and do whatever you want with them.

Note: it's whitespace-safe.
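
For example, a minimal sketch of such a loop (process is a placeholder for whatever you want to run on each file):

echo "Found ${#array[@]} files"
for file in "${array[@]}"; do
    process "$file"
done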

泪湿衣
#6 · 2019-01-01 10:37

Whatever you do, don't use a for loop over the output of find:

# Don't do this
for file in $(find . -name "*.txt")
do
    …code using "$file"
done

Three reasons:

  • For the for loop to even start, the find must run to completion.
  • If a file name contains any whitespace (a space, tab or newline), it will be split into two or more separate words (a short demonstration follows this list).
  • Although now unlikely, you can overrun your command-line buffer. Imagine your buffer holds 32KB and your for loop gets back 40KB of text: that last 8KB is dropped from the loop, and you'll never know it.
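
Here is a short demonstration of the word-splitting problem (the file name "my file.txt" is made up for illustration):

# Hypothetical example: a file whose name contains a space
touch "my file.txt"
for file in $(find . -name "*.txt"); do
    echo "Got: $file"    # prints "Got: ./my" and "Got: file.txt" as two iterations
done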

Always use a while read construct:

find . -name "*.txt" -print0 | while IFS= read -r -d $'\0' file
do
    …code using "$file"
done

The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.

The -print0 makes find separate file names with a NUL character instead of a newline, and the -d $'\0' makes read use NUL as its delimiter while reading.

还给你的自由
#7 · 2019-01-01 10:39

TL;DR: If you're just here for the most correct answer, you probably want my personal preference, find . -name '*.txt' -exec process {} \; (see the bottom of this post). If you have time, read through the rest to see several different ways and the problems with most of them.


The full answer:

The best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files:

for i in $x; do # Not recommended, will break on whitespace
    process "$i"
done

Marginally better, cut out the temporary variable x:

for i in $(find . -name \*.txt); do # Not recommended, will break on whitespace
    process "$i"
done

It is much better to glob when you can. Whitespace-safe, for files in the current directory:

for i in *.txt; do # Whitespace-safe but not recursive.
    process "$i"
done

By enabling the globstar option, you can glob all matching files in this directory and all subdirectories:

# Make sure globstar is enabled
shopt -s globstar
for i in **/*.txt; do # Whitespace-safe and recursive
    process "$i"
done

In some cases, e.g. if the file names are already in a file, you may need to use read:

# IFS= makes sure it doesn't trim leading and trailing whitespace
# -r prevents interpretation of \ escapes.
while IFS= read -r line; do # Whitespace-safe EXCEPT newlines
    process "$line"
done < filename

read can be used safely in combination with find by setting the delimiter appropriately:

find . -name '*.txt' -print0 | 
    while IFS= read -r -d $'\0' line; do 
        process "$line"
    done

For more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0:

# execute `process` once for each file
find . -name \*.txt -exec process {} \;

# execute `process` once with all the files as arguments*:
find . -name \*.txt -exec process {} +

# using xargs*
find . -name \*.txt -print0 | xargs -0 process

# using xargs with arguments after each filename (implies one run per filename)
find . -name \*.txt -print0 | xargs -0 -I{} process {} argument

find can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir).
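
For example, a couple of sketches of those variants (process is the same placeholder command as above):

# run `process` from within each file's directory
find . -name '*.txt' -execdir process {} \;

# prompt before running `process` on each file
find . -name '*.txt' -ok process {} \;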

*: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter; and if you exceed the length but truly need all the names on the same command line, you're out of luck and will have to find a different way.
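
If you want to see the batching behaviour for yourself, substituting echo makes it visible (each output line below corresponds to one invocation):

# one line per batch: as many names as fit on a command line
find . -name \*.txt -exec echo {} +

# one line per file: one invocation each
find . -name \*.txt -exec echo {} \;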
