The following is a simple Bash command line:
grep -li 'regex' "filename with spaces" "filename"
No problems. Also the following works just fine:
grep -li 'regex' $(<listOfFiles.txt)
where listOfFiles.txt
contains a list of filenames to be grepped, one
filename per line.
The problem occurs when listOfFiles.txt contains filenames with embedded spaces. In all cases I've tried (see below), Bash splits the filenames at the spaces so, for example, a line in listOfFiles.txt containing a name like ./this is a file.xml ends up trying to run grep on each piece (./this, is, a and file.xml).
I thought I was a relatively advanced Bash user, but I cannot find a simple magic incantation to get this to work. Here are the things I've tried.
grep -li 'regex' `cat listOfFiles.txt`
Fails as described above (I didn't really expect this to work), so I thought I'd put quotes around each filename:
grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Bash interprets the quotes as part of the filename and gives "No such file or directory" for each file (and still splits the filenames at the blanks).
for i in $(<listOfFiles.txt); do grep -li 'regex' "$i"; done
This fails in the same way as the original attempt (that is, it behaves as if the quotes are ignored) and is very slow, since it has to launch one grep process per file instead of processing all files in one invocation.
The following works, but requires some careful double-escaping if the regular expression contains shell metacharacters:
eval grep -li 'regex' `sed -e 's/.*/"&"/' listOfFiles.txt`
Is this the only way to construct the command line so it will correctly handle filenames with spaces?
The -0 option on xargs tells xargs to use a null character rather than white space as a filename terminator. The tr command converts the incoming newlines to a null character.
This meets the OP's requirement that grep not be invoked multiple times. It has been my experience that for a large number of files avoiding the multiple invocations of grep improves performance considerably.
This scheme also avoids a limitation in the OP's original method: command substitution will break when listOfFiles.txt contains enough filenames to exceed the maximum command-line length. xargs knows about the maximum command size and will invoke grep multiple times to avoid that problem.
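The answer's command did not survive in this extract; based on the description above (tr converting newlines to NULs, xargs -0 consuming them), it would look something like this:

```shell
# Convert the newline after each filename to a NUL, then have xargs
# pass the NUL-delimited filenames to grep. grep is invoked only as
# many times as the maximum command-line length requires.
tr '\n' '\0' < listOfFiles.txt | xargs -0 grep -li 'regex'
```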
A related problem with using xargs and grep is that grep prefixes its output with the filename when invoked with multiple files. Because xargs invokes grep with multiple files, one will usually receive output with the filename prefixed; but not when listOfFiles.txt contains a single file, or when the last of several invocations receives only one filename. To achieve consistent output, add /dev/null to the grep command:
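The command with /dev/null added was not preserved here; a sketch following the same tr-and-xargs pattern:

```shell
# /dev/null is an extra (always empty) filename, so grep always sees
# at least two files and therefore always prefixes matches with the
# filename, even if xargs hands it a single file from the list.
tr '\n' '\0' < listOfFiles.txt | xargs -0 grep -i 'regex' /dev/null
```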
Note that was not an issue for the OP because he was using the -l option on grep; however it is likely to be an issue for others.
Try this:
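The command itself was lost in this extract; from the explanation that follows (a custom IFS inside a subshell), it would have been along these lines, assuming Bash:

```shell
# Set IFS to newline only, inside a subshell, so the word splitting
# of $(<listOfFiles.txt) happens at newlines rather than at spaces.
(IFS=$'\n'; grep -li 'regex' $(<listOfFiles.txt))
```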
IFS is the Internal Field Separator. Setting it to $'\n' tells Bash to use the newline character to delimit filenames. Its default value is $' \t\n' and can be printed using cat -etv <<<"$IFS". Enclosing the script in parentheses starts a subshell, so that only commands within the parentheses are affected by the custom IFS value.
Do note that if you somehow ended up with a list in a file which has Windows line endings, \r\n, NONE of the notes above about the input field separator $IFS (and quoting the argument) will work; so make sure that the line endings are correctly \n (I use scite to show the line endings, and easily change them from one to the other). Also, cat piped into while read file ... seems to work (apparently without the need to set separators), although for me it was more relevant for a "grep" through a directory with spaces in filenames:
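The loop referred to here was not preserved; a minimal sketch of the cat-into-while-read approach (one grep per file, so slower than a single invocation, but robust against spaces):

```shell
# Read one filename per line; IFS= and -r keep leading/trailing
# whitespace and backslashes intact, and quoting "$file" prevents
# word splitting. grep is launched once per file.
cat listOfFiles.txt | while IFS= read -r file; do
  grep -li 'regex' "$file"
done
```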
Though it may overmatch, this is my favorite solution:
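The command for this answer was lost in the extract. One reading consistent with "overmatch" is a recursive grep, which handles spaces because the shell never splits the filenames, but searches every file under the directory rather than only those in listOfFiles.txt; this reconstruction is an assumption:

```shell
# Recursive, case-insensitive search of the whole tree. No word
# splitting occurs, but files not listed in listOfFiles.txt are
# searched too, hence the "overmatch".
grep -rli 'regex' .
```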
This works:
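The code block is missing here; a plausible sketch is a while-read loop fed by redirection rather than by cat:

```shell
# Redirect the list into the loop; each line becomes one filename,
# preserved verbatim by IFS= and read -r.
while IFS= read -r file; do
  grep -li 'regex' "$file"
done < listOfFiles.txt
```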
With Bash 4, you can also use the builtin mapfile function to set an array containing each line and iterate on this array:
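The example did not survive extraction; a sketch assuming Bash 4's mapfile builtin, as the sentence describes:

```shell
# mapfile (a.k.a. readarray) reads the file into an array, one line
# per element, with no word splitting; -t strips the trailing newline.
mapfile -t files < listOfFiles.txt
for file in "${files[@]}"; do
  grep -li 'regex' "$file"
done
```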