I want to iterate over each line in the output of: ls -l /some/dir/*
Right now I'm trying: for x in $(ls -l $1); do echo $x; done
However, this iterates over each element in the line separately, so I get:
-r--r-----
1
ivanevf
eng
1074
Apr
22
13:07
File1
-r--r-----
1
ivanevf
eng
1074
Apr
22
13:17
File2
But I want to iterate over each line as a whole. How do I do that?
You can also try the find command. If you only want files in the current directory:

Run a command on each of them?

Count lines, but only in files?
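The find invocations behind each of those prompts might look like this (a sketch; `-maxdepth` is a GNU find extension):

```shell
# Only regular files in the current directory (no recursion):
find . -maxdepth 1 -type f

# Run a command on each of them (here: ls -l):
find . -maxdepth 1 -type f -exec ls -l {} \;

# Count lines, but only in files:
find . -maxdepth 1 -type f -exec wc -l {} \;
```

Because find emits one pathname per line and -exec passes each match as a single argument, filenames never go through the word splitting that broke the original loop.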
As already mentioned, awk is the right tool for this. If you don't want to use awk, then instead of parsing the output of "ls -l" line by line, you can iterate over all files and run "ls -l" on each individual file, like this:
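A sketch of that per-file loop (the glob replaces the word-split ls output, and quoting `$f` keeps names with spaces intact):

```shell
for f in /some/dir/*; do
    ls -l "$f"      # one complete "ls -l" line per file
done
```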
It depends what you want to do with each line. awk is a useful utility for this type of processing. Example:
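The original example isn't shown; a sketch matching the description below, assuming the usual `ls -l` column layout (name in field 9, size in field 5; `NR > 1` skips the leading "total" line):

```shell
ls -l /some/dir | awk 'NR > 1 { print $9, $5 }'
```

Note this breaks for filenames containing spaces, since awk splits on whitespace.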
On my system, this prints the name and size of each item in the directory.
The read(1) utility along with output redirection of the ls(1) command will do what you want.
Set IFS to newline, like this:
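A sketch, using a literal newline in the assignment (portable to POSIX sh; in bash you could also write `IFS=$'\n'`). Note this leaves IFS changed for everything that follows:

```shell
IFS='
'
for line in $(ls -l /some/dir); do
    echo "$line"
done
```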
Put a sub-shell around it if you don't want to set IFS permanently:
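The same loop wrapped in a sub-shell, so the IFS change dies with it:

```shell
( IFS='
'
  for line in $(ls -l /some/dir); do
      echo "$line"
  done )
# IFS outside the parentheses is untouched
```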
Or use while | read instead:
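A sketch of the pipeline form (`IFS= read -r` preserves leading whitespace and backslashes in each line):

```shell
ls -l /some/dir | while IFS= read -r line; do
    echo "$line"
done
```

One caveat: in most shells the loop body runs in a subshell, so variables set inside it don't survive past the loop.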
One more option, which runs the while/read at the same shell level:
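A portable sketch: feeding the loop from a here-document keeps the while/read in the current shell, so variables set inside it persist (in bash you could equivalently use process substitution, `done < <(ls -l /some/dir)`):

```shell
while IFS= read -r line; do
    echo "$line"
done <<EOF
$(ls -l /some/dir)
EOF
```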
So why didn't anybody suggest just using options that eliminate the parts he doesn't want to process?
On modern Debian you just get your file with:
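The command itself is missing here; presumably the GNU coreutils one-entry-per-line option, `-1` (long form `--format=single-column`):

```shell
ls -1 /some/dir
```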
Furthermore, you don't have to pay attention to which directory you run it in if you use the full path:
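Again the snippet is missing; a sketch using an absolute path (`/some/dir` is taken from the question):

```shell
ls -1 /some/dir/*
```

With a glob argument like this, each line of output is a full path rather than a bare filename.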
Using this last command, I get the following output: