I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).
When I try to delete all PDFs together using the following command:
rm -f *.pdf
I get the following error:
/bin/rm: cannot execute [Argument list too long]
What is the solution to this error?
Does this error occur for the mv and cp commands as well? If so, how can it be solved for those commands?
I found that for extremely large lists of files (>1e6), these answers were too slow. Here is a solution using parallel processing in Python. I know, I know, this isn't Linux... but nothing else here worked.
(This saved me hours)
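For reference, a minimal sketch of what that approach can look like (the directory path and the *.pdf pattern are placeholders, and the pool size is arbitrary):

import glob
import os
from multiprocessing import Pool

def delete_file(path):
    # ignore files that were already removed by another worker
    try:
        os.remove(path)
    except OSError:
        pass

if __name__ == "__main__":
    # globbing happens inside Python, so the shell's ARG_MAX limit never applies
    files = glob.glob("/path/to/dir_with_pdf_files/*.pdf")
    with Pool(processes=8) as pool:
        pool.map(delete_file, files)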
I had the same problem with a folder full of temporary images that was growing day by day, and this command helped me clear the folder.
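Something along these lines (the path and the *.png pattern here are placeholders; -mtime +50 matches files last modified more than 50 days ago):

# delete matching files older than 50 days, running rm once per file so ARG_MAX is never hit
find /path/to/tmp_images -name "*.png" -mtime +50 -exec rm {} \;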
The difference from the other commands is the -mtime parameter, which takes only the files older than X days (50 days in the example).
Using that multiple times, decreasing the day range on each execution, I was able to remove all the unnecessary files.
If you have similar problems with grep, the easiest solution is to step one directory back and do a recursive search.
So instead of passing grep a wildcard expansion of the directory contents, you can run a recursive search from one level up.
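A sketch of the two forms ("some_text" is just a placeholder pattern; "search_in_this_dir" is the directory referred to below):

# this can fail with "Argument list too long" when the directory holds many files:
grep "some_text" search_in_this_dir/*

# searching recursively from one level up avoids expanding the file list at all:
cd ..
grep -r "some_text" search_in_this_dir/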
Note that it will recursively search subfolders of the "search_in_this_dir" directory as well.
tl;dr

It's a kernel limitation on the size of the command-line argument list. Use a for loop instead.

Origin of problem

This is a system issue, related to execve and the ARG_MAX constant. There is plenty of documentation about that (see man execve, Debian's wiki). Basically, the expansion produces a command (with its parameters) that exceeds the ARG_MAX limit. On kernel 2.6.23, the limit was set at 128 kB. This constant has since been increased, and you can get its value by executing:
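For example (the number shown is only illustrative; the actual value depends on your system):

getconf ARG_MAX
# 2097152 on a typical modern Linux kernel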
Solution: Using a for Loop

Use a for loop, as recommended on BashFAQ/095; there is no limit except for RAM/memory space:
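For the question's *.pdf case, the loop could look like this:

for f in *.pdf; do
    rm -- "$f"   # "--" guards against file names that begin with a dash
done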
Also, this is a portable approach, as globs have strong and consistent behavior among shells (they are part of the POSIX spec).

Note: As noted in several comments, this is indeed slower but more maintainable, as it can adapt to more complex scenarios, e.g. where one wants to do more than just one action.
Solution: Using find

If you insist, you can use find, but really don't use xargs, as it "is dangerous (broken, exploitable, etc.) when reading non-NUL-delimited input":
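For the question's case, restricted to the current directory only, that could be:

find . -maxdepth 1 -name '*.pdf' -delete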
Using -maxdepth 1 ... -delete instead of -exec rm {} + allows find to simply execute the required system calls itself without using an external process, hence it is faster (thanks to @chepner's comment).
I was facing the same problem while copying from a source directory to a destination directory.
The source directory had roughly 3 lakh (300,000) files.
I used cp with the -r option and it worked for me:
cp -r abc/ def/
It will copy all files from abc to def without giving the "Argument list too long" warning, because the shell only sees the two directory names rather than an expanded file list.
To delete all *.pdf in a directory /path/to/dir_with_pdf_files/:
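A sketch of the rsync trick (the empty_dir name is arbitrary; it just needs to be an empty directory to sync from):

mkdir empty_dir   # temporary empty directory used as the sync source
rsync -a --delete --include="*.pdf" --exclude="*" empty_dir/ /path/to/dir_with_pdf_files/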
Deleting specific files via rsync using a wildcard is probably the fastest solution in case you have millions of files, and it will take care of the error you're getting.

(Optional step): DRY RUN. To check what will be deleted without actually deleting anything:
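The same command with --dry-run (plus -v so the candidates are listed); nothing is removed:

rsync -a -n -v --delete --include="*.pdf" --exclude="*" empty_dir/ /path/to/dir_with_pdf_files/
# -n / --dry-run: report what would be deleted without deleting it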
. . .
See rsync tips and tricks for more rsync hacks.