Argument list too long error for rm, cp, mv commands

Posted 2018-12-31 16:29

I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).

When I try to delete all PDFs together using the following command:

rm -f *.pdf

I get the following error:

/bin/rm: cannot execute [Argument list too long]

What is the solution to this error? Does this error occur for the mv and cp commands as well? If so, how can it be solved for those commands?

30 answers
大哥的爱人
#2 · 2018-12-31 16:38

You could use a bash array:

files=(*.pdf)
for ((i = 0; i < ${#files[@]}; i += 1000)); do
    rm -f "${files[@]:i:1000}"
done

This deletes the files in batches of 1000 per rm invocation, so each individual argument list stays well under the limit. Quoting the slice also keeps filenames with spaces intact.
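The same batching idea also answers the cp/mv part of the question, since the limit applies to any single command line. Here is a minimal sketch for moving the files instead, assuming a destination directory /path/to/dest (substitute your own) and GNU mv/cp for the -t option:

files=(*.pdf)
for ((i = 0; i < ${#files[@]}; i += 1000)); do
    # move (or copy, with cp) at most 1000 files per invocation; -t names the target directory
    mv -t /path/to/dest "${files[@]:i:1000}"
done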

情到深处是孤独
#3 · 2018-12-31 16:39

If you’re trying to delete a very large number of files at one time (I deleted a directory with 485,000+ today), you will probably run into this error:

/bin/rm: Argument list too long.

The problem is that when you type something like rm -rf *, the * is replaced with a list of every matching file, like “rm -rf file1 file2 file3 file4” and so on. The kernel only sets aside a limited amount of space (ARG_MAX) for the arguments and environment handed to a new program, and if the expanded list exceeds it, the shell cannot execute the command.
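You can check the limit on your own system with getconf; the exact value depends on the OS and kernel configuration, and on GNU systems xargs --show-limits prints a more detailed breakdown:

getconf ARG_MAX                  # maximum combined size of arguments and environment, in bytes
xargs --show-limits < /dev/null  # GNU xargs: shows the limits it will actually use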

To get around this problem, a lot of people will use the find command to find every file and pass them one by one to the rm command, like this:

find . -type f -exec rm -v {} \;
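As an aside, the \; terminator is what makes this slow: it runs one rm process per file. POSIX find also accepts + as the -exec terminator, which packs as many filenames as will fit into each rm invocation:

find . -type f -exec rm -v {} +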

My problem is that I needed to delete 500,000 files and it was taking way too long.

I stumbled upon a much faster way of deleting files – the “find” command has a “-delete” flag built right in! Here’s what I ended up using:

find . -type f -delete

Using this method, I was deleting files at a rate of about 2000 files/second – much faster!

You can also show the filenames as you’re deleting them:

find . -type f -print -delete

…or even show how many files will be deleted, then time how long it takes to delete them:

root@devel# ls -1 | wc -l && time find . -type f -delete
100000
real    0m3.660s
user    0m0.036s
sys     0m0.552s
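One caution with -delete in GNU find: it implies -depth and acts on whatever has matched so far, so keep it as the last expression; something like find . -delete -type f would try to remove everything under the current directory, not just regular files. To limit it to the PDFs from the original question, a sketch:

find . -maxdepth 1 -type f -name '*.pdf' -delete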
余欢
#4 · 2018-12-31 16:40

Another approach is to have xargs process the command in batches. For instance, to delete the files 100 at a time, cd into the directory and run this:

echo *.pdf | xargs -n 100 rm
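This works because echo is a shell builtin, so the expanded *.pdf list never passes through execve and the ARG_MAX limit does not apply; xargs then splits it into 100-file rm calls. Filenames containing spaces or quotes will still be mangled, though. A more robust variant, assuming an xargs that supports -0 (GNU or BSD):

printf '%s\0' *.pdf | xargs -0 -n 100 rm -f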

妖精总统
#5 · 2018-12-31 16:40

To remove the first 100 files:

rm -rf $(ls | head -100)
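Note that this relies on word-splitting the output of ls, so it misbehaves on filenames containing whitespace. A safer way to express "the first 100 entries" is a bash array slice, sketched here:

files=(*)                     # every name in the current directory, in sorted glob order
rm -rf "${files[@]:0:100}"    # remove only the first 100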

只靠听说
#6 · 2018-12-31 16:40

Using GNU parallel (sudo apt install parallel) is another easy option.

It runs the jobs in parallel, with {} replaced by each argument read from standard input.

E.g.

ls /tmp/myfiles* | parallel 'rm {}'
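One caveat: the shell expands /tmp/myfiles* before ls ever runs, so ls itself can fail with the same "Argument list too long" error. Feeding parallel from the printf builtin sidesteps that; a sketch using the same pattern:

printf '%s\n' /tmp/myfiles* | parallel 'rm {}'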

浪荡孟婆
#7 · 2018-12-31 16:41

You can try this; because the for loop is a shell construct, the expanded *.pdf list never hits the exec limit, and each file is removed with its own rm call:

for f in *.pdf
do
  rm "$f"
done

EDIT: ThiefMaster's comment suggested that I not disclose such a dangerous practice to young shell jedis, so I'll add a "safer" version (which preserves things when someone has a file named "-rf . ..pdf")

echo "# Whooooo" > /tmp/dummy.sh
for f in '*.pdf'
do
   echo "rm -i $f" >> /tmp/dummy.sh
done

After running the above, open /tmp/dummy.sh in your favorite editor and check every single line for dangerous filenames, commenting them out if you find any.

Then copy the dummy.sh script into your working directory and run it.

All of this is for safety.
