I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).
When I try to delete all PDFs together using the following command:
rm -f *.pdf
I get the following error:
/bin/rm: cannot execute [Argument list too long]
What is the solution to this error?
Does this error occur for the mv and cp commands as well? If so, how can it be solved for those commands?
I only know a way around this. The idea is to export the list of PDF files you have into a file, then split that file into several parts, and then remove the PDF files listed in each part.
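A sketch of that first step, assuming the PDFs sit in the current directory, the names contain no newlines, and the list goes into list.txt (the name used below):

ls | grep '\.pdf$' > list.txt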
wc -l counts how many lines list.txt contains. Once you have an idea of how long it is, you can decide to split it in half, in fourths, or so, using the split -l command. For example, split it into 600 lines each.
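For example, with the list.txt from above and the 600-line chunks just mentioned:

wc -l list.txt
split -l 600 list.txt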
This will create a few files named xaa, xab, xac, and so on, depending on how you split it. Now, to "import" each list in those files into the rm command, use this:
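A minimal sketch, assuming bash, chunk files named xaa, xab, xac as above, and no spaces in the file names:

rm $(<xaa)
rm $(<xab)
rm $(<xac)   # ...and so on, one rm per chunk file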
Sorry for my bad English.
I ran into this problem a few times. Many of the solutions will run the rm command for each individual file that needs to be deleted. This is very inefficient. I ended up writing a Python script to delete the files based on the first 4 characters in the file name:
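A minimal sketch of the idea, where the directory /tmp/ and the 4-character prefix tmp- are hypothetical stand-ins for the real values:

import os

filedir = '/tmp/'   # directory to clean out (hypothetical path)
prefix = 'tmp-'     # the first 4 characters to match (hypothetical)

# os.remove() is called directly per file, so no shell command line is
# built and the "Argument list too long" limit never comes into play.
for name in os.listdir(filedir):
    if name[:4] == prefix:
        os.remove(os.path.join(filedir, name))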
This worked very well for me. I was able to clear out over 2 million temp files in a folder in about 15 minutes. I commented the tar out of the little bit of code so anyone with minimal to no Python knowledge can manipulate it.
If you need to keep a server or system responsive while deleting a huge number of files, a sleep between each delete statement can be a good approach.
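A minimal sketch of that approach, assuming bash, GNU find, and a sleep that accepts fractional seconds; the 5 ms pause is an arbitrary example value:

find . -name '*.pdf' -print0 |
while IFS= read -r -d '' file; do
    rm -- "$file"
    sleep 0.005   # short pause per file to keep the I/O load gentle
done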
You can create a temp folder, move all the files and sub-folders you want to keep into it, then delete the old folder and rename the temp folder to the old folder's name. Try this example until you are confident enough to do it live:
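A sketch of such a rehearsal; testit, tmp_folder, and file2.pdf are made-up names, while big_folder and file1.pdf match the names discussed below:

mkdir testit
cd testit
mkdir big_folder tmp_folder
touch big_folder/file1.pdf big_folder/file2.pdf
mv big_folder/file1.pdf tmp_folder/   # move only what you want to keep
rm -r big_folder                      # wipe the old folder, however many files it holds
mv tmp_folder big_folder              # give the temp folder the old name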
The rm -r big_folder will remove all files in big_folder, no matter how many. You just have to be super careful that you first have all the files/folders you want to keep; in this case it was file1.pdf.
A bit safer version than using xargs, and also not recursive:
ls -p | grep -v '/$' | grep '\.pdf$' | while IFS= read -r file; do rm -- "$file"; done
Filtering out directories here is a bit unnecessary, as rm won't delete them anyway, and it could be removed for simplicity, but why run something that will definitely return an error?
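For comparison, the xargs version it is being weighed against would typically look something like this sketch (non-recursive via -maxdepth 1, GNU find/xargs assumed):

find . -maxdepth 1 -name '*.pdf' -print0 | xargs -0 rm -f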