Shell script to count files, then remove oldest files

Posted 2019-03-08 03:07

I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest files, so that the 10 newest backup files are the only ones that are left.

So far, I know how to count the files, which seems easy enough, but how do I then remove the oldest files, if the count is over 10?

if [ "$(ls /backups | wc -l)" -gt 10 ]
    then
        echo "More than 10"
fi

Tags: linux bash shell
10 Answers
手持菜刀,她持情操
#2 · 2019-03-08 03:35

The proper way to do this type of thing is with logrotate.
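For reference, a minimal logrotate rule might look like the sketch below. The path and the daily schedule are assumptions, and logrotate fits best when the backup is a single file rewritten in place; it then keeps rotate N old copies and deletes anything older.

# /etc/logrotate.d/backups -- hypothetical rule; path and schedule are assumptions
/backups/app.log {
    daily
    rotate 10
    compress
    missingok
    notifempty
}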

我想做一个坏孩纸
#3 · 2019-03-08 03:40

I like the answers from @Dennis Williamson and @Dale Hagglund. (+1 to each)

Here's another way to do it using find (with the -newer test) that is similar to what you started with.

This was done in bash on Cygwin...

if [[ $(ls /backups | wc -l) -gt 10 ]]
then
  find /backups -type f ! -newer "/backups/$(ls -t /backups | sed '11!d')" -exec rm {} \;
fi
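To see which files would match before deleting anything, a sketch of the same command with -print in place of the rm is:

find /backups -type f ! -newer "/backups/$(ls -t /backups | sed '11!d')" -print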
SAY GOODBYE
#4 · 2019-03-08 03:41

Try this:

ls -t | sed -e '1,10d' | xargs -d '\n' rm

This should handle all characters (except newlines) in a file name.

What's going on here?

  • ls -t lists all files in the current directory in decreasing order of modification time, i.e., the most recently modified files come first, one file name per line.
  • sed -e '1,10d' deletes the first 10 lines, i.e., the 10 newest files. I use this instead of tail because I can never remember whether I need tail -n +10 or tail -n +11.
  • xargs -d '\n' rm collects each input line (without the terminating newline) and passes each line as an argument to rm.

As with anything of this sort, please experiment in a safe place.
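To experiment safely, one option (just a sketch) is to put echo in front of rm, so the pipeline only prints the command it would run:

ls -t | sed -e '1,10d' | xargs -d '\n' echo rm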

我欲成王,谁敢阻挡
#5 · 2019-03-08 03:43

find is the common tool for this kind of task (note that -mtime +10 selects files by age, older than 10 days, rather than by count):

find ./my_dir -mtime +10 -type f -delete

EXPLANATIONS

  • ./my_dir your directory (replace with your own)
  • -mtime +10 older than 10 days
  • -type f only files
  • -delete no surprise. Remove it to test your find filter before executing the whole command (see the example below)

And take care that ./my_dir exists to avoid bad surprises!
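For example, a quick check before committing to -delete:

find ./my_dir -mtime +10 -type f          # dry run: just list the matches
find ./my_dir -mtime +10 -type f -delete  # then actually delete them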

做自己的国王
#6 · 2019-03-08 03:44

Using inode numbers via stat & find command (to avoid pesky-chars-in-file-name issues):

stat -f "%m %i" * | sort -rn -k 1,1 | tail -n +11 | cut -d " " -f 2 | \
   xargs -n 1 -I '{}' find "$(pwd)" -type f -inum '{}' -print

#stat -f "%m %i" * | sort -rn -k 1,1 | tail -n +11 | cut -d " " -f 2 | \
#   xargs -n 1 -I '{}' find "$(pwd)" -type f -inum '{}' -delete 
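The -f format flag above is BSD/macOS stat syntax. On GNU/Linux, a roughly equivalent sketch uses stat -c, where %Y is the modification time and %i the inode number:

stat -c "%Y %i" * | sort -rn -k 1,1 | tail -n +11 | cut -d " " -f 2 | \
   xargs -n 1 -I '{}' find "$(pwd)" -type f -inum '{}' -print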
来,给爷笑一个
#7 · 2019-03-08 03:45

In a very limited chroot environment, we had only a couple of programs available to achieve what was originally asked. We solved it this way:

MIN_FILES=5
FILE_COUNT=$(ls -l | grep -c '^d')

if [ "$MIN_FILES" -lt "$FILE_COUNT" ]; then
  while [ "$MIN_FILES" -lt "$FILE_COUNT" ]; do
    FILE_COUNT=$((FILE_COUNT-1))
    FILE_TO_DEL=$(ls -t | tail -n1)
    # be careful with this one: it removes the entry recursively
    rm -rf "$FILE_TO_DEL"
  done
fi

Explanation:

  • FILE_COUNT=$(ls -l | grep -c '^d') counts the backup entries in the current folder; the ^d pattern matches directory entries in the ls -l output (use '^-' instead if your backups are regular files). Instead of grep we could also have used wc -l, but wc was not installed on that host.
  • FILE_COUNT=$((FILE_COUNT-1)) decrements the current $FILE_COUNT.
  • FILE_TO_DEL=$(ls -t | tail -n1) saves the name of the oldest entry in the $FILE_TO_DEL variable; tail -n1 returns the last (oldest) line of the listing.
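To preview which entries the loop would remove, a quick dry run (a sketch, assuming tail accepts the -n +N form) is to list everything beyond the newest $MIN_FILES entries:

ls -t | tail -n +$((MIN_FILES+1))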