I have an S3 bucket that contains database backups. I am creating a script that downloads the latest backup (and eventually restores it somewhere else), but I'm not sure how to grab only the most recent file from the bucket.
Is it possible to copy just the most recent file from an S3 bucket to a local directory using the AWS CLI tools?
And here is a bash script created based on @error2007s's answer. The script requires your AWS profile and bucket name as variables, and downloads the latest object to your ~/Downloads folder:
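A minimal sketch of such a script (the `latest_key` helper name and the argument convention are mine; it assumes the AWS CLI is installed and the named profile is configured):

```shell
#!/usr/bin/env bash
# Download the most recently modified object in a bucket to ~/Downloads.
# Usage: ./latest-backup.sh <aws-profile> <bucket-name>
set -euo pipefail

# `aws s3 ls --recursive` prints one object per line: date time size key.
# Sorting lexicographically puts the newest date last; awk extracts the key.
latest_key() {
    sort | tail -n 1 | awk '{print $4}'
}

# Only run the download when both arguments are supplied.
if [ "$#" -eq 2 ]; then
    profile="$1"
    bucket="$2"
    key=$(aws s3 ls "s3://$bucket" --recursive --profile "$profile" | latest_key)
    aws s3 cp "s3://$bucket/$key" "$HOME/Downloads/" --profile "$profile"
fi
```

Note that `awk '{print $4}'` assumes the key contains no spaces; see the quoting discussion further down for keys that do.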
The above solutions are Bash. If you want to do the same thing in PowerShell for downloading on Windows, use the following script:
This is an approach you can take.
You can list all the objects in the bucket with
aws s3 ls $BUCKET --recursive
They're sorted alphabetically by key, but that first column is the last modified time. A quick

sort

will reorder them by date. Then

tail -n 1

selects the last row, and

awk '{print $4}'

extracts the fourth column (the name of the object). Last but not least, drop that into
aws s3 cp
to download the object.

$BUCKET_NAME - the bucket from which you want to download.
$FILE_NAME_FILTER - a string used as a filter for the name you want to match.
aws s3 cp " " - the argument is double-quoted so that files with spaces in their names are also included.
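Plugged into the pipeline described above, those variables give something like the following sketch (the helper name `download_latest_matching` is mine; the exact script from that answer isn't reproduced here):

```shell
# Download the newest object whose key matches a filter string.
# $1 = bucket name ($BUCKET_NAME), $2 = filter ($FILE_NAME_FILTER).
download_latest_matching() {
    local bucket="$1" filter="$2" key
    # Columns 1-3 of `aws s3 ls --recursive` are date, time, and size;
    # everything from column 4 onward is the key, which may contain spaces.
    key=$(aws s3 ls "$bucket" --recursive \
        | grep "$filter" | sort | tail -n 1 \
        | awk '{for (i = 4; i <= NF; i++) printf "%s%s", $i, (i < NF ? " " : "")}')
    # The s3:// URL is double-quoted so keys with spaces survive intact.
    aws s3 cp "s3://$bucket/$key" .
}
```

For example, `download_latest_matching my-bucket backup` would fetch the most recently modified key containing the string "backup" into the current directory.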