How do you make an AWS S3 public folder private again?
I was testing out some staging data, so I made the entire folder public within a bucket. I'd like to restrict its access again. So how do I make the folder private again?
As of now, according to the boto docs, you can do it this way:
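A minimal sketch of that approach with the legacy boto 2 API; `mybucket` and `folder/` are placeholders for your own bucket and prefix:

```python
import boto

conn = boto.connect_s3()  # reads credentials from the environment or ~/.boto
bucket = conn.get_bucket('mybucket')  # placeholder bucket name

# Walk every key under the formerly-public prefix and reset its ACL
for key in bucket.list(prefix='folder/'):  # placeholder prefix
    key.set_acl('private')
```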
Also, you may consider removing any bucket policies under the Permissions tab of the S3 bucket.
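If you'd rather do that from the command line, the AWS CLI has a `delete-bucket-policy` call (the bucket name is a placeholder):

```
aws s3api delete-bucket-policy --bucket mybucket
```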
There are two ways to manage this: the Make Public option in the console, or the script from ascobol (I just rewrote it with boto3, as sketched below). Cheers.
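A sketch of what that boto3 rewrite could look like, assuming placeholder bucket and prefix names (`mybucket`, `folder/`); `Acl().put` is the boto3 resource API for object ACLs:

```python
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket')  # placeholder bucket name

# Reset the ACL on every object under the formerly-public prefix
for obj in bucket.objects.filter(Prefix='folder/'):  # placeholder prefix
    obj.Acl().put(ACL='private')
```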
For the AWS CLI, it is fairly straightforward.
If the object is:

```
s3://<bucket-name>/file.txt
```
For a single object:
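Presumably something like the following, using the AWS CLI's `put-object-acl` (`<bucket-name>` stays a placeholder):

```
aws s3api put-object-acl --acl private --bucket <bucket-name> --key file.txt
```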
For all objects in the bucket (bash one-liner):
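A sketch of such a one-liner; it assumes the key is everything from the fourth column of `aws s3 ls --recursive` output onward, which holds for typical key names:

```
aws s3 ls --recursive s3://<bucket-name> | awk '{print substr($0, index($0, $4))}' | while IFS= read -r key; do aws s3api put-object-acl --acl private --bucket <bucket-name> --key "$key"; done
```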
If you have S3 Browser, you will have an option to make it public or private.
The accepted answer works well; it seems to set ACLs recursively on a given s3 path too. However, this can also be done more easily with a third-party tool called s3cmd, which we use heavily at my company and which seems to be fairly popular within the AWS community.
For example, suppose you had this kind of s3 bucket and dir structure:

```
s3://mybucket.com/topleveldir/scripts/bootstrap/tmp/
```
Now suppose you had marked the entire `scripts` "directory" as public using the Amazon S3 console. To make the entire `scripts` "directory-tree" recursively (i.e. including subdirectories and their files) private again:
It's also easy to make the `scripts` "directory-tree" recursively public again if you want:
You can also choose to set the permission/ACL only on a given s3 "directory" (i.e. non-recursively) by simply omitting `--recursive` in the above commands.

For `s3cmd` to work, you first have to provide your AWS access and secret keys to s3cmd via `s3cmd --configure`
(see http://s3tools.org/s3cmd for more details).

I did this today. My situation was that I had certain top-level directories whose files needed to be made private, though some folders needed to be left public.
I decided to use `s3cmd`, like many other people have already shown. But given the massive number of files, I wanted to run parallel `s3cmd` jobs for each directory. And since it was going to take a day or so, I wanted to run them as background processes on an EC2 machine.

I set up an Ubuntu machine using the `t2.xlarge` type. I chose the xlarge after `s3cmd` failed with out-of-memory messages on a micro instance. xlarge is probably overkill, but this server will only be up for a day.

After logging into the server, I installed and configured `s3cmd`:

```
sudo apt-get install python-setuptools
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.0.2/s3cmd-2.0.2.tar.gz/download
mv download s3cmd.tar.gz
tar xvfz s3cmd.tar.gz
cd s3cmd-2.0.2/
python setup.py install
sudo python setup.py install
cd ~
s3cmd --configure
```
I originally tried using `screen`, but had some problems: processes kept dropping from `screen -r` despite my running the proper screen command, like `screen -S directory_1 -d -m s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1`. So I did some searching and found the `nohup` command. Here's what I ended up with:

```
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1 > directory_1.out &
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_2 > directory_2.out &
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_3 > directory_3.out &
```
With a multi-cursor editor this becomes pretty easy (I used `aws s3 ls s3://my_bucket` to list the directories).

Doing that, you can log out whenever you want and log back in to tail any of your logs. You can tail multiple files like:

```
tail -f directory_1.out -f directory_2.out -f directory_3.out
```
So set up `s3cmd`, then use `nohup` as I demonstrated, and you're good to go. Have fun!