How do you make an AWS S3 public folder private again?
I was testing out some staging data, so I made the entire folder public within a bucket. I'd like to restrict its access again. So how do I make the folder private again?
While @kintuparantu's answer works great, it's worth mentioning that, due to the `awk` part, the script only accounts for the last part of the `ls` results. If the filename has spaces in it, `awk` will get only the last segment of the filename split by spaces, not the entire filename.

Example: a file with a path like `folder1/subfolder1/this is my file.txt` would result in an entry called just `file.txt`.

In order to prevent that while still using his script, you'd have to replace `$NF` in `awk '{print $NF}'` with a sequence of variable placeholders that accounts for the number of segments the 'split by space' operation would produce. Since filenames might have quite a large number of spaces in them, I've gone with an exaggeration, but to be honest, I think a completely new approach would probably be better for these cases. Here's the updated code:
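The updated script itself isn't reproduced above; rather than padding `awk` with a long run of placeholders, one way to keep the whole filename is to rebuild it from field 4 onward, assuming the usual `ls`-style output of date, time, size, then the key name:

```shell
# Rebuild the full key name from an `ls`-style line instead of using $NF,
# which keeps only the text after the last space.
echo "2019-02-10 12:00:00 1234 folder1/subfolder1/this is my file.txt" \
  | awk '{ key = $4; for (i = 5; i <= NF; i++) key = key OFS $i; print key }'
# prints: folder1/subfolder1/this is my file.txt
```

This handles any number of spaces without guessing how many placeholders are needed.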
I should also mention that using `cut` didn't have any results for me, so I removed it. Credits still go to @kintuparantu, since he built the script.

From what I understand, the 'Make public' option in the management console recursively adds a public grant for every object 'in' the directory. You can see this by right-clicking on one file, then clicking 'Properties'. You then need to click on 'Permissions', and there should be a line:
If you upload a new file within this directory, it won't have this public access set and will therefore be private.
You need to remove public read permission one by one, either manually if you only have a few keys or by using a script.
I wrote a small script in Python with the 'boto' module to recursively remove the 'public read' attribute of all keys in an S3 folder:
I tested it in a folder with (only) 2 objects and it worked. If you have lots of keys it may take some time to complete and a parallel approach might be necessary.
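The Python script itself isn't reproduced above; as a rough equivalent, a sketch using the AWS CLI instead of boto might look like the following. The bucket name and prefix are placeholders:

```shell
# Hypothetical bucket/prefix. Turns each `aws s3 ls --recursive` line into a
# `put-object-acl` command that resets that key to private; pipe the result
# to `sh` to actually run the commands.
make_private_cmds() {
  awk -v b="$1" '{
    key = $4
    for (i = 5; i <= NF; i++) key = key OFS $i   # keep spaces in key names
    printf "aws s3api put-object-acl --bucket %s --key \"%s\" --acl private\n", b, key
  }'
}

# usage:
# aws s3 ls "s3://my-bucket/staging/" --recursive | make_private_cmds my-bucket | sh
```

Printing the commands first lets you review them before running anything, which is handy when a prefix contains more keys than expected.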
I actually used Amazon's UI following this guide http://aws.amazon.com/articles/5050/
If you want a delightfully simple one-liner, you can use the AWS PowerShell Tools. The reference for the AWS PowerShell Tools can be found here. We'll be using the Get-S3Object and Set-S3ACL cmdlets.
It looks like this is now addressed by Amazon:
Selecting the following checkbox makes the bucket and its contents private again:
Block public and cross-account access if bucket has public policies
https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
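The checkbox in the console corresponds to the S3 Block Public Access settings, which can also be applied from the command line. A sketch with a placeholder bucket name (this blocks new public ACLs and policies and makes existing public ACLs ignored, but doesn't rewrite per-object ACLs):

```shell
# Hypothetical bucket name. Enables all four Block Public Access settings,
# which makes the bucket and its contents effectively private again.
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```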
From the AWS S3 bucket listing (the AWS S3 UI), you can modify an individual file's permissions after making either one file public manually or the whole folder content public (to clarify, I'm referring to a folder inside a bucket). To revert the public attribute back to private, you click on the file, then go to Permissions and click the radio button under the 'Everyone' heading. You get a second floating window where you can uncheck the 'Read object' attribute. Don't forget to save the change. If you try to access the link, you should get the typical "Access Denied" message.

I have attached two screenshots. The first one shows the folder listing. Clicking the file and following the aforementioned procedure should show you the second screenshot, which shows the 4 steps. Notice that to modify multiple files, one would need to use the scripts as proposed in previous posts. -Kf