Trying to copy a file from an S3 bucket to my local machine:
aws s3 cp s3://my-bucket-name/audio-0b7ea3d0-13ab-4c7c-ac66-1bec2e572c14.wav ./
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Things I have confirmed:
- I'm using version aws-cli/1.11.13 Python/3.5.2 Linux/4.4.0-75-generic botocore/1.4.70
- The S3 Object key is correct. I have copied it directly from the S3 web interface.
- The AWS CLI is configured with valid credentials. I generated a new key/secret pair and deleted the ~/.aws folder before re-configuring the CLI. The IAM web console confirms that the user specified by that ARN is in fact making use of S3 via the CLI.
- The IAM user is granted the S3 full-access managed policy, per this SO post. I removed all of this user's policies, and then attached only the AWS managed policy called AdministratorAccess, which includes "S3, Full access, All resources." Is there a different way to grant access via the CLI? I did not believe so.
The bucket policy is intended to grant wide-open access:
{
    "Sid": "AdminAccess",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
    ]
}
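A common cause of 403s is a policy that lists only one of the two ARN forms: object-level actions like s3:GetObject match only the "/*" object ARN, while bucket-level actions like s3:ListBucket match only the bare bucket ARN. A quick local sanity check of the statement above (pure stdlib, no AWS calls):

```python
import json

# The statement from the question. Object actions (s3:GetObject) need the
# "/*" object ARN; bucket actions (s3:ListBucket) need the bare bucket ARN.
statement = json.loads("""
{
    "Sid": "AdminAccess",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:*"],
    "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
    ]
}
""")

resources = statement["Resource"]
covers_bucket = any(r == "arn:aws:s3:::my-bucket-name" for r in resources)
covers_objects = any(r.endswith("/*") for r in resources)
print(covers_bucket, covers_objects)  # True True
```

Both forms are present here, so the bucket policy itself is not the problem.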
How did I upload this object?
I uploaded this object using an AWS Signature v4 signed upload policy, from a web app in the client browser, directly to AWS.
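For context, a browser-direct POST upload like this is authorized by a base64-encoded policy document signed with the SigV4 key-derivation chain. A minimal sketch of that signing step (bucket name from the question; the secret key, date, and conditions are made-up placeholders):

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # placeholder
REGION = "us-east-1"  # assumed region

now = datetime(2017, 5, 1, tzinfo=timezone.utc)  # fixed time for a reproducible example
amz_date = now.strftime("%Y%m%dT%H%M%SZ")
date_stamp = now.strftime("%Y%m%d")

policy = {
    "expiration": "2017-05-01T12:00:00Z",
    "conditions": [
        {"bucket": "my-bucket-name"},
        ["starts-with", "$key", "audio-"],
        # Pinning the acl here forces every browser upload to include it,
        # which matters for the ownership issue discussed below.
        {"acl": "bucket-owner-full-control"},
        {"x-amz-date": amz_date},
    ],
}
policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

# SigV4 key derivation: date -> region -> service -> "aws4_request"
k_date = sign(b"AWS4" + SECRET_KEY.encode(), date_stamp)
k_region = sign(k_date, REGION)
k_service = sign(k_region, "s3")
k_signing = sign(k_service, "aws4_request")

# For a browser POST, the string to sign is the base64-encoded policy itself.
signature = hmac.new(k_signing, policy_b64.encode(), hashlib.sha256).hexdigest()
print(len(signature))  # 64
```

The browser form then submits the base64 policy and this hex signature alongside the file.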
I ran into a similar permissions issue when trying to download something from S3 that I had uploaded previously. It turned out to have nothing to do with the bucket policy and everything to do with how your credentials are set when you upload and how you grant access privileges at upload time. See this for more information on several ways to solve the problem.
In my case, the above error appeared when the machine trying to contact S3 had a system time far from the current one. Setting the correct time helped.
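This happens because every signed request embeds a timestamp (x-amz-date), and S3 rejects requests whose clock differs from AWS's by more than about 15 minutes (the RequestTimeTooSkewed error, which arrives as an HTTP 403). A small sketch of the tolerance check:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)  # AWS's documented tolerance for request timestamps

def clock_ok(local: datetime, reference: datetime) -> bool:
    """True if the local clock is within AWS's accepted skew of a trusted reference."""
    return abs(local - reference) <= MAX_SKEW

ref = datetime(2017, 5, 1, 12, 0, tzinfo=timezone.utc)
print(clock_ok(ref + timedelta(minutes=5), ref))  # True  -- request accepted
print(clock_ok(ref + timedelta(hours=2), ref))    # False -- request rejected as 403
```

So if NTP is off on the client, a perfectly valid key pair can still produce Forbidden responses.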
It turns out, looking at the object properties, that the Owner of the OBJECT is "Anonymous," and the "Anonymous" user also has full permission on this object.
I believe this is why I'm not able to access this object even though I'm authenticated. For example: since the "Anonymous" user has full permission, I am able to access it via GET using a web browser. This part is functioning as designed: the S3 bucket is for uploading files which then become available for public consumption.
So when the file is POST'ed with the upload policy, the resulting owner is "Anonymous".
In this case, acl=bucket-owner-full-control should be used while uploading the object so that the bucket owner can control the object. Doing this, the owner will still be "Anonymous"; however, it gives the bucket owner (me) full permission, and I am able to access the object after that via the AWS CLI.

Note that acl=ec2-bundle-read is a default that's actually hard-coded into the latest AWS SDK. See https://github.com/aws/aws-sdk-java/blob/7844c64cf248aed889811bf2e871ad6b276a89ca/aws-java-sdk-ec2/src/main/java/com/amazonaws/services/ec2/util/S3UploadPolicy.java#L77

It was necessary to copy S3UploadPolicy.java into my own codebase (it's an entirely portable little utility class, it turns out) and modify it in order to use acl=bucket-owner-full-control. And I have verified that this affords administration of the uploaded objects via the AWS CLI.

In my case I have 3 accounts (A1, A2, A3) with 3 canonical users (canonical_user_account_A1, canonical_user_account_A2, canonical_user_account_A3) and 1 IAM role (R1) that is in A3.

The files are in a bucket in A2, and the file owner is canonical_user_account_A1 (this is on purpose). When I tried to list the files I didn't get any error, but when I tried to download one of them I got a 403 Forbidden.

I had added List and Get permissions for R1 in the bucket policy and in the role permissions. In this case that is not enough: if the account where the bucket lives is not the owner of the objects, it cannot allow users from another account to get (download) the files. So I needed to make sure that when I upload the files, I grant the right ACLs at upload time. That allows both canonical_user_account_A2 and canonical_user_account_A3 to read and download the file.

AWS S3 will return Forbidden (403) even if the file does not exist, for security reasons. Please ensure you have given the proper S3 path while downloading.
You can read more about it here
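Concretely, S3 only reports 404 Not Found for a missing key when the caller is allowed to list the bucket; otherwise it hides the key's existence behind a 403. If you want missing keys to be distinguishable from permission problems, a statement along these lines (bucket name from the question) can be added to the caller's IAM policy:

```json
{
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::my-bucket-name"
}
```

With s3:ListBucket granted, a HeadObject or GetObject on a nonexistent key returns 404 instead of 403, which makes typos in the object key much easier to spot.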