I cannot access S3 even though I am allowed to - amazon-s3

I am using AWS. I have this IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SSSSSS",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket1-name",
        "arn:aws:s3:::bucket1-name/*"
      ]
    }
  ]
}
I want to get an image from the bucket, but I am getting an Access Denied error. What is the problem here?

It turned out that the image I was trying to get was not in the bucket. When I requested an image that does exist in the specified bucket, I did not get the Access Denied error. So the problem is that Amazon does not distinguish between Access Denied and Object Not Found here: without s3:ListBucket permission on the bucket, S3 returns 403 for a missing key instead of 404, so callers cannot probe which keys exist.
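A quick way to tell the two cases apart from code is to call head_object and look at the status code: with s3:ListBucket allowed, a missing key comes back as 404 (Not Found); without it, S3 masks the missing key as 403 (Access Denied). A minimal boto3 sketch, with bucket and key names as placeholders:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def check_object(bucket, key):
    # Distinguish "missing object" from "no permission", where possible.
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return "exists"
    except ClientError as e:
        status = e.response["ResponseMetadata"]["HTTPStatusCode"]
        # 404 only appears if the caller also has s3:ListBucket on the
        # bucket; otherwise S3 reports a missing key as 403 Access Denied.
        return "not found" if status == 404 else "denied (or missing)"

print(check_object("bucket1-name", "images/example.jpg"))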

Related

Imagekit EACCES - Access denied by AWS S3. Check attached IAM policy on AWS

After I set up Imagekit to connect to an S3 bucket, with an IAM policy granting s3:GetObject on the bucket, I got an error when accessing the image through the Imagekit URL.
The error message is
EACCES - Access denied by AWS S3. Check attached IAM policy on AWS
Imagekit actually needs more than just the s3:GetObject action in the policy if the objects in your S3 bucket are server-side encrypted: it needs kms:Decrypt as well. This is not in their documentation as of 2022-06-16.
The following IAM policy makes Imagekit access the bucket correctly:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImagekitObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::[imagekit-bucket-name]/*"
      ]
    },
    {
      "Sid": "ImagekitObjectEncryptingKeyAccess",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:187681360541:key/[object-encrypting-key-id]"
      ]
    }
  ]
}
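If you are unsure whether your objects are server-side encrypted (and therefore whether kms:Decrypt is needed at all), head_object reports the encryption settings. A small boto3 sketch, with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

# head_object returns the server-side encryption settings of an object;
# "aws:kms" means the reading role also needs kms:Decrypt on the key below.
resp = s3.head_object(Bucket="imagekit-bucket-name", Key="some/image.jpg")
print(resp.get("ServerSideEncryption"))  # e.g. "AES256" or "aws:kms"
print(resp.get("SSEKMSKeyId"))           # the KMS key ARN, when aws:kms is used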

IAM access to all objects in an S3 bucket

My S3 bucket has random files getting pushed into it. One of the automated processes assumes a role and tries to list the objects (recursively).
I have been getting the following error while trying to list the objects within a directory in the bucket:
(AccessDenied) when calling the ListObjects operation: Access Denied
This is the current state of my policy:
{
  "Version": "version_id",
  "Statement": [
    {
      "Sid": "some_id",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname",
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
I am not able to figure out how to grant List access for all objects within the bucket. I would be glad if someone could help.
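For reference, "listing recursively" is just a series of ListObjectsV2 calls, which a boto3 paginator wraps; on the IAM side, note that s3:ListBucket is evaluated against the bucket ARN (arn:aws:s3:::bucketname), not the /* object ARN, and can be narrowed to a directory with an s3:prefix condition. A minimal sketch, with placeholder names:

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Each page is one ListObjectsV2 call; s3:ListBucket on the bucket ARN
# must be allowed for every one of them.
for page in paginator.paginate(Bucket="bucketname", Prefix="some/directory/"):
    for obj in page.get("Contents", []):
        print(obj["Key"])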

Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, without success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. I tried this on buckets in two different locations - São Paulo and Oregon. I'm starting to run out of ideas; I hope you can help.
I know this question is over a year old, but I just encountered the same error when trying to do the transfer via the console. I worked around it by running the transfer via the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket gs://targetbucket
Hope this is helpful!

Getting Access Denied when calling the PutObject operation with bucket-level permission

I followed the example on http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3 for how to grant a user access to just one bucket.
I then tested the config using the W3 Total Cache WordPress plugin. The test failed.
I also tried reproducing the problem using
aws s3 cp --acl=public-read --cache-control='max-age=604800, public' ./test.txt s3://my-bucket/
and that failed with
upload failed: ./test.txt to s3://my-bucket/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
Why can't I upload to my bucket?
To answer my own question:
The example policy granted PutObject access, but I also had to grant PutObjectAcl access.
I had to change
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
from the example to:
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
You also need to make sure your bucket is configured to let clients set a publicly-accessible ACL, by unticking the two ACL-related boxes in the bucket's 'Block public access' settings.
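For illustration, the aws s3 cp command above maps to a boto3 upload like the following; passing an ACL is what turns the request into PutObject plus PutObjectAcl. A minimal sketch, with placeholder names:

import boto3

s3 = boto3.client("s3")

# Setting an ACL here means the policy must allow s3:PutObjectAcl as well
# as s3:PutObject on the object ARN, mirroring the CLI example above.
s3.upload_file(
    "./test.txt",
    "my-bucket",
    "test.txt",
    ExtraArgs={"ACL": "public-read", "CacheControl": "max-age=604800, public"},
)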
I was having a similar problem. I was not using the ACL stuff, so I didn't need s3:PutObjectAcl.
In my case, I was doing this (in Serverless Framework YML):
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName"
Instead of:
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName/*"
The difference is the /* at the end of the bucket ARN: object-level actions like s3:PutObject apply to object ARNs, not to the bucket ARN itself.
Hope this helps.
If you have enabled public access for the bucket and it is still not working, edit the bucket policy and paste the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::yourbucketnamehere",
        "arn:aws:s3:::yourbucketnamehere/*"
      ],
      "Effect": "Allow",
      "Principal": "*"
    }
  ]
}
Replace yourbucketnamehere in the above policy with the name of your bucket.
In case this helps anyone else: in my case, I was using a CMK (it worked fine using the default aws/s3 key).
I had to go into my encryption key definition in IAM and add the programmatic user that boto3 was logged in as to the list of users that "can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS".
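If you would rather be explicit than rely on the bucket default, you can also name the CMK on the upload itself; the uploading principal still needs KMS permissions (at least kms:GenerateDataKey, plus kms:Decrypt for multipart uploads) on that key. A hedged boto3 sketch, where the bucket, key path, and KMS key ARN are all placeholders:

import boto3

s3 = boto3.client("s3")

# Explicitly request SSE-KMS with a specific customer-managed key (CMK).
# Without KMS permissions on this key, the upload fails with Access Denied.
s3.upload_file(
    "./report.csv",
    "my-bucket",
    "reports/report.csv",
    ExtraArgs={
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    },
)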
I was banging my head against a wall trying to get S3 uploads to work with large files. Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
Then I tried copying a smaller file and got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I could list objects fine, but I couldn't do anything else even though I had s3:* permissions in my role policy. I ended up reworking the policy to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "*"
    }
  ]
}
Now I'm able to upload any file. Replace my-bucket with your bucket name. I hope this helps somebody else who's going through this.
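The multipart permissions matter because boto3, like the CLI, silently switches from PutObject to CreateMultipartUpload above a size threshold; you can see (and control) where that happens with a TransferConfig. A small sketch, with placeholder names:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than multipart_threshold are sent via CreateMultipartUpload /
# UploadPart / CompleteMultipartUpload rather than a single PutObject, which
# is why the multipart actions appear in the policy above.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)  # 64 MB
s3.upload_file("./big-file.bin", "my-bucket", "big-file.bin", Config=config)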
In my case, the problem was that I was uploading the files with --acl=public-read on the command line.
However, that bucket has public access blocked and is accessed only through CloudFront.
I had a similar issue uploading to an S3 bucket protected with KMS encryption.
I have a minimal policy that allows the addition of objects under a specific S3 key prefix.
I needed to add the following KMS permissions to my policy to allow the role to put objects in the bucket. (This might be slightly more than is strictly required.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:ListKeys",
        "kms:GenerateRandom",
        "kms:ListAliases",
        "s3:PutAccountPublicAccessBlock",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "kms:ImportKeyMaterial",
        "kms:ListKeyPolicies",
        "kms:ListRetirableGrants",
        "kms:GetKeyPolicy",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:ListResourceTags",
        "kms:ReEncryptFrom",
        "kms:ListGrants",
        "kms:GetParametersForImport",
        "kms:TagResource",
        "kms:Encrypt",
        "kms:GetKeyRotationStatus",
        "kms:GenerateDataKey",
        "kms:ReEncryptTo",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        <The S3 actions>
      ],
      "Resource": [
        "arn:aws:s3:::<MY-BUCKET-NAME>",
        "arn:aws:s3:::<MY-BUCKET-NAME>/<MY-BUCKET-KEY>/*"
      ]
    }
  ]
}
I encountered the same issue. My bucket was private and had KMS encryption. I was able to resolve it by adding KMS permissions to the role. The following is the bare minimum set of permissions needed:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAttachmentBucketWrite",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "kms:Decrypt",
        "s3:AbortMultipartUpload",
        "kms:Encrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/*",
        "arn:aws:kms:kms-key-arn"
      ]
    }
  ]
}
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-encryption-kms-key/
I was getting the same error message because of a mistake I made:
Make sure you use a correct S3 URI, such as: s3://my-bucket-name/
(if my-bucket-name is at the root of your S3, obviously).
I insist on that because when copy-pasting the S3 bucket address from your browser you get something like https://s3.console.aws.amazon.com/s3/buckets/my-bucket-name/?region=my-aws-region&tab=overview
Thus I made the mistake of using s3://buckets/my-bucket-name, which raises:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
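The boto3 analogue of this mistake is passing a URI or console path where only the bare bucket name belongs. A trivial sketch, with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Correct: the Bucket argument is just the bucket name...
s3.upload_file("./file.txt", "my-bucket-name", "file.txt")
# ...not "buckets/my-bucket-name" and not an s3:// or https:// URI.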
Error: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I solved the issue by passing the ExtraArgs parameter, as PutObjectAcl is disabled by company policy:
s3_client.upload_file('./local_file.csv', 'bucket-name', 'path', ExtraArgs={'ServerSideEncryption': 'AES256'})
I got this error too: ERROR AccessDenied: Access Denied
I am working on a NodeJS app that was trying to use the s3.putObject method. I got clues from reading the many other answers above, so I went to the S3 bucket, clicked on the Permissions tab, then scrolled down to the Bucket Policy section, and noticed there was a condition required for access.
So I added a ServerSideEncryption attribute to my params for the putObject call.
This finally worked for me. No other changes, such as encrypting the payload myself, were required for putObject to work.
Similar to one post above (except I was using admin credentials), I was trying to get S3 uploads to work with a large 50 MB file.
Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
I raised the multipart_threshold above 50 MB so the CLI would use a single PutObject instead of a multipart upload:
aws configure set default.s3.multipart_threshold 64MB
and I got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I checked the bucket's public access settings and everything was allowed. Then I found that public access can also be blocked at the account level, for all S3 buckets, via the account-wide 'Block Public Access' settings in the S3 console; that turned out to be the cause.
I also solved it by adding the following KMS permissions to my policy to allow the role to put objects in this bucket (and this bucket alone):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
You can also test your policy configurations before applying them with the IAM Policy Simulator; this came in handy for me.
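The simulator is also scriptable; a hedged boto3 sketch (the policy and ARNs are placeholders) that evaluates a policy document before you attach it to anything:

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": ["arn:aws:s3:::my-bucket/*"],
    }],
}

# Evaluate the policy without attaching it to any user or role.
resp = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:PutObject", "s3:PutObjectAcl"],
    ResourceArns=["arn:aws:s3:::my-bucket/test.txt"],
)
for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])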
In my case, I had an ECS task with a role attached to access S3, but I then tried to create a new user so my task could access SES as well. Once I did that, I somehow overwrote some permissions.
Basically when I gave SES access to the user my ECS lost access to S3.
My fix was to attach the SES policy to the ECS role together with the S3 policy and get rid of the new user.
What I learned is that ECS needs permissions at two different stages: the execution role covers spinning up the task, and the task role covers the task's everyday needs. If you want the containers in the task to access other AWS resources, you need to attach those permissions to the ECS task role.
My code fix in terraform:
data "aws_iam_policy" "AmazonSESFullAccess" {
arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role_policy_attachment" "ecs_ses_access" {
role = aws_iam_role.app_iam_role.name
policy_arn = data.aws_iam_policy.AmazonSESFullAccess.arn
}
For me, the problem was expired auth keys. Generated new ones and boom.
My problem was that my source (an EC2 instance) had an IAM role attached that didn't allow any write actions, so even though the bucket policy was correct, I couldn't write anything anywhere from it. I solved it by adding this policy to the IAM role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::destination-bucket/destination-path/*"
      ]
    }
  ]
}
I was facing a similar issue, so I checked the Permissions tab on the AWS bucket. Public access was blocked, which was causing the issue in my case, so I unticked the option and it worked.
If you have specified your own customer-managed KMS key (CMK) for S3 encryption, you also need to provide the flag --server-side-encryption aws:kms, for example:
aws s3api put-object --bucket bucket --key objectKey --body /path/to/file --server-side-encryption aws:kms
If you do not add the --server-side-encryption aws:kms flag, the CLI displays an AccessDenied error.
I was able to solve the issue by granting Lambda full S3 access through its policies: make a new role for Lambda and attach a policy with full S3 access to it.
Hope this helps.
In addition, I set the permissions on the group to which the user belongs.

IAM configuration to access jgit on S3

I am trying to create IAM permissions so jgit can access a directory in one of my buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>"]
    }
  ]
}
Unfortunately it throws an error, and I am not sure what other Allow actions are needed for this to work. (I'm a little new to IAM.)
Caused by: java.io.IOException: Reading of '<mydir>/packed-refs' failed: 403 Forbidden
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:519)
at org.eclipse.jgit.transport.AmazonS3.get(AmazonS3.java:289)
at org.eclipse.jgit.transport.TransportAmazonS3$DatabaseS3.open(TransportAmazonS3.java:284)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.openReader(WalkRemoteObjectDatabase.java:365)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.readPackedRefs(WalkRemoteObjectDatabase.java:423)
... 13 more
Caused by: java.io.IOException:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:538)
... 17 more
The 403 Forbidden is obviously the error, but I'm not sure what needs to be added to the IAM policy. Any ideas?
[Should have added, too, that I tried this out in the policy simulator and it appeared to work there.]
The "403" error may simply mean that the key <mydir>/packed-refs doesn't exist. According to https://forums.aws.amazon.com/thread.jspa?threadID=56531:
Amazon S3 will return an AccessDenied error when a nonexistent key is requested and the requester is not allowed to list the contents of the bucket.
If you're pushing for the first time, that file might not exist yet, and I'm guessing you need s3:ListBucket privileges for S3 to return the proper NoSuchKey response instead of AccessDenied. Note that s3:ListBucket is evaluated against the bucket ARN itself, never an object path (you can scope it to a directory with an s3:prefix condition), so try changing that first statement to:
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::<mybucket>"]
}
I also noticed that jgit push s3 refs/heads/master worked when jgit push s3 master did not.
To future folk: if all you want to do is set up a git repo bucket with its own user, the following policy seems to be good enough:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}