Getting Access Denied when calling the PutObject operation with bucket-level permission - amazon-s3

I followed the example on http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3 for how to grant a user access to just one bucket.
I then tested the config using the W3 Total Cache WordPress plugin. The test failed.
I also tried reproducing the problem using
aws s3 cp --acl=public-read --cache-control='max-age=604800, public' ./test.txt s3://my-bucket/
and that failed with
upload failed: ./test.txt to s3://my-bucket/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
Why can't I upload to my bucket?

To answer my own question:
The example policy granted PutObject access, but I also had to grant PutObjectAcl access.
I had to change
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
from the example to:
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
You also need to make sure your bucket is configured to let clients set a publicly accessible ACL, by unticking the two ACL-related boxes under the bucket's Block public access settings.
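For reference, here is a minimal boto3 sketch of the same kind of upload (bucket and file names are placeholders); because it sets an ACL, it needs s3:PutObjectAcl in addition to s3:PutObject:

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "./test.txt",
    "my-bucket",
    "test.txt",
    ExtraArgs={
        "ACL": "public-read",                      # setting an ACL requires s3:PutObjectAcl
        "CacheControl": "max-age=604800, public",  # plain metadata, no extra permission needed
    },
)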

I was having a similar problem. I was not using the ACL stuff, so I didn't need s3:PutObjectAcl.
In my case, I was doing (in Serverless Framework YML):
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName"
Instead of:
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName/*"
The correct version adds /* to the end of the bucket ARN, so the permission applies to the objects inside the bucket rather than to the bucket itself.
Hope this helps.

If you have enabled public access for the bucket and it is still not working, edit the bucket policy and paste in the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketnamehere",
                "arn:aws:s3:::yourbucketnamehere/*"
            ],
            "Effect": "Allow",
            "Principal": "*"
        }
    ]
}
Replace yourbucketnamehere in the policy above with the name of your bucket.

In case this helps anyone else: in my case I was using a CMK (it worked fine using the default aws/s3 key).
I had to go into the encryption key's configuration and add the programmatic user that boto3 was logged in as to the list of users that "can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS".
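For illustration, a minimal boto3 sketch of uploading with a customer managed key (bucket, key path, and key ARN are placeholders); besides s3:PutObject, the caller must be allowed to use the key (e.g. kms:GenerateDataKey):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "./report.csv",
    "my-bucket",
    "reports/report.csv",
    ExtraArgs={
        "ServerSideEncryption": "aws:kms",  # use SSE-KMS instead of the default aws/s3 key
        "SSEKMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",  # placeholder CMK ARN
    },
)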

I was banging my head against a wall trying to get S3 uploads of large files to work. Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
Then I tried copying a smaller file and got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I could list objects fine but I couldn't do anything else even though I had s3:* permissions in my Role policy. I ended up reworking the policy to this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "*"
        }
    ]
}
Now I'm able to upload any file. Replace my-bucket with your bucket name. I hope this helps somebody else that's going through this.
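If you want to reproduce the multipart code path deliberately (the CreateMultipartUpload error above), here is a sketch that forces it by lowering the transfer threshold; bucket and file names are placeholders:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
# Any file larger than 8 MB is now uploaded via CreateMultipartUpload/UploadPart,
# which is why the multipart permissions above matter for large files.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024)
s3.upload_file("./big-file.bin", "my-bucket", "big-file.bin", Config=config)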

In my case the problem was that I was uploading the files with --acl=public-read on the command line.
However, that bucket has public access blocked and is accessed only through CloudFront.
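A sketch of the same upload without the ACL (names are placeholders), which avoids tripping the Block Public Access setting:

import boto3

s3 = boto3.client("s3")
# No ACL argument: the object stays private and is served through CloudFront instead.
s3.upload_file("./test.txt", "my-bucket", "test.txt")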

I had a similar issue uploading to an S3 bucket protected with KMS encryption.
I have a minimal policy that allows the addition of objects under a specific s3 key.
I needed to add the following KMS permissions to my policy to allow the role to put objects in the bucket. (Might be slightly more than are strictly required)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:ListKeys",
                "kms:GenerateRandom",
                "kms:ListAliases",
                "s3:PutAccountPublicAccessBlock",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:HeadBucket"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "kms:ImportKeyMaterial",
                "kms:ListKeyPolicies",
                "kms:ListRetirableGrants",
                "kms:GetKeyPolicy",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:ListResourceTags",
                "kms:ReEncryptFrom",
                "kms:ListGrants",
                "kms:GetParametersForImport",
                "kms:TagResource",
                "kms:Encrypt",
                "kms:GetKeyRotationStatus",
                "kms:GenerateDataKey",
                "kms:ReEncryptTo",
                "kms:DescribeKey"
            ],
            "Resource": "arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                <The S3 actions>
            ],
            "Resource": [
                "arn:aws:s3:::<MY-BUCKET-NAME>",
                "arn:aws:s3:::<MY-BUCKET-NAME>/<MY-BUCKET-KEY>/*"
            ]
        }
    ]
}

I encountered the same issue. My bucket was private and had KMS encryption. I was able to resolve it by adding KMS permissions to the role. The following is the bare minimum set of permissions needed.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAttachmentBucketWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "kms:Decrypt",
                "s3:AbortMultipartUpload",
                "kms:Encrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/*",
                "arn:aws:kms:kms-key-arn"
            ]
        }
    ]
}
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-encryption-kms-key/

I was getting the same error message because of a mistake I made:
Make sure you use a correct S3 URI, such as: s3://my-bucket-name/
(assuming my-bucket-name is at the root of your S3 account, obviously)
I stress this because when you copy-paste the bucket location from your browser you get something like https://s3.console.aws.amazon.com/s3/buckets/my-bucket-name/?region=my-aws-region&tab=overview
So I mistakenly used s3://buckets/my-bucket-name, which raises:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Error : An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I solved the issue by passing the ExtraArgs parameter, since PutObjectAcl is disabled by company policy:
s3_client.upload_file('./local_file.csv', 'bucket-name', 'path', ExtraArgs={'ServerSideEncryption': 'AES256'})

I got this error too: ERROR AccessDenied: Access Denied
I am working on a Node.js app that was trying to use the s3.putObject method. I got clues from reading the many other answers above, so I went to the S3 bucket, clicked on the Permissions tab, scrolled down to the Bucket Policy section, and noticed there was a condition required for access.
So I added a ServerSideEncryption attribute to my params for the putObject call.
This finally worked for me. No other changes, such as any encryption of the message, are required for the putObject to work.

Similar to one of the posts above (except that I was using admin credentials), I was trying to get S3 uploads to work with a large 50 MB file.
Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
I raised the multipart_threshold above 50 MB:
aws configure set default.s3.multipart_threshold 64MB
and I got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I checked bucket public access settings and all was allowed.
So I found that public access can also be blocked at the account level, for all S3 buckets, under the account-wide Block Public Access settings in the S3 console.
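If you prefer to check that account-level setting from code, a minimal boto3 sketch (the account ID is a placeholder):

import boto3

s3control = boto3.client("s3control")
# The account-wide Block Public Access configuration overrides per-bucket settings.
resp = s3control.get_public_access_block(AccountId="123456789012")
print(resp["PublicAccessBlockConfiguration"])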

I also solved it by adding the following KMS permissions to my policy to allow the role to put objects in this bucket (and this bucket alone):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
You can also test your policy configurations before applying them with the IAM Policy Simulator. This came in handy for me.

In my case I had an ECS task with roles attached to it to access S3, but I tried to create a new user for my task to access SES as well. Once I did that I guess I overwrote some permissions somehow.
Basically when I gave SES access to the user my ECS lost access to S3.
My fix was to attach the SES policy to the ECS role together with the S3 policy and get rid of the new user.
What I learned is that ECS needs permissions at two different stages: when spinning up the task, and for the task's everyday needs. If you want to give the containers in the task access to other AWS resources, you need to make sure those permissions are attached to the ECS task role.
My code fix in terraform:
data "aws_iam_policy" "AmazonSESFullAccess" {
arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role_policy_attachment" "ecs_ses_access" {
role = aws_iam_role.app_iam_role.name
policy_arn = data.aws_iam_policy.AmazonSESFullAccess.arn
}

For me I was using expired auth keys. Generated new ones and boom.

My problem was that my source (an EC2 instance) had an IAM role attached that didn't allow any write actions, so even though the bucket policy was correct, I couldn't write anything anywhere from it. I solved it by adding this policy to the IAM role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket/destination-path/*"
            ]
        }
    ]
}

I was facing a similar issue, so I checked the Permissions tab for the bucket in the AWS console. Public access was blocked, which was causing the issue in my case, so I unchecked the option and it worked.

If you have specified your own customer managed KMS key for S3 encryption you also need to provide the flag --server-side-encryption aws:kms, for example:
aws s3api put-object --bucket bucket --key objectKey --body /path/to/file --server-side-encryption aws:kms
If you do not add the --server-side-encryption aws:kms flag, the CLI displays an AccessDenied error.

I was able to solve the issue by granting the Lambda function full S3 access through its policies. Make a new role for the Lambda and attach a policy with full S3 access to it.
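For example, a sketch of attaching the AWS managed full-access policy to an existing Lambda execution role (the role name is a placeholder):

import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="my-lambda-role",  # placeholder: your Lambda's execution role
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)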
Hope this will help.

In addition, I had to set the permissions for the group to which the user belongs.

Related

Can't configure lambda and SNS with an existing S3 bucket

I have an existing S3 bucket (which has some Lambda event and SNS configuration already created by my previous co-worker). I want to add a new Lambda event that will be triggered by PutObject in another prefix.
I have been doing this for other existing S3 buckets with no issues. However, with this particular S3 bucket, no matter whether I try to create a Lambda trigger (according to the AWS documentation I was reading, doing this in the Lambda console automatically attaches the policy that allows S3 to invoke the function, but I also tried manually adding the permission for S3 to invoke the Lambda) or an SNS notification (I edited the SNS policy to allow the S3 bucket to SendMessage and ReceiveMessage), I get this error:
An error occurred when creating the trigger: Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: KKBWYJGTVK8X8AYZ; S3 Extended Request ID: ZF3NOIqw8VcRYX6bohbYp7d0a+opDuXOcFRrn1KBn3vBVBIPuAQ/s7V+3vptIue1uWu6muIWBhY=; Proxy: null)
I have already followed every AWS link I can find, and I even tried matching all the settings of the existing Lambda event trigger on the S3 bucket (except the prefix). However, I still don't have a solution. The only difference I can think of is that there may be a CloudFormation stack behind the scenes chaining the existing applications together, but I don't think the S3 bucket is involved in it.
Can you please give me any advice? Much appreciated!
Update: I also just tested doing the same thing on another bucket with the same IAM role, and it works. So I think the issue is related to this particular bucket.
Could you share your policy, or any infrastructure-as-code that was used previously to get to where you are now? Without it, it will be very hard for anyone to figure out the cause. I would also certainly advise setting up AWS resources through AWS CloudFormation; perhaps this is a good starter guide: https://www.youtube.com/watch?v=t97jZch4lMY
Please compare against the IAM policy below, which defines the permissions for the Lambda function.
The required permissions include:
Get the object from the source S3 bucket.
Put the resized object into the target S3 bucket.
Permissions related to the CloudWatch Logs.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::mybucket-resized/*"
        }
    ]
}
You will also need to configure an execution role for your Lambda.
Create the execution role that gives your function permission to access AWS resources.
To create an execution role
Open the roles page in the IAM console.
Choose Create role.
Create a role with the following properties.
Trusted entity – AWS Lambda.
Permissions – AWSLambdaS3Policy.
Role name – lambda-s3-role.
The above created policy has the permissions that the function needs to manage objects in Amazon S3 and write logs to CloudWatch Logs.
The issue is with the SNS topic's access policy.
Adding the following policy will fix it:
{
    "Version": "2012-10-17",
    "Id": "example-ID",
    "Statement": [
        {
            "Sid": "example-statement-ID",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": [
                "SNS:Publish"
            ],
            "Resource": "arn:aws:sns:Region:account-id:topic-name",
            "Condition": {
                "ArnLike": { "aws:SourceArn": "arn:aws:s3:::awsexamplebucket1" },
                "StringEquals": { "aws:SourceAccount": "bucket-owner-account-id" }
            }
        }
    ]
}
To use this policy, you must update the Amazon SNS topic ARN, bucket name, and bucket owner's AWS account ID.
Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/grant-destinations-permissions-to-s3.html
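If you want to apply that topic policy from code rather than the console, a boto3 sketch (topic ARN, bucket name, and account ID are placeholders):

import json
import boto3

topic_arn = "arn:aws:sns:us-east-1:111122223333:topic-name"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::awsexamplebucket1"},
            "StringEquals": {"aws:SourceAccount": "111122223333"},
        },
    }],
}

sns = boto3.client("sns")
# Overwrites the topic's access policy with the statement above.
sns.set_topic_attributes(TopicArn=topic_arn, AttributeName="Policy", AttributeValue=json.dumps(policy))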

Access Denied for listobjects

I am getting an error when trying to list objects with a cross-account bucket policy applied:
aws s3 ls bucket-name
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
The bucket policy used to allow listing objects is:
{
    "Id": "Policy2",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt13",
            "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:GetBucketPolicy"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::bucket-name"],
            "Principal": {"AWS": "*"}
        }
    ]
}
I have tried specifying a specific ARN as the principal (Block Public Access is enabled), but that doesn't work either.
Your policy worked fine for me!
The steps I took:
Created a new bucket
Turned OFF Block Public Access for the two Bucket Policy options
Added your bucket policy (above), changing my bucket name
Used an IAM User from a different account to list the bucket
It worked fine.
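To verify the cross-account access from code, a minimal boto3 sketch (the profile and bucket names are placeholders):

import boto3

# Credentials from the *other* AWS account, e.g. a named profile.
session = boto3.Session(profile_name="other-account")
s3 = session.client("s3")

resp = s3.list_objects_v2(Bucket="bucket-name")
for obj in resp.get("Contents", []):
    print(obj["Key"])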

AWS S3 CLI:An error occurred (AllAccessDisabled) when calling the PutObject operation: All access to this object has been disabled

I'm using aws-cli/1.15.25 Python/2.7.15 Darwin/17.7.0 botocore/1.10.25 to try and upload a file to S3 using the following command:
aws s3 cp <file> s3://bucket.s3.amazonaws.com/<bucket name>
But I get the following returned:
upload failed: ./<file> to s3://bucket.s3.amazonaws.com/<bucket name> An error occurred (AllAccessDisabled) when calling the PutObject operation: All access to this object has been disabled
I have, as a test, set my bucket to accessible by all with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Principal": "*",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:PutObjectAcl",
                "s3:GetObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket name>/*"
            ]
        }
    ]
}
My IAM user has the correct permissions set.
I don't know what else to look at; I've Googled and tried most of the suggestions.
Note: the AllAccessDisabled error is also displayed when a non-existent folder path is specified (e.g. a misspelling).
You are either specifying the bucket name twice in the URL or actually using the literal string "bucket"; the destination should simply be s3://<bucket name>/<key>.
You can use the virtual-hosted-style URL:
http://bucketname.s3.amazonaws.com/path/to/file
http://bucketname.s3-aws-region.amazonaws.com/path/to/file
or the path style URL:
http://s3.amazonaws.com/bucketname/path/to/file
http://s3-aws-region.amazonaws.com/bucketname/path/to/file
Replace "aws-region" with the region. Use the "s3-aws-region" style for regions that are not us-east-1. Examples for a bucket in South America:
http://bucketname.s3-sa-east-1.amazonaws.com/path/to/file
http://s3-sa-east-1.amazonaws.com/bucketname/path/to/file
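If you are using an SDK rather than building URLs by hand, note that you only ever pass the plain bucket name; a minimal boto3 sketch (names are placeholders):

import boto3

s3 = boto3.client("s3")
# Bucket is just the bucket name, never a hostname or an s3:// URI.
s3.upload_file("./file.txt", "bucketname", "path/to/file")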

Copying data from S3 to Redshift - Access denied

We are having trouble copying files from S3 to Redshift. The S3 bucket in question allows access only from a VPC in which we have a Redshift cluster. We have no problems copying from public S3 buckets. We tried both the key-based and the IAM-role-based approach, but the result is the same: we keep getting 403 Access Denied from S3. Any idea what we are missing? Thanks.
EDIT:
Queries we use:
1. (using IAM role):
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_iam_role=arn:aws:iam::123456789:role/redshift-copyunload' delimiter '|' gzip;
2. (using access keys):
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_access_key_id=xxx;aws_secret_access_key=yyy' delimiter '|' gzip;
The S3 policy for the IAM role (first query) and the IAM user (second query) is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt123456789",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket/*"
            ]
        }
    ]
}
The bucket has a policy denying access from anywhere other than the VPC (the Redshift cluster is in this VPC):
{
    "Version": "2012-10-17",
    "Id": "VPCOnlyPolicy",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPC-only",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket/*",
                "arn:aws:s3:::bucket"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpc": "vpc-123456"
                }
            }
        }
    ]
}
We have no problem loading from publicly accessible buckets and if we remove this bucket policy we can copy the data with no problems.
The bucket is in the same region as the Redshift cluster.
When we run IAM role (redshift-copyunload) through the policy simulator it returns "permission allowed".
Enable "Enhanced VPC Routing" on your Redshift. Without the "Enhanced VPC Routing" your Redshift traffic will be coming via Internet and your S3 bucket policy will deny access. See here:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-enabling-cluster.html
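A sketch of enabling it programmatically (the cluster identifier is a placeholder):

import boto3

redshift = boto3.client("redshift")
# Routes the cluster's COPY/UNLOAD traffic through the VPC (e.g. an S3 VPC endpoint)
# instead of the public internet, so the bucket's aws:sourceVpc condition matches.
redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",
    EnhancedVpcRouting=True,
)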
1. Check the encryption of the bucket. According to the docs (https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html), the COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
2. Check the kms:* permissions on your key/role.
3. If the files come from EMR, check the EMR security configuration for S3.

How to remove "delete" permission on Amazon S3

In the Amazon S3 console I only see a permission option for "upload/delete". Is there a way to allow uploading but not deleting?
The permissions you are seeing in the AWS Management Console directly are based on the initial and comparatively simple Access Control Lists (ACL) available for S3, which essentially differentiated READ and WRITE permissions, see Specifying a Permission:
READ - Allows grantee to list the objects in the bucket
WRITE - Allows grantee to create, overwrite, and delete any object in the bucket
These limitations have been addressed by adding Bucket Policies (permissions applied on the bucket level) and IAM Policies (permissions applied on the user level), and all three can be used together as well (which can become rather complex, as addressed below), see Access Control for the entire picture.
Your use case probably asks for a respective bucket policy, which you can add directly from the S3 console as well. Clicking on Add bucket policy opens the Bucket Policy Editor, which features links to a couple of samples as well as the highly recommended AWS Policy Generator, which allows you to assemble a policy addressing your use case.
For an otherwise locked down bucket, the simplest form might look like so (please ensure to adjust Principal and Resource to your needs):
{
    "Statement": [
        {
            "Action": [
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<bucket_name>/<key_name>",
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
Depending on your use case, you can easily compose pretty complex policies by combining various Allow and Deny actions etc. - this can obviously yield inadvertent permissions as well, thus proper testing is key as usual; accordingly, please take care of the implications when using Using ACLs and Bucket Policies Together or IAM and Bucket Policies Together.
Finally, you might want to have a look at my answer to Problems specifying a single bucket in a simple AWS user policy as well, which addresses another commonly encountered pitfall with policies.
You can attach a no-delete policy to the IAM user. For example, if you don't want this IAM user to perform any delete operation on any buckets or any objects, you can set something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1480692207000",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteBucketPolicy",
                "s3:DeleteBucketWebsite",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
Also, you can check your policy with the IAM Policy Simulator (https://policysim.aws.amazon.com) to verify that your setup behaves as you expect.
Hope this helps!
This worked perfectly, thanks to Pung Worathiti Manosroi. I combined his policy as per below:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:PutObjectAcl",
                "s3:ListBucket",
                "s3:GetBucketAcl",
                "s3:PutBucketAcl",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::mybucketname/*",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {}
        },
        {
            "Effect": "Deny",
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteBucketPolicy",
                "s3:DeleteBucketWebsite",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": "arn:aws:s3:::mybucketname/*",
            "Condition": {}
        }
    ]
}
Yes, s3:DeleteObject is an option:
http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
However, there is no differentiation between changing an existing object (which would allow effectively deleting it) and creating a new object.