Amazon S3 error: A conflicting conditional operation is currently in progress against this resource.

Why did I get this error when I tried to create a bucket in Amazon S3?

This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the name is available again.

Kindly note: I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; check the following points to rectify the error:
Is the bucket name still available, or is the bucket queued for deletion?
Do you have adequate access privileges for this operation?
Your bucket name must be globally unique.
P.S.: I edited this answer to add more details shared by Sanity below; his answer is more accurate and up to date.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the accepted answer to this question.

Creating an S3 bucket policy and the S3 public access block for the same bucket at the same time will cause the error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}

We have also observed this error several times when trying to move a bucket from one account to another. To achieve this, you should do the following (a boto3 sketch of the backup step appears after the list):
Back up the contents of the S3 bucket you want to move.
Delete the S3 bucket in the source account.
Wait for 1-2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
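For reference, here is a minimal boto3 sketch of the backup step, assuming the caller's credentials can read the source bucket and write to a (hypothetical) backup bucket; the restore step is the same call with the buckets swapped. Bucket names are placeholders.

import boto3

s3 = boto3.client("s3")

def copy_all_objects(source_bucket, dest_bucket):
    # Copy every object from source_bucket into dest_bucket.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket):
        for obj in page.get("Contents", []):
            s3.copy_object(
                Bucket=dest_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
            )

# Step 1: back up before deleting the original bucket.
copy_all_objects("my-app-logs", "my-app-logs-backup")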

I received this error when running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "Please try again", which I did, and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.

To fully resolve this error, I inserted a 5-second sleep between successive requests. There was nothing else I had to do.
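As an illustration, here is a minimal boto3 sketch of that retry-with-sleep approach, assuming the conflicting call is put_public_access_block; the bucket name is a placeholder.

import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for attempt in range(5):
    try:
        s3.put_public_access_block(
            Bucket="my-bucket",  # placeholder bucket name
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
        break
    except ClientError as err:
        # OperationAborted is the code behind "A conflicting conditional
        # operation is currently in progress"; sleep 5 seconds and retry.
        if err.response["Error"]["Code"] != "OperationAborted" or attempt == 4:
            raise
        time.sleep(5)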

Related

How to enable s3 Copy Bucket Permissions in Terraform statement

My goal is to copy the data from a set of S3 buckets into the main logging account's bucket. Every time I try to run:
aws s3 cp s3://sub-account-cloudtrail s3://master-acccount-cloudtrail --profile=admin;
I get
(AccessDenied) when calling the CopyObject operation: Access Denied
I've looked at this post:
How to fix AccessDenied calling CopyObject
I am trying to add the bucket permissions to a Terraform data aws_iam_policy_document. The statement is written like so
data "aws_iam_policy_document" "s3" {
  version = "2012-10-17"

  statement {
    sid    = "CopyOobjectPermissions"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }

    actions   = ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"]
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }

  statement {
    sid     = "CopyBucketPermissions"
    actions = ["s3:ListBucket"]
    effect  = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }

    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }
}
My goal is to restrict the permissions to the role that is assumed from the sub-account into the master account. My specific question is: what permissions need to be added in order to enable copying?
Expected:
Terraform plan runs successfully
Actual:
│ Error: Error putting S3 policy: MalformedPolicy: Action does not apply to any resource(s) in statement
How can I resolve this?
Two things to mention:
In your second statement the resource is wrong; this is why you get the MalformedPolicy error. It should be:
resources = [aws_s3_bucket.nfcisbenchmark_cloudtrail.arn]
Be careful with the identifier. At this point I'm not really sure if your buckets are in different accounts or not. If they are, the account_id in the identifier should reference the source account. data.aws_caller_identity.current.account_id returns the account ID to which Terraform is authenticated, which usually is the account where you are deploying resources (the destination account). If you are not doing cross-account copying, then it should be fine as it is.
Furthermore, in the case of cross-account access, the ops-mgmt-admin role should have a policy applied to it that grants access to get/list/upload objects in the S3 bucket.
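For illustration, here is a boto3 sketch (rather than Terraform) of the kind of inline policy described above for the ops-mgmt-admin role; the destination bucket ARN and the policy name are placeholders.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::master-account-cloudtrail/*",
        },
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            # ListBucket is granted on the bucket ARN itself, not on objects.
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::master-account-cloudtrail",
        },
    ],
}

iam.put_role_policy(
    RoleName="ops-mgmt-admin",
    PolicyName="s3-cross-account-copy",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)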

WaiterError: Waiter ObjectExists failed: Forbidden

I have a requirement to call a Lambda function in another account (B) when a file arrives in a bucket in another account (A). I have granted "s3:*" in the bucket policy of account A and "s3:*" to the Lambda role in account B for account A's bucket. The S3 event gets raised and the Lambda gets called. However, the Lambda gets an "object doesn't exist" exception. So, I implemented a waiter to wait until the Lambda can find the object.
waiter = s3_client.get_waiter('object_exists')
waiter.wait(Bucket = bucket_name, Key = key)
When I drop the file manually, it works perfectly fine. But when AWS drops the file (CUR), the Lambda gets a "Waiter ObjectExists failed: Forbidden" exception.
I looked at old Stack Overflow posts; people say that only "s3:ListBucket" is needed for HeadObject. I tried "s3:ListBucket" and "s3:*" but am still getting the same error.
Any help in this regard would be highly appreciated.
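For reference, here is a boto3 sketch of the shape of a bucket policy in account A that grants both permissions to the role in account B. The bucket name and role ARN are placeholders, and this only illustrates the permission split discussed above (GetObject on objects, ListBucket on the bucket); it is not a confirmed fix for the CUR-delivery case.

import json
import boto3

s3 = boto3.client("s3")

bucket_name = "account-a-cur-bucket"  # placeholder
lambda_role_arn = "arn:aws:iam::222222222222:role/account-b-lambda-role"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": lambda_role_arn},
            # HeadObject (what the waiter calls) is authorized as s3:GetObject.
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": lambda_role_arn},
            # Without ListBucket, a missing object returns 403 instead of 404.
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket_name}",
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))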

"AllAccessDisabled: All access to this object has been disabled" error being thrown when copying between S3 Buckets

I am getting this error:
AllAccessDisabled: All access to this object has been disabled
When calling the s3.copyObject function in my Node.js Lambda function.
Is this error being thrown because of insufficient permissions on the source bucket, or because of insufficient permissions on the target bucket?
This error means you are trying to access a bucket that has been locked down by AWS so that nobody can access it, regardless of permissions -- all access has been disabled.
It can occur because a bill goes unpaid and probably for other reasons as well...
However... usually this means you've made a mistake in your code and are not accessing the bucket that you think you are.
s3.copyObject expects CopySource to be this:
'/' + source_bucket_name + '/' + object_key
If you overlook this and supply something like /uploads/funny/cat.png you're going to get exactly this error, because here, uploads is the bucket name and funny/cat.png is the object key... and the bucket named uploads happens to be a bucket that returns the AllAccessDisabled error... so the real error here is that you are accessing the wrong bucket.
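The same split between bucket name and object key, sketched with boto3's copy_object for comparison (bucket names are placeholders):

import boto3

s3 = boto3.client("s3")

# The source bucket and key are supplied separately; passing
# "/uploads/funny/cat.png" as the source would make "uploads" the bucket name.
s3.copy_object(
    Bucket="destination-bucket",  # placeholder
    Key="funny/cat.png",
    CopySource={"Bucket": "source-bucket", "Key": "funny/cat.png"},
)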
If the bucket name does not match the bucket name in your code, it will also throw a 403 Forbidden error. Make sure it is spelled correctly.

Amazon Redshift COPY always return S3ServiceException:Access Denied,Status 403

I'm really struggling with transferring data from an Amazon S3 bucket to Redshift with the COPY command.
So far, I have created an IAM user and assigned the 'AmazonS3ReadOnlyAccess' policy. But when I call the COPY command as follows, an Access Denied error is always returned.
copy my_table from 's3://s3.ap-northeast-2.amazonaws.com/mybucket/myobject' credentials 'aws_access_key_id=<...>;aws_secret_access_key=<...>' REGION'ap-northeast-2' delimiter '|';
Error:
Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
Details: -----------------------------------------------
error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
code: 8001
context: Listing bucket=s3.ap-northeast-2.amazonaws.com prefix=mybucket/myobject
query: 1311463
location: s3_utility.cpp:542
process: padbmaster [pid=4527]
-----------------------------------------------;
Can anyone give me some clues or advice?
Thanks a lot!
Remove the endpoint s3.ap-northeast-2.amazonaws.com from the S3 path:
COPY my_table
FROM 's3://mybucket/myobject'
CREDENTIALS ''
REGION 'ap-northeast-2'
DELIMITER '|'
;
(See the examples in the documentation.) While the Access Denied error is definitely misleading, the returned message gives some hint as to what went wrong:
bucket=s3.ap-northeast-2.amazonaws.com
prefix=mybucket/myobject
We'd expect to see bucket=mybucket and prefix=myobject, though.
Check the encryption of the bucket.
According to the docs: https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html
The COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
Check the KMS rules on your key or role.
If the files come from EMR, check the security configurations for S3.
Your Redshift cluster role does not have the right to access the S3 bucket. Make sure the role you use for Redshift has access to the bucket and that the bucket does not have a policy that blocks the access.
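As a sketch of that check, one way to give the cluster role read access is to attach the AmazonS3ReadOnlyAccess managed policy mentioned in the question; the role name below is a placeholder.

import boto3

iam = boto3.client("iam")

# Attach S3 read-only access to the IAM role associated with the Redshift cluster.
iam.attach_role_policy(
    RoleName="my-redshift-cluster-role",  # placeholder
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)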

How can I change a policy condition in Amazon S3 for a Bucket

My folder configuration in Amazon S3 looks like BucketName/R/A/123. Now I want to add another folder under my bucket and save data as BucketName/I/A/123. When I try to save my data, I get an error:
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["starts-with", "$key", "R/"]</Message></Error>
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
Can anyone point me to where I need to make a change?
Thanks
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
No, not according to this error message.
You appear to be supplying this policy in your code...
["starts-with", "$key", "R/"]
...and this policy -- part of a form POST upload policy document your code is generating -- is telling S3 that you want it to deny an upload with a key that doesn't start with "R/" ...
If this isn't what you want, you need to change the policy your code is supplying, so that it allows you to name the key the way you want to name it.
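For illustration, here is a boto3 sketch of generating a POST upload policy whose condition allows keys under "I/" instead of "R/". The question doesn't show how the original policy is generated, so the bucket name, key, and expiry below are placeholders.

import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="BucketName",  # placeholder
    Key="I/A/123",
    # The starts-with condition now permits keys under "I/" rather than "R/".
    Conditions=[["starts-with", "$key", "I/"]],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] are what the browser form would POST.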