WaiterError: Waiter ObjectExists failed: Forbidden - amazon-s3

I have a requirement to call a Lambda function in one account (B) when a file arrives in a bucket in another account (A). I have granted "s3:*" on the bucket policy of account A, and "s3:*" on the Lambda role in account B for account A's bucket. The S3 event gets raised and the Lambda gets called. However, the Lambda gets an "object doesn't exist" exception, so I implemented a waiter to wait until the Lambda can find the object:
waiter = s3_client.get_waiter('object_exists')
waiter.wait(Bucket=bucket_name, Key=key)
When I drop the file manually, it works perfectly fine. But when AWS delivers the file (a Cost and Usage Report), the Lambda gets the "Waiter ObjectExists failed: Forbidden" exception.
I looked at old Stack Overflow posts; people say that only "s3:ListBucket" is needed for HeadObject. I tried "s3:ListBucket" and "s3:*" but am still getting the same error.
Any help in this regard would be highly appreciated.
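For what it's worth, the waiter's retry budget can be made explicit, and a plain head_object call after a failed wait surfaces the real status code (403 vs 404) instead of the generic waiter message. A minimal sketch, assuming a boto3 S3 client and the bucket/key taken from the event; the helper name and the defaults are mine:

```python
def wait_for_object(s3_client, bucket_name, key, delay=5, max_attempts=12):
    """Wait for an S3 object with an explicit retry budget, then fall back
    to a direct HEAD so the underlying error (403 vs 404) is surfaced."""
    waiter = s3_client.get_waiter("object_exists")
    try:
        waiter.wait(
            Bucket=bucket_name,
            Key=key,
            WaiterConfig={"Delay": delay, "MaxAttempts": max_attempts},
        )
    except Exception:
        # The waiter hides the status code; a direct HEAD re-raises the
        # real ClientError, showing whether this is a permissions problem
        # (403) or the object genuinely not being there yet (404).
        s3_client.head_object(Bucket=bucket_name, Key=key)
```

A 403 here usually means the HeadObject call itself is being denied (for CUR deliveries, the object is written by AWS, so object ownership and the bucket policy both matter), not that the object is missing.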

Related

putBucketPolicy Invalid principal in policy determine one or more

[nodejs sdk, s3.putBucketPolicy, error handling]
Is there a way to determine the (one or more) invalid ARNs (invalid account numbers) from the error object returned by the S3 putBucketPolicy call? The error statusCode is 400; however, I am trying to figure out which of the principals are invalid.
To clarify further: I am not looking to validate role or root ARN patterns. It is more that one or more account numbers are not correct. Can we extract that from the error object, or from somewhere else?
There are a couple of ways to validate an ARN:
Using the Organizations service
Using the ARN as a principal in a bucket policy applied to a dummy bucket. Let the AWS SDK validate it for you, and clean up afterwards.
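The second approach can be scripted: apply a policy naming one principal at a time to a throwaway bucket, and collect the ARNs the service rejects. A sketch of that idea; try_policy is a caller-supplied callable (e.g. wrapping s3.put_bucket_policy on a dummy bucket), and the helper name, the dummy bucket name, and the one-at-a-time strategy are my assumptions:

```python
import json

def find_invalid_principals(principals, try_policy):
    """Apply a minimal policy per principal; collect the ARNs rejected
    by the validator. try_policy(policy_json) must raise on failure."""
    invalid = []
    for arn in principals:
        policy = json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": arn},
                "Action": "s3:GetObject",
                # Hypothetical throwaway bucket used only for validation.
                "Resource": "arn:aws:s3:::dummy-validation-bucket/*",
            }],
        })
        try:
            try_policy(policy)
        except Exception:
            invalid.append(arn)
    return invalid
```

Testing one principal per call is slower than one bulk call, but it is what lets you attribute the 400 to a specific ARN.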

S3 always returns error code: NoSuchKey even with incorrect Bucket Name instead of some specific error code detailing about bucket

S3 always returns the error code NoSuchKey, i.e.
"when the bucket name given in the request is incorrect"
or
"when the bucket name given in the request is correct but the object key is invalid".
Is there any way to make the S3 API return a specific error code stating that the bucket does not exist, instead of the generic NoSuchKey, when an invalid bucket name is passed while requesting an object?
First, check that the S3 object URL and the requested object URL are the same. Then check that the S3 upload is handled properly asynchronously:
a GetObject request may happen before the upload has completed.
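One way to disambiguate on the client side is to probe the bucket first and only then the object. A sketch written against a boto3-style client (head_bucket/head_object raising errors that carry response["Error"]["Code"]); the helper names and return values are mine, and the helper itself has no boto3 dependency:

```python
def _error_code(exc):
    # boto3's ClientError carries the S3 error code in exc.response.
    return getattr(exc, "response", {}).get("Error", {}).get("Code", "")

def locate_missing(s3_client, bucket, key):
    """Return 'NoSuchBucket', 'NoSuchKey', or 'Exists' by probing the
    bucket before the object, so the two failure modes are separable."""
    try:
        s3_client.head_bucket(Bucket=bucket)
    except Exception as e:
        if _error_code(e) in ("404", "NoSuchBucket"):
            return "NoSuchBucket"
        raise
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
    except Exception as e:
        if _error_code(e) in ("404", "NoSuchKey"):
            return "NoSuchKey"
        raise
    return "Exists"
```

Note that HEAD calls report bare status codes ("404") rather than the XML error codes, which is why both spellings are checked.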

"AllAccessDisabled: All access to this object has been disabled" error being thrown when copying between S3 Buckets

I am getting this error:
AllAccessDisabled: All access to this object has been disabled
When performing the s3.copyObject function in my node Lambda function.
Is this error being thrown because of insufficient permissions on the source bucket, or because of insufficient permissions on the target bucket?
This error means you are trying to access a bucket that has been locked down by AWS so that nobody can access it, regardless of permissions -- all access has been disabled.
It can occur because a bill goes unpaid and probably for other reasons as well...
However... usually this means you've made a mistake in your code and are not accessing the bucket that you think you are.
s3.copyObject expects CopySource to be this:
'/' + source_bucket_name + '/' + object_key
If you overlook this and supply something like /uploads/funny/cat.png, you will get exactly this error, because here uploads is the bucket name and funny/cat.png is the object key... and the bucket named uploads happens to be a bucket that returns the AllAccessDisabled error. So the real error here is that you are accessing the wrong bucket.
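To make the pitfall concrete, here is a small sketch: building CopySource from the bucket and key separately avoids the mistake, and parsing a wrong value shows how /uploads/funny/cat.png gets split. The helper names are mine:

```python
def make_copy_source(source_bucket, object_key):
    """Build the '/bucket/key' string form of CopySource from its parts,
    so a path segment of the key can never be mistaken for the bucket."""
    return "/" + source_bucket + "/" + object_key

def parse_copy_source(copy_source):
    """Show how S3 interprets a CopySource string: everything before the
    first '/' (after any leading slash) is the bucket, the rest is the key."""
    bucket, _, key = copy_source.lstrip("/").partition("/")
    return bucket, key
```

parse_copy_source("/uploads/funny/cat.png") splits into bucket "uploads" and key "funny/cat.png", which is exactly the misread described above.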
If the bucket name in your code does not match the actual bucket name, it will also throw a 403 Forbidden error. Make sure you spell it correctly.

Amazon S3 error- A conflicting conditional operation is currently in progress against this resource.

Why do I get this error when I try to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the name is available again.
Kindly note, I received this error when my access privileges were blocked.
This error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; you can check the points below to rectify the error:
Is the bucket available, or is it queued for deletion?
Do you have adequate access privileges for this operation?
Your bucket name must be unique.
P.S.: Edited this answer to add more details as shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the answer to this question.
Creating an S3 bucket policy and an S3 public access block for the same bucket at the same time will cause the error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
  bucket = local.bucket_alb_log_id
  policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}

resource "aws_s3_bucket_public_access_block" "lb_log" {
  bucket              = local.bucket_alb_log_id
  block_public_acls   = true
  block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
  bucket              = local.bucket_alb_log_id
  block_public_acls   = true
  block_public_policy = true

  #--------------------------------------------------------------------------------
  # To avoid OperationAborted: A conflicting conditional operation is currently in progress
  #--------------------------------------------------------------------------------
  depends_on = [
    aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
  ]
}
We have also observed this error several times when trying to move a bucket from one account to another. To achieve this you should do the following:
Back up the content of the S3 bucket you want to move.
Delete the S3 bucket in the first account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket from the backup.
I received this error running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again", which I did, and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5-second sleep between the requests. There was nothing else I had to do.
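Since the error is transient, a small retry helper with a pause between attempts captures both the "try again" and the "5 second sleep" workarounds in one place. A sketch; operation is any zero-argument callable (e.g. wrapping the public-access-block call), and the conflict check and default values are my assumptions to tune for your setup:

```python
import time

def retry_on_conflict(operation, attempts=3, delay=5.0,
                      is_conflict=lambda e: "OperationAborted" in str(e)):
    """Call operation(), retrying after `delay` seconds while it fails
    with an OperationAborted-style conflict; re-raise anything else."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception as e:
            # Give up on the last attempt or on a non-conflict error.
            if attempt == attempts - 1 or not is_conflict(e):
                raise
            time.sleep(delay)
```

In Terraform itself the depends_on shown above is the cleaner fix, since it serializes the two resources instead of racing and retrying.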

How to access and read bucket information of a shared bucket using the JetS3t API?

Here is the explanation of my error. I have registered two users, A and B, in Eucalyptus (open-source). I created a bucket B1 using the JetS3t API in user A's account and granted READ permission to user B (using the "CanonicalGrantee" interface). While listing the access control list using A's credentials I got FULL_CONTROL for A and READ for B. But when I tried to access bucket B1's information using B's credentials I got this error:
Exception in thread "main" org.jets3t.service.S3ServiceException: The action listObjects cannot be performed with an invalid bucket: null
at org.jets3t.service.S3Service.listObjects(S3Service.java:1410)
at test.ObjectPermission.main(ObjectPermission.java:40)
The problematic code is S3Bucket publicBucket = s3Service.getBucket("B1");
Here B1 is a bucket belonging to user A. In the above code s3Service returns a null value. I know that s3Service only retrieves information for buckets created under B's credentials.
I don't know how to resolve this and access the shared bucket using the JetS3t API.
The method you are using searches for bucket B1 of account A in account B's list of buckets, so it never finds that bucket and returns null.
You have to check it another way: make a HEAD request for bucket B1 using account B's service object by calling isBucketAccessible(). If it returns true, the bucket is accessible; otherwise it is not.
I am 100% sure it will work :)