S3 always returns error code NoSuchKey even with an incorrect bucket name, instead of a specific error code about the bucket - aws-java-sdk

S3 always returns the error code NoSuchKey, i.e.
"when the bucket name given in the request is incorrect"
or
"when the bucket name given in the request is correct but the object key is invalid"
Is there any way to make the S3 API return a specific error code stating that the bucket does not exist, instead of the generic NoSuchKey error, when an invalid bucket name is passed while requesting an object?

First, check that the S3 object URL and the requested object URL are the same. Then check that the asynchronous S3 upload is handled properly.
A GetObject request may have happened before the upload completed.
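One way to get a bucket-specific error is to check the bucket separately before fetching the object, so a missing bucket surfaces as its own failure. A minimal boto3 sketch (the question uses the Java SDK, but it exposes the same calls; the bucket and key names are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_object_checked(bucket, key):
    try:
        # head_bucket fails (typically with code "404", or "403" when access
        # is denied) when the bucket itself is the problem.
        s3.head_bucket(Bucket=bucket)
    except ClientError as e:
        raise RuntimeError(f"Bucket {bucket} is not accessible: "
                           f"{e.response['Error']['Code']}") from e
    try:
        return s3.get_object(Bucket=bucket, Key=key)
    except ClientError as e:
        # At this point a failure refers to the key, e.g. NoSuchKey.
        raise RuntimeError(f"Object {key} not found: "
                           f"{e.response['Error']['Code']}") from e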

Related

putBucketPolicy Invalid principal in policy determine one or more

[nodejs sdk, s3.putBucketPolicy, error handling]
Is there a way to determine the (one or more) invalid ARNs (invalid account numbers) from the error object returned by the S3 putBucketPolicy call? The error statusCode is 400; however, I am trying to figure out which set of principals is invalid.
To clarify further, I am not looking to validate role or root ARN patterns. It is more like one or more account numbers that are not correct. Can we extract that from the error object or elsewhere?
There are a couple of ways to validate an ARN:
Using the Organizations service
Using the ARN as a principal in a bucket policy applied to a dummy bucket. Let the AWS SDK validate it for you and clean up afterwards (see the sketch below).
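For example, the second approach can try one principal at a time against a dummy bucket and record which ones S3 rejects. A rough boto3 sketch (the dummy bucket name is a placeholder and must already exist; the Node SDK calls are analogous):

import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
DUMMY_BUCKET = "my-policy-validation-bucket"  # placeholder, must already exist

def find_invalid_accounts(account_ids):
    invalid = []
    for account_id in account_ids:
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{DUMMY_BUCKET}/*",
            }],
        }
        try:
            s3.put_bucket_policy(Bucket=DUMMY_BUCKET, Policy=json.dumps(policy))
        except ClientError as e:
            # S3 typically rejects an unresolvable principal with the
            # MalformedPolicy error code ("Invalid principal in policy").
            if e.response["Error"]["Code"] == "MalformedPolicy":
                invalid.append(account_id)
            else:
                raise
    s3.delete_bucket_policy(Bucket=DUMMY_BUCKET)  # clean up afterwards
    return invalid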

WaiterError: Waiter ObjectExists failed: Forbidden

I have a requirement to call a lambda function in another account (B) when a file arrives in a bucket on another account (A). I have granted "s3:*" in the bucket policy of account A and "s3:*" to the lambda role in account B for account A's bucket. The S3 event gets raised and the lambda gets called. However, the lambda gets an "object doesn't exist" exception. So, I implemented a waiter object to wait until the lambda can find the object.
waiter = s3_client.get_waiter('object_exists')
waiter.wait(Bucket=bucket_name, Key=key)
When I drop the file manually, it works perfectly fine. But when AWS drops the file (CUR), the lambda gets a "Waiter ObjectExists failed: Forbidden" exception.
I looked at older Stack Overflow posts; people say that only "s3:ListBucket" is needed for headObject. I tried "s3:ListBucket" and "s3:*" but still get the same error.
Any help in this regard would be highly appreciated.
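For reference, the waiter call from the question can be given an explicit polling configuration and can surface the last HeadObject response when it gives up, which helps distinguish a slow delivery from a real permissions problem. A boto3 sketch with illustrative delay and attempt values (it does not by itself fix a genuine 403):

import boto3
from botocore.exceptions import WaiterError

s3_client = boto3.client("s3")
bucket_name = "example-cur-bucket"  # placeholder
key = "example/report.csv"          # placeholder

waiter = s3_client.get_waiter("object_exists")
try:
    waiter.wait(
        Bucket=bucket_name,
        Key=key,
        WaiterConfig={"Delay": 5, "MaxAttempts": 24},  # poll up to ~2 minutes
    )
except WaiterError as e:
    # last_response carries the final HeadObject error (e.g. 403 Forbidden).
    print("Waiter gave up:", e.last_response)
    raise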

"AllAccessDisabled: All access to this object has been disabled" error being thrown when copying between S3 Buckets

I am getting this error:
AllAccessDisabled: All access to this object has been disabled
when performing the s3.copyObject function in my Node Lambda function.
Is this error being thrown because of insufficient permissions on the source bucket, or because of insufficient permissions on the target bucket?
This error means you are trying to access a bucket that has been locked down by AWS so that nobody can access it, regardless of permissions -- all access has been disabled.
It can occur because a bill goes unpaid and probably for other reasons as well...
However... usually this means you've made a mistake in your code and are not accessing the bucket that you think you are.
s3.copyObject expects CopySource to be this:
'/' + source_bucket_name + '/' + object_key
If you overlook this and supply something like /uploads/funny/cat.png you're going to get exactly this error, because here, uploads is the bucket name and funny/cat.png is the object key... and the bucket named uploads happens to be a bucket that returns the AllAccessDisabled error... so the real error here is that you are accessing the wrong bucket.
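To make that concrete, here is how the copy might look in boto3 (the question uses the Node SDK, where CopySource is the string form shown above; the bucket names below are hypothetical):

import boto3

s3 = boto3.client("s3")

source_bucket = "my-source-bucket"   # hypothetical
target_bucket = "my-target-bucket"   # hypothetical
object_key = "uploads/funny/cat.png"

s3.copy_object(
    Bucket=target_bucket,  # destination bucket
    Key=object_key,        # destination key
    # CopySource must name the source bucket and the key; passing only
    # "/uploads/funny/cat.png" makes S3 treat "uploads" as the bucket name.
    CopySource={"Bucket": source_bucket, "Key": object_key},
)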
If the bucket name in your code does not match the actual bucket name, S3 can also return a 403 Forbidden error. Make sure the name is spelled correctly.

Accessing FlowFile content in NIFI PutS3Object Processor

I am new to NiFi and want to push data from Kafka to an S3 bucket. I am using the PutS3Object processor and can push data to S3 if I hard-code the Bucket value as mphdf/orderEvent, but I want to choose the bucket based on a field in the content of the FlowFile, which is JSON. So, if the JSON content is {"menu": {"type": "file","value": "File"}}, can I set the value of the Bucket property to mphdf/$.menu.type? I have tried this and get the error below. I want to know if there is a way to access the FlowFile content from the PutS3Object processor and make bucket names configurable, or whether I will have to build my own processor.
ERROR [Timer-Driven Process Thread-10]
o.a.nifi.processors.aws.s3.PutS3Object
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you
provided was not well-formed or did not validate against our
published schema (Service: Amazon S3; Status Code: 400; Error Code:
MalformedXML; Request ID: 77DF07828CBA0E5F)
I believe what you want to do is use an EvaluateJSONPath processor, which evaluates arbitrary JSONPath expressions against the JSON content and extracts the results to flowfile attributes. You can then reference the flowfile attribute using NiFi Expression Language in the PutS3Object configuration (see your first property Object Key which references ${filename}). In this way, you would evaluate $.menu.type and store it into an attribute menuType in the EvaluateJSONPath processor, then in PutS3Object you would have Bucket be mphdf/${menuType}.
You might have to play around with it a bit but off the top of my head I think that should work.
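As an illustration, the processor settings might look like this (the menuType attribute name is just an example):

EvaluateJSONPath
  Destination: flowfile-attribute
  menuType: $.menu.type

PutS3Object
  Object Key: ${filename}
  Bucket: mphdf/${menuType}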

Amazon S3 error - A conflicting conditional operation is currently in progress against this resource.

Why do I get this error when I try to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the bucket name becomes available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; you can check the points below to rectify the error:
Is this bucket available, or is it queued for deletion?
Do you have adequate access privileges for this operation?
Your bucket name must be unique.
P.S.: I edited this answer to add more details as shared by Sanity below; his answer is more accurate, with updated information, and should be selected as the correct answer to this question.
You can view the related errors for this operation here.
Creating an S3 bucket policy and an S3 public access block for the same bucket at the same time will cause this error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}
We have also observed this error several times when trying to move a bucket from one account to another. To do this you should:
Back up the content of the S3 bucket you want to move.
Delete the S3 bucket in the source account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
I received this error running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again" which I did and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5-second sleep between the requests. There was nothing else I had to do.
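The same idea can also be expressed as a retry on the OperationAborted error code rather than a fixed sleep. A boto3 sketch (the bucket name and block settings are placeholders):

import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

for attempt in range(5):
    try:
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "BlockPublicPolicy": True,
                "IgnorePublicAcls": True,
                "RestrictPublicBuckets": True,
            },
        )
        break
    except ClientError as e:
        # Retry only the conflicting-operation error; anything else is real.
        if e.response["Error"]["Code"] != "OperationAborted" or attempt == 4:
            raise
        time.sleep(5)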