Amazon Redshift COPY always returns S3ServiceException:Access Denied,Status 403 - amazon-s3

I'm really struggling with how to transfer data from an Amazon S3 bucket to Redshift with the COPY command.
So far, I have created an IAM user with the 'AmazonS3ReadOnlyAccess' policy attached. But when I run the COPY command as follows, an Access Denied error is always returned.
copy my_table from 's3://s3.ap-northeast-2.amazonaws.com/mybucket/myobject' credentials 'aws_access_key_id=<...>;aws_secret_access_key=<...>' REGION 'ap-northeast-2' delimiter '|';
Error:
Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
Details: -----------------------------------------------
error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
code: 8001
context: Listing bucket=s3.ap-northeast-2.amazonaws.com prefix=mybucket/myobject
query: 1311463
location: s3_utility.cpp:542
process: padbmaster [pid=4527]
-----------------------------------------------;
Can anyone give me some clues or advice?
Thanks a lot!

Remove the endpoint s3.ap-northeast-2.amazonaws.com from the S3 path:
COPY my_table
FROM 's3://mybucket/myobject'
CREDENTIALS ''
REGION 'ap-northeast-2'
DELIMITER '|'
;
(See the examples in the documentation.) While the Access Denied error is definitely misleading, the returned message gives some hint as to what went wrong:
bucket=s3.ap-northeast-2.amazonaws.com
prefix=mybucket/myobject
We'd expect to see bucket=mybucket and prefix=myobject, though.

Check the encryption of the bucket.
According to the docs: https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html
The COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
Check the kms: rules on your key/role.
If the files come from EMR, check the security configurations for S3.
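As a quick check, a boto3 sketch along these lines (the bucket name is a placeholder) shows which default server-side encryption, if any, is configured on the bucket:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "mybucket"  # placeholder: your bucket name

try:
    enc = s3.get_bucket_encryption(Bucket=bucket)
    for rule in enc["ServerSideEncryptionConfiguration"]["Rules"]:
        sse = rule["ApplyServerSideEncryptionByDefault"]
        # "AES256" means SSE-S3; "aws:kms" means SSE-KMS (check the key policy in that case)
        print(sse["SSEAlgorithm"], sse.get("KMSMasterKeyID", ""))
except ClientError as e:
    if e.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print("No default encryption configured on this bucket")
    else:
        raise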

Your Redshift cluster role does not have the right to access the S3 bucket. Make sure the role you use for Redshift has access to the bucket, and that the bucket does not have a policy that blocks that access.
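For illustration only, a boto3 sketch like the following attaches a minimal read-only inline policy to the cluster's role; the role name, policy name, and bucket are placeholders for your own setup:
import json
import boto3

iam = boto3.client("iam")

bucket = "mybucket"             # placeholder
role_name = "my-redshift-role"  # placeholder: the IAM role attached to the cluster

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
    }],
}

# attach the policy inline so COPY can list the bucket and read the objects
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="redshift-s3-read",
    PolicyDocument=json.dumps(policy),
)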

Related

putBucketPolicy Invalid principal in policy determine one or more

[nodejs sdk, s3.putBucketPolicy, error handling]
Is there a way to determine the (one or more) invalid ARNs (invalid account numbers) from the error object returned by the S3 putBucketPolicy call? The error statusCode is 400; however, I'm trying to figure out which set of principals is invalid.
To clarify further, I am not looking to validate role or root ARN patterns. More like one or more account number(s) that are not correct. Can we extract that from the error object or elsewhere?
There are a couple of ways to validate an ARN:
Using the Organizations service
Using the ARN as a principal in a bucket policy applied to a dummy bucket. Let the AWS SDK validate it for you, and clean up afterwards (see the sketch below).
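A rough boto3 sketch of the second approach (the dummy bucket and the ARN under test are placeholders); testing one principal at a time narrows down which account number is bad, since S3 rejects the whole policy as malformed when a principal is invalid:
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
dummy_bucket = "my-policy-validation-bucket"    # placeholder: an empty bucket you own
arn_to_test = "arn:aws:iam::123456789012:root"  # placeholder principal

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": arn_to_test},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{dummy_bucket}/*",
    }],
}

try:
    s3.put_bucket_policy(Bucket=dummy_bucket, Policy=json.dumps(policy))
    print("Principal looks valid")
    s3.delete_bucket_policy(Bucket=dummy_bucket)  # clean up afterwards
except ClientError as e:
    # an invalid account number surfaces here as "Invalid principal in policy"
    print("Rejected:", e.response["Error"]["Message"])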

"AllAccessDisabled: All access to this object has been disabled" error being thrown when copying between S3 Buckets

I am getting this error:
AllAccessDisabled: All access to this object has been disabled
When performing the s3.copyObject function in my node Lambda function.
Is this error being thrown because of insufficient permissions on the source bucket, or because of insufficient permissions on the target bucket?
This error means you are trying to access a bucket that has been locked down by AWS so that nobody can access it, regardless of permissions -- all access has been disabled.
It can occur because a bill goes unpaid and probably for other reasons as well...
However... usually this means you've made a mistake in your code and are not accessing the bucket that you think you are.
s3.copyObject expects CopySource to be this:
'/' + source_bucket_name + '/' + object_key
If you overlook this and supply something like /uploads/funny/cat.png you're going to get exactly this error, because here, uploads is the bucket name and funny/cat.png is the object key... and the bucket named uploads happens to be a bucket that returns the AllAccessDisabled error... so the real error here is that you are accessing the wrong bucket.
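The same rule applies outside the Node SDK; a minimal boto3 sketch (bucket and key names are placeholders), with the source bucket named explicitly in CopySource:
import boto3

s3 = boto3.client("s3")

source_bucket = "my-source-bucket"  # placeholder
target_bucket = "my-target-bucket"  # placeholder
key = "uploads/funny/cat.png"       # placeholder object key

# CopySource must identify the source bucket explicitly; passing only
# "/uploads/funny/cat.png" would make S3 treat "uploads" as the bucket name.
s3.copy_object(
    CopySource={"Bucket": source_bucket, "Key": key},
    Bucket=target_bucket,
    Key=key,
)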
If the bucket name does not match the bucket name in your code, it will also throw a 403 Forbidden error. Make sure you spell it correctly.

Grant read to Elasticache on S3 object with boto3

I have a script that uploads a file to S3 and then starts an ElastiCache server seeded with the file. The ElastiCache documentation says to grant 'Read' and 'Read permissions' on the object to 540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353, which is the canonical ID for any region that's not GovCloud or China.
Here is my code:
import boto3
s3_cl = boto3.client('s3')
s3_cl.put_object_acl(Bucket='bucket-name', Key='file.rdb', GrantRead='540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353')
Here is the error I get:
ClientError: An error occurred (InvalidArgument) when calling the PutObjectAcl operation: Argument format not recognized
What parameter am I supposed to provide to GrantRead? My understanding was that it's a grantee, which can be a canonical ID, so what am I doing wrong?
I believe that you should provide grantees in one of the following formats:
id=<canonical user ID of the grantee>
email=<email address of the grantee>
uri=<URI of the grantee group>
displayname=<screen name of the grantee>
type=<type of grantee>
I inferred this information from the awscli put-object-acl documentation.
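Assuming that format, a corrected version of the snippet from the question might look like this; GrantRead is passed through as the x-amz-grant-read header, so the grantee is given in the id=... form:
import boto3

s3_cl = boto3.client("s3")

# the grantee is expressed as id="<canonical user ID>" (emailaddress=... and uri=... also work)
s3_cl.put_object_acl(
    Bucket="bucket-name",
    Key="file.rdb",
    GrantRead='id="540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353"',
)
# note: PutObjectAcl replaces the existing ACL, so you may also want GrantFullControl
# for your own canonical ID to keep owner access explicit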

How can i change policy condition in Amazon S3 for a Bucket

My folder configuration in Amazon S3 looks like BucketName/R/A/123. Now I want to add another folder under my bucket and save data as BucketName/I/A/123. When I try to save my data, I get an error:
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["starts-with", "$key", "R/"]</Message></Error>
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
Can anyone point me to where I need to make a change?
Thanks
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
No, not according to this error message.
You appear to be supplying this policy in your code...
["starts-with", "$key", "R/"]
...and this policy -- part of a form post upload policy document your code is generating -- is telling S3 that you want it to deny an upload with a key that doesn't start with "R/" ...
If this isn't what you want, you need to change the policy your code is supplying, so that it allows you to name the key the way you want to name it.
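If that policy is generated server-side with boto3 (your code may well use a different SDK), a sketch like this shows where the condition lives; the bucket and key pattern are placeholders, and changing the condition to "I/" is what allows keys under the new folder:
import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="BucketName",        # placeholder
    Key="I/A/123/${filename}",  # placeholder key pattern under the new prefix
    Conditions=[
        # allow uploads whose key starts with "I/" instead of "R/"
        ["starts-with", "$key", "I/"],
    ],
    ExpiresIn=3600,
)
print(post["url"], post["fields"])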

Amazon S3 error- A conflicting conditional operation is currently in progress against this resource.

Why do I get this error when I try to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the bucket name is available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation for creating a new bucket in S3 was aborted.
There can be multiple reasons for this; you can check the points below to rectify the error:
Is the bucket available, or is it queued for deletion?
Do you have adequate access privileges for this operation?
Your bucket name must be unique.
P.S.: I edited this answer to add more details as shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the correct answer to this question.
Creating an S3 bucket policy and the S3 public access block for a bucket at the same time will cause the error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}
We have also observed this error several times when trying to move a bucket from one account to another. In order to achieve this, you should do the following:
Back up the content of the S3 bucket you want to move.
Delete the S3 bucket in the account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
I received this error running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again" which I did and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5 second sleep between multiple requests. There is nothing else that I had to do.
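Outside Terraform, the same retry-with-a-pause idea can be sketched with boto3 (the bucket name is a placeholder): retry the call with a short sleep whenever S3 answers with OperationAborted.
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-conflicting-bucket"  # placeholder

for attempt in range(5):
    try:
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "BlockPublicPolicy": True,
                "IgnorePublicAcls": True,
                "RestrictPublicBuckets": True,
            },
        )
        break
    except ClientError as e:
        if e.response["Error"]["Code"] != "OperationAborted" or attempt == 4:
            raise
        time.sleep(5)  # wait out the conflicting operation, then try again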