[nodejs sdk, s3.putBucketPolicy, error handling]
Is there a way to determine (one or more) invalid ARNs (invalid account numbers) from the error object returned by the S3 putBucketPolicy call? The error statusCode is 400; however, I am trying to figure out which set of principals is invalid.
To clarify further: I am not looking to validate role or root ARN patterns. It's more like one or more account numbers that are not correct. Can we extract that from the error object, or from somewhere else?
There are a couple of ways to validate an ARN:
Using the Organizations service.
Using the ARN as a principal in a bucket policy applied to a dummy bucket, letting AWS validate it for you, and cleaning up afterwards; see the sketch below.
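A minimal sketch of the second approach, assuming the AWS SDK for JavaScript v3; the throwaway bucket name is a placeholder you must create beforehand, and checking the error name against MalformedPolicy is an assumption about how S3 reports the rejection:

import { S3Client, PutBucketPolicyCommand, DeleteBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const VALIDATION_BUCKET = "my-policy-validation-bucket"; // hypothetical throwaway bucket

// Try each principal on its own in a harmless Deny statement; S3 rejects the
// PutBucketPolicy call with MalformedPolicy when a principal cannot be resolved.
async function findInvalidPrincipals(principals: string[]): Promise<string[]> {
  const invalid: string[] = [];
  for (const arn of principals) {
    const policy = JSON.stringify({
      Version: "2012-10-17",
      Statement: [{
        Sid: "ValidationOnly",
        Effect: "Deny",
        Principal: { AWS: arn },
        Action: "s3:GetObject",
        Resource: `arn:aws:s3:::${VALIDATION_BUCKET}/*`,
      }],
    });
    try {
      await s3.send(new PutBucketPolicyCommand({ Bucket: VALIDATION_BUCKET, Policy: policy }));
    } catch (err) {
      if ((err as Error).name === "MalformedPolicy") {
        invalid.push(arn); // S3 could not resolve this principal
      } else {
        throw err;
      }
    }
  }
  // Clean up so the throwaway bucket is left without a policy.
  await s3.send(new DeleteBucketPolicyCommand({ Bucket: VALIDATION_BUCKET }));
  return invalid;
}

findInvalidPrincipals(["arn:aws:iam::111122223333:root"]).then(console.log);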
I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc., and may create more of these roles in the future. We want all roles in an account that begin with RolePrefix to be able to access the S3 bucket, without having to change the policy document in the future.
My Terraform for the bucket policy document is below:
data "aws_iam_policy_document" "bucket_policy_document" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
principals = {
type = "AWS"
identifiers = ["arn:aws:iam::111122223333:role/RolePrefix*"]
}
resources = ["${aws_s3_bucket.bucket.arn}/*"]
}
}
This gives me the following error:
Error putting S3 policy: MalformedPolicy: Invalid principal in policy.
Is it possible to achieve this functionality in another way?
You cannot use a wildcard inside an ARN in the IAM Principal field; the only wildcard allowed there is a bare "*".
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
When you specify users in a Principal element, you cannot use a wildcard (*) to mean "all users". Principals must always name a specific user or users.
Workaround:
Keep "Principal": {"AWS": "*"} and add a Condition based on ArnLike etc., since condition keys do accept principal ARNs with wildcards.
Example:
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
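A minimal sketch of that workaround with the Node.js SDK v3; the bucket name is a placeholder, and ArnLike on the aws:PrincipalArn condition key is one way to express the role-prefix match (the linked post shows a related pattern):

import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

// Principal stays "*"; the ArnLike condition on aws:PrincipalArn narrows
// access to roles whose names start with RolePrefix.
const policy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    Principal: { AWS: "*" },
    Action: "s3:GetObject",
    Resource: "arn:aws:s3:::my-bucket/*", // placeholder bucket
    Condition: {
      ArnLike: { "aws:PrincipalArn": "arn:aws:iam::111122223333:role/RolePrefix*" },
    },
  }],
};

async function applyPolicy(): Promise<void> {
  const s3 = new S3Client({});
  await s3.send(new PutBucketPolicyCommand({
    Bucket: "my-bucket", // placeholder
    Policy: JSON.stringify(policy),
  }));
}

applyPolicy().catch(console.error);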
S3 always returns the error code NoSuchKey, i.e. both
"when the bucket name given in the request is incorrect"
and
"when the bucket name given in the request is correct but the object key is invalid".
Is there any way to make the S3 API return a specific error code stating that the bucket does not exist, instead of the generic NoSuchKey, when an invalid bucket name is passed while requesting an object?
First, check that the S3 object URL and the requested object URL are the same. Then check that the S3 upload is handled properly, since it is asynchronous:
there can be a GetObject request that happens before the upload has completed.
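A minimal sketch of ruling out that race with the AWS SDK for JavaScript v3; the bucket and key names are placeholders, and waitUntilObjectExists is the SDK's built-in waiter:

import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  waitUntilObjectExists,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const Bucket = "my-bucket"; // placeholder
const Key = "my-object";    // placeholder

async function uploadThenRead(): Promise<void> {
  // Await the upload so the read below cannot race ahead of it.
  await s3.send(new PutObjectCommand({ Bucket, Key, Body: "hello" }));

  // Belt and braces: block until S3 reports the object as existing.
  await waitUntilObjectExists({ client: s3, maxWaitTime: 30 }, { Bucket, Key });

  const res = await s3.send(new GetObjectCommand({ Bucket, Key }));
  console.log(await res.Body?.transformToString());
}

uploadThenRead().catch(console.error);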
I am getting this error:
AllAccessDisabled: All access to this object has been disabled
when calling s3.copyObject in my Node.js Lambda function.
Is this error thrown because of insufficient permissions on the source bucket, or because of insufficient permissions on the target bucket?
This error means you are trying to access a bucket that has been locked down by AWS so that nobody can access it, regardless of permissions -- all access has been disabled.
It can occur when a bill goes unpaid, and probably for other reasons as well...
However... usually this means you've made a mistake in your code and are not accessing the bucket that you think you are.
s3.copyObject expects CopySource to be this:
'/' + source_bucket_name + '/' + object_key
If you overlook this and supply something like /uploads/funny/cat.png, you're going to get exactly this error, because here uploads is the bucket name and funny/cat.png is the object key... and the bucket named uploads happens to be a bucket that returns the AllAccessDisabled error. So the real error here is that you are accessing the wrong bucket.
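For illustration, a correct call with the Node.js SDK v3 would look roughly like this (the bucket and key names are made up):

import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

async function copyCat(): Promise<void> {
  await s3.send(new CopyObjectCommand({
    Bucket: "target-bucket", // destination bucket
    Key: "funny/cat.png",    // destination key
    // CopySource must include the source bucket name before the object key.
    CopySource: "/source-bucket/funny/cat.png",
  }));
}

copyCat().catch(console.error);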
If the bucket name in your code does not match the actual bucket name, S3 can also throw a 403 Forbidden error. Make sure the name is spelled correctly.
I'm really struggling with how to transfer data from an Amazon S3 bucket to Redshift with the COPY command.
So far, I have created an IAM user with the AmazonS3ReadOnlyAccess policy attached, but when I run the following COPY command, an Access Denied error is always returned.
copy my_table from 's3://s3.ap-northeast-2.amazonaws.com/mybucket/myobject' credentials 'aws_access_key_id=<...>;aws_secret_access_key=<...>' REGION 'ap-northeast-2' delimiter '|';
Error:
Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
Details: -----------------------------------------------
error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
code: 8001
context: Listing bucket=s3.ap-northeast-2.amazonaws.com prefix=mybucket/myobject
query: 1311463
location: s3_utility.cpp:542
process: padbmaster [pid=4527]
-----------------------------------------------;
Can anyone give me some clues or advice?
Thanks a lot!
Remove the endpoint s3.ap-northeast-2.amazonaws.com from the S3 path:
COPY my_table
FROM 's3://mybucket/myobject'
CREDENTIALS 'aws_access_key_id=<...>;aws_secret_access_key=<...>'
REGION 'ap-northeast-2'
DELIMITER '|'
;
(See the examples in the documentation.) While the Access Denied error is definitely misleading, the returned message gives some hint as to what went wrong:
bucket=s3.ap-northeast-2.amazonaws.com
prefix=mybucket/myobject
We'd expect to see bucket=mybucket and prefix=myobject instead.
Check the encryption of the bucket.
According to the docs (https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html):
The COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
Check the KMS permissions on your key and role.
If the files come from EMR, check the EMR security configuration for S3.
Your Redshift cluster role may not have the right to access the S3 bucket. Make sure the role you use for Redshift has access to the bucket, and that the bucket does not have a policy that blocks the access.
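As a quick sanity check outside Redshift, verify that the same credentials can list the bucket, which is the step failing in the error context above (Listing bucket=...). A rough sketch with the Node.js SDK v3, using the bucket name and region from the question:

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Use the same access key pair you pass to COPY.
const s3 = new S3Client({ region: "ap-northeast-2" });

async function checkListing(): Promise<void> {
  const res = await s3.send(new ListObjectsV2Command({
    Bucket: "mybucket",
    Prefix: "myobject",
    MaxKeys: 1,
  }));
  // If this throws AccessDenied, COPY will fail the same way.
  console.log("visible keys:", res.KeyCount);
}

checkListing().catch(console.error);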
My folder configuration in Amazon S3 looks like BucketName/R/A/123. Now I want to add another folder under my bucket and save data as BucketName/I/A/123. When I try to save my data, I get an error:
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["starts-with", "$key", "R/"]</Message></Error>
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
Can anyone point me to where I need to make a change?
Thanks
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
No, not according to this error message.
You appear to be supplying this policy in your code...
["starts-with", "$key", "R/"]
...and this policy -- part of a form POST upload policy document your code is generating -- is telling S3 that you want it to deny any upload with a key that doesn't start with "R/" ...
If this isn't what you want, you need to change the policy your code is supplying, so that it allows the key name you actually want to use.
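For example, if the upload policy is generated with the Node.js SDK v3's @aws-sdk/s3-presigned-post package (an assumption about your setup, as are the bucket name and expiry), a sketch of widening the condition might look like this:

import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({});

async function policyForPrefix(prefix: string) {
  // The starts-with condition now matches the prefix you actually upload to,
  // e.g. "I/" for keys like I/A/123 (it was "R/" before).
  return createPresignedPost(s3, {
    Bucket: "my-bucket",                          // placeholder
    Key: `${prefix}A/123`,
    Conditions: [["starts-with", "$key", prefix]],
    Expires: 300,                                 // seconds
  });
}

policyForPrefix("I/").then(({ url, fields }) => {
  console.log(url, fields); // hand these to the browser form upload
});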