Why does the bucket ARN need to end with /* in a bucket policy Resource to allow a user to upload files? - amazon-s3

I created an S3 bucket and added one user in IAM. Suppose my bucket name is sample123. When the bucket policy specifies the resource as in the statement below, the user is not able to upload a document:
"Resource": "arn:aws:s3:::sample123"
But when the resource is specified in the policy as below, the user is able to upload a document:
"Resource": [ "arn:aws:s3:::sample123", "arn:aws:s3:::sample123/*" ]
What does adding /* to the ARN do in the policy? Note: I gave the user full bucket permissions.

sample123/* means all objects in the sample123 bucket.
The documentation on S3 ARN examples says:
The ARN format for Amazon S3 resources reduces to the following:
arn:aws:s3:::bucket_name/key_name
...
The following ARN uses the wildcard * in the relative-ID part of the
ARN to identify all objects in the examplebucket bucket.
arn:aws:s3:::examplebucket/*
Also refer to Example of S3 Actions with policy.
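To make the distinction concrete, here is a minimal bucket policy sketch combining both resource types. The actions shown and the ACCOUNT-ID/sample-user placeholders are illustrative assumptions, not taken from the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevelActionsUseTheBucketArn",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT-ID:user/sample-user" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::sample123"
    },
    {
      "Sid": "ObjectLevelActionsUseTheObjectArn",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT-ID:user/sample-user" },
      "Action": [ "s3:PutObject", "s3:GetObject" ],
      "Resource": "arn:aws:s3:::sample123/*"
    }
  ]
}
s3:PutObject acts on an object key, so it only matches the /* form; without that second ARN the upload action has no resource it applies to, which is why the upload fails.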

Related

How to enable S3 Copy Bucket Permissions in a Terraform statement

My goal is to copy the data from a set of s3 buckets into main logging account bucket. Every time I try to perform:
aws s3 cp s3://sub-account-cloudtrail s3://master-acccount-cloudtrail --profile=admin;
I get
(AccessDenied) when calling the CopyObject operation: Access Denied
I've looked at this post:
How to fix AccessDenied calling CopyObject
I am trying to add the bucket permissions to a Terraform data aws_iam_policy_document. The statement is written like so:
data "aws_iam_policy_document" "s3" {
  version = "2012-10-17"

  statement {
    sid    = "CopyOobjectPermissions"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }

    actions   = ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"]
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }

  statement {
    sid    = "CopyBucketPermissions"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }

    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }
}
My goal is to restrict permissions to the role that is assumed from the sub-account into the master account. My specific question: which permissions need to be added to enable the copy?
Expected:
Terraform plan runs successfully
Actual:
│ Error: Error putting S3 policy: MalformedPolicy: Action does not apply to any resource(s) in statement
How can I resolve this?
Two things to mention:
In your second statement the resource is wrong; this is why you get the MalformedPolicy error. s3:ListBucket is a bucket-level action, so it applies to the bucket itself rather than to the objects in it. It should be:
resources = [aws_s3_bucket.nfcisbenchmark_cloudtrail.arn]
Be careful with the identifier. At this point I'm not really sure if your buckets are in different accounts or not. If they are, the account_id in the identifier should reference the source account. data.aws_caller_identity.current.account_id returns the account ID to which Terraform is authenticated, which is usually the account where you are deploying resources (the destination account). If you are not doing cross-account copying, then it is fine as it is.
Furthermore, in the case of cross-account access, the ops-mgmt-admin role should have a policy attached to it that grants permission to get, list, and upload objects in the S3 bucket.
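For reference, the corrected Terraform should render to a bucket policy roughly like the sketch below; the bucket name and account ID are placeholders, since the real values come from the Terraform references:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CopyObjectPermissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:role/ops-mgmt-admin" },
      "Action": [ "s3:GetObject", "s3:PutObject", "s3:PutObjectAcl" ],
      "Resource": "arn:aws:s3:::example-cloudtrail-bucket/*"
    },
    {
      "Sid": "CopyBucketPermissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:role/ops-mgmt-admin" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-cloudtrail-bucket"
    }
  ]
}
Note how the object-level actions use the /* resource while s3:ListBucket uses the bare bucket ARN; mixing those up is exactly what triggers "Action does not apply to any resource(s) in statement".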

Wildcard at the end of a principal for an S3 bucket

I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc., and may create more of these roles in the future. We want all roles in an account that begin with RolePrefix to be able to access the S3 bucket, without having to change the policy document in the future.
My terraform for bucket policy document is as below:
data "aws_iam_policy_document" "bucket_policy_document" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
principals = {
type = "AWS"
identifiers = ["arn:aws:iam::111122223333:role/RolePrefix*"]
}
resources = ["${aws_s3_bucket.bucket.arn}/*"]
}
}
This gives me the following error:
Error putting S3 policy: MalformedPolicy: Invalid principal in policy.
Is it possible to achieve this functionality in another way?
You cannot use a wildcard within the ARN in the IAM Principal field. The only wildcard you're allowed to use there is a bare "*".
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
When you specify users in a Principal element, you cannot use a wildcard (*) to mean "all users". Principals must always name a specific user or users.
Workaround:
Keep "Principal":{"AWS":"*"} and add a condition based on ArnLike or similar, since condition keys do accept ARNs containing wildcards.
Example:
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
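One way to express that workaround as a bucket policy is sketched below, with a placeholder bucket name; it assumes the aws:PrincipalArn global condition key, which carries the ARN of the calling principal:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadsFromPrefixedRoles",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "ArnLike": { "aws:PrincipalArn": "arn:aws:iam::111122223333:role/RolePrefix*" }
      }
    }
  ]
}
Unlike the Principal element, the ArnLike condition operator happily matches the trailing * in the role ARN, so new RolePrefixN roles are covered without any policy changes.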

How can I change a policy condition in Amazon S3 for a bucket

My folder configuration in Amazon S3 looks like BucketName/R/A/123. Now I want to add another folder under my bucket and save data as BucketName/I/A/123. When I try to save my data, I get an error:
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["starts-with", "$key", "R/"]</Message></Error>
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
Can anyone point me to where I need to make the change?
Thanks
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
No, not according to this error message.
You appear to be supplying this policy in your code...
["starts-with", "$key", "R/"]
...and this condition, part of a form POST upload policy document your code is generating, tells S3 that you want it to reject any upload whose key doesn't start with "R/".
If that isn't what you want, you need to change the policy your code supplies so that it allows you to name the key the way you want to.
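For illustration, a POST upload policy document scoped to the new prefix might look like the sketch below; the expiration and bucket name are placeholders. Keep in mind that conditions in a POST policy are ANDed together, so you generate a policy whose key condition matches the prefix you are uploading to:
{
  "expiration": "2030-01-01T00:00:00Z",
  "conditions": [
    { "bucket": "BucketName" },
    [ "starts-with", "$key", "I/" ]
  ]
}
Using [ "starts-with", "$key", "" ] instead would accept any key name, at the cost of much looser control over where uploads land.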

Finding who uploaded an S3 file

Just a quick one - how do you identify which IAM user uploaded a file to an S3 bucket? I can see properties like 'last modified', but not the IAM user.
For my use case, I can't add random metadata because the file is being uploaded by Cyberduck.
Thanks!
John
You can try hitting the Get Bucket (List Objects) REST API programmatically, or with something like curl. The Contents > Owner > DisplayName key might be what you're looking for.
Sample response:
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>bucket</Name>
<Prefix/>
<Marker/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>my-image.jpg</Key>
<LastModified>2009-10-12T17:50:30.000Z</LastModified>
<ETag>"fba9dede5f27731c9771645a39863328"</ETag>
<Size>434234</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>mtd@amazon.com</DisplayName>
</Owner>
</Contents>
</ListBucketResult>
Have you looked at S3 server access logging? I believe the logs will include the canonical user ID of the uploader, unless anonymous. Not quite sure how you turn that into an IAM access key, but perhaps there's a way.
Or you could look at CloudTrail logs (assuming that you have CloudTrail enabled). They should show you the access key used to perform the upload.
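If S3 data events are enabled on the trail, the relevant fragment of a CloudTrail PutObject event looks roughly like this sketch (all values are placeholders):
{
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutObject",
  "userIdentity": {
    "type": "IAMUser",
    "arn": "arn:aws:iam::123456789012:user/example-user",
    "accessKeyId": "AKIAIOSFODNN7EXAMPLE"
  },
  "requestParameters": {
    "bucketName": "example-bucket",
    "key": "my-image.jpg"
  }
}
The userIdentity block gives you both the principal's ARN and the access key used for the upload.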
Or I guess you could set up different upload locations, one per authorized IAM user, and then add appropriate policies so that only user X could upload to his specific location.
[Added] You might also want to specify a bucket policy that requires uploaders to give you, the bucket owner, full control of the object. Then you can query the ACLs of the object and determine the owner (which will be the original uploader).
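A sketch of that last suggestion, using the common deny-unless-full-control pattern (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireBucketOwnerFullControl",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
Any upload that doesn't grant the bucket owner full control is rejected, so afterwards the ACL of every accepted object will reveal its original uploader as the owner.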
If the file was uploaded to the S3 bucket by a PUT or POST operation, you can grab the detailed information using Amazon Athena > Query editor over the server access logs. The query would look like this:
SELECT bucketowner, Requester, RemoteIP, Operation, Key, HTTPStatus, ErrorCode, RequestDateTime
FROM "databasename"."tablename"
WHERE (Operation='REST.PUT.OBJECT' OR Operation = 'REST.POST.UPLOAD')
AND parse_datetime(RequestDateTime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2021-11-11:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2021-12-31:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND Requester='arn:aws:sts::account-number:assumed-role/ROLE-NAME/email'
For more information, see the Amazon Athena documentation.

Amazon S3 error: A conflicting conditional operation is currently in progress against this resource.

Why did I get this error when trying to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the name is available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; check the points below to rectify the error:
Is the bucket name available, or is the bucket queued for deletion?
Do you have adequate access privileges for this operation?
Your bucket name must be globally unique.
P.S.: I edited this answer to add more details shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the accepted answer to this question.
Creating an S3 bucket policy and the S3 public access block for the same bucket at the same time will cause the error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}
We have also observed this error several times when trying to move a bucket from one account to another. To achieve this, you should do the following:
Back up the content of the S3 bucket you want to move.
Delete the S3 bucket in the first account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
I received this error when running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again", which I did, and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5-second sleep between the two requests. There was nothing else I had to do.