We have a service where customers of ours give us access to their S3 buckets and we push items into those S3 buckets. We need to be able to do 2 things:
Set the permissions on the item to be publicly readable
Set the owner of the bucket to have full permissions to the item
Here is what I already know:
I cannot apply two canned ACLs with the same PUT
Problem:
I "could" set ACL headers, but AFAIK there is no way to set the "owner-has-full-permissions" via header without knowing information about the owner (Like cannonical_id or email), correct? Is there a "uri" version of "owner-has-full-permissions" like there is for "public-read" (e.g. "http://acs.amazonaws.com/groups/global/AllUsers")?
I don't want to have to make 2 separate calls (one to get the buckets owner info) and one to put the item with both permissions.
I had the same problem; the following code grants the permissions you require.
// Grant everyone read access, and grant full control to the canonical
// owner of the account the client credentials belong to.
AccessControlList accessControlList = new AccessControlList();
accessControlList.grantPermission(GroupGrantee.AllUsers, Permission.Read);
accessControlList.grantPermission(
        new CanonicalGrantee(s3Client.getS3AccountOwner().getId()),
        Permission.FullControl);
putReq.setAccessControlList(accessControlList);
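For context, here is a minimal self-contained sketch of where s3Client and putReq might come from, using the AWS SDK for Java v1. The bucket name, key, and local file are placeholders I made up, not values from the original post:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.GroupGrantee;
import com.amazonaws.services.s3.model.Permission;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.File;

public class UploadWithAcl {
    public static void main(String[] args) {
        // Client built from the default credential chain
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Public read, plus full control for the canonical owner of the calling account
        AccessControlList acl = new AccessControlList();
        acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
        acl.grantPermission(new CanonicalGrantee(s3Client.getS3AccountOwner().getId()),
                Permission.FullControl);

        // "customer-bucket", "reports/item.json", and the local file are placeholder names
        PutObjectRequest putReq = new PutObjectRequest(
                "customer-bucket", "reports/item.json", new File("item.json"));
        putReq.setAccessControlList(acl);
        s3Client.putObject(putReq);
    }
}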
I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc, and may create more of these roles in the future. We want all roles in an account that begin with RolePrefix to be able to access the S3 bucket, without having to change the policy document in the future.
My Terraform for the bucket policy document is below:
data "aws_iam_policy_document" "bucket_policy_document" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
principals = {
type = "AWS"
identifiers = ["arn:aws:iam::111122223333:role/RolePrefix*"]
}
resources = ["${aws_s3_bucket.bucket.arn}/*"]
}
}
This gives me the following error:
Error putting S3 policy: MalformedPolicy: Invalid principal in policy.
Is it possible to achieve this functionality in another way?
You cannot use a wildcard within an ARN in the IAM Principal field; the only wildcard allowed there is a bare "*".
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
When you specify users in a Principal element, you cannot use a wildcard (*) to mean "all users". Principals must always name a specific user or users.
Workaround:
Keep "Principal":{"AWS":"*"} and create a condition based on ARNLike etc as they accept user ARN with wildcard in condition.
Example:
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
My folder structure in Amazon S3 looks like BucketName/R/A/123. Now I want to add another folder under my bucket and save data as BucketName/I/A/123. When I try to save my data, I get an error:
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["starts-with", "$key", "R/"]</Message></Error>
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
Can anyone point me to where I need to make the change?
Thanks
I understand I need to give some permission for folder I to be created under this bucket, but I am struggling to find where.
No, not according to this error message.
You appear to be supplying this policy in your code...
["starts-with", "$key", "R/"]
...and this condition -- part of a form POST upload policy document your code is generating -- is telling S3 that you want it to deny any upload with a key that doesn't start with "R/".
If this isn't what you want, you need to change the policy your code is supplying so that it allows the key you want to use -- for example, generate the policy with the prefix that matches the upload destination ("I/" instead of "R/"), or relax the condition to ["starts-with", "$key", ""] to allow any key.
Just a quick one - how do you identify which IAM user uploaded a file to an S3 bucket? I can see properties like 'last modified', but not the IAM user.
For my use case, I can't add random metadata because the file is being uploaded by Cyberduck.
Thanks!
John
You can try hitting the Get Bucket (List Objects) REST API programmatically or with something like curl. The Contents > Owner > DisplayName key might be what you're looking for.
Sample response:
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>bucket</Name>
  <Prefix/>
  <Marker/>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>my-image.jpg</Key>
    <LastModified>2009-10-12T17:50:30.000Z</LastModified>
    <ETag>"fba9dede5f27731c9771645a39863328"</ETag>
    <Size>434234</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
      <DisplayName>mtd@amazon.com</DisplayName>
    </Owner>
  </Contents>
</ListBucketResult>
Have you looked at S3 server access logging? I believe the logs will include the canonical user ID of the uploader, unless anonymous. Not quite sure how you turn that into an IAM access key, but perhaps there's a way.
Or you could look at CloudTrail logs (assuming that you have CloudTrail enabled). They should show you the access key used to perform the upload.
Or I guess you could set up different upload locations, one per authorized IAM user, and then add appropriate policies so that only user X could upload to his specific location.
[Added] You might also want to specify a bucket policy that requires uploaders to give you, the bucket owner, full control of the object. Then you can query the ACLs of the object and determine the owner (which will be the original uploader).
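To illustrate that last approach, here is a minimal sketch with the AWS SDK for Java v1 that reads an object's ACL and prints its owner. The bucket and key names are placeholders I made up:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.Owner;

public class WhoOwnsThisObject {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // "my-bucket" and "uploads/report.csv" are placeholder names
        AccessControlList acl = s3.getObjectAcl("my-bucket", "uploads/report.csv");

        // The ACL's owner is the canonical user that owns the object,
        // i.e. the account that originally uploaded it
        Owner owner = acl.getOwner();
        System.out.println(owner.getId() + " / " + owner.getDisplayName());
    }
}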
If the file was uploaded to the S3 bucket by a POST (or PUT) operation, you can grab the detailed information using the Amazon Athena query editor over the bucket's server access logs. The query would look like this:
SELECT bucketowner, Requester, RemoteIP, Operation, Key, HTTPStatus, ErrorCode, RequestDateTime
FROM "databasename"."tablename"
WHERE (Operation='REST.PUT.OBJECT' OR Operation = 'REST.POST.UPLOAD')
AND parse_datetime(RequestDateTime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2021-11-11:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2021-12-31:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND Requester='arn:aws:sts::account-number:assumed-role/ROLE-NAME/email'
For more information, see the Amazon Athena documentation.
How do I delete log files in Amazon S3 according to date? I have log files in a logs folder inside my bucket.
string sdate = datetime.ToString("yyyy-MM-dd");
string key = "logs/" + sdate + "*" ;
AmazonS3 s3Client = AWSClientFactory.CreateAmazonS3Client();
DeleteObjectRequest delRequest = new DeleteObjectRequest()
.WithBucketName(S3_Bucket_Name)
.WithKey(key);
DeleteObjectResponse res = s3Client.DeleteObject(delRequest);
I tried this but it doesn't seem to work. I can delete individual files if I put the whole name in the key, but I want to delete all the log files created on a particular date.
You can use S3's Object Lifecycle feature, specifically Object Expiration, to delete all objects under a given prefix and over a given age. It's not instantaneous, but it beats having to make myriad individual requests. To delete everything, just make the age small.
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
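The question's code is C#, but a lifecycle rule lives on the bucket itself, so it can be set from the console or any SDK. As an illustration only, here is a minimal sketch with the AWS SDK for Java v1; the bucket name and the 7-day age are assumptions, while the logs/ prefix matches the question:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

import java.util.Arrays;

public class ExpireOldLogs {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Expire (delete) everything under logs/ once it is 7 days old
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("expire-old-logs")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("logs/")))
                .withExpirationInDays(7)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        // "my-log-bucket" is a placeholder bucket name
        s3.setBucketLifecycleConfiguration("my-log-bucket",
                new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));
    }
}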
Here is the explanation of my error. I registered two users, A and B, in Eucalyptus (open source). I created a bucket B1 using the JetS3t API in user A's account and granted READ permission to user B (using the "CanonicalGrantee" interface). When listing the access control list using A's credentials, I got FULL_CONTROL for A and READ for B. But when I tried to access bucket B1's information using B's credentials, I got this error:
Exception in thread "main" org.jets3t.service.S3ServiceException: The action listObjects cannot be performed with an invalid bucket: null
at org.jets3t.service.S3Service.listObjects(S3Service.java:1410)
at test.ObjectPermission.main(ObjectPermission.java:40)
The problematic code is: S3Bucket publicBucket = s3Service.getBucket("B1");
Here B1 is a bucket that belongs to user A; in the above code s3Service returns a null value. I know that getBucket only retrieves buckets that were created under B's credentials.
I don't know how to resolve this and access the shared bucket using the JetS3t API.
The method you are using searches for bucket B1 (which belongs to account A) in the list of account B's buckets, so it never finds that bucket and returns null.
So you have to check it another way: do a head request for bucket B1 using account B's service object by calling isBucketAccessible(); if it returns true, the bucket is accessible, otherwise it is not.
I am 100% sure it will work :)
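Here is a minimal sketch of that check with JetS3t, run under account B's credentials (the key values are placeholders). Note that it builds the S3Bucket by name instead of calling getBucket, which is my own addition rather than something stated above, since getBucket only searches the caller's own bucket list:

import org.jets3t.service.S3Service;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.model.S3Bucket;
import org.jets3t.service.model.S3Object;
import org.jets3t.service.security.AWSCredentials;

public class SharedBucketCheck {
    public static void main(String[] args) throws Exception {
        // Account B's credentials (placeholders)
        S3Service s3Service = new RestS3Service(
                new AWSCredentials("B_ACCESS_KEY", "B_SECRET_KEY"));

        // HEAD-style check: does account B have access to bucket B1 at all?
        if (s3Service.isBucketAccessible("B1")) {
            // getBucket("B1") only searches B's own bucket list and returns null,
            // so build the bucket object by name and list against it directly.
            S3Object[] objects = s3Service.listObjects(new S3Bucket("B1"));
            for (S3Object object : objects) {
                System.out.println(object.getKey());
            }
        }
    }
}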