S3 Browser Client won't show objects in bucket root folder - amazon-s3

I need to create an IAM policy that restricts access only to the user's folder. I followed the guidelines as specified here:
https://docs.amazonaws.cn/en_us/AmazonS3/latest/dev/walkthrough1.html
I am also using S3 Browser, since I don't want my users to use the console: https://s3browser.com/
However, when I tried navigating to the bucket root folder, it gives me an "Access Denied. Would you like to try Requester Pays access?" error.
But if I specify the prefix with the user's folder, I receive no error. Here's the IAM policy I have created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRequiredAmazonS3ConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            ""
          ],
          "s3:delimiter": [
            "/"
          ]
        }
      }
    }
  ]
}
The expected result of this IAM policy is that the user can navigate from the root folder down to his/her specific folder.

Your s3:ListBucket statement has a condition that only allows listing objects under a certain prefix. That is the expected behavior of this policy: if you remove the condition, the user can view all objects in your whole bucket.
There is no way to let the user navigate via the GUI, since he isn't allowed to see the folders in the root.
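To see why the condition matters, here is a rough, hypothetical simulation of how a StringEquals condition on s3:prefix and s3:delimiter gates a ListBucket request (illustrative only, not the real IAM evaluation engine):

```python
# Values taken from the "Condition" block of the policy above.
ALLOWED_PREFIXES = [""]      # from the "s3:prefix" condition
ALLOWED_DELIMITERS = ["/"]   # from the "s3:delimiter" condition

def list_bucket_allowed(prefix: str, delimiter: str) -> bool:
    """StringEquals semantics: every condition key in the request
    must match one of the allowed values exactly."""
    return prefix in ALLOWED_PREFIXES and delimiter in ALLOWED_DELIMITERS

# Listing the bucket root with delimiter "/" satisfies the condition...
print(list_bucket_allowed("", "/"))             # True
# ...but listing inside a folder, or without the delimiter, is denied.
print(list_bucket_allowed("home/user1/", "/"))  # False
print(list_bucket_allowed("", ""))              # False
```

A client that issues ListBucket requests with any other prefix/delimiter combination (as some GUI tools do) will therefore see "Access Denied" even on the root.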

Related

minio - s3 - bucket policy explanation

In MinIO, when you set a bucket policy to download with the mc command, like this:
mc policy set download server/bucket
the policy of the bucket changes to:
{
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket"
      ]
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I understand that the second statement gives anonymous users read access so they can download files via a URL. What I don't understand is why we also need to allow the actions s3:GetBucketLocation and s3:ListBucket.
Can anyone explain this?
Thanks in advance
GetBucketLocation is required to find the location of a bucket in some setups, and is required for compatibility with standard S3 tools such as the awscli and mc tools.
ListBucket is required to list the objects in a bucket. Without this permission you are still able to download objects, but you cannot list and discover them anonymously.
These are standard permissions that are safe to use and are set up automatically by the mc anonymous command (previously called mc policy). It is generally not necessary to change them, though you can do so by directly calling the PutBucketPolicy API.
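Note which ARN each statement targets: the bucket-level actions (GetBucketLocation, ListBucket) apply to the bucket ARN itself, while GetObject applies to the object ARNs under it. A small Python sketch (function name assumed) that rebuilds the same policy for a given bucket makes the split explicit:

```python
import json

def download_policy(bucket: str) -> dict:
    """Rebuild the bucket policy that `mc anonymous set download` generates:
    anonymous location/listing on the bucket ARN, anonymous reads on the
    object ARNs. Illustrative helper, not part of the MinIO SDK."""
    return {
        "Statement": [
            {
                # Bucket-level actions act on arn:aws:s3:::<bucket>
                "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
                "Effect": "Allow",
                "Principal": {"AWS": ["*"]},
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                # Object-level reads act on arn:aws:s3:::<bucket>/*
                "Action": ["s3:GetObject"],
                "Effect": "Allow",
                "Principal": {"AWS": ["*"]},
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
        "Version": "2012-10-17",
    }

print(json.dumps(download_policy("public-bucket"), indent=2))
```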

Amazon S3 bucket permission for unauthenticated cognito role user

I have set up an unauthenticated role under an Amazon Cognito identity pool. My goal is that guest users of my mobile app can upload debugging logs (small text files) to my S3 bucket so I can troubleshoot issues. I noticed I get "Access Denied" from S3 if I don't modify my S3 bucket permissions. If I allow "Everyone" the "Upload/Delete" privilege, the file upload succeeds. My concern is that someone could then upload large files to my bucket and cause a security issue. What is the recommended configuration for my needs? I am a newbie to S3 and Cognito.
I am using Amazon AWS SDK for iOS but I suppose this question is platform neutral.
Edit:
My policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:GetUser",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::import-to-ec2-*",
        "arn:aws:s3:::<my bucket name>/*"
      ]
    }
  ]
}
You don't need to modify the S3 bucket permission, but rather the IAM role associated with your identity pool. Try the following:
Visit the IAM console.
Find the role associated with your identity pool.
Attach a policy similar to the following to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::MYBUCKET/*"]
    }
  ]
}
Replace MYBUCKET with your bucket name.
Access your bucket as normal from your application, using the iOS SDK and Cognito.
You may want to consider limiting permissions further, including ${cognito-identity.amazonaws.com:sub} to partition your users, but the above policy will get you started.
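For illustration, a sketch of such a tightened policy, where each identity can only write under a key prefix equal to its own Cognito identity id. The uploads/ prefix and the MYBUCKET name are placeholders, not values from the question:

```python
import json

# Hypothetical scoped-down policy: the ${cognito-identity.amazonaws.com:sub}
# variable is substituted by AWS at evaluation time with the caller's
# Cognito identity id, so each guest can only write to its own "folder".
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::MYBUCKET/uploads/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }
    ],
}
print(json.dumps(scoped_policy, indent=2))
```

This way a misbehaving guest can still fill its own prefix, but cannot read, list, or overwrite logs uploaded by other identities.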
The answer above is incomplete as of 2015: you need to authorize BOTH the role AND the bucket policy in S3 to allow that role to write to the bucket. Use s3:PutObject in both cases. The console has wizards for both.
As @einarc said (I cannot comment yet), to make it work I had to edit both the role and the bucket policy. This is good enough for testing:
Bucket Policy:
{
  "Id": "Policy1500742753994",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1500742752148",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::admin1.user1",
      "Principal": "*"
    }
  ]
}
Authenticated role's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

S3 Policy to allow a user to Put, Get, Delete and modify permissions

I'm working on a policy document that allows IAM users access to a specific "blog" directory in an S3 bucket, where they can create/edit/delete files as well as modify file permissions to global read, so uploaded files can be made public on a blog. Here is what I have so far; the only issue is that the policy is not letting the user modify permissions.
How can this policy be updated to allow the user to modify permissions to global read access?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::blog"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::blog/*"
    }
  ]
}
only issue is the policy is not letting the user modify permissions.
Correct. You have granted only the Put, Get and Delete permissions. In order to allow manipulating object-level permissions, you also need to grant access to the s3:PutObjectAcl action.
Check the s3:PutObjectAcl IAM action documentation and the S3 PUT Object acl documentation for more details on how you can leverage this API.
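For example, the third statement of the policy above could be extended like this (a sketch, not the only possible shape):

```python
# The object-level statement from the policy above, with s3:PutObjectAcl
# added so the user can change an object's ACL (e.g. to public-read).
statement = {
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl",  # needed to modify object-level permissions
    ],
    "Resource": "arn:aws:s3:::blog/*",
}
```

With that in place, a client can change an object's ACL after upload, for example via boto3's `s3.put_object_acl(Bucket='blog', Key=..., ACL='public-read')`.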

Amazon S3 folder permissions

I am planning to map an Amazon S3 bucket as a basic shared folder/file server using TNT drive.
I've managed to map a drive in Windows perfectly fine; I just need to lock down the shared folder permissions. I have some Unix experience, but I'm completely baffled by the Amazon permissions commands. If the folder path is
'MyBucket/Shares/LON1'
& the user group is
'Design'
how can I grant the Design group the usual read/write/delete/list contents/create folder permissions for the folder 'LON1' but no higher?
Thanks very much,
Use IAM to assign permissions to buckets. Each user will have their own Access Key ID and Secret Access Key. You can restrict access to the bucket and folders based on the policies you attach to the user or to the group the user belongs to.
For example the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1388785271000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/objectpath"
      ]
    }
  ]
}
will allow the user or group to perform all possible actions on bucketname/objectpath.
Here's a walkthrough from Amazon:
http://docs.aws.amazon.com/AmazonS3/latest/dev/walkthrough1.html
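Applied to the paths in the question, a policy for the Design group might look like the following sketch. The split is an assumption based on how S3 permissions work: listing is granted at the bucket level with a prefix condition, while object actions are granted on the folder's keys:

```python
import json

# Hypothetical policy for the Design group: list + read/write/delete,
# but only inside MyBucket/Shares/LON1 and no higher.
design_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Bucket-level: allow listing, restricted to the shared folder.
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::MyBucket",
            "Condition": {"StringLike": {"s3:prefix": ["Shares/LON1/*"]}},
        },
        {
            # Object-level: read/write/delete inside the folder only.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::MyBucket/Shares/LON1/*",
        },
    ],
}
print(json.dumps(design_policy, indent=2))
```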
Refer to the following policy to restrict the user to uploading or listing objects only in specific folders. I have created a policy that allows listing only the objects of folder1 and folder2, and that allows putting objects into folder1 while denying uploads to the bucket's other folders. The policy does the following:
1. Lists all the folders of the bucket
2. Lists objects and folders of the allowed folders
3. Uploads files only to the allowed folders
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowUserToSeeBucketListInTheConsole",
    "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
    "Effect": "Allow",
    "Resource": ["arn:aws:s3:::*"]
  }, {
    "Sid": "AllowListingOfFolder1And2",
    "Action": ["s3:*"],
    "Effect": "Deny",
    "Resource": ["arn:aws:s3:::bucketname"],
    "Condition": {
      "StringNotLike": {
        "s3:prefix": ["folder1/*", "folder2/*"]
      },
      "StringLike": {
        "s3:prefix": "*"
      }
    }
  }, {
    "Sid": "Allowputobjecttofolder1only",
    "Effect": "Deny",
    "Action": "s3:PutObject",
    "NotResource": "arn:aws:s3:::bucketname/folder1/*"
  }]
}
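The StringNotLike condition can be read as wildcard matching against the requested listing prefix; a rough Python simulation of the listing deny (illustrative only, not the real IAM engine):

```python
from fnmatch import fnmatch

# Wildcard patterns from the StringNotLike condition above.
ALLOWED = ["folder1/*", "folder2/*"]

def listing_denied(prefix: str) -> bool:
    """Deny ListBucket when the requested prefix matches none of the
    allowed wildcard patterns (StringNotLike semantics)."""
    return not any(fnmatch(prefix, pattern) for pattern in ALLOWED)

print(listing_denied("folder1/reports/"))  # False - listing allowed
print(listing_denied("folder3/"))          # True  - listing denied
```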

S3 Bucket Policy for hotlinking is preventing writes

I have a website that serves our content from Amazon S3. Currently, I am able to read and write data to S3 just fine from my web server / website. The ACL permissions are fine: I have full permissions for the website, and simply read permissions for the public.
Then, I added an S3 Bucket Policy to prevent hotlinking. You can see the S3 policy below.
This policy works well - except for one issue - it is now preventing file write requests from my webserver. So, while my public website serves content just fine, when I try to do file or directory operations, such as upload images or move images (or directories), I get an "Access denied" error now. (by my web application server, which is Railo / Coldfusion)
I'm not sure why this is happening. Initially I thought it might be because the file read/write requests between my web server and S3 were coming via my IP and not my domain name. But even after adding my IP, the errors persist.
If I remove the policy, everything works fine again.
Does anyone know what is causing this or what I'm missing here? Thanks
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Allowinmydomains",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      }
    },
    {
      "Sid": "Givenotaccessifrefererisnomysites",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      }
    }
  ]
}
When you use bucket policies, a deny always overrides a grant. Because you are denying access to GetObject from your bucket policy for all accounts (including authenticated users) that don't match your specific referrers list, your app produces Access denied errors.
By default, objects in S3 have their ACLs set to private. If this is the case with your bucket, then there is no need to have both an Allow and a Deny rule in your bucket policy. It is enough to have an Allow statement that grants anonymous users whose requests match the specific referrers permission to access objects in the bucket.
In the case mentioned above, your bucket policy should look like:
{
  "Id": "Policy1380565362112",
  "Statement": [
    {
      "Sid": "Stmt1380565360133",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
If the object ACLs already allow public access, you can either remove those ACLs to make the objects private by default, or include a Deny rule in your bucket policy and modify the requests your app sends to S3 to include the expected referrer header. There is currently no way to have a Deny rule in your bucket policy that only affects anonymous requests.
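To make the evaluation order concrete, here is a toy Python model (illustrative names, not an AWS API) of why the Deny rule breaks the web server's own requests, which carry no Referer header, while an Allow-only policy does not:

```python
from fnmatch import fnmatch

# Referer patterns from the bucket policy above.
ALLOWED_REFERERS = [
    "http://www.babeswithbraces.com/*",
    "http://babeswithbraces.com/*",
    "http://64.244.61.40/*",
]

def referer_matches(referer: str) -> bool:
    return any(fnmatch(referer, p) for p in ALLOWED_REFERERS)

def anonymous_get_allowed(referer: str) -> bool:
    """Allow-only policy: anonymous GetObject succeeds iff the
    referer matches; everyone else just falls back to private ACLs."""
    return referer_matches(referer)

def explicitly_denied(referer: str) -> bool:
    """With the Deny rule added, ANY request whose referer doesn't
    match is explicitly denied - even the owner's authenticated
    writes, because an explicit Deny overrides every Allow."""
    return not referer_matches(referer)

# A browser visit from the site is served either way:
print(anonymous_get_allowed("http://babeswithbraces.com/page"))  # True
# The web server's upload/move requests send no Referer header,
# so under the Deny rule they are rejected outright:
print(explicitly_denied(""))  # True - hence the Access denied errors
```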