MinIO S3 bucket policy explanation

In MinIO, when you set a bucket policy to download with the mc command like this:
mc policy set download server/bucket
the bucket policy changes to:
{
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": ["*"]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket"
      ]
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": ["*"]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I understand that the second statement gives anonymous users read access so they can download files via their URLs. What I don't understand is why we also need to allow the s3:GetBucketLocation and s3:ListBucket actions.
Can anyone explain this?
Thanks in advance.

s3:GetBucketLocation is required to find the location (region) of a bucket in some setups, and is needed for compatibility with standard S3 tools such as the awscli and mc.
s3:ListBucket is required to list the objects in a bucket. Without this permission you can still download objects, but you cannot list and discover them anonymously.
These are standard permissions that are safe to grant, and they are set up automatically by the mc anonymous command (previously called mc policy). It is generally not necessary to change them, though you can do so by calling the PutBucketPolicy API directly.
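For example, to inspect or replace the generated policy directly, something like the following works (a sketch, assuming a configured alias myminio, a bucket named public-bucket, a local MinIO endpoint, and a policy.json file on disk):

# Show the anonymous-access policy that mc generated (newer mc releases)
mc anonymous get-json myminio/public-bucket

# Or set a custom policy via the PutBucketPolicy API using the AWS CLI
aws --endpoint-url http://localhost:9000 s3api put-bucket-policy \
    --bucket public-bucket \
    --policy file://policy.json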

Related

S3 Browser Client won't show objects in bucket root folder

I need to create an IAM policy that restricts access only to the user's folder. I followed the guidelines as specified here:
https://docs.amazonaws.cn/en_us/AmazonS3/latest/dev/walkthrough1.html
I'm also using this S3 browser since I don't want my users to use the console: https://s3browser.com/
However, when I tried navigating to the bucket root folder, it gives me an "Access Denied. Would you like to try Requester Pays access?" error.
But if I specify the prefix with the user's folder, I receive no error. Here's the IAM policy I have created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRequiredAmazonS3ConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [""],
          "s3:delimiter": ["/"]
        }
      }
    }
  ]
}
The expected result is that this IAM policy allows the user to navigate from the root folder down to his or her specific folder.
Your s3:ListBucket statement has a condition that only allows listing objects under a certain prefix. This is the expected behavior of this policy: if you remove the condition, the user can list every object in the whole bucket.
There is no way to let the user navigate via the GUI, since they are not allowed to see the folders in the root.
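That said, if the goal is to also let each user list his or her own folder, the walkthrough linked in the question uses a second s3:ListBucket statement scoped by prefix. A minimal sketch, assuming the bucket is named bucket-name and user folders live under home/ as in that walkthrough:

{
  "Sid": "AllowListingOfUserFolder",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::bucket-name",
  "Condition": {
    "StringLike": {
      "s3:prefix": ["home/${aws:username}/*"]
    }
  }
}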

Amazon S3 Can't Delete Object via API

I'm setting up a new policy so my website can store images on S3, and I'm trying to keep it as secure as possible.
I can put an object and read it, but cannot delete it, even though it appears I've followed the recommendations from Amazon. I am not using versioning.
What am I doing wrong?
Here's my policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration",
        "s3:DeleteObject",
        "s3:ListObjects"
      ],
      "Resource": "*"
    }
  ]
}
After experimenting with multiple permission actions, it turned out I needed to add s3:ListBucket and s3:ListObjects. Once added, I can now delete objects.
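For reference, a tighter version of that policy might look like this (a sketch, assuming a bucket named my-site-images; note that s3:ListBucket applies to the bucket ARN itself, while the object actions apply to bucket/*):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-site-images"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-site-images/*"
    }
  ]
}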

Amazon S3 user policies

I'm trying to define a policy for a specific user.
I have several buckets in S3, but I want to give the user access to only some of them.
I created the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::examplebucket"]
    }
  ]
}
When I try to add a list of resources like this:
"Resource": ["arn:aws:s3:::examplebucket1", "arn:aws:s3:::examplebucket2"]
I get Access Denied.
The only option that works for me (I get the bucket list) is:
"Resource": ["arn:aws:s3:::*"]
What's the problem?
Some Amazon S3 API calls operate at the bucket level, while others operate at the object level. Therefore, you will need a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::test"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test/*"]
    }
  ]
}
See: AWS Security Blog - Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
I found that it's an AWS limitation: there is no way to get a filtered list of buckets.
Once you grant ListAllMyBuckets permission like this:
{
  "Sid": "AllowUserToSeeBucketListInTheConsole",
  "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}
you get the list of all buckets (including buckets you don't have permission to access).
More info can be found here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
A few workarounds can be found here: Is there an S3 policy for limiting access to only see/access one bucket?
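One common compromise, following the AWS blog post linked above, is to let the user see all bucket names but only list inside the permitted ones (a sketch; examplebucket1 and examplebucket2 are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::examplebucket1",
        "arn:aws:s3:::examplebucket2"
      ]
    }
  ]
}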

Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, with no success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. I tried this on buckets in two different regions, São Paulo and Oregon. I'm starting to run out of ideas; I hope you can help.
I know this question is over a year old, but I just encountered the same error when trying to do the transfer via the console. I worked around it by running the copy with the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket gs://targetbucket
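Note that gsutil needs AWS credentials to read from s3:// URLs; it picks them up from the boto configuration file. A sketch of the relevant section (the file is typically ~/.boto, created by gsutil config; the key values are placeholders):

[Credentials]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY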
Hope this is helpful!

What are appropriate S3 permissions for deploying to Elastic Beanstalk from CodeShip

What are the appropriate S3 permissions to deploy an Elastic Beanstalk app using CodeShip? When deploying a new version to a Tomcat app I get these errors:
Service:Amazon S3, Message:You do not have permission to perform the 's3:ListBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions.
Service:Amazon S3, Message:You do not have permission to perform the 's3:GetObject' or 's3:ListBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions.
If I give the CodeShip user full access to S3 everything works, but that is not ideal. The current S3 permissions for my CodeShip user are:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetBucketPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::codeshipbucket/*"
      ]
    }
  ]
}
The S3 bucket I have given CodeShip is a subfolder under codeshipbucket, if that matters.
What are the appropriate permissions?
These are the S3 permissions we had to give the IAM user we use with Codeship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:CreateBucket",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObjectAcl",
        "s3:GetBucketPolicy",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-[region]-[account-id]",
        "arn:aws:s3:::elasticbeanstalk-[region]-[account-id]/*"
      ]
    }
  ]
}
We executed eb deploy --debug and added the permissions one by one.
In our internal tests we've been able to deploy to Elastic Beanstalk with just the following S3 permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_BUCKET_NAME/*"
      ]
    }
  ]
}
And this is what we currently recommend in our documentation available at https://codeship.com/documentation/continuous-deployment/deployment-to-elastic-beanstalk/#s3
That said, one of our awesome users published a very extensive guide on how to deploy to Elastic Beanstalk, which is available at http://nudaygames.squarespace.com/blog/2014/5/26/deploying-to-elastic-beanstalk-from-your-continuous-integration-system and recommends a broader set of S3 permissions.
Disclaimer: I work for Codeship, but you probably already guessed so from my answer.
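As a quick way to verify a minimal policy like the one above, you can exercise the granted action directly with the AWS CLI (a sketch; the bucket name, key, and profile name are placeholders):

# Upload a test object using the deploy user's credentials
aws s3api put-object \
    --bucket YOUR_S3_BUCKET_NAME \
    --key permission-check.txt \
    --body permission-check.txt \
    --profile codeship-deploy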