AWS policy evaluation - amazon-s3

I have an IAM role to be attached to a microservice in order to limit S3 folder access based on user-agent. The microservice parent account and the bucket owner are the same.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-test/service/${aws:useragent}/*"
      ]
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket-test"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "service/${aws:useragent}/*"
          ]
        }
      }
    }
  ]
}
The same S3 bucket has a default ACL in which the account has read/write on both objects and permissions.
Given the ACL and the IAM policy, I don't understand how this policy evaluates. For example, a user with the above role makes a put_object request to bucket-test/service/micro-b/new_object with user agent micro-a. Is this an explicit or implicit deny? Why?

Based on AWS Policy evaluation logic:
When a request is made, the AWS service decides whether a given request should be allowed or denied. The evaluation logic follows these rules:
By default, all requests are denied. (In general, requests made using the account credentials for resources in the account are always allowed.)
An explicit allow overrides this default.
An explicit deny overrides any allows.
Now if we look at the S3 access policy language documentation:
Effect – What the effect will be when the user requests the specific action—this can be either allow or deny.
If you do not explicitly grant access to (allow) a resource, access is implicitly denied. You can also explicitly deny access to a resource, which you might do in order to make sure that a user cannot access it, even if a different policy grants access.
Now, on specifying conditions, the S3 policy documentation says:
The access policy language allows you to specify conditions when granting permissions. The Condition element (or Condition block) lets you specify conditions for when a policy is in effect.
From these three pieces, especially the last one, your case is a "conditional allow": the Condition element (and the policy variable in the Resource) determines when the Allow statements are in effect. In your example the user agent is micro-a but the requested key is under service/micro-b/, so neither Allow statement matches the request; and since no statement explicitly denies it, the request falls back to the default, an implicit deny.
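For contrast, an explicit deny would require a statement with an effect of Deny that matches the request. A minimal sketch, reusing the bucket and prefix from the question (this statement is illustrative and not part of the original policy):
{
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::bucket-test/service/*"
}
If a statement like this were attached anywhere in the effective policies, the same put_object request would be an explicit deny, overriding any matching Allow.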
EDIT:
Here is another interesting blog post from AWS on "How does authorization work with multiple access control mechanisms?":
Whenever an AWS principal issues a request to S3, the authorization decision depends on the union of all the IAM policies, S3 bucket policies, and S3 ACLs that apply.
In accordance with the principle of least-privilege, decisions default to DENY and an explicit DENY always trumps an ALLOW. For example, if an IAM policy grants access to an object, the S3 bucket policy denies access to that object, and there is no S3 ACL, then access will be denied. Similarly, if no method specifies an ALLOW, then the request will be denied by default. Only if no method specifies a DENY and one or more methods specify an ALLOW will the request be allowed.
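To illustrate that last rule, a bucket policy deny like the following sketch (the account ID, role name, and bucket name are all placeholders) would block s3:GetObject for that principal even if an IAM policy attached to it allowed the call:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyObjectAccess",
      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/example-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}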

Related

Getting AWS Lambda access to private S3 resource

We have been trying to crack an issue with resource permissions related to S3 and Lambda.
We have a root account which in turn has:
Account A - Bucket owner
Account B - Used to upload (through CORS) and give access to S3 images
Role L - A Lambda function is assigned this role, which has full S3 access
The buckets have an access policy like the one below:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmt44444444444",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxxxxx:user/account-A",
          "arn:aws:iam::xxxxxxxxxxxx:role/role-L"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
The issue:
The Lambda function is able to access the S3 resource only if the object ACL is set to public/read-only, but it fails when the resource is set to 'private'.
The bucket policy just gives access to the bucket. Is there a way to give Role L read access to the resource?
Objects stored in Amazon S3 buckets are private by default. There is no need to use a Deny policy unless you wish to override another policy that grants access to the content.
I would recommend:
Remove your Deny policy
Create an IAM Role for your AWS Lambda function and grant permission to access the S3 bucket within that role (a sketch of such a policy follows below)
Feel free to add a Bucket Policy for normal use as appropriate, but that should not impact your Lambda function's access that is granted via the Role.
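A minimal sketch of such a role policy, assuming the function only needs to read and list (the bucket name "bucket" is the placeholder used in the policy above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}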

AWS S3 Bucket Policy Source IP not working

I've been trying all possible options but with no results. My bucket policy works well with aws:Referer but it doesn't work at all with source IP as the condition.
My server is hosted on EC2 and I am using the public IP in the format xxx.xxx.xxx.xxx/32 (Public_IP/32) as the source IP parameter.
Can anyone tell me what I am doing wrong?
Currently my Policy is the following
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xx.xx.xxx.xxx/32"
        }
      }
    }
  ]
}
I read all examples and case studies but it doesn't seem to allow access based on Source IP...
Thanks a lot!!!
While I won't disagree that policies are better than IP addresses wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do this (I need access from a machine that isn't on EC2, and thus doesn't have role-based credentials).
Here is a policy that only allows a certain IP (or multiple IPs) to access a bucket's objects. This assumes that there is no other policy to allow access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing; you can only retrieve an object if you already know its full URL. If you need more permissions, just add them to the Action bit (see the note after the policy).
{
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/32"
          ]
        }
      }
    }
  ]
}
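One caveat when adding permissions: s3:ListBucket operates on the bucket itself, not on objects, so it needs the bucket ARN as an additional resource. A sketch of the same policy extended to allow listing (mybucket is still a placeholder):
{
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/32"
          ]
        }
      }
    }
  ]
}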
From the discussion in the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM role created, you are able to launch an EC2 instance "within" that role. In simple terms, it means that the EC2 instance will inherit the access policy you associated with that role. Note that you cannot change the IAM Role associated with an instance after you have launched the instance. You can, however, modify the Access Policy associated with an IAM Role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role to use within EC2 and attach a policy that will give the permissions you need, i.e., that will "Allow" all the "s3:xxx" operations it will need to execute on that specific resource "arn:aws:s3:::my_bucket/*".
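A minimal sketch of such a role policy, assuming the instance needs to get, put, and list (trim the actions to what you actually use; note that s3:ListBucket applies to the bucket ARN rather than the object ARN):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my_bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my_bucket"
    }
  ]
}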
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client objects without specifying any credentials:
AmazonS3 s3 = new AmazonS3Client(); // no credentials on the constructor!
s3.putObject("my_bucket", ........);
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.

What role permissions are required to use get_contents_to_filename?

I recently setup an IAM role for accessing a bucket with the following policy:
{
  "Statement": [
    {
      "Sid": "Stmt1359923112752",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>"
      ]
    }
  ]
}
While I can list the contents of the bucket fine, when I call get_contents_to_filename on a particular key, I receive a boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden exception.
Is there a role permission that I need to add to fetch keys from S3? I have checked the permissions on the individual key, and there appears to be nothing that explicitly forbids access to other users; there is only a single permission that grants the owner full permissions.
For completeness, I verified that removing the role policy above prevents access to the bucket completely, so the policy is definitely being applied.
Thanks!
You have to give permission to the objects in the bucket, not just to the bucket. So your resource would have to be arn:aws:s3:::<bucketname>/*. That matches every object.
Unfortunately, that doesn't match the bucket itself. So you either need to give bucket-related permissions to arn:aws:s3:::<bucketname> and object permissions to arn:aws:s3:::<bucketname>/*, or just give permissions to arn:aws:s3:::<bucketname>*. Though in that latter case, giving permissions to a bucket named fred would also give the same permissions to one named freddy.
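A sketch of the two-resource version, keeping the s3:* action from the question (<BUCKET_NAME> remains the placeholder from the original policy):
{
  "Statement": [
    {
      "Sid": "Stmt1359923112752",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    }
  ]
}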

How can I make a S3 bucket public (the amazon example policy doesn't work)?

Amazon provides an example for Granting Permission to an Anonymous User as follows (see Example Cases for Amazon S3 Bucket Policies):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
Within my policy I've changed "bucket" in "arn:aws:s3:::bucket/*" to "my-bucket".
However, once I try to access an image within a folder of that bucket, I get the following Access denied error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
(if I explicitly change the properties of that image to public, then reload its url, the image loads perfectly)
What am I doing wrong?
Update #1: Apparently it has something to do with a third party site that I've given access to. Although it has all of the permissions as the main user (me), and its objects are in the same folder, with the exact same permissions, it still won't let me make them publicly viewable. No idea why.
Update #2: Bucket policies do not apply to objects "owned" by others, even though they are within your bucket, see my answer for details.
Update
As per GoodGets' comment, the real issue has been that bucket policies do not apply to objects "owned" by someone else, even though they are in your bucket; see GoodGets' own answer for details (+1).
Is this a new bucket/object setup or are you trying to add a bucket policy to a pre-existing setup?
In the latter case you might have stumbled over a related pitfall due to the interaction between what are by now three different S3 access control mechanisms, which can be rather confusing indeed. This is addressed e.g. in Using ACLs and Bucket Policies Together:
When you have ACLs and bucket policies assigned to buckets, Amazon S3 evaluates the existing Amazon S3 ACLs as well as the bucket policy when determining an account's access permissions to an Amazon S3 resource. If an account has access to resources that an ACL or policy specifies, they are able to access the requested resource.
While this sounds easy enough, unintentional interference may result from the subtly different defaults between ACLs and policies:
With existing Amazon S3 ACLs, a grant always provides access to a bucket or object. When using policies, a deny always overrides a grant. [emphasis mine]
This explains why adding an ACL grant always guarantees access; the same does not apply to adding a policy grant, because an explicit policy deny provided elsewhere in your setup would still be enforced, as further illustrated in e.g. IAM and Bucket Policies Together and Evaluation Logic.
Consequently I recommend starting with a fresh bucket/object setup to test the desired configuration before applying it to a production scenario (which might still interfere, of course, but identifying and debugging the difference will be easier that way).
Good luck!
Bucket policies do not apply to files with other owners. So although I've given write access to a third party, the ownership remains theirs, and my bucket policy will not apply to those objects.
I wasted hours on this; the root cause was stupid, the solutions mentioned here didn't help (I tried them all), and the AWS S3 permissions docs didn't emphasize this point.
If you have the Requester Pays setting ON, you cannot enable anonymous access (either by bucket policy or by the ACL 'Everyone' grant). You can certainly write the policies and ACLs, apply them, and even use the console to explicitly set a file to public, but an unsigned URL will still get a 403 Access Denied on that file 100% of the time, until you uncheck the Requester Pays setting for the entire bucket (Properties tab when the bucket is selected) or, I assume, do so via some REST API call.
I unchecked Requester Pays and now anonymous access is working, with referrer restrictions, etc. In fairness, the AWS console does tell us:
While Requester Pays is enabled, anonymous access to this bucket is disabled.
The issue is with your Action; it should be in array format.
Try this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}
Pass your Bucket name in 'Resource'
If you're having this problem with Zencoder uploads, checkout this page: https://app.zencoder.com/docs/api/encoding/s3-settings/public
The following policy will make the entire bucket public:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}
If you want a specific folder under that bucket to be public using bucket policies, then you have to explicitly make that folder/prefix public and then apply the bucket policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/images/*"
      ]
    }
  ]
}
The above policy will allow public read access to all of the objects under images/, but you will not be able to access other objects inside the bucket.
I know it is an old question but I would like to add information that may still be relevant today.
I believe this bucket is being used as a static website. Because of this, you must use the website endpoint URL for the rules to behave as expected; to do this, you add "website" to the URL. Otherwise, S3 treats the bucket just like an object repository.
Example:
With the problem pointed out (REST endpoint):
https://name-your-bucket.s3.sa-east-1.amazonaws.com/home
Without the problem pointed out (website endpoint):
http://name-your-bucket.s3-website-sa-east-1.amazonaws.com/home
Hope this helps :)
This works.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}

How to set a bucket's ACL on S3?

I tried a couple of things: S3Browse, the RightAws Ruby gem, and other tools. All allow granting access on an individual key basis, but I wasn't able to set the ACL on buckets. Actually, I can set the ACL on the bucket and no errors are returned, but when I refresh or check in another tool, the bucket's ACL is reset to owner-only.
I want to give read and write access to FlixCloud for an application I'm developing. They need the access to write the output files.
I was struggling with the ACL vs. Bucket Policy and found the following useful.
ACL
The ACL defines the permissions attached to a single file in your bucket. The bucket policy is a script that defines the permissions for any folder or file in a bucket. Use bucket policies to restrict hotlinking, grant or deny access to specific files or all files, restrict IP addresses, etc.
Edit the S3 Bucket Policy
Log into Amazon Web Services, go to S3, and click on the bucket name in the left column. View the bucket Properties panel at the bottom of the page. Click the button on the lower right corner that says "Edit bucket policy". This brings up a lightbox that you can paste the policy script into. If the script fails validation, it will not save.
Sample policy that enables read access for everyone (useful if the bucket is being used as a content delivery network):
{
  "Version": "2008-10-17",
  "Id": "",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my_bucket_name/*"
    }
  ]
}
Sample policy to prevent unauthorized hotlinking (third party sites linking to it) but allow anybody to download the files:
{
  "Version": "2008-10-17",
  "Id": "preventHotLinking",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your.bucket.name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://yourwebsitename.com/*",
            "http://www.yourwebsitename.com/*"
          ]
        }
      }
    }
  ]
}
Generate a Policy
http://awspolicygen.s3.amazonaws.com/policygen.html
Sample Bucket Policies
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_UseCases_s3_a.html
Yup, just checked it again after 10 minutes. The ACL remains as configured. I guess this is something at your end then. Try a different account or workstation.
I have just double-checked that for you - S3fm was able to change the ACL successfully. I used their email s3#flixcloud.com as the user ID. You can see the user in the list afterwards as flixclouds3.