AWS S3 Bucket Policy Source IP not working

I've been trying all possible options but with no results. My bucket policy works well with aws:Referer as the condition, but it doesn't work at all with aws:SourceIp.
My server is hosted on EC2, and I am passing its public IP in CIDR format, xxx.xxx.xxx.xxx/32 (Public_IP/32), as the aws:SourceIp parameter.
Can anyone tell me what I am doing wrong?
Currently my policy is the following:
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xx.xx.xxx.xxx/32"
        }
      }
    }
  ]
}
I have read all the examples and case studies, but S3 still doesn't seem to allow access based on source IP...
Thanks a lot!!!

While I won't disagree that IAM roles are better than IP addresses wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do exactly this (I needed access from a machine that wasn't in EC2, and thus couldn't use an IAM role).
Here is a policy that only allows a certain IP (or multiple IPs) to access a bucket's objects. This assumes that there is no other policy allowing access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing: it only works if you know the full URL to the object you need. If you need more permissions, just add them to the Action list.
{
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/32"
          ]
        }
      }
    }
  ]
}
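As a quick way to verify the policy from both an allowed and a non-allowed machine, here is a small sketch (the object URL is hypothetical): an anonymous GET should return 200 from the whitelisted IP and 403 from anywhere else.
import urllib.error
import urllib.request

# Hypothetical object URL; substitute your own bucket and key.
url = "https://mybucket.s3.amazonaws.com/some/object.txt"
try:
    response = urllib.request.urlopen(url)
    print(response.status)   # 200 when requesting from the allowed IP
except urllib.error.HTTPError as err:
    print(err.code)          # 403 from any other address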

From the discussion in the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM Role created, you are able to launch an EC2 instance "within" that role. In simple terms, this means that the EC2 instance will inherit the access policy you associated with that role. Note that you cannot change the IAM Role associated with an instance after you have launched the instance. You can, however, modify the access policy associated with an IAM Role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role for use within EC2 and attach a policy granting the permissions you need, i.e., one that will "Allow" all the "s3:xxx" operations the instance will need to execute on that specific resource "arn:aws:s3:::my_bucket/*". A minimal sketch of such a policy follows.
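As a sketch (assuming the bucket name from the question; s3:* is broad, so narrow the action list for production use), the role's access policy could look like the following. The bare bucket ARN is listed alongside the object ARN because listing operates on the bucket itself, not on its objects:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
      ]
    }
  ]
}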
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client object without specifying any credentials (the key and file below are purely illustrative):
AmazonS3 s3 = new AmazonS3Client(); // no credentials in the constructor!
s3.putObject("my_bucket", "my_file.txt", new File("my_file.txt"));
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.

Related

How do I edit a bucket policy deployed by organizational-level CloudTrail

We have a multi-account setup where we deployed an organizational-level CloudTrail in our root account's Control Tower.
Organizational-level CloudTrail lets us deploy CloudTrail in each of our respective accounts and gives them the ability to send logs to CloudWatch in our root account and to an S3 logging bucket in our central logging account.
Now I have AWS Athena set up in our logging account to try and run queries on the logs generated through our organizational-level CloudTrail deployment. So far, I have managed to create the Athena Table that is built on the mentioned logging bucket and I also created a destination bucket for the query results.
When I try to run a simple "preview table" query, I get the following error:
Permission denied on S3 path: s3://BUCKET_NAME/PREFIX/AWSLogs/LOGGING_ACCOUNT_NUMBER/CloudTrail/LOGS_DESTINATION
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f72e7dbf-929c-4096-bd29-b55c6c41f582
I figured that the error is caused by the logging bucket's policy lacking any statement allowing Athena access, but when I try to edit the bucket policy I get the following error:
Your bucket policy changes can’t be saved:
You either don’t have permissions to edit the bucket policy, or your bucket policy grants a level of public access that conflicts with your Block Public Access settings. To edit a bucket policy, you need s3:PutBucketPolicy permissions. To review which Block Public Access settings are turned on, view your account and bucket settings. Learn more about Identity and access management in Amazon S3
This is strange since the role I am using has full admin access to this account.
Please advise.
Thanks in advance!
I see this is a follow-up question to your previous one: S3 Permission denied when using Athena
Control Tower automatically deploys a guardrail (a service control policy) which prohibits updating the aws-controltower bucket policy.
In your master account, go to AWS Organizations, then to your Security OU, then to the Policies tab. You should see 2 guardrail policies.
One of them will contain this policy:
{
  "Condition": {
    "ArnNotLike": {
      "aws:PrincipalArn": "arn:aws:iam::*:role/AWSControlTowerExecution"
    }
  },
  "Action": [
    "s3:PutBucketPolicy",
    "s3:DeleteBucketPolicy"
  ],
  "Resource": [
    "arn:aws:s3:::aws-controltower*"
  ],
  "Effect": "Deny",
  "Sid": "GRCTAUDITBUCKETPOLICYCHANGESPROHIBITED"
},
Add these principals below AWSControlTowerExecution:
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*
Your condition should look like this:
"Condition": {
"ArnNotLike": {
"aws:PrincipalArn": [
"arn:aws:iam::*:role/AWSControlTowerExecution",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*"
]
}
},
You should be able to update the bucket policy after this change is applied.

Amazon S3 Server Side Encryption Bucket Policy problems

I am using a bucket policy that denies any non-SSL communication and any unencrypted object uploads.
{
  "Id": "Policy1361300844915",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnSecureCommunications",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": {
        "AWS": "*"
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Action": "s3:PutObject",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": {
        "AWS": "*"
      }
    }
  ]
}
This policy works for applications that support SSL and SSE settings, but only for the objects being uploaded.
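(For reference, an upload that satisfies the DenyUnEncryptedObjectUploads statement must send the x-amz-server-side-encryption header; with boto3, for example, that would look like the sketch below, with a hypothetical key.)
import boto3

s3 = boto3.client("s3")
# ServerSideEncryption becomes the x-amz-server-side-encryption header
# that the bucket policy checks for.
s3.put_object(
    Bucket="my-bucket",
    Key="example.txt",  # hypothetical key
    Body=b"hello",
    ServerSideEncryption="AES256",
)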
I ran into these issues:
CloudBerry Explorer and S3 Browser failed to RENAME folders and files in the bucket with that bucket policy. After I applied only the SSL requirement in the bucket policy, both browsers completed file/folder renaming successfully.
CloudBerry Explorer was able to RENAME objects under the full SSL/SSE bucket policy only after I enabled, under Options, "Amazon S3 Copy/Move through the local computer" (slower, and it costs money).
All copy/move operations inside Amazon S3 failed due to that restrictive policy.
That means we cannot control a copy/move process that does not originate from the application manipulating local objects. At least the CloudBerry option mentioned above proved that.
But I might be wrong; that is why I am posting this question.
In my case, with that bucket policy enabled, the S3 Management Console becomes useless: users cannot create folders or delete them; all they can do is upload files.
Is there something wrong with my bucket policy? I do not know which Amazon S3 mechanisms are used for manipulating objects.
Does Amazon S3 treat external requests (API/HTTP headers) and internal requests differently?
Is it possible to apply this policy only to uploads and not to internal Amazon S3 GET/PUT operations? I have tried an HTTP referer condition with the bucket URL, to no avail.
The SSL/SSE requirements in the bucket policy are mandatory for my implementation.
Any ideas would be appreciated.
Thank you in advance.
IMHO there is no way to automatically tell Amazon S3 to turn on SSE for every PUT request.
So, what I would investigate is the following:
write a script that lists your bucket's objects
for each object, get the metadata
if SSE is not enabled, use the PUT COPY API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE:
"(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"
if the PUT COPY operation succeeded, use the DELETE Object API to delete the original object
Then run that script on an hourly or daily basis, depending on your business requirements.
You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.
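For instance, with boto3 (the modern successor to the boto library linked above), a sketch of such a script could look like the following; the bucket name is hypothetical. Note that copying an object over itself while changing its encryption attributes replaces it in place, so this variant does not need the separate DELETE step:
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Fetch the object's metadata to check its SSE status.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head.get("ServerSideEncryption") != "AES256":
            # PUT COPY the object onto itself, adding SSE; most
            # metadata is preserved by default.
            s3.copy_object(
                Bucket=bucket,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                ServerSideEncryption="AES256",
            )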
If this "change-after-write" solution is not valid for you business wise, you can work at different level
use a proxy between your API client and S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header for every PUT / POST requests.
Developer must go through the proxy and not be authorised to issue requests against S3 API endpoints
write a wrapper library to add the SSE meta data automatically and oblige developer to use your library on top of the SDK.
The later today are a matter of discipline in the organisation, as it is not easy to enforce them at a technical level.
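A minimal sketch of the wrapper idea, assuming boto3 and a hypothetical helper name:
import boto3

_s3 = boto3.client("s3")

def put_object_encrypted(bucket, key, body, **kwargs):
    """Upload an object, always forcing AES256 server-side encryption."""
    kwargs["ServerSideEncryption"] = "AES256"
    return _s3.put_object(Bucket=bucket, Key=key, Body=body, **kwargs)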
Seb

Amazon S3 Write Only access

I'm backing up files from several customers directly into an Amazon S3 bucket, each customer to a different folder. I'm using a simple .NET client running as a nightly Windows scheduled task. To allow writing to the bucket, my client requires both the AWS access key and the secret key (I created a new pair).
My problem is:
How do I make sure none of my customers could potentially use the pair to peek into the bucket, or into a folder not their own? Can I create a "write only" access pair?
Am I approaching this the right way? Should this be solved through AWS access settings, or should I client-side encrypt the files on each customer's machine (each customer with a different key) prior to uploading, and so avoid the above-mentioned cross-access?
I just created a write-only policy like this and it seems to be working:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
I think creating a write-only drop folder like that is a much neater solution.
Use IAM to create a separate user for each customer (not just an additional key pair), then give each user access to only their S3 folder. For instance, if the bucket is called everybodysbucket, and customer A's files all start with userA/ (and customer B's with userB/), then you can grant permission to everybodysbucket/userA/* to the user for customer A, and to everybodysbucket/userB/* for customer B.
That will prevent each user from seeing any resources not their own.
You can also control the specific S3 operations each user can access, not just the resources. So yes, you can grant write-only permission to the users if you want; see the sketch below.
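For example, a write-only policy for customer A's user could be sketched like this (bucket and prefix taken from the example above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::everybodysbucket/userA/*"
    }
  ]
}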
As a variation on the approach recommended in Charles' answer, you can also manage access control at the user level via AWS's managed SFTP service (AWS Transfer Family). These users can all share the same global IAM policy:
SFTP supports user-specific home directories (akin to "chroot").
SFTP allows you to manage user access via service-managed authentication or via your own authentication provider. I am unsure whether service-managed authentication has user limits.
If you wish to allow users to access their uploaded files with a client, SFTP provides that very cleanly.

Multiple access_keys for different privileges with same S3 account?

I have a single S3/AWS account. I have several websites, each of which uses its own bucket on S3 for reading/writing storage. I also host a lot of personal stuff (backups, etc.) in other buckets on S3, which are not publicly accessible.
I would like these websites (some of which may have other people accessing their source code and configuration properties, and therefore seeing the S3 keys) not to have access to my private data!
It seems from reading Amazon's docs that I need to partition privileges by Amazon user per bucket, not by access key per bucket. But that's not going to work. It also seems like I only get 2 access keys. I need one access key which is the master key, and several others with much more circumscribed permissions: only for certain buckets.
Is there any way to do that, or to approximate that?
You can achieve your goal by leveraging AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. IAM enables you to create and manage users in AWS, and it also enables you to grant access to AWS resources for users managed outside of AWS in your corporate directory. IAM offers greater security, flexibility, and control when using AWS. [emphasis mine]
As emphasized, using IAM is strongly recommended for all things AWS anyway; i.e., ideally you should never use your main account credentials for anything but setting up IAM initially (as mentioned by Judge Mental already, you can generate as many access keys as you want like so).
You can use IAM just fine via the AWS Management Console (i.e. there is no need for 3rd-party tools to use all the available functionality, in principle).
Generating the required policies can be a bit tricky at times, but the AWS Policy Generator is extremely helpful to get you started and to explore what's available.
For the use case at hand you'll need a S3 Bucket Policy, see Using Bucket Policies in particular and Access Control for a general overview of the various available S3 access control mechanisms (which can interfere in subtle ways, see e.g. Using ACLs and Bucket Policies Together).
Good luck!
Yes: to get different logins with different permissions under the same AWS account, you can use AWS IAM. As a developer of Bucket Explorer, I suggest trying Bucket Explorer Team Edition if you are looking for a tool that provides a GUI with different logins and different access permissions. Read http://team20.bucketexplorer.com
Simply create a custom IAM group policy that limits access to a particular bucket
Such as...
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my.bucket",
        "arn:aws:s3:::my.bucket/*"
      ]
    }
  ]
}
The first action, s3:ListAllMyBuckets, allows the users to list all of the buckets. Without it, their S3 client will show nothing in the bucket listing when they log on.
The second statement grants full S3 privileges to the user for the bucket named 'my.bucket'. That means they're free to create/list/delete bucket resources and user ACLs within the bucket.
Granting s3:* access is pretty lax. If you want tighter control over the bucket, just look up the relevant actions you want to grant and add them as a list:
"Action": ["s3:List*", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
I suggest you create this policy as a group policy (e.g. my.bucket_User) so you can assign it to every user who needs access to this bucket without any unnecessary copypasta; a sketch of setting this up with boto3 follows.
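Here is a hedged sketch of creating such a group and attaching the inline policy with boto3; the group, policy, and user names are hypothetical:
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my.bucket",
                "arn:aws:s3:::my.bucket/*",
            ],
        },
    ],
}

iam = boto3.client("iam")
iam.create_group(GroupName="my.bucket_User")
iam.put_group_policy(
    GroupName="my.bucket_User",
    PolicyName="my-bucket-access",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
iam.add_user_to_group(GroupName="my.bucket_User", UserName="customer-a")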

How can I make an S3 bucket public (the Amazon example policy doesn't work)?

Amazon provides an example for Granting Permission to an Anonymous User as follows (see Example Cases for Amazon S3 Bucket Policies):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
Within my policy I've changed "bucket" in "arn:aws:s3:::bucket/*" to "my-bucket".
However, once I try to access an image within a folder of that bucket, I get the following Access denied error:
This XML file does not appear to have any style information associated
with it. The document tree is shown below.
(if I explicitly change the properties of that image to public and then reload its URL, the image loads perfectly)
What am I doing wrong?
Update #1: Apparently it has something to do with a third-party site that I've given access to. Although it has all the same permissions as the main user (me), and its objects are in the same folder with the exact same permissions, it still won't let me make them publicly viewable. No idea why.
Update #2: Bucket policies do not apply to objects "owned" by others, even though they are within your bucket; see my answer for details.
Update
As per GoodGets' comment, the real issue has been that bucket policies do not apply to objects "owned" by someone else, even though they are in your bucket; see GoodGets' own answer for details (+1).
Is this a new bucket/object setup or are you trying to add a bucket policy to a pre-existing setup?
In the latter case you might have stumbled over a related pitfall due to the interaction between what are by now three different S3 access control mechanisms, which can be rather confusing indeed. This is addressed e.g. in Using ACLs and Bucket Policies Together:
When you have ACLs and bucket policies assigned to buckets, Amazon S3 evaluates the existing Amazon S3 ACLs as well as the bucket policy when determining an account's access permissions to an Amazon S3 resource. If an account has access to resources that an ACL or policy specifies, they are able to access the requested resource.
While this sounds easy enough, unintentional interference may result from the subtly different defaults between ACLs and policies:
With existing Amazon S3 ACLs, a grant always provides access to a bucket or object. When using policies, a deny always overrides a grant. [emphasis mine]
This explains why adding an ACL grant always guarantees access; however, the same does not apply to adding a policy grant, because an explicit policy deny provided elsewhere in your setup would still be enforced, as further illustrated in e.g. IAM and Bucket Policies Together and Evaluation Logic. The sketch below illustrates this.
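To illustrate "a deny always overrides a grant", here is a minimal sketch (bucket and prefix are hypothetical): even though the first statement grants read access to every object, objects under confidential/ remain inaccessible, because the explicit deny wins.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    },
    {
      "Sid": "DenyConfidential",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/confidential/*"
    }
  ]
}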
Consequently, I recommend starting with a fresh bucket/object setup to test the desired configuration before applying it to the production scenario (which might still interfere, of course, but identifying/debugging the difference will be easier in that case).
Good luck!
Bucket policies do not apply to files with other owners. So although I've given write access to a third party, the ownership remains theirs, and my bucket policy will not apply to those objects.
I wasted hours on this. The root cause was stupid, the solutions mentioned here didn't help (I tried them all), and the AWS S3 permissions docs don't emphasize this point.
If you have the Requester Pays setting ON, you cannot enable anonymous access (either by bucket policy or by the ACL "Everyone"). You can certainly write the policies and ACLs and apply them, and even use the console to explicitly set a file to public, but a non-signed URL will still get a 403 Access Denied 100% of the time on that file, until you uncheck the Requester Pays setting for the entire bucket in the console (Properties tab when the bucket is selected). Or, I assume, via some REST API call.
I unchecked Requester Pays, and now anonymous access is working, with referrer restrictions, etc. In fairness, the AWS console does tell us:
While Requester Pays is enabled, anonymous access to this bucket is disabled.
The issue is with your Action: it should be in array format.
Try this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Pass your Bucket name in 'Resource'
If you're having this problem with Zencoder uploads, checkout this page: https://app.zencoder.com/docs/api/encoding/s3-settings/public
The following policy will make the entire bucket public:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
If you want only a specific folder under that bucket to be public using bucket policies, then explicitly target that folder/prefix in the bucket policy and apply it as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/images/*"]
    }
  ]
}
The above policy will allow public read access to all of the objects under images/, but you will not be able to access other objects inside the bucket.
I know it is an old question, but I would like to add information that may still be relevant today.
I believe that this bucket is meant to serve a static site. Because of this, you must use the website endpoint URL for your rules to behave as expected: you must add "website" to the URL. Otherwise, S3 treats the bucket just like an object repository.
Example:
With the problem pointed out:
https://name-your-bucket.s3.sa-east-1.amazonaws.com/home
Without the problem pointed out:
http://name-your-bucket.s3-website-sa-east-1.amazonaws.com/home
Hope this helps :)
This works.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}