S3 Bucket Policy for hotlinking is preventing writes

I have a website that serves our content from Amazon S3. Currently, I am able to read and write data to S3 just fine from my web server / website. The ACL permissions are fine: I have full permissions for the website, and the public has read-only permissions.
Then, I added an S3 Bucket Policy to prevent hotlinking. You can see the S3 policy below.
This policy works well, except for one issue: it now prevents file write requests from my web server. So while my public website serves content just fine, when I try to do file or directory operations, such as uploading or moving images (or directories), I now get an "Access denied" error from my web application server (Railo / ColdFusion).
I'm not sure why this is happening. Initially I thought it might be because the file read/write requests between my web server and S3 come via my IP rather than my domain name, but even after adding my IP to the policy, the errors persist.
If I remove the policy, everything works fine again.
Does anyone know what is causing this or what I'm missing here? Thanks
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Allowinmydomains",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      }
    },
    {
      "Sid": "Givenotaccessifrefererisnomysites",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      }
    }
  ]
}

When you use bucket policies, an explicit deny always overrides a grant. Because your bucket policy denies GetObject to all principals (including authenticated users) whose requests don't match your referrer list, your app's requests produce "Access denied" errors.
By default, objects in S3 have their ACLs set to private. If this is the case with your bucket, then there is no need to have both an Allow and a Deny rule in your bucket policy. It is enough to have an Allow statement that grants anonymous users matching some specific referrers permission to access objects in the bucket.
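The evaluation order can be sketched in Python. This is a toy model of IAM's real logic (the `evaluate` helper and the `matches` predicates are made up for illustration), but it shows why the questioner's write requests fail: they carry no Referer header, so the Deny statement matches them.

```python
# Minimal sketch of how S3 evaluates Allow/Deny statements for one request:
# an explicit Deny always wins, no matter how many statements Allow the action.

def evaluate(statements, request):
    """Return 'Deny' if any matching statement denies, 'Allow' if one allows,
    otherwise the implicit default deny."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if stmt["matches"](request):
            if stmt["Effect"] == "Deny":
                return "Deny"  # explicit deny: stop immediately
            decision = "Allow"
    return decision

# The questioner's policy in miniature. The Deny statement matches every
# request whose Referer is not in the whitelist -- including the web
# server's own write requests, which send no Referer header at all.
whitelist = ("http://www.babeswithbraces.com/", "http://babeswithbraces.com/")
statements = [
    {"Effect": "Allow",
     "matches": lambda r: r.get("Referer", "").startswith(whitelist)},
    {"Effect": "Deny",
     "matches": lambda r: not r.get("Referer", "").startswith(whitelist)},
]

print(evaluate(statements, {"Referer": "http://www.babeswithbraces.com/a.jpg"}))  # Allow
print(evaluate(statements, {}))  # Deny -- no Referer, so the Deny matches
```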
In the case mentioned above, your bucket policy should look like:
{
  "Id": "Policy1380565362112",
  "Statement": [
    {
      "Sid": "Stmt1380565360133",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::cdn.babeswithbraces.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.babeswithbraces.com/*",
            "http://babeswithbraces.com/*",
            "http://64.244.61.40/*"
          ]
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
If the object ACLs already allow public access you can either remove those ACLs to make the objects private by default or include a Deny rule in your bucket policy and modify the requests you send to S3 from your app to include the expected referrer header. There is currently no way to have a Deny rule in your bucket policy that only affects anonymous requests.
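If you do keep a Deny rule, every request your app sends must carry a whitelisted Referer header. As a hedged sketch with Python's standard library (the endpoint URL below is illustrative only, and authenticated SDK requests would additionally need to be signed as usual):

```python
# Hypothetical sketch: attach a whitelisted Referer header to a request
# so a referer-based Deny rule does not match it.
import urllib.request

req = urllib.request.Request(
    "https://cdn.babeswithbraces.com.s3.amazonaws.com/images/logo.png",
    headers={"Referer": "http://www.babeswithbraces.com/"},
)
# urllib stores normalized header names, so the header is sent as 'Referer':
print(req.get_header("Referer"))
```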

Related

How to allow S3 downloads from "owner" while restricting referers in Bucket Policy

I have put the following bucket policy in effect for the product downloads bucket on my website. It works perfectly for HTTP traffic. However, this policy also prevents me from downloading directly from the S3 console, or from 3rd-party S3 clients like S3Hub.
How can I add to or change this policy to be able to interact with my files "normally" as a logged-in owner, but still restrict http traffic as below?
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::downloads.example.net/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://example16.herokuapp.com/*",
            "http://localhost*",
            "https://www.example.net/*",
            "http://stage.example.net/*",
            "https://stage.example.net/*",
            "http://www.example.net/*"
          ]
        }
      }
    }
  ]
}
Remove:
"Principal": "*",
Replace with:
"NotPrincipal": { "AWS": "Your-AWS-account-ID" },
The policy should then apply only to requests that are not authorized by credentials associated with your account.
Note that because of the security implications of its logic inversion, NotPrincipal should only ever be used with Deny policies, not Allow policies, with few exceptions.

Amazon S3 - Returns 403 error instead of 404 despite GetObject allowance

I've set up my S3 bucket with this tutorial to only accept requests from specific IP addresses. But even though those IPs are allowed to do GetObject, they get 403 errors instead of 404 for any files that are missing.
My updated bucket policy is (with fictitious IP addresses):
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.bucketname.com/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.100.100.0/22",
            "101.101.101.0/22"
          ]
        }
      }
    },
    {
      "Sid": "ListItems",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::www.bucketname.com",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.100.100.0/22",
            "101.101.101.0/22"
          ]
        }
      }
    }
  ]
}
(Updated with the ListBucket command, as pointed out by Mark B.)
I've found several related questions here on SO (like this and this), but their solutions are based on giving everyone permission to access the bucket's contents.
And that approach works, because if I lift my IP filter then 404 errors are given for missing files instead of 403. But that defeats the purpose of an IP filter.
I learned here that:
S3 returns a 403 instead of a 404 when the user doesn't have permission to list the bucket contents.
But I cannot find a way to have the bucket generate 404 error codes for missing files without removing my IP whitelist filter, and that is with including both the GetObject permission for retrieving objects and ListBucket for listing them.
My reasoning is as follows: if these IP addresses are allowed to access the bucket's contents, shouldn't S3 generate a 404 error for them instead of a 403? How do I do that without removing my existing filter?
Note the documentation you quoted:
S3 returns a 403 instead of a 404 when the user doesn't have permission to list the bucket contents.
The GetObject permission you have granted only gives permission to get an object that exists, it does not give permission to list all the objects in a bucket. You would need to add the ListBucket permission to your bucket policy. See this page for the full list of S3 IAM permissions, and the S3 operations they cover.
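The distinction can be shown with a toy model (the `get_object` helper is made up; real S3 decides this server-side): without list permission, S3 will not reveal whether a key exists, so a missing object comes back as 403 rather than 404.

```python
# Toy model of why a missing object yields 403 without ListBucket but 404 with it.

def get_object(bucket, key, can_list_bucket):
    """Return an HTTP-style status code for a GetObject attempt."""
    if key in bucket:
        return 200  # object exists and GetObject is allowed
    # Object is missing: S3 only reveals that fact (404) if the caller may
    # list the bucket; otherwise it hides the information behind a 403.
    return 404 if can_list_bucket else 403

bucket = {"index.html": b"<html>...</html>"}
print(get_object(bucket, "missing.png", can_list_bucket=False))  # 403
print(get_object(bucket, "missing.png", can_list_bucket=True))   # 404
print(get_object(bucket, "index.html", can_list_bucket=False))   # 200
```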
I've solved the problem of S3 issuing 403 instead of 404 errors not by changing the bucket policy, but by simply adding an 'Everyone' listing permission in the bucket settings.
I feel it's less elegant than setting the bucket policy, but at least it works now.
My accompanying bucket policy is now still based on only whitelisting a few IPs:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::website-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "10.1.1.0/22",
            "11.1.1.0/22"
          ]
        }
      }
    }
  ]
}
My issue was that my computer's clock was not set correctly (because of DST issues).

Amazon S3: Grant anonymous access from IP (via bucket policy)

I have an Amazon S3 bucket and would like to make it available to scripts on a certain machine, without the need to deploy login credentials. So my plan was to allow anonymous access only from the IP of that machine. I'm quite new to the Amazon cloud, and bucket policies look like the way to go. I added the following policy to my bucket:
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::name_of_my_bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "my_ip_1/24",
            "my_ip_2/24"
          ]
        }
      }
    }
  ]
}
But anonymous access still does not work. For testing, I granted access to "Everyone" in the S3 management console. That works fine, but is obviously not what I want to do. ;-) Any hint what I'm doing wrong and how to get this working?
My use case is some data processing using EC2 and S3, so access control by IP would be much simpler than fiddling around with user accounts. If there's a simpler solution, I'm open for suggestions.
But anonymous access still does not work.
Which operation exactly still does not work? Do you by chance just try to list the objects in the bucket?
Quite often a use case implicitly involves Amazon S3 API calls also addressing different resource types besides the Resource explicitly targeted by the policy already. Specifically, you'll need to be aware of the difference between Operations on the Service (e.g. ListAllMyBuckets), Operations on Buckets (e.g. ListBucket) and Operations on Objects (e.g. GetObject).
In particular, the Resource specification of your policy currently addresses only the objects within the bucket (arn:aws:s3:::name_of_my_bucket/*), which implies that you cannot list the objects in the bucket (you should still be able to put/get/delete objects, though). In order to also allow listing the objects in the bucket via ListBucket, you would need to amend your policy as follows:
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      // ... your existing statement for objects here ...
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::name_of_my_bucket",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "my_ip_1/24",
            "my_ip_2/24"
          ]
        }
      }
    }
  ]
}
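The bucket-vs-object distinction comes down to which ARN a Resource pattern matches. A rough sketch using shell-style wildcard matching (IAM's actual matcher is more elaborate, and the ARNs below reuse the example bucket name):

```python
# Sketch of why the Resource string matters: object operations are checked
# against "arn:aws:s3:::bucket/*", bucket operations against "arn:aws:s3:::bucket".
from fnmatch import fnmatch

object_resource = "arn:aws:s3:::name_of_my_bucket/*"

# GetObject targets an object ARN, which the /* pattern matches:
print(fnmatch("arn:aws:s3:::name_of_my_bucket/data.csv", object_resource))  # True
# ListBucket targets the bucket ARN itself, which it does NOT match:
print(fnmatch("arn:aws:s3:::name_of_my_bucket", object_resource))           # False
```

This is why the second statement above names the bare bucket ARN as its Resource.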

Amazon S3 bucket policy restricting access has no effect

I want to restrict access to files in Amazon S3 from the public, except for some sites (using the referer). Currently I have this bucket policy:
{
  "Id": "foosite-test",
  "Statement": [
    {
      "Sid": "Allow from foosite admin",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Principal": {
        "AWS": ["*"]
      },
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.foosite.co.uk/admin/*",
            "http://www.foosite.co.au/admin/*"
          ]
        }
      }
    }
  ]
}
But it seems this policy has no effect: I can copy and paste the S3 object URL and still access that file.
What's wrong with this policy?
You need to add a Deny statement as well, to block all other referers. I had answered a similar question, but did not include the deny part in that sample policy. You can find my updated answer here: Amazon S3 objects: is it possible to restrict public read policy to some IP adresses only ?
In your case you have to use the referer instead of the IP.

amazon s3 video files accessible only from my domain/server?

Now, I know that I cannot stop someone from downloading my videos and sharing them; however, I would prefer that people not copy and paste links directly to my bucket. So, is there a way to make my bucket accessible only from my server/domain making the request?
If it helps, I'm using jwplayer which loads from a xml playlist that has all the links. This playlist definitely can be opened and viewed from anywhere and is where I expect the easy copy and paste comes from.
I don't want to mask the URLs, because that would mean my bucket is readable to everyone. There is some chance that someone will find the URL of my bucket and the names of the files and connect everything together...
This is possible by Using Bucket Policies, which allows you to define access rights for Amazon S3 resources - there are a couple of Example Cases for Amazon S3 Bucket Policies illustrating the functionality, and amongst these you'll find an example for Restricting Access to Specific IP Addresses as well:
This statement grants permissions to any user to perform any S3 action on objects in the specified bucket. However, the request must originate from the range of IP addresses specified in the condition.
Depending on the specifics of your use case, a bucket policy for this might look like so:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "192.168.143.0/24"
        },
        "NotIpAddress": {
          "aws:SourceIp": "192.168.143.188/32"
        }
      }
    }
  ]
}
As shown, the aws:SourceIp values for the IpAddress and NotIpAddress conditions are expressed in CIDR notation, which gives you flexibility in composing the desired scope.
Finally, you might want to check out the recommended AWS Policy Generator: select type S3 Bucket Policy and explore the available Actions and Conditions to compose more targeted policies for your use case; the documentation for Conditions explains this in detail.
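The CIDR semantics can be checked with Python's standard library; this mirrors the policy above (the `source_ip_permitted` helper is made up for illustration):

```python
# How CIDR ranges like 192.168.143.0/24 are evaluated:
from ipaddress import ip_address, ip_network

allowed = ip_network("192.168.143.0/24")
blocked = ip_network("192.168.143.188/32")

def source_ip_permitted(ip):
    addr = ip_address(ip)
    # Mirrors the policy above: inside the /24, except the single /32 host.
    return addr in allowed and addr not in blocked

print(source_ip_permitted("192.168.143.10"))   # True
print(source_ip_permitted("192.168.143.188"))  # False -- the excluded host
print(source_ip_permitted("10.0.0.1"))         # False -- outside the /24
```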
The IP address will help if your server is going to access your bucket. But JWPlayer runs on the client side, so the request goes directly from JWPlayer (the browser) to the S3 bucket URL, not via your server. In this case a referer bucket policy will help you:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://yoursitename.com/*",
            "http://*.yoursitename.com/*"
          ]
        }
      }
    }
  ]
}
So now S3 will allow the request only if it comes from your site.
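StringNotLike does shell-style wildcard matching on the Referer header, which Python's fnmatch can approximate (the `denied` helper and the URLs are illustrative):

```python
# Approximate the StringNotLike referer check with shell-style wildcards.
from fnmatch import fnmatch

patterns = ["http://yoursitename.com/*", "http://*.yoursitename.com/*"]

def denied(referer):
    # Deny when the Referer matches none of the whitelisted patterns.
    return not any(fnmatch(referer or "", p) for p in patterns)

print(denied("http://www.yoursitename.com/videos/1"))  # False -> allowed
print(denied("http://evil.example/stolen"))            # True  -> denied
print(denied(None))                                    # True  -> no Referer, denied
```

Note the last case: requests without any Referer header (e.g. a URL pasted into the browser) are also denied by a StringNotLike rule like this.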
You can keep your bucket protected, which is the way it is by default (meaning only you have access to the objects in it). Then you can request files from Amazon S3 from your website and set a time limit for how long the user can see them.
// Generate a URL so that users can see the file for 1 minute; then it is protected again.
$response = $s3->get_object_url(YOUR_S3_BUCKET, 'PATH/TO/FILE', '1 minute');
This will automatically give you a URL with parameters attached that is only accessible for 1 minute. You can use that as the source within your website, and after that minute it can no longer be copied and pasted into the browser.
You can read more about this at the Amazon SDK for PHP
Restricting Access to a Specific HTTP Referrer
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your Amazon S3 bucket, examplebucket. By default, all the Amazon S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:referer key, that the get request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For everyone who is stumbling upon this now, please take note that Amazon has changed the JSON format for the bucket policies and now requires each allowed / denied IP or domain to be listed separately. See below for an example.
Either way, I strongly recommend to use the AWS Policy Generator to make sure your formatting is correct.
AWS S3 Bucket Policy - Allow Access only from multiple IPs
{
  "Id": "Policy1618636210012",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1618635877058",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "333.444.555.666"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636151833",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "222.333.444.555"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "Stmt1618636203591",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/folder/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "111.222.333.444"
        }
      },
      "Principal": "*"
    }
  ]
}