I am working on a demo website using AWS S3 and have restricted access to a certain set of IPs using a bucket policy, e.g.:
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-wicked-awesome-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "XX.XX.XX.XX/XX"
        }
      }
    }
  ]
}
This works nicely. Now I want to use CloudFront to serve the website over HTTPS on a custom domain. I have created the distribution and modified the bucket policy to allow CloudFront access, but I keep getting an Access Denied error when I try to access the CloudFront URL.
Is it possible to still use the bucket policy IP access list using CloudFront? If so, how do I do it?
You can remove the IP allow/deny list from the S3 bucket policy and instead attach AWS WAF, with the required IP rules, to the CloudFront distribution.
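For illustration, here is a minimal sketch of that setup with boto3 and WAFv2. The names, CIDR range, and metric settings are placeholders, and CloudFront-scoped WAF resources must be created in us-east-1:

import boto3

# WAFv2 resources attached to CloudFront must use Scope="CLOUDFRONT"
# and be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# An IP set holding the addresses you previously allowed in the bucket policy.
ip_set = wafv2.create_ip_set(
    Name="allowed-ips",                # placeholder name
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],      # replace with your CIDR range(s)
)["Summary"]

# A web ACL that blocks by default and allows only the IP set above.
wafv2.create_web_acl(
    Name="ip-allow-list",              # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-listed-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": False,
            "CloudWatchMetricsEnabled": False,
            "MetricName": "AllowListedIps",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": False,
        "CloudWatchMetricsEnabled": False,
        "MetricName": "IpAllowList",
    },
)

# Finally, attach the web ACL to the distribution (in the CloudFront
# console, or by setting WebACLId in an update_distribution call).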
Note: when you make the S3 bucket private, make sure you set up the Origin Access Identity properly in both CloudFront and S3. Also, if the bucket is in a region other than North Virginia (us-east-1), DNS propagation can take some time.
Another option is a Lambda function that changes the bucket policy based on changes in the published list of IPs: Here.
The function can be invoked by an SNS topic monitoring the list of IPs. Here is the documentation on that.
Here is the SNS topic for it:
arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged
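A rough sketch of what such a function could look like (the bucket name is a placeholder, and the official sample does considerably more validation):

import json
import urllib.request

import boto3

BUCKET = "my-wicked-awesome-bucket"  # placeholder

def handler(event, context):
    # The SNS notification carries the URL of the updated ip-ranges.json.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    with urllib.request.urlopen(message["url"]) as resp:
        ranges = json.load(resp)

    # Keep only the CloudFront address ranges.
    cloudfront_cidrs = [
        p["ip_prefix"] for p in ranges["prefixes"] if p["service"] == "CLOUDFRONT"
    ]

    # Rewrite the bucket policy so only CloudFront IPs may read objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontIps",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"IpAddress": {"aws:SourceIp": cloudfront_cidrs}},
        }],
    }
    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))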
I am trying to host a static web page on AWS S3 but get "This site can’t be reached" when trying to reach my endpoint: http://iekdosha-test1.com.s3-website-eu-central-1.amazonaws.com/
This is my policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::iekdosha-test1.com/*"
    }
  ]
}
The "Block public access" all set to "Off" both in bucket and in account settings
"Static website hosting" is enabled with my index.html page (test file with simple message)
My bucket name is iekdosha-test1.com
An orange "Public" tag appears under "Permissions" on the bucket page, and on the all-buckets page I see the same orange "Public" tag in my bucket's "Access" column.
I have followed this guide and got stuck on step 2.8 (testing the endpoint)
What could possibly be the issue?
The eu-central-1 region does not use the old-style ${bucket}.s3-website-${region}.amazonaws.com endpoint convention for website hosting endpoints. You need a dot (.) rather than a dash (-) after the word website in URLs for this region: ${bucket}.s3-website.${region}.amazonaws.com. For your bucket, that is http://iekdosha-test1.com.s3-website.eu-central-1.amazonaws.com/.
This newer style actually works in any region, even though the Regions and Endpoints documentation still shows the old-style endpoints for older regions.
We created a CloudFront distribution with two origins (one S3 origin and one custom origin). We want the errors (5xx/4xx) from the custom origin to reach the client/user without modification, but the error pages from S3 to be served by CloudFront's error pages configuration. Is this possible? Currently CloudFront does not support different custom error pages for different origins: if either origin returns an error, the same error page is served by CloudFront.
You can customize the error responses for your origins using Lambda@Edge.
You will need to associate an origin-response trigger with the cache behavior for your origin.
The origin-response trigger fires after CloudFront receives the response from the origin.
This way you can add headers, issue redirects, dynamically generate a response or change the HTTP status code.
Depending on your use case, you may have to customize for both origins.
See also Lambda@Edge Now Allows You to Customize Error Responses From Your Origin.
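For example, a minimal origin-response handler in Python (which Lambda@Edge supports alongside Node.js); the status check and replacement body here are placeholders to adapt to your use case:

def handler(event, context):
    # Origin-response event: the response CloudFront received from the
    # origin, before it is cached and returned to the viewer.
    response = event["Records"][0]["cf"]["response"]

    if int(response["status"]) >= 400:
        # Replace the origin's error with a custom response.
        response["status"] = "404"
        response["statusDescription"] = "Not Found"
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "text/html"}
        ]
        response["body"] = "<html><body><h1>Page not found</h1></body></html>"

    return response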
I solved (and... ultimately abandoned my solution to) this problem by switching my S3 bucket to act as a static website that I use as a CloudFront custom origin (i.e. no Origin Access Identity). I used an aws:Referer bucket policy to restrict access to only requests that were coming through CloudFront.
NOTE: A Referer header usually contains the URL of the referring page. In this scenario, you are just overriding it with a unique, secret token that you share between CloudFront and S3.
This is described on this AWS Knowledge center page under "Using a website endpoint as the origin, with access restricted by a Referer header".
I eventually used a random UUID as my token and set that in my CloudFront Origin configuration.
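For illustration, generating the token and the shape of the resulting origin custom header entry (structured as the CloudFront API expects; you can just as well paste the values into the console):

import uuid

# Generate the shared secret once; it goes in both the CloudFront origin
# configuration and the bucket policy below.
token = str(uuid.uuid4())

# In the distribution's origin settings this becomes a custom header:
custom_headers = {
    "Quantity": 1,
    "Items": [{"HeaderName": "Referer", "HeaderValue": token}],
}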
With that same UUID, I ended up with a Bucket Policy like:
{
  "Id": "Policy1603604021476",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1603604014855",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::example/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
        }
      }
    },
    {
      "Sid": "Stmt1603604014856",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::example",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
        }
      }
    }
  ]
}
The s3:ListBucket statement is needed for 404s to work properly. If it isn't in place, you'll get the standard S3 AccessDenied error pages instead.
Now each of my S3 Origins can have different error page behaviors that are configured on the S3 side of things (rather than in CloudFront).
Follow-up: I don't recommend this approach, for the following reasons:
- It is not possible to have S3 serve your static website over HTTPS, so your Referer token is sent in the clear. Granted, this will likely stay on the AWS network, but it's still not what I was hoping for.
- You only get one error document per S3 bucket anyway, so it's not that big an improvement over the CloudFront behavior.
I've set up many static sites on AWS/S3 with other domain registrars; however, Google Domains is giving me some issues.
Steps I've taken:
On S3/AWS:
- created bucket domainname.org
- enabled static website hosting, adding index.html as the index document
- uploaded index.html and related documents to the bucket
- created bucket www.domainname.org to redirect to bucket domainname.org
- created a bucket policy for domainname.org as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::domainname.org/*"
      ]
    }
  ]
}
On the Google Domains side:
- created a CNAME record with name www and data equal to the bucket's endpoint "rightsrequest.org.s3-website-us-west-2.amazonaws.com"
- set TTL to 60 so I can see changes
I can see the site at that endpoint, but it is not redirecting/mapping to domainname.org as expected. Usually this setup is enough with other registrars.
What am I missing? How do you properly set up static site hosting on S3/AWS while using Google Domains?
Thank you for your help!
Just add a CNAME record pointing to s3.amazonaws.com. (yes, the trailing dot included).
I'm using an Amazon S3 bucket named images.example.com which successfully serves content through Cloudflare CDN using URLs like:
https://images.example.com/myfile.jpg
I would like to prevent hotlinking to images and other content by limiting access to only the referring domain example.com, and possibly another domain which I use as a development server.
I've tried a bucket policy which both allows requests from the specific domains and denies any requests NOT from those domains:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::images.example.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::images.example.com/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
To test this, I uploaded a small web page on a different server, www.notExample.com, where I attempted to hotlink the image using:
<img src="https://images.example.com/myfile.jpg">
but the hotlinked image appears regardless.
I've also attempted the following CORS rule:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Neither of these has worked to prevent hotlinking. I've tried purging the cached files in Cloudflare and using combinations of the bucket policy and CORS (one or the other, plus both), and nothing works.
This seems like a relatively simple thing to want to do. What am I doing wrong?
Cloudflare is a Content Distribution Network that caches information closer to end users.
When a user accesses content via Cloudflare, the content is served out of Cloudflare's cache. If the content is not in the cache, Cloudflare retrieves it from the origin, stores it in the cache, and serves it back to the original request.
Your Amazon S3 bucket policy will therefore not work with Cloudflare: the request to S3 either comes from Cloudflare (not from the user's browser, which is what generates the Referer header), or is served directly from Cloudflare's cache (so the request never reaches S3).
You would need to configure Cloudflare with your referrer rules, rather than S3.
See: What does enabling CloudFlare Hotlink Protection do?
Some alternatives:
Use Amazon CloudFront, which can serve private content using signed URLs: your application grants access by generating a special URL when creating the HTML page.
Serve the content directly from Amazon S3, which can apply the referer rules. Amazon S3 also supports pre-signed URLs that your application can generate (see the sketch below).
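For the S3 option, a pre-signed URL is nearly a one-liner with boto3 (the bucket and key here are just examples):

import boto3

s3 = boto3.client("s3")

# A time-limited URL your application can embed in the HTML page.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "images.example.com", "Key": "myfile.jpg"},
    ExpiresIn=3600,  # seconds
)
print(url)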
I researched this and other topics. I created a new bucket "ilya_test_1" on Amazon S3, added a grant for Everyone with Upload/Delete enabled, left my default private permissions untouched, and uploaded an image to the bucket root.
When I try to browse to the image, I get AccessDenied. Shouldn't everything in this bucket be publicly accessible?
What I do not understand is why I need to set the bucket policy below if I have already granted access to Everyone.
NOTE: access works if I set permissions to Everyone AND add this bucket policy.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ilya_test_1/*"
    }
  ]
}
You are giving permission to upload, but the headers sent with the upload can set the file to public or private. Perhaps something is setting the ACL to private on upload?
(Also, it's generally a bad idea to allow public writes to S3. You should have a server that hands out signed S3 URLs that your clients can PUT/POST to. Why? Because you can more easily prevent abuse: limit the size of uploads, limit the number of uploads per IP, and so on.)
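For instance, a minimal sketch of handing out a pre-signed POST that caps the upload size (the bucket, key, and limits are placeholders):

import boto3

s3 = boto3.client("s3")

# Your server generates this and returns it to the client; the client then
# POSTs the file directly to S3, subject to the stated conditions.
post = s3.generate_presigned_post(
    Bucket="example-uploads",
    Key="uploads/image.jpg",
    Conditions=[["content-length-range", 0, 5 * 1024 * 1024]],  # max 5 MB
    ExpiresIn=300,  # seconds
)
# post["url"] is the endpoint; post["fields"] are the form fields to include.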