Restrict Amazon S3 to CloudFront and http referrer - amazon-s3

I have an Amazon S3 REST endpoint for images and file assets. I want the S3 bucket only accessible by CloudFront and the website accessing the images (using http referrer).
This is my bucket policy so far:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<DOMAIN>/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["http://<DOMAIN>/*"]
        }
      }
    }
  ]
}
But once I apply the policy, the images are not accessible on the website.
Is this possible to do?

CloudFront strips the Referer header by default, so S3 never sees it.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
You need to whitelist the Referer header in CloudFront and then invalidate the cache to see if it works.
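To make the two pieces of that fix concrete, here is a rough sketch using the classic CloudFront API field shapes (ForwardedValues and InvalidationBatch); the function names are mine, and this is illustrative rather than a drop-in call:

```python
# Sketch: whitelist the Referer header in the cache behavior, then
# invalidate previously cached objects. Field names follow the classic
# CloudFront API (ForwardedValues / InvalidationBatch); helper names
# are my own.

def forwarded_values_with_referer():
    """Cache-behavior fragment that forwards (and caches on) Referer."""
    return {
        "QueryString": False,
        "Cookies": {"Forward": "none"},
        "Headers": {"Quantity": 1, "Items": ["Referer"]},
    }

def invalidation_batch(paths, reference):
    """Payload shape for an invalidation, to flush objects that were
    cached before the header was whitelisted."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": reference,
    }
```

These dicts would be plugged into an update-distribution and create-invalidation call respectively.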

I went about this a little differently, without whitelisting the header. The method below only allows CloudFront to access the content, and then you put firewall rules on CloudFront so that only your website (by referrer) can access the cached content.
For the bucket policy, I blocked all access and cleared out the bucket policy JSON.
In CloudFront, go to Origins and Origin Groups and create a new origin:
Then choose your bucket from the list in Origin Domain Name.
Origin Path I left blank, and Enable Origin Shield I left as No.
Restrict Bucket Access: choose Yes.
Choose Create a New Identity.
Grant Read Permissions on Bucket: choose Yes, Update Bucket Policy (this updates the policy on the S3 bucket so that only CloudFront can get the content).
Everything else I left at the defaults and saved.
Now, to restrict the referrer to my website, I went to the AWS WAF service.
From here I went to Regex pattern sets on the left menu:
Click on create regex pattern.
Name: I put DomainAccess_Only
Description: whatever you like
Region: Important, choose Global (CloudFront)
For the regular expressions, I put .+ and clicked Create regex pattern set.
Web ACL Details:
Name: Whatever you want, leave metric default
Resource type: CloudFront distributions
Add AWS Resources: click it, check your CloudFront distribution, and add it (click Next).
Next, choose Rule builder.
Choose whatever name for your rule and choose Regular rule.
Then choose "If a request matches the statement" (unless you have more than one domain).
Inspect: Header
Header field name: referer (note the HTTP header's historical one-"r" spelling)
Match type: Starts with string
String to match: https://yourdomain.com (this needs to match your domain exactly)
Scroll down and choose Action: allow
Then Add rule
Once you have done that, go to Rules and make sure the default action is set to Block.
If it's not set to block, click edit and change it.
Now your content can only be accessed by your website through CloudFront. Hotlinking and direct access to images will not work unless the request comes from your website.
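The WAF rule built through the console steps above can be sketched as a WAFv2 rule definition; the shape follows the wafv2 API, but the rule name and exact settings are my own illustration:

```python
# Sketch of the WAFv2 rule the console steps build: allow requests whose
# referer header starts with your domain; pair it with a Web ACL whose
# default action is Block. Rule/metric names are my own choices.

def referer_allow_rule(domain, priority=0):
    """WAFv2 rule dict allowing requests whose referer starts with domain."""
    return {
        "Name": "AllowOwnReferer",
        "Priority": priority,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": domain.encode(),
                "FieldToMatch": {"SingleHeader": {"Name": "referer"}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "STARTS_WITH",
            }
        },
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowOwnReferer",
        },
    }
```

This dict would go in the Rules list of a create-web-acl call scoped to CLOUDFRONT.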

Related

S3 web hosting endpoint produce "This site can’t be reached"

I am trying to host a static web page on AWS S3 but get "This site can’t be reached" when trying to reach my endpoint: http://iekdosha-test1.com.s3-website-eu-central-1.amazonaws.com/
This is my policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::iekdosha-test1.com/*"
    }
  ]
}
The "Block public access" all set to "Off" both in bucket and in account settings
"Static website hosting" is enabled with my index.html page (test file with simple message)
My bucket name is iekdosha-test1.com
An orange "Public" tag appears under "Permissions" on the bucket page, and on the all-buckets page I have the same orange "Public" tag in my bucket's "Access" column.
I have followed this guide and got stuck on step 2.8 (testing the endpoint)
What could possibly be the issue?
The eu-central-1 region does not use the old-style ${bucket}.s3-website-${region}.amazonaws.com endpoint convention for website hosting endpoints. You need a dot (.) rather than a dash (-) after the word website in URLs for this region: ${bucket}.s3-website.${region}.amazonaws.com.
This newer style actually works in any region, even though the Regions and Endpoints documentation still shows the old style endpoints for older regions.
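A tiny sketch of the newer-style endpoint format, with the dot after "website" (the helper name is mine):

```python
def website_endpoint(bucket, region):
    """Newer-style S3 website endpoint (dot after 'website'),
    which works in every region."""
    return f"http://{bucket}.s3-website.{region}.amazonaws.com/"

# For the asker's bucket this yields:
# http://iekdosha-test1.com.s3-website.eu-central-1.amazonaws.com/
```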

Configuring Different Error Pages for different origins for the same Cloudfront distributions

We created a CloudFront distribution with 2 origins (1 S3 origin and 1 custom origin). We want the errors (5xx/4xx) from the custom origin to reach the client/user without modification, but the error pages from S3 to be served by CloudFront's error pages configuration. Is this possible? Currently CloudFront does not support different custom error pages for different origins: if either origin returns an error, the same error page is served by CloudFront.
You can customize the error responses for your origins using Lambda@Edge.
You will need to associate an origin-response trigger with the behavior associated with your origin.
The origin-response trigger fires after CloudFront receives the response from the origin; this way you can add headers, issue redirects, dynamically generate a response, or change the HTTP status code.
Depending on your use case, you may have to customize for both origins.
See also Lambda@Edge now Allows you to Customize Error responses From Your Origin.
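A minimal sketch of what such an origin-response handler could look like in Python (the handler logic and error body are my own illustration; the event shape follows the documented Lambda@Edge record format):

```python
# Sketch of a Lambda@Edge origin-response handler that replaces a 403/404
# from the S3 origin with a small custom error page, leaving other
# responses untouched. Event shape per the Lambda@Edge record format.

ERROR_BODY = "<html><body><h1>Not found</h1></body></html>"

def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    if response["status"] in ("403", "404"):
        # S3 returns 403 for missing keys when ListBucket is denied,
        # so treat both as a 404 with a custom page.
        response.update(
            status="404",
            statusDescription="Not Found",
            body=ERROR_BODY,
        )
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "text/html"}
        ]
    return response
```

Attach this as the origin-response trigger on the behavior that routes to the S3 origin, and leave the custom origin's behavior without a trigger so its errors pass through unmodified.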
I solved (and... ultimately abandoned my solution to) this problem by switching my S3 bucket to act as a static website that I use as a CloudFront custom origin (i.e. no Origin Access Identity). I used an aws:Referer bucket policy to restrict access to only requests that were coming through CloudFront.
NOTE A Referer header usually contains the request's URL. In this scenario, you are just overriding it with a unique, secret token that you share between CloudFront and S3.
This is described on this AWS Knowledge center page under "Using a website endpoint as the origin, with access restricted by a Referer header".
I eventually used a random UUID as my token and set that in my CloudFront Origin configuration.
With that same UUID, I ended up with a Bucket Policy like:
{
  "Id": "Policy1603604021476",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1603604014855",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::example/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
        }
      }
    },
    {
      "Sid": "Stmt1603604014856",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::example",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
        }
      }
    }
  ]
}
The s3:ListBucket policy is needed to get 404s to work properly. If that isn't in place, you'll get the standard S3 AccessDenied error pages.
Now each of my S3 Origins can have different error page behaviors that are configured on the S3 side of things (rather than in CloudFront).
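The policy above can also be generated programmatically; this sketch (function name mine) builds both statements from a bucket name and the secret token, which keeps the two conditions in sync:

```python
def referer_token_policy(bucket, token):
    """Bucket policy granting GetObject and ListBucket only when the
    secret Referer token (set as a custom header on the CloudFront
    origin) is present. ListBucket is what lets S3 return real 404s."""
    def stmt(sid, action, resource):
        return {
            "Sid": sid,
            "Effect": "Allow",
            "Principal": "*",
            "Action": action,
            "Resource": resource,
            "Condition": {"StringEquals": {"aws:Referer": token}},
        }
    return {
        "Version": "2012-10-17",
        "Statement": [
            stmt("AllowGet", "s3:GetObject", f"arn:aws:s3:::{bucket}/*"),
            stmt("AllowList", "s3:ListBucket", f"arn:aws:s3:::{bucket}"),
        ],
    }
```

Note that GetObject needs the `/*` object resource while ListBucket needs the bare bucket ARN, which is easy to get wrong when editing the JSON by hand.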
Follow Up I don't recommend this approach for the following reasons:
It is not possible to have S3 host your static website on HTTPS so your Referer token will be sent in the clear. Granted this will likely be on the AWS network but it's still not what I was hoping for.
You only get one error document per S3 Bucket anyway so it's not that big of an improvement over the CloudFront behavior.

Cloudfront restricting to IPs in bucket policy

I am working on a demo website using AWS S3 and have restricted access to a certain set of IPs using a bucket policy, e.g.:
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-wicked-awesome-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "XX.XX.XX.XX/XX"
        }
      }
    }
  ]
}
This works nicely. Now I want to use CloudFront to serve the website over HTTPS on a custom domain. I have created the distribution and the bucket policy has been modified (to allow CloudFront access) but I keep getting an access denied error when I try to access the CloudFront URL.
Is it possible to still use the bucket policy IP access list using CloudFront? If so, how do I do it?
You can remove the IP blacklisting/whitelisting from the S3 bucket policy and attach AWS WAF, with the required access rules, to the CloudFront distribution.
Note: When you make the S3 bucket private, make sure to set up the Origin Access Identity user properly, both in CloudFront and in S3. Also, if the bucket is in a different region than North Virginia, DNS propagation can take some time.
Another option is a Lambda function that updates the bucket policy whenever AWS's published list of IP ranges changes.
The function can be invoked by the SNS topic that monitors the list of IPs (AWS documents this pattern).
The SNS topic for it is:
arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged
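As a sketch of what that Lambda might do (the helper names are mine), you can filter AWS's published ip-ranges.json down to the service you care about and rebuild the deny policy from it:

```python
# Sketch of the core of an ip-ranges-driven policy updater. AWS publishes
# ip-ranges.json with a "prefixes" list of {ip_prefix, region, service}
# entries; the AmazonIpSpaceChanged SNS topic announces changes to it.
# Helper names are my own; the actual Lambda would also call put_bucket_policy.

def allowed_cidrs(ip_ranges, service="CLOUDFRONT"):
    """Pull the CIDR blocks for one service out of ip-ranges.json."""
    return [p["ip_prefix"] for p in ip_ranges["prefixes"]
            if p["service"] == service]

def ip_deny_policy(bucket, cidrs):
    """Deny all S3 actions on the bucket except from the given CIDRs."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "IPDeny",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": cidrs}},
        }],
    }
```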

S3 Buckets with CloudFlare: Prevent Hotlinking

I'm using an Amazon S3 bucket named images.example.com which successfully serves content through Cloudflare CDN using URLs like:
https://images.example.com/myfile.jpg
I would like to prevent hotlinking to images and other content by limiting access to only the referring domain, example.com, and possibly another domain which I use as a development server.
I've tried a bucket policy which both allows from specific domains and denies from any domains NOT the specific domains:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::images.example.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::images.example.com/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
To test this, I uploaded a small webpage on a different server, www.notExample.com, where I attempted to hotlink the image using:
<img src="https://images.example.com/myfile.jpg">
but the hotlinked image appears regardless.
I've also attempted the following CORS rule
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Neither of these has worked to prevent hotlinking. I've tried purging the cached files in CloudFlare, using combinations of bucket policy and CORS (one or the other plus both) and nothing works.
This seems to be a relatively simple thing to want to do. What am I doing wrong?
Cloudflare is a Content Distribution Network that caches information closer to end users.
When a user accesses content via Cloudflare, the content will be served out of Cloudflare's cache. If the content is not in the cache, Cloudflare will retrieve the content, store it in the cache and serve it back to the original request.
Your Amazon S3 bucket policy will therefore not work with Cloudflare, since the page request is either coming from Cloudflare (not the user's browser that generates a Referrer), or being served directly from Cloudflare's cache (so the request never reaches S3).
You would need to configure Cloudflare with your referrer rules, rather than S3.
See: What does enabling CloudFlare Hotlink Protection do?
Some alternatives:
Use Amazon CloudFront - It has the ability to serve private content by using signed URLs -- your application can grant access by generating a special URL when creating the HTML page.
Serve the content directly from Amazon S3, which can use the referer rules. Amazon S3 also supports pre-signed URLs that your application can generate.

Correct S3 + Cloudfront CORS Configuration?

My application stores images on S3 and then proxies them through Cloudfront. I'm excited to use the new S3 CORS support so that I can use HTML5 canvas methods (which have a cross-origin policy) but can't seem to configure my S3 and Cloudfront correctly. Still running into "Uncaught Error: SECURITY_ERR: DOM Exception 18" when I try to convert an image to a canvas element.
Here's what I have so far:
S3
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>MY_WEBSITE_URL</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>MY_CLOUDFRONT_URL</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Cloudfront
Origins
Origin Protocol Policy: Match Viewer
HTTP Port: 80
HTTPS Port: 443
Behaviors
Origin: MY_WEBSITE_URL
Object Caching: Use Origin Cache Headers
Forward Cookies: None
Forward Query Strings: Yes
Is there something I'm missing here?
UPDATE :
Just tried changing the headers to
<AllowedHeader>Content-*</AllowedHeader>
<AllowedHeader>Host</AllowedHeader>
based on this question Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading
Still no go.
UPDATE: MORE INFO ON REQUEST
Request
URL:https://d1r5nr1emc2xy5.cloudfront.net/uploaded/BAhbBlsHOgZmSSImMjAxMi8wOS8xMC8xOC81NC80Mi85NC9ncmFzczMuanBnBjoGRVQ/32c0cee8
Request Method:GET
Status Code:200 OK (from cache)
UPDATE
I think maybe my request wasn't correct, so I tried enabling CORS with
img.crossOrigin = '';
but then the image doesn't load and I get the error: Cross-origin image load denied by Cross-Origin Resource Sharing policy.
On June 26, 2014 AWS released proper Vary: Origin behavior on CloudFront so now you just
Set a CORS Configuration for your S3 bucket including
<AllowedOrigin>*</AllowedOrigin>
In CloudFront -> Distribution -> Behaviors for this origin
Allowed HTTP Methods: +OPTIONS
Cached HTTP Methods +OPTIONS
Cache Based on Selected Request Headers: Whitelist the Origin header.
Wait for ~20 minutes while CloudFront propagates the new rule
Now your CloudFront distribution should cache different responses (with proper CORS headers) for different client Origin headers.
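To see why whitelisting the Origin header fixes the stale-CORS-header problem, here is a toy model of the cache with and without Origin in the cache key (purely illustrative, not CloudFront's actual code):

```python
# Toy model of CloudFront's cache before and after the Origin header is
# added to the cache key. Without it, the first client's CORS header is
# replayed to every later client; with it, each Origin gets its own entry.

def make_cache(vary_on_origin):
    cache = {}
    def fetch(url, origin):
        key = (url, origin) if vary_on_origin else url
        if key not in cache:
            # The origin (S3) echoes the allowed origin back:
            cache[key] = {"Access-Control-Allow-Origin": origin}
        return cache[key]
    return fetch

# Origin header NOT in the cache key: second client gets a stale header.
fetch = make_cache(vary_on_origin=False)
fetch("/img.png", "http://example.com")
stale = fetch("/img.png", "https://example.com")

# Origin header in the cache key: each client origin gets its own entry.
fetch2 = make_cache(vary_on_origin=True)
fetch2("/img.png", "http://example.com")
fresh = fetch2("/img.png", "https://example.com")
```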
To complement @Brett's answer: there are AWS documentation pages detailing CORS on CloudFront and CORS on S3.
The steps detailed there are as follows:
In your S3 bucket go to Permissions -> CORS configuration
Add rules for CORS in the editor, the <AllowedOrigin> rule is the important one. Save the configuration.
In your CloudFront distribution go to Behavior -> choose a behavior -> Edit
Depending on whether you want OPTIONS responses cached or not, there are two ways according to AWS:
If you want OPTIONS responses to be cached, do the following:
Choose the options for default cache behavior settings that enable caching for OPTIONS responses.
Configure CloudFront to forward the following headers: Origin, Access-Control-Request-Headers, and Access-Control-Request-Method.
If you don't want OPTIONS responses to be cached, configure CloudFront to forward the Origin header, together with any other headers required by your origin.
And with that CORS from CloudFront with S3 should work.
2022 answer:
Go to your S3 bucket -> Permissions
Scroll down to Cross-origin resource sharing (CORS)
Apply policy:
[
  {
    "AllowedHeaders": [],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
This will allow GET request from all origins. Modify according to your project's needs.
Go to your CloudFront distribution -> Behaviors -> Edit (in my case I had only one Behavior)
Scroll down to Cache key and origin requests
Select Cache policy and origin request policy (recommended)
Under Origin request policy - optional select CORS-CustomOrigin
Save Changes
Done!
UPDATE: this is no longer true with recent changes on CloudFront. Yippee! See the other responses for the details. I'm leaving this here for context/history.
Problem
CloudFront does not support CORS 100%. The problem is how CloudFront caches the response to a request: any later request for the same URL will be served the cached response no matter the origin. The key part is that the cached record includes the response headers from the origin.
First request before CloudFront has anything cached from Origin: http://example.com has a response header of:
Access-Control-Allow-Origin: http://example.com
Second request from Origin: https://example.com (note that it is HTTPS not HTTP) also has the response header of:
Access-Control-Allow-Origin: http://example.com
Because that is what CloudFront cached for the URL. This is invalid -- the browser console (in Chrome at least) will show a CORS violation message and things will break.
Workaround
The suggested work around is to use different URLs for different origins. The trick is to append a unique query string that is different so that there is one cached record per origin.
So our URLs would be something like:
http://.../some.png?http_mysite.com
https://.../some.png?https_mysite.com
This kind of works but anyone can make your site work poorly by swapping the querystrings. Is that likely? Probably not but debugging this issue is a huge hassle.
The right workaround is to not use CloudFront with CORS until they fully support CORS.
In Practice
If you use CloudFront for CORS, have a fallback to another method that will work when CORS does not. This isn't always an option but right now I'm dynamically loading fonts with JavaScript. If the CORS-based request to CloudFront fails, I fall back to a server-side proxy to the fonts (not cross origin). This way, things keep working even though CloudFront somehow got a bad cached record for the font.
To complement the previous answer, I would like to share AWS's steps on how to enable CORS. I found it very useful, and it provides additional links: https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/
Also, something to consider when testing your changes, besides the CloudFront deploy delay, is the browser cache. I suggest using fresh incognito sessions when testing your changes.
Posting some of the non-trivial configurations that I did to make it work:
Assign a custom domain to CloudFront such that the custom domain is a subdomain of the domain where your app's frontend will run. In the OP's case, they are using localhost:3000, most probably on a dev setup, but the app must be deployed at some domain eventually: let's call it 'myapp.com'. So they can assign a custom domain, say cdn.myapp.com, to point to blah.cloudfront.net. You will need to create/import a custom SSL certificate for the new custom domain; the default CloudFront certificate won't work.
Refer to this: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
In the CloudFront distribution screenshot, the first row has no custom domain, hence an empty CNAMEs column. The second one has a custom domain, so it is printed there. You can verify this way that your custom domain points to the CloudFront distribution.
CloudFront behavior: I am assuming you have already set up a trusted key group, since at this point you already have the signed cookie. HOWEVER, you will need to create a custom Cache Policy and a custom Origin Request Policy. The thing to notice is that you will need to whitelist these headers: Origin, Access-Control-Request-Method, Access-Control-Allow-Origin, Access-Control-Request-Headers. (You might notice that Access-Control-Allow-Origin is not in the dropdown; just go ahead and type it!) Also, allow all cookies.
S3 CORS configuration: go to the S3 bucket, click on the Permissions tab, and scroll down to the CORS configuration. Disclaimer: I just pasted what worked for me. The rationale was that this bucket was going to be accessed by either the CDN or the app in my scenario. I tried putting '*' to be lenient, but Chrome's CORS check complained that I cannot use a wildcard entry in AllowedOrigins!
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "GET",
      "HEAD",
      "DELETE"
    ],
    "AllowedOrigins": [
      "cdn.myapp.com",
      "myapp.com",
      "https://cdn.myapp.com",
      "https://myapp.com"
    ],
    "ExposeHeaders": [
      "ETag"
    ]
  }
]
react-player: I am using react-player like this (note forceHLS option being set, but it is again specific to my use case. I think this is not mandatory in general)
<ReactPlayer
  className="react-player"
  url={url}
  controls={controls}
  light={light}
  config={{
    file: {
      forceHLS: true,
      hlsOptions: {
        xhrSetup: function (xhr, url) {
          xhr.withCredentials = true; // send cookies
        },
      },
    },
  }}
  playIcon={<PlayIcon />}
  width="100%"
  height="100%"
/>
I followed AWS documentation:
CloudFront- CORS
S3- CORS
Then I used aws cdk to do it for me. Full source here: https://github.com/quincycs/quincymitchell.com
const myBucket = new Bucket(this, 'bucket', {
  bucketName: `prod-${domainName}`,
  cors: [{
    allowedMethods: [HttpMethods.GET],
    allowedOrigins: ['*'],
    allowedHeaders: ['*']
  }],
  enforceSSL: true,
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
  removalPolicy: RemovalPolicy.RETAIN
});

const mycert = Certificate.fromCertificateArn(this, 'certificate', ssmCertArn);

new Distribution(this, 'myDist', {
  defaultBehavior: {
    origin: new S3Origin(myBucket),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    originRequestPolicy: OriginRequestPolicy.CORS_S3_ORIGIN,
    responseHeadersPolicy: ResponseHeadersPolicy.CORS_ALLOW_ALL_ORIGINS,
    allowedMethods: AllowedMethods.ALLOW_GET_HEAD_OPTIONS, // needed for CORS
    cachedMethods: CachedMethods.CACHE_GET_HEAD_OPTIONS, // needed for CORS
  },
  defaultRootObject: 'index.html',
  domainNames: [domainName, `www.${domainName}`],
  certificate: mycert
});
An additional reason for CORS errors could be the HTTP-to-HTTPS redirect configured in CloudFront.
According to the documentation, redirects to a different origin are not allowed in CORS requests.
As an example, if you try to access some URL http://example.com that has a CloudFront rule redirecting HTTP to HTTPS, you will get a CORS error, since https://cloudfront.url is considered by the browser to be a different origin.
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSExternalRedirectNotAllowed
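A quick way to convince yourself that the scheme is part of the origin: browsers compare the (scheme, host, port) tuple, so the HTTP and HTTPS versions of the same URL are different origins (helper name mine):

```python
from urllib.parse import urlsplit

def origin_of(url):
    """The (scheme, host, port) tuple browsers compare for CORS."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port)

# An HTTP->HTTPS redirect changes the scheme, so the redirect target is
# a different origin and the CORS request is rejected.
```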