Font requests to CloudFront are "(cancelled)" - amazon-s3

I have a CloudFront distribution at static.example.com with an S3 bucket as its origin. I use this distribution to store all the artifacts for client code (JavaScript files, stylesheets, images and fonts).
All requests to JavaScript files, stylesheets and images succeed without any problem; however, requests to font files show the status (cancelled) in Google Chrome.
Here is how I request those fonts:
@font-face {
  font-family: "Material Design Icons";
  font-style: normal;
  font-weight: 400;
  src: url(https://static.example.com/5022976817.eot);
  src: url(https://static.example.com/5022976817.eot) format("embedded-opentype"),
    url(https://static.example.com/611c53b432.woff2) format("woff2"),
    url(https://static.example.com/ee55b98c3c.woff) format("woff"),
    url(https://static.example.com/cd8784a162.ttf) format("truetype"),
    url(https://static.example.com/73424aa64e.svg) format("svg")
}
The request for the svg font file succeeds, but the other ones do not.
What have I done wrong? Every file in the S3 bucket has a public-read ACL.

It seems this is a CORS issue preventing the fonts from being loaded from a different domain.
You need to:
Enable CORS on your S3 bucket:
Go to the S3 bucket and, on the Permissions tab, select CORS configuration, then add a rule with your AllowedOrigin:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://static.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
You can add multiple AllowedOrigin entries:
<AllowedOrigin>https://static.example.com</AllowedOrigin>
<AllowedOrigin>http://static.example.com</AllowedOrigin>
<AllowedOrigin>https://example.com</AllowedOrigin>
<AllowedOrigin>http://otherexample.com</AllowedOrigin>
Or use a wildcard:
<AllowedOrigin>*.example.com</AllowedOrigin>
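If you manage the bucket from scripts rather than the console, the same rule can be expressed for boto3's put_bucket_cors. A minimal sketch, assuming boto3 is available; the bucket name and the helper function are illustrative, and the AWS call itself is left commented out:

```python
# Sketch: the CORS rule above, expressed as the dict boto3's
# put_bucket_cors expects. The helper name is illustrative.
import json

def build_cors_config(allowed_origins):
    """Build the CORSConfiguration dict for put_bucket_cors."""
    return {
        "CORSRules": [
            {
                "AllowedOrigins": allowed_origins,
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["Authorization"],
                "MaxAgeSeconds": 3000,
            }
        ]
    }

config = build_cors_config(["https://static.example.com"])
print(json.dumps(config, indent=2))

# With boto3 (not run here; "my-static-bucket" is a placeholder):
# import boto3
# boto3.client("s3").put_bucket_cors(
#     Bucket="my-static-bucket", CORSConfiguration=config)
```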
Whitelist the appropriate headers on CloudFront so they are forwarded to S3:
Go to the Behaviors tab of your CloudFront distribution, select Create Behavior, and add the path pattern you want:
Path Pattern: 5022976817.eot
Cache Based on Selected Request Headers: Whitelist
Add the following headers to the whitelisted headers:
Access-Control-Request-Headers
Access-Control-Request-Method
Origin
You can test that CORS is working properly with curl:
curl -X GET -H "Origin: https://static.example.com" -I https://static.example.com/5022976817.eot
Everything is ok if you get a response header like:
access-control-allow-origin: https://static.example.com
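The same check can be automated against any HTTP client's response. A small helper (illustrative, not part of any library) that inspects the response headers the way the curl test does:

```python
# Sketch: verify a CORS response the way the curl test above does.
# Works on any mapping of response headers; the helper name is illustrative.
def cors_header_ok(response_headers, expected_origin):
    """True if the response grants CORS access to expected_origin."""
    # HTTP header names are case-insensitive, so normalise the keys first.
    headers = {k.lower(): v for k, v in response_headers.items()}
    allowed = headers.get("access-control-allow-origin")
    return allowed in ("*", expected_origin)

print(cors_header_ok(
    {"Access-Control-Allow-Origin": "https://static.example.com"},
    "https://static.example.com",
))  # → True
```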

Related

Cannot Access S3 resources with CORS configuration

Good day,
I am using the following tutorial to create an S3 bucket to store a .csv file that is updated hourly from Google Drive via a Lambda routine:
https://labs.mapbox.com/education/impact-tools/sheetmapper-advanced/#cors-configuration
When I try to access the .csv from its S3 object URL by inserting it into the browser
https://mapbox-sheet-mapper-advanced-bucket.s3.amazonaws.com/SF+Food+Banks.csv
I get the following error
[error image]
The CORS permission given in the tutorial is in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I have tried to convert it into JSON format, as it seems the S3 console no longer supports CORS permissions in XML format:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
Any advice/help would be greatly appreciated!
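For what it's worth, the XML-to-JSON conversion is mechanical: each repeated XML element (AllowedOrigin, AllowedMethod, ...) becomes a JSON list, and MaxAgeSeconds becomes an integer. A sketch of that conversion using only the standard library (the helper name is mine, and it assumes the standard S3 CORS namespace):

```python
# Sketch: convert a legacy XML CORS configuration to the JSON form
# the S3 console now expects.
import json
import xml.etree.ElementTree as ET

# S3 CORS documents carry this XML namespace.
NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def cors_xml_to_json(xml_text):
    root = ET.fromstring(xml_text)
    rules = []
    for rule in root.findall(NS + "CORSRule"):
        out = {}
        # Repeated singular elements become plural JSON lists.
        for tag, key in [("AllowedHeader", "AllowedHeaders"),
                         ("AllowedMethod", "AllowedMethods"),
                         ("AllowedOrigin", "AllowedOrigins"),
                         ("ExposeHeader", "ExposeHeaders")]:
            values = [e.text for e in rule.findall(NS + tag)]
            if values:
                out[key] = values
        age = rule.find(NS + "MaxAgeSeconds")
        if age is not None:
            out["MaxAgeSeconds"] = int(age.text)
        rules.append(out)
    return rules

xml_config = """<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>"""

print(json.dumps(cors_xml_to_json(xml_config), indent=2))
```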
Please make sure that your account permissions support public access to S3. There are four things I ran into today while trying to make an S3 resource public:
1. The account-level Block Public Access setting has to be disabled. (MAKE SURE TO RE-ENABLE IT FOR ANY PRIVATE BUCKETS OR OBJECTS.)
2. The bucket-level Block Public Access setting has to be disabled. (As shown in your tutorial.)
3. The ACL must allow read access. You can find this under S3 > Buckets > your_bucket > Permissions > Access control list. Edit this to grant read access.
4. Go to the individual object and ensure that it also has permission to be read by the public.
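Those settings can also be checked or applied from code. A sketch using the configuration dict boto3's put_public_access_block expects; the bucket and key names are placeholders, and the actual AWS calls are left commented out:

```python
# Sketch: the public-access-block configuration for a public bucket.
# All four flags must be False for the bucket to serve public objects.
def public_access_block_for(public):
    """Build the PublicAccessBlockConfiguration dict for a bucket."""
    block = not public
    return {
        "BlockPublicAcls": block,
        "IgnorePublicAcls": block,
        "BlockPublicPolicy": block,
        "RestrictPublicBuckets": block,
    }

config = public_access_block_for(public=True)
print(config)

# With boto3 (not run here; bucket/key names are placeholders):
# import boto3
# s3 = boto3.client("s3")
# s3.put_public_access_block(
#     Bucket="my-bucket", PublicAccessBlockConfiguration=config)
# s3.put_object_acl(Bucket="my-bucket", Key="my-file.csv", ACL="public-read")
```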

403 Error when using fetch to call Cloudfront S3 endpoint with custom domain and signed cookies

I'm trying to create a private endpoint for an S3 bucket via Cloudfront using signed cookies. I've been able to successfully create a signed cookie function in Lambda that adds a cookie for my root domain.
However, when I call the Cloudfront endpoint for the S3 file I'm trying to access, I am getting a 403 error. To make things weirder, I'm able to copy & paste the URL into the browser and can access the file.
We'll call my root domain example.com. My cookie domain is .example.com, my development app URL is test.app.example.com and my Cloudfront endpoint URL is tilesets.example.com
Upon inspection of the call, it seems that the cookies aren't being sent. This is strange because my fetch call has credentials: "include" and I'm calling a subdomain of the cookie domain.
Configuration below:
S3:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Cloudfront:
Not sure what I could be doing wrong here. It's especially weird that it works when I go directly to the link in the browser but not when I fetch, so I'm guessing it's a CORS issue.
I've been logging the calls to Cloudfront, and as you can see, the cookies aren't being sent when using fetch in my main app:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:38:40 IAD79-C3 369 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 403 https://test.app.<ROOT DOMAIN>/ Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - - Error 5kPxZkH8n8dVO57quWHurLscLDyrOQ0L-M2e0q6X5MOe6K9Hr3wCwQ== tilesets.<ROOT DOMAIN> https 281 0.000 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/2.0 - -
Whereas when I go to the URL directly in the browser:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:32:38 IAD79-C1 250294 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 200 - Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - CloudFront-Signature=<SIGNATURE>;%2520CloudFront-Key-Pair-Id=<KEY PAIR>;%2520CloudFront-Policy=<POLICY> Miss gRkIRkKtVs3WIR-hI1fDSb_kTfwH_S2LsJhv9bmywxm_MhB7E7I8bw== tilesets.<ROOT DOMAIN> https 813 0.060 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/2.0 - -
Any thoughts?
You have correctly diagnosed that the issue is that your cookies aren't being sent.
A cross-origin request won't include cookies, even with credentials: "include", unless the origin server also grants permission in its response headers:
Access-Control-Allow-Credentials: true
And the way to get S3 to allow that is not obvious, but I stumbled on the solution following a lead found in this answer.
Modify your bucket's CORS configuration to remove this:
<AllowedOrigin>*</AllowedOrigin>
...and add this, instead, specifically listing the origin you want to allow to access your bucket (from your description, this will be the parent domain):
<AllowedOrigin>https://example.com</AllowedOrigin>
(If you need http, that needs to be listed separately, and each domain you need to allow to access the bucket using CORS needs to be listed.)
This changes S3's behavior to include Access-Control-Allow-Credentials: true. It doesn't appear to be explicitly documented.
Do not use the following alternative, even though it would also work, without understanding the implications.
<AllowedOrigin>https://*</AllowedOrigin>
This also results in Access-Control-Allow-Credentials: true, so it "works", but it allows cross-origin requests from anywhere, which you likely do not want. With that said, bear in mind that CORS is nothing more than a permissions mechanism that applies only to well-behaved, non-malicious web browsers. Restricting the allowed origins to the correct domain is important, but it does not magically secure your content against unauthorized access from elsewhere. I suspect you are aware of this, but it is important to keep in mind.
After these changes, you'll need to clear the browser cache and invalidate the CloudFront cache, and re-test. Once the CORS headers are being set correctly, your browser should send cookies, and the issue should be resolved.
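To make the distinction concrete, here is a simplified model of the origin-matching behaviour described above: a literal * is returned as-is with no credentials header, while a specific (or wildcarded) AllowedOrigin is echoed back together with Access-Control-Allow-Credentials: true. This is an illustration of the logic, not S3's actual implementation:

```python
# Sketch: a simplified model of how S3 matches the request Origin against
# AllowedOrigin entries (each entry may contain at most one "*" wildcard).
def match_allowed_origin(request_origin, allowed_origins):
    """Return the CORS response headers S3 would emit, or {} on no match."""
    for allowed in allowed_origins:
        if allowed == "*":
            # A literal "*" matches everything but disallows credentials.
            return {"Access-Control-Allow-Origin": "*"}
        head, star, tail = allowed.partition("*")
        if (star and request_origin.startswith(head)
                and request_origin.endswith(tail)) or allowed == request_origin:
            # A specific or wildcarded origin is echoed back,
            # and credentialed requests are permitted.
            return {"Access-Control-Allow-Origin": request_origin,
                    "Access-Control-Allow-Credentials": "true"}
    return {}

print(match_allowed_origin("https://example.com", ["https://example.com"]))
print(match_allowed_origin("https://evil.test", ["https://example.com"]))
```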

Fine Uploader cannot draw thumbnail from amazon S3

I have a form with a Fine Uploader and I am loading an initial file list (as described here)
For the list of initial files I am also returning the thumbnailUrl which points to my files in Amazon's S3.
Now I see that Fine Uploader is actually making an HTTP request to S3 and gets a 200 OK, but the thumbnail is not displayed, and this is what I see in the console:
[Fine Uploader 5.1.3] Attempting to update thumbnail based on server response.
[Fine Uploader 5.1.3] Problem drawing thumbnail!
Response from my server:
{"name": 123, "uuid": "...", "thumbnailUrl": "...."}
Now Fine Uploader makes a GET request to S3 to the URL specified in the thumbnailUrl property. The request goes like this:
curl "HERE_IS_MY_URL" -H "Host: s3.eu-central-1.amazonaws.com" -H "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0" -H "Accept: image/png,image/*;q=0.8,*/*;q=0.5" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "DNT: 1" -H "Referer: http://localhost:9000/edititem/65" -H "Origin: http://localhost:9000" -H "Connection: keep-alive" -H "Cache-Control: max-age=0"
Response: 200 OK with Content-Type application/octet-stream
Is there any configuration option for Fine Uploader that I am missing? Could it be that this is a CORS-related issue?
Fine Uploader loads thumbnails at the URL returned by your initial file list endpoint using an ajax request (XMLHttpRequest) in modern browsers. It does this so it can scale and properly orient the image preview.
You'll need a CORS rule on your S3 bucket that allows JS access via a GET request. It will look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
Of course, you may need to allow other origins/headers/methods depending on whatever else you are doing with S3.

Uploadifive and Amazon s3 - Origin is not allowed by Access-Control-Allow-Origin

I am trying to get Uploadifive (the HTML5 version of Uploadify) to work with Amazon S3. We already have Uploadify working, but a lot of our visitors use Safari without Flash, so we need Uploadifive as well.
I am trying to make a POST, but the problem is that the pre-flight OPTIONS request that Uploadifive sends gets a "403 Origin is not allowed by Access-Control-Allow-Origin".
The CORS rules on Amazon are set to allow the * origin, so I see no reason for Amazon to refuse the request (and note that it accepts the requests coming from Flash, although I don't know if Flash sends an OPTIONS request before the POST). If I haven't made some big mistake in my settings on Amazon, I assume this has something to do with Uploadifive not being set up for cross-origin requests, but I can find no info on how to check this or how to change the headers sent with the request.
Has anyone tried using Uploadifive with Amazon S3, and how have you gotten around this problem?
My S3 CORS setting:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
Edit:
After testing Chrome with the --disable-web-security flag I stopped getting 403s, so it does seem that Uploadifive is not setting the cross-domain headers properly. The question has now become: how do you modify the cross-domain settings of Uploadifive?
Victory!
After banging my head against the wall for a few hours I found two errors.
1) Any headers (beyond the most basic ones) that you want to send to Amazon must be specified in the CORS settings via the AllowedHeader tag. So I changed the POST part of my settings on Amazon to this:
<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>POST</AllowedMethod>
  <AllowedHeader>*</AllowedHeader>
</CORSRule>
2) Uploadifive was adding the "file" field first in the formData, but Amazon requires that it be the last field. So I modified the Uploadifive JS to append the file field last. In 1.1.1 this was around line 393, and this is the change:
Before:
// Add the form data
formData.append(settings.fileObjName, file);
// Add the rest of the formData
for (var i in settings.formData) {
  formData.append(i, settings.formData[i]);
}
After:
// Add the rest of the formData
for (var i in settings.formData) {
  formData.append(i, settings.formData[i]);
}
// Add the form data
formData.append(settings.fileObjName, file);
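The reason the reordering matters: S3's browser-based POST API ignores any form fields that appear after the file field, so the file must be the last part of the request body. A sketch of the required ordering (the field names and values here are illustrative, not taken from Uploadifive):

```python
# Sketch: build the multipart field list in an order S3's POST API accepts:
# policy fields first, the file field strictly last.
def build_post_fields(form_data, file_obj_name, file_payload):
    """Return (name, value) pairs with the file as the final field."""
    fields = list(form_data.items())              # key, policy, signature, ...
    fields.append((file_obj_name, file_payload))  # the file must come last
    return fields

fields = build_post_fields(
    {"key": "uploads/photo.jpg", "policy": "<BASE64 POLICY>",
     "signature": "<SIGNATURE>"},
    "file",
    b"<binary file contents>",
)
print([name for name, _ in fields])  # → ['key', 'policy', 'signature', 'file']
```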
This solved the issue for me. There might still be some work to do in order for Uploadify to understand the response, but the upload itself now works and returns a 201 Created as it should.

Does Amazon S3 need time to update CORS settings? How long?

Recently I enabled Amazon S3 + CloudFront to serve as CDN for my rails application. In order to use font assets and display them in Firefox or IE, I have to enable CORS on my S3 bucket.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Then I ran curl -I https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz and got:
HTTP/1.1 200 OK
x-amz-id-2: Ovs0D578kzW1J72ej0duCi17lnw+wZryGeTw722V2XOteXOC4RoThU8t+NcXksCb
x-amz-request-id: 52E934392E32679A
Date: Tue, 04 Jun 2013 02:34:50 GMT
Cache-Control: public, max-age=31557600
Content-Encoding: gzip
Expires: Wed, 04 Jun 2014 08:16:26 GMT
Last-Modified: Tue, 04 Jun 2013 02:16:26 GMT
ETag: "723791e0c993b691c442970e9718d001"
Accept-Ranges: bytes
Content-Type: text/javascript
Content-Length: 39140
Server: AmazonS3
Should I see 'Access-Control-Allow-Origin' somewhere? Does S3 take time to update CORS settings? Can I force the headers to expire if it is caching them?
Try sending the Origin header:
$ curl -v -H "Origin: http://example.com" -X GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz > /dev/null
The output should then show the CORS response headers you are looking for:
< Access-Control-Allow-Origin: http://example.com
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Additional information about how to debug CORS requests with cURL can be found here:
How can you debug a CORS request with cURL?
Note that there are different types of CORS requests (simple and preflight), a nice tutorial about the differences can be found here:
http://www.html5rocks.com/en/tutorials/cors/
Hope this helps!
Try these:
Try to scope down the domain names you want to allow access to. S3 doesn't like *.
CloudFront + S3 doesn't handle the CORS configuration correctly out of the box. A kludge is to append a query string containing the name of the referring domain, and explicitly enable query string forwarding in your CloudFront distribution settings.
To answer the actual question in the title:
No, S3 does not seem to take any time to propagate the CORS settings. (as of 2019)
However, if you're using Chrome (and maybe others), then CORS settings may be cached by the browser so you won't necessarily see the changes you expect if you just do an ordinary browser refresh. Instead right click on the refresh button and choose "Empty Cache and Hard Reload" (as of Chrome 73). Then the new CORS settings will take effect within <~5 seconds of making the change in the AWS console. (It may be much faster than that. Haven't tested.) This applies to a plain S3 bucket. I don't know how CloudFront affects things.
(I realize this question is 6 years old and may have involved additional technical issues that other people have long since answered, but when you search for the simple question of propagation times for CORS changes, this question is what pops up first, so I think it deserves an answer that addresses that.)
You have a few problems with the way you test CORS.
Your CORS configuration does not include a HEAD method.
Your curl command does not send an Origin header via -H.
I am able to get your data by using curl as follows. However, it dumped garbage on my screen because your data is compressed binary.
curl --request GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz -H "Origin: http://google.com"