Uploadifive and Amazon s3 - Origin is not allowed by Access-Control-Allow-Origin - amazon-s3

I am trying to get Uploadifive (the HTML5 version of Uploadify) to work with Amazon S3. We already have Uploadify working, but a lot of our visitors use Safari without Flash, so we need Uploadifive as well.
I am looking to make a POST, but the problem is that the pre-flight OPTIONS request that Uploadifive sends gets a 403 "Origin is not allowed by Access-Control-Allow-Origin".
The CORS rules on Amazon are set to allow * origin, so I see no reason for Amazon to refuse the request (and note that it accepts the requests coming from Flash, although I don't know if Flash sends an OPTIONS request before the POST). Unless I have made some big mistake in my settings on Amazon, I assume this has something to do with Uploadifive not being set up for cross-origin requests, but I can find no info on how to check or change this, or even how to change the headers sent on the request.
Has anyone tried using Uploadifive with Amazon S3, and how have you gotten around this problem?
My S3 CORS setting:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
Edit:
After testing Chrome with the --disable-web-security flag I stopped getting 403s, so it does seem that Uploadifive is not setting the cross-domain headers properly. The question has now become: how do you modify the cross-domain settings of Uploadifive?

Victory!
After banging my head against the wall for a few hours I found two errors.
1) Any headers (beyond the most basic ones) that you want to send to Amazon must be specified in the CORS settings via the AllowedHeader tag. So I changed the POST part of my settings on Amazon to this:
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
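The preflight logic behind this fix can be sketched as follows. This is a simplified model of the server-side check, not S3's actual code, and it only handles a literal `*` or exact header names (S3 also supports prefix wildcards like `x-amz-*`):

```javascript
// Simplified sketch of the check that was failing: the OPTIONS request lists
// its extra headers in Access-Control-Request-Headers, and the server only
// approves the preflight if every requested header is covered by an
// AllowedHeader rule ("*" covers everything).
function preflightHeadersAllowed(allowedHeaders, requestedHeaders) {
  return requestedHeaders.every(function (req) {
    return allowedHeaders.some(function (allowed) {
      return allowed === '*' || allowed.toLowerCase() === req.toLowerCase();
    });
  });
}
```

With only `Authorization` allowed, a preflight asking for any other header is refused, which is why widening the rule to `*` made the 403 go away.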
2) Uploadifive was adding the "file" field first in the formData, but Amazon requires that it be the last field. So I modified the Uploadifive JS to add the file field last. In 1.1.1 this was around line 393, and this is the change:
Before:
// Add the form data
formData.append(settings.fileObjName, file);
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
After:
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
// Add the form data
formData.append(settings.fileObjName, file);
This solved the issue for me. There might still be some work to do in order for Uploadify to understand the response, but the upload itself now works and returns a 201 Created as it should.
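The ordering requirement can be captured in a small helper. This is a hypothetical illustration, not part of Uploadifive:

```javascript
// Hypothetical helper illustrating the field order S3's POST API expects:
// every policy/signature field first, the file field last. S3 ignores any
// form field that appears after the "file" part, so a file-first FormData
// silently drops the credentials.
function orderedFields(formFields, fileObjName, file) {
  var entries = [];
  // Add the rest of the formData first...
  for (var key in formFields) {
    entries.push([key, formFields[key]]);
  }
  // ...then add the file last, as S3 requires.
  entries.push([fileObjName, file]);
  return entries;
}
```

Appending each returned pair to a FormData in order reproduces the fixed behavior above.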

Related

Amazon S3 Bucket Setup For Recording Sinch Calls

I have contacted Sinch support and they have informed me that my access key, secret key, and bucket name are ready to go from their point of view.
My next step (I believe) is to configure the Amazon S3 bucket itself.
I'm not sure if this is through
1. the bucket Public Access (I have "Block all Public Access" turned off for this bucket)
2. the CORS configuration
3. the ACL, or
4. the Bucket Control Policy
Any guidance would be greatly appreciated from the Amazon S3 bucket side of things.
I have set my bucket up with the following CORS configuration and would like some insight into whether it is correct for working with Sinch. I don't know if the AllowedOrigin for Sinch is correct.
I have also pasted my callback ICE response in PHP if there are errors there for recording.
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://www.mywebsite.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Below is my ICE response. It does not connect calls when I have it configured this way, but I'm not sure if it's due to an ICE response structure error or an Amazon bucket setup error.
I have been using this documentation link but find it somewhat confusing:
https://developers.sinch.com/docs/voice-rest-api-recording
$ice_response = array(
    "instructions" => array(
        array("name" => "Say", "text" => "Hello listeners", "locale" => "en-US"),
        array(
            "name" => "StartRecording",
            "options" => array(
                "destinationUrl" => "s3://mybucketname/test-file.mp3",
                "credentials" => "accesskey:secretkey:region_code",
                "format" => "mp3",
                "notificationEvents" => true
            )
        ),
        array("name" => "Say", "text" => "Recording started", "locale" => "en-US")
    ),
    "action" => array(
        "name" => "connectConf",
        "conferenceId" => $post["to"]["endpoint"],
        "record" => true
    )
);
echo json_encode($ice_response);

403 Error when using fetch to call Cloudfront S3 endpoint with custom domain and signed cookies

I'm trying to create a private endpoint for an S3 bucket via CloudFront using signed cookies. I've been able to successfully create a signed-cookie function in Lambda that adds a cookie for my root domain.
However, when I call the Cloudfront endpoint for the S3 file I'm trying to access, I am getting a 403 error. To make things weirder, I'm able to copy & paste the URL into the browser and can access the file.
We'll call my root domain example.com. My cookie domain is .example.com, my development app URL is test.app.example.com, and my CloudFront endpoint URL is tilesets.example.com.
Upon inspection of the call, it seems that the cookies aren't being sent. This is strange because my fetch call has credentials: "include" and I'm calling a subdomain of the cookie domain.
Configuration below:
S3:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
CloudFront: (configuration screenshot not preserved)
Not sure what I could be doing wrong here. It's especially weird that it works when I go directly to the link in the browser but not when I fetch, so I'm guessing it's a CORS issue.
I've been logging the calls to Cloudfront, and as you can see, the cookies aren't being sent when using fetch in my main app:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:38:40 IAD79-C3 369 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 403 https://test.app.<ROOT DOMAIN>/ Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - - Error 5kPxZkH8n8dVO57quWHurLscLDyrOQ0L-M2e0q6X5MOe6K9Hr3wCwQ== tilesets.<ROOT DOMAIN> https 281 0.000 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/2.0 - -
Whereas when I go to the URL directly in the browser:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:32:38 IAD79-C1 250294 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 200 - Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - CloudFront-Signature=<SIGNATURE>;%2520CloudFront-Key-Pair-Id=<KEY PAIR>;%2520CloudFront-Policy=<POLICY> Miss gRkIRkKtVs3WIR-hI1fDSb_kTfwH_S2LsJhv9bmywxm_MhB7E7I8bw== tilesets.<ROOT DOMAIN> https 813 0.060 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/2.0 - -
Any thoughts?
You have correctly diagnosed that the issue is that your cookies aren't being sent.
A cross-origin request won't include cookies with credentials: "include" unless the origin server also includes permission in its response headers:
Access-Control-Allow-Credentials: true
And the way to get S3 to allow that is not obvious, but I stumbled on the solution following a lead found in this answer.
Modify your bucket's CORS configuration to remove this:
<AllowedOrigin>*</AllowedOrigin>
...and add this, instead, specifically listing the origin you want to allow to access your bucket (from your description, this will be the parent domain):
<AllowedOrigin>https://example.com</AllowedOrigin>
(If you need http, that needs to be listed separately, and each domain you need to allow to access the bucket using CORS needs to be listed.)
This changes S3's behavior to include Access-Control-Allow-Credentials: true. It doesn't appear to be explicitly documented.
Do not use the following alternative without understanding the implications, even though it would also work:
<AllowedOrigin>https://*</AllowedOrigin>
This also results in Access-Control-Allow-Credentials: true, so it "works", but it allows cross-origin requests from anywhere, which you likely do not want. With that said, bear in mind that CORS is nothing more than a permissions mechanism honored only by well-behaved, non-malicious web browsers. Setting the allowed origin to only the correct domain is important, but it does not magically secure your content against unauthorized access from elsewhere. I suspect you are aware of this, but it is important to keep in mind.
After these changes, you'll need to clear the browser cache and invalidate the CloudFront cache, and re-test. Once the CORS headers are being set correctly, your browser should send cookies, and the issue should be resolved.
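The browser-side rule at work here can be sketched as follows. This is a simplified model of the CORS credentialed-response check, not a real browser API:

```javascript
// Simplified model of how a browser vets a credentialed CORS response:
// with fetch(..., { credentials: "include" }), the response is only made
// available if Access-Control-Allow-Credentials is "true" AND
// Access-Control-Allow-Origin echoes the exact requesting origin.
// A literal "*" is rejected for credentialed requests.
function credentialedResponseAllowed(responseHeaders, requestOrigin) {
  var acao = responseHeaders['access-control-allow-origin'];
  var acac = responseHeaders['access-control-allow-credentials'];
  return acac === 'true' && acao === requestOrigin;
}
```

This is why the `<AllowedOrigin>*</AllowedOrigin>` rule fails for credentialed fetches: S3 then emits a literal `*` with no `Access-Control-Allow-Credentials` header, and the browser refuses the exchange.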

Font requests to CloudFront are "(cancelled)"

I have a CloudFront distribution (static.example.com) which has an S3 bucket as its origin. I am using this distribution to store all the artifacts for client code (JavaScript files, stylesheets, images, and fonts).
All the requests to JavaScript files, stylesheets, and images succeed without any problem; however, requests to font files have the status cancelled in Google Chrome.
Here is how I request those fonts:
@font-face {
    font-family: "Material Design Icons";
    font-style: normal;
    font-weight: 400;
    src: url(https://static.example.com/5022976817.eot);
    src: url(https://static.example.com/5022976817.eot) format("embedded-opentype"),
         url(https://static.example.com/611c53b432.woff2) format("woff2"),
         url(https://static.example.com/ee55b98c3c.woff) format("woff"),
         url(https://static.example.com/cd8784a162.ttf) format("truetype"),
         url(https://static.example.com/73424aa64e.svg) format("svg");
}
The request for the SVG font file succeeds, but the other ones do not.
What have I done wrong? Every file in the S3 bucket has public-read ACL.
It seems like this is an issue with CORS preventing the fonts from loading from a different domain.
You need to:
Enable CORS on your S3 bucket:
Go to the S3 bucket, and on the Permissions tab select CORS configuration, then add a rule with your AllowedOrigin:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://static.example.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
You can add multiple AllowedOrigin entries:
<AllowedOrigin>https://static.example.com</AllowedOrigin>
<AllowedOrigin>http://static.example.com</AllowedOrigin>
<AllowedOrigin>https://example.com</AllowedOrigin>
<AllowedOrigin>http://otherexample.com</AllowedOrigin>
Or use a wildcard:
<AllowedOrigin>*.example.com</AllowedOrigin>
Whitelist the appropriate headers on CloudFront to be passed to S3:
Go to the Behaviors tab on your CloudFront distribution, select Create Behavior, and add the pattern you want:
Path Pattern: 5022976817.eot
Cache Based on Selected Request Headers: Whitelist
Add the following headers to the whitelisted headers:
Access-Control-Request-Headers
Access-Control-Request-Method
Origin
You can test that CORS is working properly with curl:
curl -X GET -H "Origin: https://static.example.com" -I https://static.example.com/5022976817.eot
Everything is ok if you get a response header like:
access-control-allow-origin: https://static.example.com

What is the recommended CORS configuration of hosting Javascript on S3/CF?

I have seen the answers to similar questions, but I am wondering what the best way is, in 2017, to configure CORS for S3/CloudFront if I would like to restrict legitimate access to *.domain.tld. The JavaScript is loaded from CloudFront and renders a web app that makes Ajax requests to api.domain.tld.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*.domain.tld</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>OPTIONS</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Is there anything else I could add to improve on the CORS settings?
The following are the general rules for making a CORS configuration:
1) A valid CORS configuration consists of 0 to 100 CORS rules.
2) Each rule must include at least one origin.
3) An origin may contain at most one wildcard (*).
4) Each rule must include at least one method.
5) The supported methods are: GET, HEAD, PUT, POST, DELETE.
6) Each rule may contain an identifying string of up to 255 characters.
7) Each rule may specify zero or more allowed request headers (which the client may include in the request).
8) Each rule may specify zero or more exposed response headers (which are sent back from the server to the client).
9) Each rule may specify a cache validity time of zero or more seconds. If not included, the client should supply its own default.
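The single-wildcard origin matching in rule 3 can be sketched like this. It is my own illustration of how an AllowedOrigin pattern is compared against the request's Origin header, not S3's actual implementation:

```javascript
// Match a request Origin against an AllowedOrigin pattern containing at
// most one "*". The wildcard matches any run of characters, so
// "https://*.example.com" matches any subdomain of example.com.
function originMatches(allowedOrigin, requestOrigin) {
  var stars = (allowedOrigin.match(/\*/g) || []).length;
  if (stars > 1) throw new Error('at most one wildcard is allowed');
  if (stars === 0) return allowedOrigin === requestOrigin;
  var parts = allowedOrigin.split('*');
  var prefix = parts[0], suffix = parts[1];
  return requestOrigin.length >= prefix.length + suffix.length &&
    requestOrigin.slice(0, prefix.length) === prefix &&
    requestOrigin.slice(requestOrigin.length - suffix.length) === suffix;
}
```

Under this model, `*.domain.tld` also happens to match `https://app.domain.tld`, since the wildcard absorbs the scheme along with the subdomain.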
I recently worked on a JS/CloudFront project, and here is my CORS configuration.
<CORSConfiguration>
    <CORSRule>
        <ID>example.com: Allow PUT &amp; POST with AWS S3 JS SDK</ID>
        <AllowedOrigin>https://www.example.com</AllowedOrigin>
        <AllowedOrigin>http://www.example.com</AllowedOrigin>
        <AllowedOrigin>https://example.com</AllowedOrigin>
        <AllowedOrigin>http://example.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>Origin</AllowedHeader>
        <AllowedHeader>Content-Length</AllowedHeader>
        <AllowedHeader>Content-Type</AllowedHeader>
        <AllowedHeader>Content-MD5</AllowedHeader>
        <AllowedHeader>X-Amz-User-Agent</AllowedHeader>
        <AllowedHeader>X-Amz-Date</AllowedHeader>
        <AllowedHeader>Authorization</AllowedHeader>
        <ExposeHeader>ETag</ExposeHeader>
        <MaxAgeSeconds>1800</MaxAgeSeconds>
    </CORSRule>
    <CORSRule>
        <ID>example.com: Allow GET with AWS S3 JS SDK</ID>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <ExposeHeader>ETag</ExposeHeader>
        <MaxAgeSeconds>1800</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>
You can find more details here.
Thanks

Fine Uploader S3: Refused to get unsafe header "ETag"

I'm trying to upload to S3 with the jQuery Fine Uploader (v3.9.1) and have enabled debugging. All of the parts of the upload succeed, but then I get an error: "Problem asking Amazon to combine the parts!"
I've enabled debug on the console and get the error [Refused to get unsafe header "ETag"], as well as this from Amazon:
Received response status 400 with body:
InvalidPart: One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
eTvPFvkXEm07T17tvZvFacR4vn95EUTqXyoPvlLh1a6AADlc94v7H9.a2jcmow1pjfN1xcdw_xMx60APpXn6rGwhHYtzE0NT90Bs0IVqrkaFHW75yRl5E4nfO3Od6rWZnull0CD2DC02D0870E61R4Kpfe66IDvL44Jx9Aoicxgh9Frqd4qr8ILWHbu5YhlqGomxIBOZvfkgy4R4VsYS1
It seems your Amazon S3 CORS XML configuration file is incorrect. Make sure you add <ExposeHeader>ETag</ExposeHeader> to the <CORSRule> section as detailed below:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
More information is available in the documentation on Amazon S3 servers and the official blog post on the same topic.
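The "Refused to get unsafe header" message follows from the CORS header-exposure rule, which can be modeled roughly as below. The safelist comes from the Fetch standard; `readableHeader` itself is my own hypothetical helper, not a browser API:

```javascript
// Roughly: cross-origin JavaScript may only read a response header if it is
// CORS-safelisted or listed in Access-Control-Expose-Headers. Without
// <ExposeHeader>ETag</ExposeHeader>, reading ETag fails, and the uploader
// cannot collect the part ETags it needs to complete the multipart upload.
var SAFELISTED = ['cache-control', 'content-language', 'content-length',
                  'content-type', 'expires', 'last-modified', 'pragma'];

function readableHeader(name, exposedHeaders) {
  var n = name.toLowerCase();
  if (SAFELISTED.indexOf(n) !== -1) return true;
  return exposedHeaders.some(function (h) { return h.toLowerCase() === n; });
}
```

Once `ETag` is in the bucket's exposed headers, the uploader can read each part's ETag and the "combine the parts" request succeeds.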