403 Error when using fetch to call Cloudfront S3 endpoint with custom domain and signed cookies

I'm trying to create a private endpoint for an S3 bucket via Cloudfront using signed cookies. I've been able to successfully create a signed cookie function in Lambda that adds a cookie for my root domain.
However, when I call the Cloudfront endpoint for the S3 file I'm trying to access, I am getting a 403 error. To make things weirder, I'm able to copy & paste the URL into the browser and can access the file.
We'll call my root domain example.com. My cookie domain is .example.com, my development app URL is test.app.example.com and my Cloudfront endpoint URL is tilesets.example.com
Upon inspection of the call, it seems that the cookies aren't being sent. This is strange because my fetch call has credentials: "include" and I'm calling a subdomain of the cookie domain.
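For reference, the fetch call looks roughly like this (a minimal sketch; the object path and response handling are placeholders):
// Sketch of the failing call; the metadata.json path below is a placeholder.
// credentials: "include" asks the browser to attach the CloudFront-* cookies
// set for .example.com when calling the tilesets.example.com subdomain.
fetch("https://tilesets.example.com/tiles/metadata.json", {
  method: "GET",
  credentials: "include",
})
  .then((res) => {
    if (!res.ok) {
      throw new Error("Request failed with status " + res.status);
    }
    return res.json();
  })
  .then((metadata) => console.log(metadata))
  .catch((err) => console.error(err));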
Configuration below:
S3:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>https://*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Cloudfront: (distribution settings were provided as a screenshot and are not reproduced here)
Not sure what I could be doing wrong here. It's especially weird that it works when I go directly to the link in the browser but not when I fetch, so I'm guessing it's a CORS issue.
I've been logging the calls to Cloudfront, and as you can see, the cookies aren't being sent when using fetch in my main app:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:38:40 IAD79-C3 369 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 403 https://test.app.<ROOT DOMAIN>/ Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - - Error 5kPxZkH8n8dVO57quWHurLscLDyrOQ0L-M2e0q6X5MOe6K9Hr3wCwQ== tilesets.<ROOT DOMAIN> https 281 0.000 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/2.0 - -
Whereas when I go to the URL directly in the browser:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:32:38 IAD79-C1 250294 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 200 - Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - CloudFront-Signature=<SIGNATURE>;%2520CloudFront-Key-Pair-Id=<KEY PAIR>;%2520CloudFront-Policy=<POLICY> Miss gRkIRkKtVs3WIR-hI1fDSb_kTfwH_S2LsJhv9bmywxm_MhB7E7I8bw== tilesets.<ROOT DOMAIN> https 813 0.060 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/2.0 - -
Any thoughts?

You have correctly diagnosed that the issue is that your cookies aren't being sent.
A cross-origin request won't include cookies, even with credentials: "include", unless the origin server also grants permission in its response headers:
Access-Control-Allow-Credentials: true
And the way to get S3 to allow that is not obvious, but I stumbled on the solution following a lead found in this answer.
Modify your bucket's CORS configuration to remove this:
<AllowedOrigin>*</AllowedOrigin>
...and add this instead, specifically listing the origin of the page that makes the request (from your description, that's the app subdomain):
<AllowedOrigin>https://test.app.example.com</AllowedOrigin>
(If you need http, that needs to be listed separately, and each origin that needs CORS access to the bucket has to be listed. S3 also accepts a single wildcard in AllowedOrigin, e.g. https://*.example.com, if you want to cover every subdomain.)
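Putting it together, the bucket's CORS configuration would look something like this (a sketch using the example domains above; the methods are copied from your existing rule):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<!-- the exact origin of the page issuing the fetch (placeholder domain) -->
<AllowedOrigin>https://test.app.example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>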
This changes S3's behavior to include Access-Control-Allow-Credentials: true. It doesn't appear to be explicitly documented.
Do not use the following alternative, even though it would also work, without understanding the implications.
<AllowedOrigin>https://*</AllowedOrigin>
This also results in Access-Control-Allow-Credentials: true, so it "works" -- but it allows cross-origin requests from anywhere, which you likely do not want. With that said, bear in mind that CORS is nothing more than a permissions mechanism honored only by well-behaved, non-malicious web browsers -- so restricting the allowed origin to the correct domain is important, but it does not magically secure your content against unauthorized access from elsewhere. I suspect you are aware of this, but it is important to keep in mind.
After these changes, you'll need to clear the browser cache and invalidate the CloudFront cache, and re-test. Once the CORS headers are being set correctly, your browser should send cookies, and the issue should be resolved.
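If you want to confirm the headers outside the browser, a quick command-line check looks something like this (a sketch; the path and cookie values are placeholders, and it assumes the distribution forwards the Origin header to the origin):
curl -s -D - -o /dev/null \
  -H "Origin: https://test.app.example.com" \
  -H "Cookie: CloudFront-Policy=<POLICY>; CloudFront-Signature=<SIGNATURE>; CloudFront-Key-Pair-Id=<KEY PAIR>" \
  "https://tilesets.example.com/tiles/metadata.json"
With the CORS change in place, the response headers should include Access-Control-Allow-Origin: https://test.app.example.com and Access-Control-Allow-Credentials: true.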

Related

How do I configure apache-traffic-server to forward an http request to an https remote server?

I have an esp8266 which was directly sending http requests to http://fcm.googleapis.com/fcm/send, but since google seems to have stopped allowing requests to be sent via http, I need to find a new solution.
I started down the path of having the esp8266 send the request via https directly, and while it works in a small example, the memory footprint required for the https request is too much in my full application and I end up crashing the esp8266. While there are still some avenues to explore that might let me keep sending messages to the server directly, I think I would like to solve this by sending the request via http to a local "server" raspberry pi, and have that send the request on via https.
While I could run a small web server and some code to handle the requests, it seems like this is exactly the kind of thing traffic-server should be able to do for me.
I thought this would be a one-liner. I added the following to the remap.config file:
redirect http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
where 192.168.86.77 is the local address of my raspberry pi.
When I send requests to http://192.168.86.77/fcm/send:8080 I get back the following:
HTTP/1.1 404 Not Found
Date: Fri, 20 Sep 2019 16:22:14 GMT
Server: Apache/2.4.10 (Raspbian)
Content-Length: 288
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /fcm/send:8080 was not found on this server.</p>
<hr>
<address>Apache/2.4.10 (Raspbian) Server at localhost Port 80</address>
</body></html>
I think 8080 is the right port.
I am guessing this is not the one-liner I thought it would be.
Is this a good fit for apache-traffic-server?
Can someone point me to what I am doing wrong and what is the right way to accomplish my goal?
Update:
Based on Miles Libbey answer below, I needed to make the following update to the Arduino/esp8266 code.
Change:
http_.begin("http://fcm.googleapis.com/fcm/send");
To:
http_.begin("192.168.86.77", 8080, "http://192.168.86.77/fcm/send");
where http_ is the instance of the HTTPClient
And after installing trafficserver on my raspberry pi, I needed to add the following two lines to /etc/trafficserver/remap.config:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
reverse_map https://fcm.googleapis.com/fcm/send http://192.168.86.77/fcm/send
Note that the reverse_map line is only needed if you want to get feedback from fcm, i.e., whether the post was successful or not.
I would try a few changes:
- I'd use map instead of redirect:
map http://192.168.86.77/fcm/send https://fcm.googleapis.com/fcm/send
redirect is meant to send your client a 301, which the client would then have to follow, and that sounds like it would defeat your purpose. map has ATS do the proxying itself.
- I think your curl may have been off -- the port usually goes right after the host, e.g. curl "http://192.168.86.77:8080/fcm/send" (and probably better: curl -x 192.168.86.77:8080 "http://192.168.86.77/fcm/send", so that the port isn't part of the URL being remapped).
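With both of those changes in place, a quick test from another machine might look something like this (a sketch using the legacy FCM HTTP API; the server key and device token are placeholders, and it assumes trafficserver is listening on port 8080):
curl -v -x 192.168.86.77:8080 "http://192.168.86.77/fcm/send" \
  -H "Authorization: key=<SERVER KEY>" \
  -H "Content-Type: application/json" \
  --data '{"to":"<DEVICE TOKEN>","notification":{"title":"test","body":"hello from curl"}}'
If the map rule is working, ATS forwards the request over https to fcm.googleapis.com and relays the response back.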

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using haproxy to balance a cluster of servers. I am attempting to add a maintenance page to the haproxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. Question I have is, how can I use a maintenance page hosted remotely on AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. the haproxy server 'redir' definition).
If I have servers: a, b, c. All servers go down for maintenance then I want all requests to be resolved by server definition d (which is labeled with 'backup') to a static address on S3. Note, that I don't want paths to carry over and be evaluated on s3, it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
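Pulled together, the relevant part of a backend might look something like this (a sketch; the primary server names and addresses are placeholders, and the conditions are the same two tests explained above):
backend www
    # primary application servers (placeholder names and addresses)
    server a 10.0.0.11:80 check
    server b 10.0.0.12:80 check
    server c 10.0.0.13:80 check
    # S3 static website endpoint, used only when the servers above are down
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
    # rewrite the request only when we're down to the fallback server
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    # and mark the response as a non-cacheable outage page
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }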
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
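As a rough illustration of that last point (a sketch only; the nameserver address is a placeholder, and the full details are in the resolvers section of the manual):
resolvers mydns
    nameserver local 10.0.0.2:53
    hold valid 10s
# then reference it on the server line so the S3 endpoint can be re-resolved at runtime
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup check resolvers mydns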
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that even though you want to "transparently proxy a request," I don't think "transparent proxy" is the right phrase for what you're trying to do. A transparent proxy implies that the client, the server, or both see each other's IP addresses on the connection and believe they are communicating directly, with no proxy in between, thanks to some skullduggery done by the proxy and/or the network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

Cloudfront - cannot invalidate objects that used to return 403

The setting
I have an Amazon CloudFront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format:
https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC
The distribution points to an S3 bucket that also used to be secured (it only allowed access through the CloudFront distribution).
What happened
At some point, the URL signing expired and requests started returning a 403.
Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it points to, making both public.
I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, it did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403.
The response header looks like:
HTTP/1.1 403 Forbidden
Server: CloudFront
Date: Mon, 18 Aug 2014 15:16:08 GMT
Content-Type: text/xml
Content-Length: 110
Connection: keep-alive
X-Cache: Error from cloudfront
Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
Things I tried
I tried setting up another CloudFront distribution that points to the same S3 bucket as its origin. Requests to the same object through the new distribution were successful.
The question
Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated?
Thanks for your help!
First, check that the invalidation is not still in progress. If it is, wait until it completes.
If you are accessing the S3 object through CloudFront using a public URL, then you need public read permission on that S3 object.
If you are accessing the S3 object through CloudFront using a signed URL, then make sure the expiry time specified when generating the signed URL is later than the current time.
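If it helps, the invalidation status and the object permissions can both be checked from the AWS CLI (a sketch; the distribution ID, bucket, and key are placeholders):
# list invalidations for the distribution and their status (InProgress / Completed)
aws cloudfront list-invalidations --distribution-id E1ABCDEFGHIJKL
# issue a fresh invalidation for everything under /images/
aws cloudfront create-invalidation --distribution-id E1ABCDEFGHIJKL --paths "/images/*"
# grant public read on the object if it is now meant to be served without signing
aws s3api put-object-acl --bucket my-bucket --key images/TheImage.jpg --acl public-read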

Uploadifive and Amazon s3 - Origin is not allowed by Access-Control-Allow-Origin

I am trying to get Uploadifive (the HTML5 version of Uploadify) to work with Amazon S3. We already have Uploadify working, but a lot of our visitors use Safari without Flash, so we need Uploadifive as well.
I am looking to make a POST but the problem is that the pre-flight OPTIONS request that Uploadifive sends gets a "403 Origin is not allowed by Access-Control-Allow-Origin".
The CORS rules on Amazon are set to allow * origin, so I see no reason for Amazon to refuse the request (and note that it accepts the requests coming from Flash, although I don't know if Flash sends an OPTIONS request before the POST). If I haven't made some big mistake in my settings on Amazon I assume this has something to do with Uploadifive not being set up for cross-origin requests, but I can find no info on how to check/do this or even how to change the headers sent on the request.
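For reference, a successful preflight exchange looks roughly like this (a sketch of the relevant headers only; the bucket, key, and origin are placeholders) -- in my case the OPTIONS request gets a 403 instead:
OPTIONS /uploads/myfile.jpg HTTP/1.1
Host: mybucket.s3.amazonaws.com
Origin: https://www.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://www.example.com
Access-Control-Allow-Methods: POST
Access-Control-Allow-Headers: content-type
The browser only sends the real POST if the preflight response includes headers like these.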
Has anyone tried using Uploadifive with Amazon S3, and how have you gotten around this problem?
My S3 CORS setting:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
</CORSRule>
</CORSConfiguration>
Edit:
After testing Chrome with the --disable-web-security flag I stopped getting 403s, so it does seem like Uploadifive is not setting cross-domain headers properly. The question has now become: how do you modify the cross-domain settings of Uploadifive?
Victory!
After banging my head against the wall for a few hours I found two errors.
1) Any headers (beyond the most basic ones) that you want to send to Amazon must be specified in the CORS settings via the AllowedHeader tag. So I changed the POST part of my settings on Amazon to this:
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
2) Uploadifive was adding the "file" field first in the formData, but Amazon requires that it be the last field. So I modified the Uploadifive js to add the file field last. In 1.1.1 this was around line 393, and this is the change:
Before:
// Add the form data
formData.append(settings.fileObjName, file);
// Add the rest of the formData
for (var i in settings.formData) {
formData.append(i, settings.formData[i]);
}
After:
// Add the rest of the formData
for (var i in settings.formData) {
formData.append(i, settings.formData[i]);
}
// Add the form data
formData.append(settings.fileObjName, file);
This solved the issue for me. There might still be some work to do in order for Uploadify to understand the response, but the upload itself now works and returns a 201 Created as it should.

Does Amazon S3 need time to update CORS settings? How long?

Recently I enabled Amazon S3 + CloudFront to serve as CDN for my rails application. In order to use font assets and display them in Firefox or IE, I have to enable CORS on my S3 bucket.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Then I ran curl -I https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz and got:
HTTP/1.1 200 OK
x-amz-id-2: Ovs0D578kzW1J72ej0duCi17lnw+wZryGeTw722V2XOteXOC4RoThU8t+NcXksCb
x-amz-request-id: 52E934392E32679A
Date: Tue, 04 Jun 2013 02:34:50 GMT
Cache-Control: public, max-age=31557600
Content-Encoding: gzip
Expires: Wed, 04 Jun 2014 08:16:26 GMT
Last-Modified: Tue, 04 Jun 2013 02:16:26 GMT
ETag: "723791e0c993b691c442970e9718d001"
Accept-Ranges: bytes
Content-Type: text/javascript
Content-Length: 39140
Server: AmazonS3
Should I see 'Access-Control-Allow-Origin' somewhere? Does S3 take time to update CORS settings? Can I force the headers to expire if it is caching them?
Try sending the Origin header:
$ curl -v -H "Origin: http://example.com" -X GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz > /dev/null
The output should then show the CORS response headers you are looking for:
< Access-Control-Allow-Origin: http://example.com
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Additional information about how to debug CORS requests with cURL can be found here:
How can you debug a CORS request with cURL?
Note that there are different types of CORS requests (simple and preflight), a nice tutorial about the differences can be found here:
http://www.html5rocks.com/en/tutorials/cors/
Hope this helps!
Try these:
Try to scope-down the domain names you want to allow access to. S3 doesn't like *.
CloudFront + S3 doesn't handle the CORS configuration correctly out of the box. A kludge is to append a query string containing the name of the referring domain, and explicitly enable support for query strings in your CloudFront distribution settings.
To answer the actual question in the title:
No, S3 does not seem to take any time to propagate the CORS settings. (as of 2019)
However, if you're using Chrome (and maybe others), then CORS settings may be cached by the browser so you won't necessarily see the changes you expect if you just do an ordinary browser refresh. Instead right click on the refresh button and choose "Empty Cache and Hard Reload" (as of Chrome 73). Then the new CORS settings will take effect within <~5 seconds of making the change in the AWS console. (It may be much faster than that. Haven't tested.) This applies to a plain S3 bucket. I don't know how CloudFront affects things.
(I realize this question is 6 years old and may have involved additional technical issues that other people have long since answered, but when you search for the simple question of propagation times for CORS changes, this question is what pops up first, so I think it deserves an answer that addresses that.)
You have a few problems with the way you test CORS.
Your CORS configuration does not have a HEAD method.
Your curl command does not send an Origin header via -H.
I am able to get your data by using curl like the following. However, it dumped garbage on my screen because your data is gzip-compressed.
curl --request GET https://small-read-staging-assets.s3.amazonaws.com/staging/assets/settings_settings-312b7230872a71a534812e770ec299bb.js.gz -H "Origin: http://google.com"