CloudFront reports CORS error after saving changes to Cognito user

My setup:
An S3 bucket configured as a static site hosting the web app, distributed by CloudFront. We'll say this domain is client.cloudfront.net.
An Elastic Load Balancer in front of ECS running Node.js server containers, also distributed by CloudFront. Domain = server.cloudfront.net.
In the client, a user can change their personal information. Upon save, this information is passed to my Node server with a PATCH request to server.cloudfront.net/user. We update the Cognito user with this information.
After this occurs, any requests to server.cloudfront.net cause CORS errors:
Access to XMLHttpRequest at 'https://server.cloudfront.net/deals/1' from origin 'https://client.cloudfront.net' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
This lasts for only a minute or two, and then everything starts working again. The only thing I could find was this:
Intermittent 403 CORS Errors (Access-Control-Allow-Origin) With Cloudfront Using Signed URLs To GET S3 Objects
I tried to implement the Lambda@Edge fix as suggested, but that didn't resolve the issue. I honestly don't know how to even debug this...
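One way to start debugging is to replay the browser's preflight yourself and watch the response headers while the failure window is open. A minimal sketch (Node 18+ for the global fetch; the hostnames are the placeholders from the question, and probePreflight is just an illustrative name):

// Replays the CORS preflight the browser sends, so the response headers can
// be compared between the broken window and a healthy period.
const url = "https://server.cloudfront.net/deals/1";

async function probePreflight(): Promise<void> {
  try {
    const res = await fetch(url, {
      method: "OPTIONS",
      headers: {
        Origin: "https://client.cloudfront.net",
        "Access-Control-Request-Method": "GET",
      },
    });
    // A healthy preflight echoes the origin (or *) back in this header.
    console.log(
      new Date().toISOString(),
      res.status,
      res.headers.get("access-control-allow-origin"),
    );
  } catch (err) {
    console.error(new Date().toISOString(), err);
  }
}

// Poll every 5 seconds to catch the one-to-two-minute failure window.
setInterval(probePreflight, 5_000);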

Related

Firebase auth breaks with cross origin isolation (i.e. when using Cross-Origin-Resource-Policy)

I am trying to make a website cross-origin isolated, and I enabled the following headers on my site:
https://web.dev/cross-origin-isolation-guide/
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
Firebase auth uses a call to:
https://<AUTH_DOMAIN>/__/auth/iframe?apiKey=<API_KEY>&appName=[DEFAULT]
This gets blocked once those headers are enabled, which makes authentication fail. The browser reports:
Because your site has the Cross-Origin Embedder Policy (COEP) enabled, each resource must specify a suitable Cross-Origin Resource Policy (CORP). This behavior prevents a document from loading cross-origin resources which don’t explicitly grant permission to be loaded.
To solve this, add the following to the resource’s response header:
Cross-Origin-Resource-Policy: same-site if the resource and your site are served from the same site.
Cross-Origin-Resource-Policy: cross-origin if the resource is served from another location than your website. ⚠️If you set this header, any website can embed this resource.
How does one fix this? It seems like the root issue is that Firebase needs to set a header on their side?
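For resources you serve yourself, the CORP opt-in is a single response header; a minimal Express sketch of the mechanics (the /assets path is hypothetical). It doesn't help with the Firebase-hosted iframe, though, since those responses come from Firebase's servers:

import express from "express";

const app = express();

// The two isolation headers from the guide, applied to every response.
app.use((_req, res, next) => {
  res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
  res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
  next();
});

// Under COEP, each embedded resource must opt in via CORP; for assets
// served from the same site, one header is enough.
app.use(
  "/assets",
  (_req, res, next) => {
    res.setHeader("Cross-Origin-Resource-Policy", "same-site");
    next();
  },
  express.static("assets"),
);

app.listen(3000);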

AWS S3 static hosting: Routing rules don't work with CloudFront

I am using AWS S3 static web hosting for my Vue.js SPA. I have set up routing rules in S3, and they work perfectly fine when I access the site using the S3 static hosting URL. But I have also configured CloudFront to serve it from my custom domain. Since single-page apps need to be routed through index.html, I set up a custom error page in CloudFront to redirect 404 errors to index.html. Now the routing rules I set up in S3 no longer work.
What is the best way to get S3 routing rules to work along with CloudFront custom error page setup for SPA?
I think I am a bit late, but here goes anyway.
Apparently you can't do that if you are using the S3 REST API endpoint (example-bucket.s3.amazonaws.com) as the origin for your CloudFront distribution; you have to use the S3 website URL (example-bucket.s3-website-[region].amazonaws.com) as the origin. Also, objects must be public; you can't lock your bucket to the distribution with an origin access policy.
So,
Objects must be public.
S3 bucket website option must be turned on.
Distribution origin has to come from the S3 website URL, not the REST API endpoint.
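If it helps to see the shape of that setup, here is a rough CDK sketch (TypeScript, CDK v2; the bucket name and region are placeholders). The website endpoint only speaks HTTP, so the origin protocol policy has to say so:

import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import { Construct } from "constructs";

class SpaStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // The S3 website endpoint is used as a plain HTTP custom origin, so S3's
    // routing rules and index documents keep working behind CloudFront.
    new cloudfront.Distribution(this, "SpaDist", {
      defaultBehavior: {
        origin: new origins.HttpOrigin(
          "example-bucket.s3-website-us-east-1.amazonaws.com",
          { protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY },
        ),
      },
    });
  }
}

const app = new cdk.App();
new SpaStack(app, "SpaStack");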
EDIT:
I was mistaken; actually, you can do it with the REST API endpoint too. You only have to create a Custom Error Response in your CloudFront distribution, probably just for the 404 and 403 error codes: set the "Customize Error Response" option to "yes", Response Page Path to "/index.html", and HTTP Response Code to "200". You can find that option inside your distribution, under the Error Pages tab, if you are using the console.
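The same console settings can be expressed in code; a rough CDK sketch (TypeScript, CDK v2; the stack and bucket are assumed to exist elsewhere):

import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const stack: cdk.Stack;  // assumed: defined elsewhere
declare const bucket: s3.Bucket; // assumed: the SPA bucket (REST endpoint)

// Serve /index.html with a 200 whenever the REST endpoint answers 403 or
// 404, mirroring the "Customize Error Response" settings described above.
new cloudfront.Distribution(stack, "SpaDist", {
  defaultBehavior: { origin: new origins.S3Origin(bucket) },
  errorResponses: [
    { httpStatus: 403, responseHttpStatus: 200, responsePagePath: "/index.html" },
    { httpStatus: 404, responseHttpStatus: 200, responsePagePath: "/index.html" },
  ],
});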

How to use Akamai in front of S3 buckets?

I have a static website that is currently hosted on Apache servers. I have an Akamai server which routes requests for my site to those servers. I want to move my static websites to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect directly to your website hosting endpoint with your browser over HTTPS, you will get a timeout error; that lines up with the 504s Akamai returns if it is trying to fetch from the origin over HTTPS.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between bucket and CDN, as long as there are no dots in the name of your bucket.
Or, if you need the website hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, encrypting the traffic inside CloudFront as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but this is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit, it sounds a bit silly, initially, to put CloudFront behind another CDN, but it's really pretty sensible: CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at 4 trigger points in the request processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
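As one concrete example of that header-modification point, an origin-response trigger that stamps a CORS header onto everything leaving the origin might look roughly like this (a sketch using the @types/aws-lambda typings; the wildcard origin is illustrative, not a recommendation):

import { CloudFrontResponseHandler } from "aws-lambda";

// Runs at the origin-response trigger: after the origin answers, before
// CloudFront caches the response, so the injected header is cached too.
export const handler: CloudFrontResponseHandler = async (event) => {
  const response = event.Records[0].cf.response;
  response.headers["access-control-allow-origin"] = [
    { key: "Access-Control-Allow-Origin", value: "*" },
  ];
  return response;
};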
I faced this problem recently and, as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but that means using a CDN behind another CDN. I strongly recommend configuring the redirects, and customizing the response on origin errors, directly in Akamai (using the REST API endpoint for your bucket). You'll need to create three rules; first, go to CDN > Properties and select your property, Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin throws an HTTP error code (just like the CloudFront error pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application has redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will work as the Failover Hostname Edge Server; the name should have fewer than 16 characters, and then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net.
Finally, create a new empty rule just like the image below (type the hostname you just created in the Alternate Hostname in This Property section):
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings on the Luna control panel could simply point to the NetStorage FTP location. This will also remove the network latency otherwise present when accessing the S3 bucket from the Akamai network.

Cross-domain access: HLS through CloudFront with signed URLs (JW Player)

I am using HLS streaming with Amazon S3 and CloudFront, playing through JW Player (with Rails).
I used signed URLs and created an Origin Access Identity as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a crossdomain.xml file in my bucket which allows all origins (I have given '*').
Now, when I try to play my HLS video files from my bucket, I get a cross-domain access denied issue.
I think JW Player is trying to access the crossdomain.xml file without the signed hash, so it's getting that error.
I have tested my file in the demo JW Player stream tester, and this is the error I am getting in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
This is the link I followed to configure my CloudFront distribution.
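Not part of the original post, but the usual first step for a missing Access-Control-Allow-Origin on the .ts segments is a CORS rule on the bucket, plus configuring CloudFront to forward the Origin header so S3 actually applies the rule. A rough CDK sketch of the bucket side (names are placeholders):

import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const stack: cdk.Stack; // assumed: defined elsewhere

// A permissive CORS rule so segment responses carry CORS headers; tighten
// allowedOrigins to the player's origin(s) for production.
new s3.Bucket(stack, "HlsBucket", {
  cors: [
    {
      allowedMethods: [s3.HttpMethods.GET, s3.HttpMethods.HEAD],
      allowedOrigins: ["*"],
      allowedHeaders: ["*"],
      maxAge: 3000,
    },
  ],
});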
I just had the same problem (but with the Flowplayer). I am not sure yet about the security risks (or whether all the steps are needed), but I got it running with:
adding permissions on the crossdomain.xml for everyone to open/download
adding a behaviour in the CloudFront distribution only for crossdomain.xml, without restricted access, placed above the behaviour for * with restricted access (sketched in code after this list)
and then I noticed that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (notice the weird %1F), and that when renaming the crossdomain.xml I could delete an invisible character at the first position of the name (I didn't make the crossdomain.xml, so I am not sure how this happened)
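The second of those steps, the extra behaviour for crossdomain.xml, can be sketched in CDK like this (the stack, bucket, and signing key group are assumed to exist elsewhere):

import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const stack: cdk.Stack;               // assumed: defined elsewhere
declare const bucket: s3.Bucket;              // assumed: the media bucket
declare const keyGroup: cloudfront.KeyGroup;  // assumed: signed-URL key group

// Everything stays behind signed URLs by default; crossdomain.xml alone is
// served openly, matching the behaviour ordering described above.
new cloudfront.Distribution(stack, "MediaDist", {
  defaultBehavior: {
    origin: new origins.S3Origin(bucket),
    trustedKeyGroups: [keyGroup],
  },
  additionalBehaviors: {
    "/crossdomain.xml": { origin: new origins.S3Origin(bucket) },
  },
});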
Edit:
I had hls.js also running with this, and making the crossdomain.xml accessible somehow disabled the CORS request. I am still looking into this.

Amazon S3 - Bucket Policy

Having a heck of a time setting up a Referer policy from my URL to have access to an Amazon S3 bucket.
I can get it to work with *.myurl.com/*, but any time the request comes from a secure HTTPS page, my access is denied, even with the wildcard.
Thanks
Edit: Rushed my first initial post, here's more detail.
I have a bucket policy on amazon s3 that only allows access if it comes from my url(s).
"aws:Referer": [
"*.myurl.com/*",
"*.app.dev:3000/*" ]
This Referer policy correctly allows connections only from my dev environment, and also from my staging URL when accessed via HTTP. However, if the user is at https://www.myurl.com/*, they are denied access by Amazon.
Is there a way to allow HTTPS connections to Amazon S3? Is it my bucket policy? I've tried hard-coding the HTTPS URL into the bucket policy, but this did not do the trick.
Sorry about being overly brief.
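For reference, the "aws:Referer" fragment above sits inside a statement shaped roughly like this (a CDK sketch with placeholder names; the per-scheme patterns are spelled out only for clarity, since, as the answer below explains, the real problem is the header being absent, not the pattern):

import * as cdk from "aws-cdk-lib";
import * as iam from "aws-cdk-lib/aws-iam";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const stack: cdk.Stack; // assumed: defined elsewhere

const bucket = new s3.Bucket(stack, "SiteAssets");

// Referer-gated public reads: S3 compares the raw Referer header string
// against these patterns, nothing more.
bucket.addToResourcePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    principals: [new iam.AnyPrincipal()],
    actions: ["s3:GetObject"],
    resources: [bucket.arnForObjects("*")],
    conditions: {
      StringLike: {
        "aws:Referer": ["http://*.myurl.com/*", "https://*.myurl.com/*"],
      },
    },
  }),
);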
From the HTTP/1.1 RFC:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
In other words, when the referring page is served over HTTPS and the S3 object is requested over plain HTTP, the browser omits the Referer header entirely, so the policy has nothing to match and the request is denied; serving the objects over HTTPS as well avoids this.