Meteor cache issue with Chromium browsers - amazon-s3

I currently have an application running on an old version of Meteor (1.10.1). When I upload certain pictures with a specific user, I get what appears to be a cache error: the request to S3 fails with a CORS error even though the return code is 200.
Console output (screenshot): ERROR 200
There is no specific common point between the failing images, except that both are images that have already been uploaded with the same account in the past.
When I clear the cache in Chrome, for example, I can upload the picture again, but I still get the error in browsers whose cache has not been cleared.
Yet if I log into the same account from two different computers, the error is the same no matter which computer I am on.
Could this be related to how Meteor handles caching with S3?
I checked the CORS settings on my S3 bucket; everything seems to be in order, and everything works correctly for about 95% of users.
I also tried the most permissive CORS rules:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT", "POST", "GET", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 10000
    }
]
I also checked for similarities in format, size, etc., and found none.
The only pattern seems to be that the failing photo was already uploaded earlier.
That's why I think it's a cache problem.

I started to answer your exact question, then changed my mind and deleted it, so I'm rewriting this from a different perspective.
Please read this: https://www.reddit.com/r/aws/comments/eh6vx1/s3_vs_cloudfront_costs/ There are 9 comments there, and some give exact price comparisons.
Now that the cost concern is addressed, let me get into the technical side. With AWS, S3 is storage and CloudFront is the actual CDN that you are supposed to use.
When you create a CloudFront distribution in front of your S3 bucket, CloudFront takes care of CORS and all the rest of the security and caching. Ideally, you would create a subdomain of your domain and point it to your CloudFront domain, so that the origin of your website and the origin of your CDN are the same (no CORS troubles); see the DNS sketch below.
This discussion in the Meteor forums should help you with the CloudFront setup: https://forums.meteor.com/t/s3-file-upload-is-slingshot-still-usable-alternatives/54123/10
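For the DNS piece, a minimal sketch using the AWS SDK for JavaScript (v3); the hosted zone ID, subdomain, and CloudFront domain below are all placeholders, not values from the question:

import { Route53Client, ChangeResourceRecordSetsCommand } from '@aws-sdk/client-route-53';

// Point cdn.your-domain.com (placeholder) at the CloudFront distribution domain.
const route53 = new Route53Client({ region: 'us-east-1' });
await route53.send(new ChangeResourceRecordSetsCommand({
  HostedZoneId: 'Z0000000EXAMPLE', // placeholder hosted zone
  ChangeBatch: {
    Changes: [{
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: 'cdn.your-domain.com',
        Type: 'CNAME',
        TTL: 300,
        ResourceRecords: [{ Value: 'dxxxxxxxxxxxx.cloudfront.net' }],
      },
    }],
  },
}));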
As far as I know, there are no Meteor specifics that affect caching. It all comes down to the website/webapp CSP, which in Meteor is generally handled via the browser-policy package running server-side.
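As a minimal server-side sketch of what that looks like, assuming the browser-policy package is installed (the hostnames are placeholders):

// server/security.js - runs only on the Meteor server
import { BrowserPolicy } from 'meteor/browser-policy-common';

// Allow the app to load and upload content from the S3 bucket / CDN.
// Both hostnames are placeholders for your own bucket/CloudFront domain.
BrowserPolicy.content.allowOriginForAll('https://your-bucket.s3.amazonaws.com');
BrowserPolicy.content.allowOriginForAll('https://cdn.your-domain.com');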

Related

Video.js - HLS => No 'Access-Control-Allow-Origin' header [S3, CloudFront]

I have a problem playing HLS videos using the video.js plugin in my application.
I have an S3 bucket of HLS videos (.m3u8, .ts) connected to CloudFront. The videos work in Safari, but they don't work properly in Chrome: they only play after I hard-reload the page (clearing cache, cookies, ...).
My configurations:
Video.JS:
videojs.Hls.xhr.beforeRequest = function (options) {
  // Attach a header to every request the HLS tech makes.
  // (Access-Control-Allow-Origin is normally a response header set by the server.)
  options.headers = {
    "Access-Control-Allow-Origin": "*",
  };
  return options;
};
S3 bucket CORS:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [
            "ETag",
            "Access-Control-Allow-Origin",
            "Connection",
            "Content-Length"
        ],
        "MaxAgeSeconds": 3000
    }
]
CloudFront: (distribution settings screenshot)
I faced a similar problem. In my case, some files were received successfully, but others (in the same directory, uploaded at the same time by the same mechanism) threw CORS errors.
After days of investigation, I fixed it (I hope). I'll leave what I figured out here for future researchers.
CORS support is implemented in S3, and there is a lot of information on the Internet about how to configure it.
When a CloudFront link is requested, CloudFront checks whether the requested object is in its cache. If yes, CloudFront returns it; if not, CloudFront requests it from the origin (S3 in my case), caches it, and returns it.
When the S3 link is requested directly and there is an Origin header in the request, S3 returns the file with an access-control-allow-origin header; otherwise, access-control-allow-origin is not added to the response headers.
When CloudFront requests a file from the origin (S3), it can forward request headers (the ones sent with the file request) to the origin. That's why you have to add the Origin header (and any others) to 'Choose which headers to include in the cache key.' In that case, if the request to CloudFront contains the Origin header, it is also sent to S3. Otherwise, CloudFront requests the file from S3 without the Origin header, S3 returns the file without the access-control-allow-origin header, and the file without CORS headers is cached and returned to the browser (CORS error).
(Screenshot: 'Choose which headers to include in the cache key' setting)
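You can observe the origin-dependent S3 behavior directly. A small sketch using Node 18+'s built-in fetch; the object URL is a placeholder, and it assumes the bucket has a CORS configuration:

// Placeholder object URL; substitute your own bucket and key.
const url = 'https://my-bucket.s3.amazonaws.com/path/file.mp4';

// Without an Origin header, S3 sends no CORS headers at all.
const plain = await fetch(url, { method: 'HEAD' });
console.log(plain.headers.get('access-control-allow-origin')); // null

// With an Origin header, S3 answers with access-control-allow-origin.
const withOrigin = await fetch(url, {
  method: 'HEAD',
  headers: { Origin: 'https://app.example.com' },
});
console.log(withOrigin.headers.get('access-control-allow-origin')); // e.g. "*"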
Now there are two options under the cache and origin settings: 'Cache policy and origin request policy (recommended)' and 'Legacy cache settings' (it seems that earlier there were no options and only the settings now called 'Legacy cache settings' existed). Under 'Cache policy and origin request policy (recommended)' there are 'Cache policy' and 'Origin request policy - optional' sections. If the predefined recommended policies are set, the Origin header (and others) is predefined in 'Origin request policy - optional' but not in 'Cache policy'. To be honest, I don't understand the exact meaning of each, but it seems the legacy 'Choose which headers to include in the cache key' setting is now divided into these two sections. So if you use 'Cache policy and origin request policy (recommended)' instead of 'Legacy cache settings', you have to create a new cache policy (a duplicate of the recommended one) and add the headers (the same ones as in the CORS-S3Origin policy).
(Screenshot: recommended settings with the CORS-S3Origin and CachingOptimized policies)
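As a sketch of that 'duplicate the recommended policy and add the headers' step done with the AWS SDK for JavaScript (v3) instead of the console; the policy name and TTLs are placeholders, not the exact values of the managed policy:

import { CloudFrontClient, CreateCachePolicyCommand } from '@aws-sdk/client-cloudfront';

const cloudfront = new CloudFrontClient({ region: 'us-east-1' });

// A cache policy that includes Origin (plus the preflight headers) in the
// cache key, so responses for different Origin values are cached separately.
await cloudfront.send(new CreateCachePolicyCommand({
  CachePolicyConfig: {
    Name: 'CachingOptimized-with-CORS-headers', // placeholder name
    MinTTL: 1,
    DefaultTTL: 86400,
    MaxTTL: 31536000,
    ParametersInCacheKeyAndForwardedToOrigin: {
      EnableAcceptEncodingGzip: true,
      EnableAcceptEncodingBrotli: true,
      HeadersConfig: {
        HeaderBehavior: 'whitelist',
        Headers: {
          Quantity: 3,
          Items: ['Origin', 'Access-Control-Request-Method', 'Access-Control-Request-Headers'],
        },
      },
      CookiesConfig: { CookieBehavior: 'none' },
      QueryStringsConfig: { QueryStringBehavior: 'none' },
    },
  },
}));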
In my case, when files were first requested from a mobile app, the requests didn't have an Origin header. S3 therefore returned them without the access-control-allow-origin header, and they were cached in CloudFront without CORS headers. All subsequent requests with an Origin header (a browser always adds this header when you make a request from JS) failed with a CORS error ("No 'Access-Control-Allow-Origin'...").
There is also the ability to add custom headers to requests from CloudFront to S3 (Origins -> Edit particular origin -> Add custom header). If you don't care where users request your files from, you can add the Origin header here and set it to any value. In that case, all requests to S3 will carry an Origin header.
(Screenshot: custom header setting on the origin)
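In the distribution config, that corresponds to roughly the following fragment on the origin (a sketch; the header value is a placeholder):

{
    "CustomHeaders": {
        "Quantity": 1,
        "Items": [
            { "HeaderName": "Origin", "HeaderValue": "https://app.example.com" }
        ]
    }
}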
There are a lot of CloudFront edge locations, and each of them has its own cache. A user receives files from the nearest one. That's why it's possible for some users to receive files successfully while others get CORS errors.
There is an x-cache header in the CloudFront response headers. Its value is either "Miss from cloudfront" (the requested file was not in the cache) or "Hit from cloudfront" (the file was returned from the cache). So you can see whether your request is the first one to reach a particular edge location (disable the browser cache in devtools if you want to try). But sometimes it behaves seemingly at random, and I don't know why.
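A quick way to check that header, again with Node's fetch; the distribution URL is a placeholder:

// Placeholder distribution URL; see which cache state answered.
const res = await fetch('https://dxxxxxxxxxxxx.cloudfront.net/video/1.ts', {
  method: 'HEAD',
  headers: { Origin: 'https://app.example.com' },
});
console.log(res.headers.get('x-cache')); // "Miss from cloudfront" or "Hit from cloudfront"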
It looks like even the same edge location can keep different caches for different clients. I don't know what this is based on, but I experimented with a browser, Postman, and curl and got the following results (I tried many times with different files and different orderings; the request from curl doesn't see the cache created for the browser and Postman, and vice versa):
1. the request from the browser returns "Miss from cloudfront";
2. the request from the browser returns "Hit from cloudfront";
3. the request from Postman returns "Hit from cloudfront";
4. the request from curl returns "Miss from cloudfront";
5. the request from curl returns "Hit from cloudfront".
As the AWS docs are quite poor on this topic and support just recommends reading the docs, I'm not sure about some of my conclusions. That's just what I think.

How is it even possible for an S3 bucket's global CORS policy to not apply to all files in the bucket?

I have an S3 bucket used for media file storage, most importantly for mp4 videos. There is a web front-end to all of that. (There's no CloudFront involved, and I don't want to sign up for it due to the extra cost.) When a browser encounters the raw URL of an mp4 video, the default behavior on click is to play the file (either in place or in a new tab, depending on the target HTML tag). You can then download the video from the native browser player, but that takes many clicks, and with sometimes 40 videos in a playlist it would be a huge productivity hit compared to the videos simply downloading.
So, understandably, a user need emerged for a download button that downloads the video instead of playing it. After hours of research I stumbled upon StreamSaver.js, which works great, and I can even give the downloaded video a meaningful name (with the athlete name, position, jersey #, etc.) instead of the GUIDs. But there's a downside: I hit CORS policy errors like Access to fetch at 'https://sportsboardmedia.s3.amazonaws.com/uploads/video/{GUID}_{GUID}.mp4' from origin 'https://app.sportsboard.io' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
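For context, the download path is roughly the following (a sketch of typical StreamSaver.js usage, not the asker's exact code; the object key and file name are placeholders). The cross-origin fetch is the request that hits the CORS error:

import streamSaver from 'streamsaver';

// Fetch the video cross-origin; this is where the CORS error occurs.
const res = await fetch('https://sportsboardmedia.s3.amazonaws.com/uploads/video/example.mp4');

// Stream the response body straight to a meaningfully named file on disk.
const fileStream = streamSaver.createWriteStream('athlete-name_position_42.mp4');
await res.body.pipeTo(fileStream);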
My bucket was accessible to the public from the get-go. After some research I switched the bucket over to static website hosting mode (as per this SO entry), and I also applied a CORS policy advised by this SO entry.
The current situation is that some video files are downloadable now, but other files (in the same bucket) still throw CORS errors. How is that even possible? The CORS policy is supposed to be global to the whole bucket, and as far as I know every file should be equal from a CORS point of view. Where do I even start to fix this?
I applied this CORS policy a few days ago, and my bucket contains about 5 TB of videos. I think that's not a big whop for Amazon. Here is an example athlete locker where some of the videos throw CORS errors: https://app.sportsboard.io/playerlocker/media/event/6272C5BE-CE03-4344-BED1-29369306C831/5e267f70-f1de-11e7-b8d3-f23c91087063/videos/
I opened an AWS Forum post as well, but all I hear is crickets there too: https://forums.aws.amazon.co/thread.jspa?messageID=991094#991094
Now I opened an AWS IQ request as well: https://iq.aws.amazon.com/p/N3SFMRLKRH
I recorded a video too: https://youtu.be/Dy38JRI5-oU
Maybe I should record a TikTok video as well???

Upload to Google Cloud Storage using Signed URL returned origin not allowed on Safari only

Using the Node.js library to generate signed URLs, I have no problem uploading files from Chrome, both on my local machine and in production. But a CORS issue appears when sending the PUT request from Safari, both desktop (v13.0.5) and iOS; so far there's no issue with Chrome on Mac. It says Origin https://website.com is not allowed by Access-Control-Allow-Origin.
I am pretty sure it is somehow related to how Safari sends the request. I have double-checked the API that generates the URL; it has all the proper params (content type matched with the client), and it does work on Chrome as well. The PUT request is sent using fetch().
I have tried updating the GCS CORS config using gsutil, but Safari still complains the origin is not allowed; by now it doesn't even work on Chrome without a wildcard in origin and responseHeader. Someone on the internet mentioned that Chrome is fine with a wildcard but Safari expects headers/origin to be explicit, yet I can't figure out which headers are required. I have tried different variations for responseHeader, such as Access-Control-Allow-Origin, origin, Origin, and x-goog-resumable.
[
    {
        "origin": ["https://website.com"],
        "responseHeader": [
            "Content-Length",
            "Content-Type",
            "Date",
            "Server",
            "Transfer-Encoding",
            "X-GUploader-UploadID",
            "X-Google-Trace"
        ],
        "method": ["GET", "HEAD", "POST", "PUT"],
        "maxAgeSeconds": 3000
    }
]
I have another project that runs the same setup and has no problem; the only differences are probably the @google-cloud/storage npm version and using version: 'v4' when generating getSignedUrl().
I found a few posts on the internet saying to use https://bucket.storage.googleapis.com instead of https://storage.googleapis.com/bucket, still to no avail.
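For reference, a minimal sketch of the v4 signed-URL generation being described; the bucket, object name, and content type are placeholders:

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();

// Generate a v4 signed URL allowing a PUT upload for 15 minutes.
// The contentType must match the Content-Type header the client will send.
async function getUploadUrl() {
  const [url] = await storage
    .bucket('my-bucket')           // placeholder bucket
    .file('uploads/photo.jpg')     // placeholder object name
    .getSignedUrl({
      version: 'v4',
      action: 'write',
      expires: Date.now() + 15 * 60 * 1000,
      contentType: 'image/jpeg',   // placeholder content type
    });
  return url;
}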

CrossDomain Access, HLS through CloudFront with Signed URL(JWplayer)

I am using HLS streaming with Amazon S3 and CloudFront, via JWPlayer (with Rails).
I use signed URLs to protect the content and created an Origin Access Identity as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a crossdomain.xml file in my bucket that allows all origins (I used '*').
Now, when I try to play my HLS video files from the bucket, I get a cross-domain access denied error.
I think JWPlayer is trying to access the crossdomain.xml file without the signed hash, so it gets that error.
I tested my file in the demo JWPlayer stream tester, and this is the error I get in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Here is the screenshot.
Please help me out. Thank you.
This is the link I followed to configure my CloudFront distribution.
I just had the same problem (but with Flowplayer). I am not yet sure about the security risks (or whether all the steps are needed), but I got it running by:
- adding permissions on crossdomain.xml so that everyone can open/download it;
- adding a behavior in the CloudFront distribution only for crossdomain.xml, without restricting access (above the behavior for * with restricted access);
- and then I noticed that in the bucket, the link to crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (note the weird %1F), and when I went to rename crossdomain.xml I could delete one invisible character at the first position of the name (I didn't create the crossdomain.xml, so I am not sure how this happened); see the key-inspection sketch below.
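A sketch of how you could hunt for such invisible characters in object keys with the AWS SDK for JavaScript (v3); the bucket name is a placeholder:

import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

// List keys and flag any that contain control characters such as 0x1F.
const { Contents = [] } = await s3.send(
  new ListObjectsV2Command({ Bucket: 'my-bucket' }) // placeholder bucket
);
for (const { Key } of Contents) {
  if (/[\x00-\x1f]/.test(Key)) {
    console.log('control character in key:', JSON.stringify(Key));
  }
}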
Edit:
I also had hls.js running with this setup, and making crossdomain.xml accessible somehow stopped the CORS request. I am still looking into this.

Serving Angular JS HTML templates from S3 and CloudFront - CORS problems

I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get in Firefox for my HTML hosted on S3 just fine. The same request against CloudFront returns OPTIONS 403 Forbidden. And I can't perform an ajax GET for either the S3 or the CloudFront files in Chrome. I assume Angular is hitting the same problem.
I don't know how Angular fetches remote templates, but it returns the same error as jQuery.get. My CORS config is fine according to Amazon tech support, and as I said, I can get the files directly from S3 in Firefox, so it works in at least one case.
My question is: how do I get this working in all browsers, with CloudFront, and with an Angular templateUrl?
For people coming from Google, a bit more detail:
It turns out Amazon actually does support CORS via SSL when the CORS settings are on an S3 bucket. The bad part comes when CloudFront caches the headers of the CORS response. If you're fetching from an origin that could be mixed HTTP and HTTPS, you'll run into the case where the allowed origin cached by CloudFront says http but you want https, which of course makes the browser blow up. To make matters worse, CloudFront caches slightly different versions if you accept compressed content. So if you try to debug this with curl, you'll think all is well and then find it isn't in the browser (try passing --compressed to curl).
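A sketch of that debugging step in Node (placeholder CloudFront URL and origins), showing how the cached Access-Control-Allow-Origin can disagree with the Origin you actually send:

// Placeholder distribution URL and origins.
const url = 'https://dxxxxxxxxxxxx.cloudfront.net/templates/view.html';

// Compare what the edge cache hands back for http vs https origins.
for (const origin of ['http://example.com', 'https://example.com']) {
  const res = await fetch(url, { headers: { Origin: origin } });
  console.log(origin, '->', res.headers.get('access-control-allow-origin'));
}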
One admittedly frustrating solution is to ditch CloudFront entirely and serve directly from the S3 bucket.
It looks like Amazon does not currently support SSL and CORS together on CloudFront or S3, which is the crux of the problem. Other CDNs like Limelight or Akamai let you add your SSL cert to a CNAME, which circumvents the problem, but Amazon does not allow that either, and the other CDNs are cost-prohibitive. The best alternative seems to be serving the HTML from your own server on your own domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066