Adding CORS headers when requesting .m3u8 files using a reverse proxy

I'm building a Chromecast app, where I want to stream .m3u8 files (HLS) from a streaming provider. The streaming provider does not add CORS headers to the HTTP headers, which is a requirement for building Chromecast apps.
Is there any way to route the requests through a proxy and have the proxy add the necessary headers for .m3u8 files? As far as I can see, the .m3u8 files in turn point to playlists for the different bandwidth streams, so the proxy would need to add appropriate CORS headers to the responses for those files as well.
Here is an example of a link to a .m3u8 file that I want to be able to stream.

Hey, I realise I'm a bit late, but I thought I would post here in case others find it useful. I had the same problem when developing a Chromecast application. The simple solution I found was to include the TOMODOkorz library; it passes all HTTP requests through its proxy.
You could host your own proxy and change the library to point to yours relatively easily.
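If you host your own, here is a minimal sketch of such a proxy in Node.js (the upstream host name is hypothetical, and a real deployment would also need to rewrite the URLs inside each playlist so the sub-playlists and segments come back through the proxy):

const http = require('http');
const https = require('https');

// Hypothetical upstream streaming provider.
const UPSTREAM = 'streaming-provider.example.com';

http.createServer((req, res) => {
  // Forward the incoming request to the upstream host.
  const proxied = https.request(
    { host: UPSTREAM, path: req.url, method: req.method },
    (upstream) => {
      // Copy the upstream headers, then add the CORS headers Chromecast needs.
      const headers = Object.assign({}, upstream.headers, {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': 'Content-Type, Range',
        'Access-Control-Expose-Headers': 'Content-Length, Content-Range',
      });
      res.writeHead(upstream.statusCode, headers);
      upstream.pipe(res);
    });
  proxied.on('error', () => { res.writeHead(502); res.end(); });
  req.pipe(proxied);
}).listen(8080);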

This is actually possible by rewriting the URLs within Chromecast's Media Player Library and having these sub-playlists also proxy through a CORS proxy like http://www.corsproxy.com/.
To do this in your custom receiver, do not import the Google-hosted library
<script type="text/javascript" src="//www.gstatic.com/cast/sdk/libs/mediaplayer/0.5.0/media_player.js"></script>
Instead, copy the obfuscated JavaScript directly into your receiver HTML page, and do the following:
Find+replace g.D.url=k with g.D.url='http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,'')
Find+replace url:k with url:('http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,''))
Now, if you send the initial contentId to the Chromecast as http://www.corsproxy.com/YOUR_M3U8_FILE_HERE, you should have a fully functional HLS-playing Chromecast app.
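For completeness, the sender side might then look something like this (a sketch assuming the chrome.cast sender API; the proxied URL is illustrative and session is an already-established Cast session):

var url = 'http://www.corsproxy.com/example.com/path/playlist.m3u8'; // illustrative
var mediaInfo = new chrome.cast.media.MediaInfo(url, 'application/x-mpegurl');
mediaInfo.streamType = chrome.cast.media.StreamType.BUFFERED;
var request = new chrome.cast.media.LoadRequest(mediaInfo);
// session is an existing chrome.cast.Session from the sender framework.
session.loadMedia(request,
  function (media) { console.log('loaded', media.media.contentId); },
  function (err) { console.error('load failed', err); });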

Most providers have the ability to set CORS for their customers. Akamai certainly does.

I've been able to stream HLS to Chromecast from an S3 bucket by adding a permissive CORS configuration to the permissions for the bucket.
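For anyone wanting to reproduce this, here is a sketch of such a permissive configuration applied with the AWS SDK for JavaScript (the bucket name is hypothetical; in production you would lock AllowedOrigins down rather than use '*'):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketCors({
  Bucket: 'my-hls-bucket', // hypothetical bucket name
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['*'],           // permissive; narrow this in production
      AllowedMethods: ['GET', 'HEAD'],
      AllowedHeaders: ['*'],
      ExposeHeaders: ['Content-Length', 'Content-Range'],
      MaxAgeSeconds: 3000,
    }],
  },
}, (err) => {
  if (err) console.error('Failed to set CORS:', err);
  else console.log('CORS configuration applied');
});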

To answer my own question:
This is not possible without rebroadcasting the streams. .m3u8 files contain links to other files, which ultimately contain the binary media segments. All of these, including the HTTP responses carrying the binary segments, need CORS headers for the Chromecast to display the content.
If you're only looking to add CORS headers to textual responses, corsproxy.com is a good alternative, along with several available open source projects.

Related

How is it even possible for an S3 bucket's global CORS policy to not apply to all files in the bucket?

I have an S3 bucket used for media file storage, most importantly for mp4 videos. There is a web front-end to all of that. (There's no CloudFront involved and I don't want to sign up for it due to the extra cost.) When a browser encounters the raw URL to an mp4 video, the default behavior is to play the file upon clicking (either in place or in a new tab, depending on the target HTML tag). You could then download the video from the native browser player, but that takes many clicks, and we are talking about sometimes 40 videos in a playlist, so it'd be a huge productivity hit versus the videos just downloading.
So understandably, a user need emerged for a download button that would download the video instead of playing it. After hours of research I stumbled upon StreamSaver.js, which works great, and I can even give the downloaded video a meaningful name (with the athlete name, position, jersey #, etc.) instead of the GUIDs. But there's a downside: I hit CORS policy errors like: Access to fetch at 'https://sportsboardmedia.s3.amazonaws.com/uploads/video/{GUID}_{GUID}.mp4' from origin 'https://app.sportsboard.io' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My bucket was accessible to the public from the get go. After some research I switched my bucket over to static website hosting mode (as per this SO entry), and I also applied a CORS policy advised by this SO entry.
The current situation is that some video files are downloadable now, but other files (in the same bucket) still throw CORS errors. How is that even possible? The CORS policy is supposed to apply to the whole bucket, and every file should be equal from a CORS point of view as far as I know. Where do I even start to fix this?
I applied this CORS policy a few days ago and my bucket contains about 5 TB of videos. I think that's not a big whoop for Amazon. Here is an example athlete locker where some of the videos throw CORS errors: https://app.sportsboard.io/playerlocker/media/event/6272C5BE-CE03-4344-BED1-29369306C831/5e267f70-f1de-11e7-b8d3-f23c91087063/videos/
I opened an AWS Forum post as well, but all I hear is crickets there too: https://forums.aws.amazon.com/thread.jspa?messageID=991094#991094
Now I opened an AWS IQ request as well: https://iq.aws.amazon.com/p/N3SFMRLKRH
I recorded a video too: https://youtu.be/Dy38JRI5-oU
Maybe I should record a TikTok video as well???
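(For anyone debugging something similar: a quick way to see which objects actually return CORS headers is to send the same cross-origin request a browser would and inspect the response. A sketch, assuming Node 18+ for the built-in fetch; the object keys are placeholders:)

const urls = [
  'https://sportsboardmedia.s3.amazonaws.com/uploads/video/AAAA_BBBB.mp4', // placeholder keys
  'https://sportsboardmedia.s3.amazonaws.com/uploads/video/CCCC_DDDD.mp4',
];

(async () => {
  for (const url of urls) {
    // S3 only emits CORS headers when the request carries a matching Origin.
    const res = await fetch(url, {
      method: 'HEAD',
      headers: { Origin: 'https://app.sportsboard.io' },
    });
    console.log(res.status, res.headers.get('access-control-allow-origin'), url);
  }
})();

Note that S3 only returns Access-Control-Allow-Origin when the request includes an Origin header matching the policy, so responses cached without one (by the browser or an intermediary that ignores Vary) can make a uniform bucket policy look inconsistent per file.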

CrossDomain Access, HLS through CloudFront with Signed URL (JWPlayer)

I am using HLS streaming with Amazon S3 and CloudFront, via JWPlayer (with Rails).
I used signed URLs to secure the links and created an Origin Access Identity as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a 'crossdomain.xml' file in my bucket which allows all origins (I have given '*').
Now, when I try to play my HLS video files from my bucket, I get a cross-domain access denied error.
I think JWPlayer is trying to access the 'crossdomain.xml' file without the signed hash, so it gets that error.
I have tested my file in the JWPlayer Stream Tester demo and this is the error I get in the console.
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Please help me out. Thank You.
This is the link I followed to configure my CloudFront Distribution
I just had the same problem (but with the Flowplayer). I am not sure yet about security risks (and if all steps are needed), but I got it running with:
adding permissions on the crossdomain.xml for everyone to open/download
adding a behaviour in the CloudFront distribution only for crossdomain.xml, without restricting access (above the behaviour for * with restricted access)
and then I noticed that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (note the weird %1F), and that when renaming the crossdomain.xml I could delete one invisible character at the first position of the name (I didn't make the crossdomain.xml, so I am not sure how this happened)
Edit:
I also had hls.js running with this, and making the crossdomain.xml accessible somehow resolved the CORS issue. I am still looking into this.
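For reference, a fully permissive crossdomain.xml (the '*' policy mentioned above) looks like this; in production you would restrict the domain to your player's host:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Permissive: any domain may load content from this host. -->
  <allow-access-from domain="*" secure="false" />
</cross-domain-policy>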

Using Content-Length when downloading a file using WCF REST?

We are developing a web application. To download a file from within that application, I created a WCF REST service, based on this link: Download using WCF REST. The purpose is to check user authentication before downloading. I used streaming to serve the file. I have since found out a few things.
When users download a file, they cannot see the file size or the time remaining. I analyzed this and found the reason: the response uses "Transfer-Encoding: chunked" in the header, so the file is downloaded in chunks. One advantage is that memory consumption stays low on the server even when many users download a file. So I thought of adding a "Content-Length" header, but I found out you can use only one of the two headers, not both. I then looked at how Hotmail and Gmail download attachments: Hotmail uses chunked transfer encoding whereas Gmail uses the Content-Length header. Gmail also checks whether the session is active and serves the file accordingly. I want to achieve the following:
a) Like Gmail, I want to check whether the session is active and then download the files accordingly. What would be the way to implement this?
b) When downloading the file, I want to use the Content-Length header instead of the chunked header, while keeping memory consumption low. Can we achieve that in WCF REST? If so, how?
c) Is it possible to add a header in WCF that will display the file size in the browser's Downloads window?
d) When downloading inline images from WCF, I found that the image is not cached on the local machine after loading. I thought that once an image is shown in an HTML page, it would be cached automatically, and the next time the user visits the page the image would load from cache instead of from the server. What option can I use to cache inline images? Are there any headers I need to specify when serving an inline image?
e) When I download a zip file through WCF in Chrome on iPhone, it doesn't download at all, but the same link works in Chrome on Android. What could be the problem? Am I missing a header in WCF?
Are there any methods that will achieve the above?
Regards,
Jollyguy
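(Not a WCF answer, but to illustrate the headers that points (b), (c) and (d) are about, here is a minimal Node.js sketch that streams a file with an explicit Content-Length instead of chunked encoding, keeps memory low by piping, and marks the response cacheable; the file path is illustrative.)

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const path = './video.zip'; // illustrative file
  fs.stat(path, (err, stats) => {
    if (err) { res.writeHead(404); return res.end(); }
    res.writeHead(200, {
      'Content-Type': 'application/zip',
      // An explicit Content-Length lets the browser show size and time
      // remaining, and suppresses chunked transfer encoding.
      'Content-Length': stats.size,
      'Content-Disposition': 'attachment; filename="video.zip"',
      // Cache-Control makes inline assets reusable from the browser cache.
      'Cache-Control': 'public, max-age=86400',
    });
    // Piping the read stream keeps server memory usage low.
    fs.createReadStream(path).pipe(res);
  });
}).listen(8080);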

Any reason not to add "Cache-Control: no-transform" header to every page?

We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript which broke viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just similar examples of carriers making ham-fisted attempts to minify things and potentially breaking them in the process?
The reasons not to add this header are performance and data transfer.
Some proxy/CDN services transcode or compress media, so if your client is behind such a proxy or you use a CDN, the client may get higher speed and use less data. This header tells the proxy/CDN not to transform the media and to leave the data as-is.
So if you don't care about that, or your app doesn't use many assets like images or music, or you don't want any transformation of your traffic, there is no reason not to do it (on the contrary, it's recommended).
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
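If you do decide to apply it site-wide, it's a one-line middleware; a sketch assuming Express:

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // no-transform asks intermediaries (carrier proxies, transcoding CDNs)
  // not to modify the payload in transit.
  res.set('Cache-Control', 'no-transform');
  next();
});

app.get('/', (req, res) => res.send('hello'));
app.listen(3000);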
Google has recently introduced the googleweblight service, so if your pages have the "Cache-Control: no-transform" header directive you'll be opting out of having your page transcoded when the connection comes from a mobile device on a slow connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en

Serving Angular JS HTML templates from S3 and CloudFront - CORS problems

I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get on Firefox for my HTML hosted on S3 just fine. The same request against CloudFront returns an OPTIONS 403 Forbidden. And I can't perform an AJAX GET for either the S3 or CloudFront files in Chrome. I assume Angular has the same problem.
I don't know how it fetches remote templates, but it returns the same error as a jQuery.get. My CORS config is fine according to Amazon tech support, and as I said, I can get the files directly from S3 on Firefox, so it works in one case.
My question is, how do I get it working in all browsers and with CloudFront and with an Angular templateUrl?
For people coming from Google, a bit more detail:
It turns out Amazon actually does support CORS via SSL when the CORS settings are on an S3 bucket. The bad part comes when CloudFront caches the headers for the CORS response. If you're fetching from an origin that could be mixed HTTP and HTTPS, you'll run into the case where the allowed origin cached by CloudFront says http but you want https. That of course causes the browser to blow up. To make matters worse, CloudFront will cache slightly differing versions if you accept compressed content. Thus if you try to debug this with curl, you'll think all is well and then find it isn't in the browser (try passing --compressed to curl).
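(A sketch of how to see this outside the browser: request the same object with different Origin headers and compare the Access-Control-Allow-Origin that comes back; per the note above, also retry with curl --compressed to hit the compressed variant. Assumes Node 18+ for the built-in fetch; the URL is illustrative.)

const url = 'https://dxxxxxxxx.cloudfront.net/templates/example.html'; // illustrative

(async () => {
  for (const origin of ['http://example.com', 'https://example.com']) {
    const res = await fetch(url, { headers: { Origin: origin } });
    console.log(origin, '->', res.status,
      'allow-origin:', res.headers.get('access-control-allow-origin'));
  }
})();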
One, admittedly frustrating, solution is to just ditch the entire CloudFront thing and serve directly from the S3 bucket.
It looks like Amazon does not currently support SSL and CORS on CloudFront or S3, which is the crux of the problem. Other CDNs like Limelight or Akamai allow you to add your SSL cert to a CNAME which circumvents the problem, but Amazon does not allow that either and other CDNs are cost prohibitive. The best alternative seems to be serving the html from your own server on your domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066