Edgecast RTMP stream URLs

Edgecast RTMP URLs are as follows:
rtmp://abc.edgecast.com/a_folder/b_folder/mp4:astream.mp4
CloudFront, though, mandates it as follows:
rtmp://abc.cloudfront.net/cfx/st/mp4:a_folder/b_folder/a_stream.mp4
With CloudFront, the mp4: prefix must precede the user's folder structure.
Is this CDN-specific? Can Edgecast do:
rtmp://abc.edgecast.com/mp4:a_folder/b_folder/astream.mp4

No; the media type identifier (mp4:) should come right before the stream/file name. The first one is correct.
To my knowledge, it is not CDN-specific; that's how the player(s) expect the URLs to be formed.

Related

S3 Restricted URLs vs. CloudFront Signed URLs

We are considering moving our file delivery to CloudFront.
Currently, we generate "secure" URLs for our file delivery on an individual basis, which look like:
http://downloads.xxxxx.com/1/2005-01-01_2006-01-01.csv?AWSAccessKeyId=012NFZM3D44FSG20CP82&Expires=1495287427&response-cache-control=No-cache&response-content-disposition=attachment%3B%20filename%3D2005-01-01_2006-01-01.csv&Signature=tWAeES3rhAlv2SQoZkqyYJEexH0%3D
Is there an easy way to apply CloudFront to the URL above, or do we need to configure them from scratch using signed URLs? Would the S3 authenticated URL pass through CloudFront if I create a simple distribution there?
It is theoretically possible to configure CloudFront to pass through an S3 signed URL, but doing so would defeat all caching, so... no, it's not a viable solution.
CloudFront uses an entirely different algorithm for signed URLs, so there's not a way to simply transform an existing S3 signed URL into a CloudFront signed URL.
Note also that you'll need to embed the existing response-* parameters in the URL before signing it. CloudFront should still pass them through so that S3 can modify its response as indicated.
Unrelated: one feature you might find interesting is that with CloudFront signed URLs, you actually have the optional ability to embed the client's IP address in the URL in such a way that the URL can only be used from a single IP address. This isn't something that can be done in a straightforward way with an S3 signed URL.
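For reference, here is a minimal sketch of generating a CloudFront signed URL in Python with botocore's CloudFrontSigner, embedding the response-* parameters before signing as described above. The key pair ID, private key file, distribution domain, and file name are placeholders:

    # Minimal sketch: CloudFront signed URL with the response-* overrides embedded before signing.
    # Key pair ID, private key file and distribution domain are placeholders.
    from datetime import datetime, timedelta
    from urllib.parse import quote

    import rsa  # pip install rsa
    from botocore.signers import CloudFrontSigner

    KEY_PAIR_ID = 'APKAEXAMPLEEXAMPLE'           # CloudFront key pair ID (placeholder)
    PRIVATE_KEY_FILE = 'cloudfront-private.pem'  # matching private key (placeholder)

    def rsa_signer(message):
        with open(PRIVATE_KEY_FILE, 'rb') as f:
            key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, key, 'SHA-1')   # CloudFront signed URLs use RSA-SHA1 signatures

    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

    filename = '2005-01-01_2006-01-01.csv'
    url = ('https://dxxxxxxxxxxxxx.cloudfront.net/1/' + filename
           + '?response-cache-control=No-cache'
           + '&response-content-disposition='
           + quote('attachment; filename=' + filename))

    signed_url = signer.generate_presigned_url(
        url, date_less_than=datetime.utcnow() + timedelta(hours=1))
    print(signed_url)

For the single-IP restriction mentioned above you would pass a custom policy instead of date_less_than; as far as I know, CloudFrontSigner's build_policy helper accepts an ip_address argument for exactly that purpose.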

Cross-Domain Access, HLS through CloudFront with Signed URLs (JW Player)

I am using HLS streaming with Amazon S3 and CloudFront, playing through JW Player (with Rails).
I used signed URLs to protect the content and created an Origin Access Identity as given in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a crossdomain.xml file in my bucket which allows all origins (I have given '*').
Now, when I try to play my HLS video files from the bucket, I get a cross-domain access denied issue.
I think JW Player is trying to access the crossdomain.xml file without the signed hash, so it's getting that error.
I have tested my file in the demo JW Player stream tester, and this is the error I am getting in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Please help me out. Thank You.
This is the link I followed to configure my CloudFront distribution.
I just had the same problem (but with the Flowplayer). I am not sure yet about security risks (and if all steps are needed), but I got it running with:
adding permissions on the crossdomain.xml for everyone to open/download
adding a behaviour in the CloudFront distribution only for crossdomain.xml, without restricting access (above the behaviour for * with restricted access)
and then I noticed that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (notice the weird %1F), and that when I went to rename the crossdomain.xml I could delete one invisible character at the first position of the name (I didn't make the crossdomain.xml, so I am not sure how this happened)
Edit:
I had hls.js running with this as well, and making the crossdomain.xml accessible somehow disabled the CORS request. I am still looking into this.
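As a side note for anyone hitting the missing Access-Control-Allow-Origin header specifically: the usual fix is a CORS rule on the S3 bucket, plus whitelisting/forwarding the Origin header in the CloudFront behaviour so that S3 actually sees it and caches don't mix responses. A minimal sketch of the bucket side with boto3 (the bucket name is a placeholder):

    # Minimal sketch: permissive CORS configuration on the S3 bucket (bucket name is a placeholder).
    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_cors(
        Bucket='my-hls-bucket',
        CORSConfiguration={
            'CORSRules': [{
                'AllowedOrigins': ['*'],           # or list your player's origins explicitly
                'AllowedMethods': ['GET', 'HEAD'],
                'AllowedHeaders': ['*'],
                'MaxAgeSeconds': 3000,
            }]
        },
    )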

Adding CORS headers when requesting .m3u8 files using a reverse proxy

I'm building a Chromecast app, where I want to stream .m3u8 files (HLS) from a streaming provider. The streaming provider does not add CORS headers to the HTTP headers, which is a requirement for building Chromecast apps.
Is there any way to route the requests through a proxy and have the proxy add the necessary headers for .m3u8 files? AFAICS, the .m3u8 files further point to files for the different bandwidth streams, so it would be necessary to have the proxy add appropriate CORS headers to the responses for those files as well.
Here is an example of a link to a .m3u8 file that I want to be able to stream.
Hey, I realise I'm a bit late, but I thought I would post here in case others find it useful. I had the same problem when developing a Chromecast application. The simple solution I found was to include the TOMODOkorz library; this will pass all HTTP requests through its proxy.
You could host your own proxy and change the library to point to yours relatively easily; a rough sketch of such a proxy is given below.
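For illustration, a minimal CORS-adding proxy sketch in Python (Flask + requests). It mimics the corsproxy.com URL style, where the client omits the scheme; it is not production-hardened (no allow-listing, caching, or Range support), and the host/port are placeholders:

    # Minimal sketch of a CORS-adding reverse proxy (Flask + requests).
    # Request style mirrors corsproxy.com:
    #   GET http://localhost:8080/cdn.example.com/live/playlist.m3u8
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.route('/<path:upstream>')
    def proxy(upstream):
        # The client omits the scheme, so prepend it here.
        url = 'http://' + upstream
        if request.query_string:
            url += '?' + request.query_string.decode()
        upstream_resp = requests.get(url, timeout=10)
        resp = Response(upstream_resp.content, status=upstream_resp.status_code)
        resp.headers['Content-Type'] = upstream_resp.headers.get(
            'Content-Type', 'application/octet-stream')
        # The header the Chromecast receiver needs on playlists and segments alike.
        resp.headers['Access-Control-Allow-Origin'] = '*'
        return resp

    if __name__ == '__main__':
        app.run(port=8080)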
This is actually possible by rewriting the URLs within Chromecast's Media Player Library and having these sub-playlists also proxy through a CORS proxy like http://www.corsproxy.com/.
To do this in your custom receiver, do not import the google-hosted library
<script type="text/javascript" src="//www.gstatic.com/cast/sdk/libs/mediaplayer/0.5.0/media_player.js"></script>
Instead, copy the obfuscated JavaScript directly into your receiver HTML page, and do the following:
Find+replace g.D.url=k with g.D.url='http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,'')
Find+replace url:k with url:('http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,''))
Now, if you send the initial contentId to Chromecast as http://www.corsproxy.com/YOUR_M3U8_FILE_HERE, you should have a fully functional HLS-playing Chromecast app.
Most providers have the ability to set CORS for their customers. Akamai certainly does.
I've been able to stream HLS to Chromecast from an S3 bucket by adding a permissive CORS configuration to the bucket's permissions.
To answer my own question:
This is not possible without rebroadcasting the streams. .m3u8 files are playlists containing links to other files, which in the end also contain the binary segments. All of these, including the HTTP responses containing the binaries, need the CORS headers for the Chromecast to display the contents.
If you're only looking to add CORS headers to textual responses, corsproxy.com is a good alternative, along with several available open source projects.

How can I hide a custom origin server from the public when using AWS CloudFront?

I am not sure if this exactly qualifies for StackOverflow, but since I need to do this programmatically, and I figure lots of people on SO use CloudFront, I think it does... so here goes:
I want to hide public access to my custom origin server.
CloudFront pulls from the custom origin, however I cannot find documentation or any sort of example on preventing direct requests from users to my origin when proxied behind CloudFront unless my origin is S3... which isn't the case with a custom origin.
What technique can I use to identify/authenticate that a request is being proxied through CloudFront instead of being directly requested by the client?
The CloudFront documentation only covers this case when used with an S3 origin. The AWS forum post that lists CloudFront's IP addresses has a disclaimer that the list is not guaranteed to be current and should not be relied upon. See https://forums.aws.amazon.com/ann.jspa?annID=910
I assume that anyone using CloudFront has some sort of way to hide their custom origin from direct requests / crawlers. I would appreciate any sort of tip to get me started. Thanks.
I would suggest using something similar to Facebook's robots.txt in order to prevent all crawlers from accessing sensitive content on your website.
https://www.facebook.com/robots.txt (you may have to tweak it a bit)
After that, just point your app (e.g. Rails) to be the custom origin server.
Now rewrite all the URLs on your site to be absolute URLs like:
https://d2d3cu3tt4cei5.cloudfront.net/hello.html
Basically, all URLs should point to your CloudFront distribution. Now if someone requests a file from https://d2d3cu3tt4cei5.cloudfront.net/hello.html and CloudFront does not have hello.html cached, it can fetch it from your server (over an encrypted channel like HTTPS) and then serve it to the user.
So even if the user does a view-source, they do not see your origin server, only your CloudFront distribution.
More details on setting this up here:
http://blog.codeship.io/2012/05/18/Assets-Sprites-CDN.html
Create a custom CNAME that only CloudFront uses. On your own servers, block any request for static assets not coming from that CNAME.
For instance, if your site is http://abc.mydomain.net, then set up a CNAME for http://xyz.mydomain.net that points to the exact same place, and put that new domain in CloudFront as the origin pull server. Then, on each request, you can tell whether it came from CloudFront or not and do whatever you want.
The downside is that this is security through obscurity. The client never sees the requests for http://xyz.mydomain.net, but that doesn't mean they won't have some way of figuring it out.
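As an illustration of the origin-side check, here is a minimal sketch assuming a Flask origin app, a hypothetical /assets/ path for the static files, and the hypothetical CloudFront-only hostname xyz.mydomain.net from above:

    # Minimal sketch: refuse asset requests that did not arrive via the
    # CloudFront-only hostname (security through obscurity, as noted above).
    from flask import Flask, abort, request

    app = Flask(__name__)
    CLOUDFRONT_ONLY_HOST = 'xyz.mydomain.net'  # hypothetical CNAME used only as the CloudFront origin

    @app.before_request
    def require_cloudfront_host():
        # Guard only the static assets; the public site stays reachable on abc.mydomain.net.
        if request.path.startswith('/assets/') and request.host != CLOUDFRONT_ONLY_HOST:
            abort(403)

    @app.route('/assets/<path:filename>')
    def asset(filename):
        return app.send_static_file(filename)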
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
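For example, the robots.txt served from the bucket (that is, only on the abc123.cloudfront.net hostname) could simply disallow all crawling, while the robots.txt on your real domain stays as it is:

    # robots.txt served only on abc123.cloudfront.net via the higher-precedence behavior
    User-agent: *
    Disallow: /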
Another domain/subdomain will also work in place of a bucket, but why go to the trouble?

S3 File upload policy

I have a site which allows users to define their own subdomains:
xxx.mysite.com
I allow the users to upload their own avatars and logos - ideally, it should upload directly to s3.
1. I am able to generate the policy as shown here: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html
However, I do not know what value to give for
["starts-with", "$success_action_redirect", "http://xxx.mysite.com/"]
I can leave it out, but I am not comfortable allowing anyone to upload that easily. How can I add more restrictions?
I currently restrict the content-type to images.
2. HTTPS and HTTP
I know that I can force SSL on the entire site and use https://xxx.mysite.com for success_action_redirect.
Is there a regex I can use?
I'm answering my own question.
It is not possible to do a wildcard match; only starts-with conditions are available.
I just restricted it to https:// for now. The other way to do it is to have common upload code across the subdomains and use parent.xx browser functions; since it is in the same domain, the browser will not complain and the S3 policy will go through.
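For reference, a minimal sketch of generating this kind of policy with boto3's generate_presigned_post, using the starts-with restrictions discussed above; the bucket name, key prefix, and redirect URL are hypothetical placeholders:

    # Minimal sketch: browser POST upload policy via boto3 (bucket, key prefix and
    # redirect URL are placeholders). As noted above, policy conditions only support
    # exact matches and starts-with, so the redirect can only be pinned to an
    # "https://" prefix rather than a wildcard subdomain.
    import boto3

    s3 = boto3.client('s3')
    post = s3.generate_presigned_post(
        Bucket='mysite-user-uploads',
        Key='avatars/${filename}',
        Fields={'success_action_redirect': 'https://xxx.mysite.com/uploads/done'},
        Conditions=[
            ['starts-with', '$success_action_redirect', 'https://'],
            ['starts-with', '$Content-Type', 'image/'],    # images only
            ['content-length-range', 1, 5 * 1024 * 1024],  # cap uploads at 5 MB
        ],
        ExpiresIn=3600,
    )
    # post['url'] and post['fields'] are what the browser-side upload form submits.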