I've set up a CloudFront instance with download and streaming distributions. I set both to private with signed URLs. I was able to get sample code working for the download distribution for images with signed URLs. I'm now trying to get the streaming distribution working for JW Player with a signed URL, but I'm having issues.
Here is my signed URL format:
rtmp://s1iq2cbtodqqky.cloudfront.net/2012-08-31_13-24-01_534.mp4?Expires=1359648770&Signature=Oi8RwL4Nf338NldW2uIsqFIv3zHnJkxXYbXIiVQh~J0Iq4kb00Ly5MLTgJw~87KmlUOmilmdRHy7p~UxeGYQxgkewPI11r27se0b~hTvpxq9y9Z5C-B-A58ZnngaCi9G2SHAujMzvss7ynLLEqUV3M6MVZl1qCxyfJbLdxCIEMY_&Key-Pair-Id=
Here is my JW Player code:
<script type="text/javascript" src="jwplayer/jwplayer.js"></script>
<div id="container">Loading the player ...</div>
<script type="text/javascript">
jwplayer("container").setup({
'flashplayer': 'jwplayer/jwplayer.flash.swf',
'file': '<?= $canned_policy_stream_name ?>',
'width': '480','height': '270',
'provider': 'rtmp',
'streamer': 'rtmp://s1iq2cbtodqqky.cloudfront.net/cfx/st/'
});
</script>
Anyone know what is wrong here? How can I test the URL alone? Right now it's hard to tell whether the problem is the URL or the JW Player integration code.
-J
There are a lot of gotchas here. It took me a while to work through them when I first got into this. Here are some steps I think might help a lot of people.
First, here is the technology stack I went with:
Rails 3.x
Zencoder for encoding
Paperclip for file upload
jQuery Upload for uploads
JW Player
If that's not your platform, you can fill in some of the blanks, but a lot of this will still be useful to you.
There are a bunch of articles on how to upload content to S3 from a user, so I will skip that part. Things get interesting when you start the encoding process, and that is really where the issues start in terms of getting signed, streamed content to play in JW Player or Flowplayer.
First, file formats - I found that MP4 and M4A were the file formats I had the most success with. With Zencoder, I was able to use the out-of-the-box mp4 and m4a export formats and get those outputs to play just fine.
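A rough sketch of the kind of job request that produces those outputs is below (this posts to Zencoder's v2 jobs API; the API key, bucket names, and paths are placeholders, and your output settings will almost certainly differ):

POST https://app.zencoder.com/api/v2/jobs
{
    "api_key": "YOUR_ZENCODER_API_KEY",
    "input": "s3://your-upload-bucket/uploads/source-video.mov",
    "outputs": [
        { "label": "web-mp4",   "format": "mp4", "url": "s3://your-streaming-bucket/videos/my-video.mp4" },
        { "label": "audio-m4a", "format": "m4a", "url": "s3://your-streaming-bucket/audio/my-video.m4a" }
    ]
}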
Add your Zencoder policy to the bucket before setting up the CloudFront distribution.
If you already have CloudFront configured, be careful how you add your Zencoder bucket policy to the bucket and make sure to merge it with whatever is already there. CloudFront also adds statements to the bucket policy, and you need both the CloudFront statements and the Zencoder policy for things to work correctly.
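Roughly, a merged policy might be shaped like this (a sketch only; the canonical user ID and the encoder principal are placeholders you would take from your CloudFront origin access identity and from your encoding provider's bucket-policy instructions, and the actions they need may differ):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudFrontRead",
            "Effect": "Allow",
            "Principal": { "CanonicalUser": "YOUR_CLOUDFRONT_OAI_CANONICAL_USER_ID" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-video-bucket/*"
        },
        {
            "Sid": "EncoderReadWrite",
            "Effect": "Allow",
            "Principal": { "AWS": "PRINCIPAL_ARN_FROM_YOUR_ENCODING_PROVIDER" },
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::your-video-bucket", "arn:aws:s3:::your-video-bucket/*"]
        }
    ]
}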
Bucket policies only apply to files owned by the bucket owner, so be sure to talk to your encoding provider and make sure they use your access key to put the files into your bucket. If the user doing the signing is not the same as the user who owns the file in S3, it won't work, and you will spend hours wondering why.
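One quick way to check who actually owns an object (and whether it matches the account you sign with) is to look at the object's ACL, for example with the AWS CLI (a sketch; the bucket and key are placeholders):

aws s3api get-object-acl --bucket your-video-bucket --key videos/my-video.mp4

The Owner block in the output should show your canonical ID, not your encoding provider's.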
Once you are sure you have your buckets set up correctly, before going further use this tool to help validate that your files will actually stream (start out without signed URLs, letting CloudFront stream files that aren't signed; if that's not working you won't get far).
http://d1k5ny0m6d4zlj.cloudfront.net/diag/CFStreamingDiag.html
To use Amazon's tool, if your RTMP URL was:
"rtmp://s3b78u0kbtx79q.cloudfront.net/cfx/st/content/myfile.png"
For the streaming url you want to enter:
s3b78u0kbtx79q.cloudfront.net
For the video file name you want to enter:
content/myfile.png (without the leading '/')
Once you can actually stream your file from Amazon through the diagnostics tool, move on to whatever steps you have from JW Player or Flowplayer.
A note on setting up JW Player (I had the fewest problems with it) for streaming - also install a debug version of Flash. During debugging you can then right-click on the Flash control and change logging to 'console', and any load errors from the Flash control will appear in Firebug - so maybe test first with Firefox. Typically, when reviewing those logs, you don't want any HTML control characters to be escaped; escaping will typically ruin your day.
I imagine a lot of people doing this will want their content secure and will use signed URLs (so that dubious others don't just rip your RTMP path and embed it directly into their site, getting the benefit while you pay for the bandwidth). I can't stress this enough: before starting down that path, make sure you first get your RTMP stream playing on publicly streamed files out of your CloudFront bucket so you know the mechanism is working.
If you got this far you are in a good place, but this is when all the bugs can hit you. If you followed my advice above it will be a shorter day than if you missed one of the steps (like making sure your bucket policy includes your CloudFront origin access identity, and that your encoding provider writes the files with your canonical ID as the owner rather than theirs).
Now that you have your content streaming through RTMP to a player, next you will want to get it working with signed URLs (again, so other sites can't just copy your RTMP path and play it back through their own site with JW Player attached). In Rails at least, the best way to generate a signed URL is to use this gem:
https://github.com/58bits/cloudfront-signer
Depending on how you embed the URL, you will need to use different types of escaping. If your URL doesn't play, try some of the following (not very precise, but if you are here and losing your hair you will try anything; if you have already tried to get this working you probably know what I mean and won't be needing a haircut for a while):
<%= AWS::CF::Signer.sign_path 'path/to/my/content', :expires => Time.now + 600 %>
<%=raw AWS::CF::Signer.sign_path 'path/to/my/content', :expires => Time.now + 600 %>
<%= AWS::CF::Signer.sign_path_safe 'path/to/my/content', :expires => Time.now + 600 %>
<%=raw AWS::CF::Signer.sign_path_safe 'path/to/my/content', :expires => Time.now + 600 %>
I probably screwed around for an hour one time before I found that using raw would fix all my problems.
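For reference, here is roughly how the raw-escaped signed path ends up in a JW Player setup like the one in the question (a sketch; the distribution domain and stream path are placeholders):

<script type="text/javascript">
    jwplayer("container").setup({
        'flashplayer': 'jwplayer/jwplayer.flash.swf',
        // sign only the stream path; the Expires/Signature/Key-Pair-Id query string travels with the file name
        'file': '<%=raw AWS::CF::Signer.sign_path "videos/my-video.mp4", :expires => Time.now + 600 %>',
        'provider': 'rtmp',
        'streamer': 'rtmp://yourdistribution.cloudfront.net/cfx/st/',
        'width': '480',
        'height': '270'
    });
</script>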
I'm using Google Cloud Storage and ASP.NET Core 6.
To configure the Google cloud storage I used this article:
https://medium.com/net-core/using-google-cloud-storage-in-asp-net-core-74f9c5ee55f5
Uploading and deleting a file all work fine.
The only problem I'm experiencing is that when I delete a file, that file is still accessible (when I look in the bucket I can see that the file is actually gone).
So I tried deleting it manually in the Google Cloud console, but I keep getting the same problem: the image is still visible via the link.
Even when I upload a new image with the same name as the old one, when displaying or downloading the image it's still the old image.
I tried different browsers, in case it had something to do with caching, but that didn't help either.
I also checked that object versioning was off.
I can also see that inside the bucket the 'created date' does in fact change.
This sounds like the object is being stored in a cache (either Google's cache, your browser's, or potentially any proxy in between).
Publicly shared objects default to being cacheable for 1 hour. After this, changes should be visible. If that is not acceptable to you, set the Cache-Control header/metadata to a shorter time or to no-store.
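For example, for an object that is served publicly, you can change this per object with gsutil (a sketch; the bucket and object names are placeholders):

gsutil setmeta -h "Cache-Control:no-store" gs://your-bucket/your-image.png

You can also set Cache-Control at upload time through the client library's object metadata so new uploads get the right behaviour from the start.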
I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed the SDK, configured the plugin (http://ajaxplorer.info/plugins/access/s3/), and could upload and download files, but the upload size is limited by my host's PHP limit, which is 64 MB.
Is there a way I can upload directly to S3 without going through my host, to improve speed and get S3's limit instead of PHP's?
Thanks
I think that is not possible, because the upload goes to the PHP script on the server first and is only then transferred to the bucket.
The only way around this is to use some jQuery or JS that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS and creating a signed policy on the fly to allow your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO, is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
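As a starting point, the bucket's CORS configuration for browser-based uploads might look something like this (a sketch; you would tighten the allowed origin to your own domain):

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>https://www.yoursite.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>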
I have an mp3 file on S3 (and have experienced this with many other mp3 files) that is not playing in Chrome (and other browsers as well: Firefox, Safari, etc.). The network dialog in Chrome shows a pending request that is seemingly never responded to by S3, yet if I do a wget to the URL, I get an immediate response.
Additionally, if I serve the exact same file off of a server running nginx, I can access the URL in Chrome instantaneously. I know that S3 supports byte-range requests, so there should be no issue with Chrome's byte-range queries. Additionally, I've verified that the file is accessible and that its content type is audio/mpeg.
Here is the file in question:
http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3
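For anyone reproducing this outside the browser, a range request against the URL above can be tested with curl; it should come back as 206 Partial Content with a Content-Range header and Content-Type: audio/mpeg:

curl -s -D - -o /dev/null -H "Range: bytes=0-1023" http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3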
Here is a screenshot of the network requests in Chrome for that URL:
I solved this by creating a CloudFront distribution for the bucket. For example, if you have a bucket named example-bucket, go to CloudFront and click Create Distribution; your bucket will appear in Origin Domain Name as example-bucket.s3.amazonaws.com.
Once the distribution is deployed, load the content through the distribution's cloudfront.net domain instead of the raw S3 URL.
This worked for me but I am not sure if it will work for others.
Had the same exact issue with files.
The original URL looked like this:
https://my-bucket-name.s3-eu-west-1.amazonaws.com/EIR.mp4
Added a CloudFront distribution and it solved all my issues.
The URL changed only a bit:
https://my-bucket-name.s3.amazonaws.com/EIR.mp4
(but you can modify it a little while creating the distribution, or even set up your own DNS if you wish).
I am using Red5 for streaming videos in my project, and I am able to play videos from the local system, which are saved in the default folder "streams".
Now I want to customize the path and get the videos from S3. How do I configure Red5 to work with S3? Is this a good practice?
I've got code using IStreamFilenameGenerator that works with S3; I'll warn you now that it may not work with the latest JetS3t library, but you'll get the point of how it works by looking through the source. One problem/issue you must understand when using S3 is that you cannot "record" to the bucket on the fly; your FLV files can only be transferred to S3 once the file is finalized; there is an example upload call in Application.class. "Play" from S3, on the other hand, will work as expected.
I added the S3 code to the red5-examples repo: https://github.com/Red5/red5-examples
Search for:
https://stackoverflow.com/search?q=IStreamFilenameGenerator
Or https://www.google.com.au/search?q=IStreamFilenameGenerator+example&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:de:official&client=firefox-a
and you will find some examples of how to modify the path(s).
Alternatively, you could of course simply mount some drive into the streams folder, or I guess a symbolic link would even work. But that is not as flexible as using IStreamFilenameGenerator, which lets you generate exactly the path string you want.
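If you go the mount-or-symlink route, something like the following is the general idea (a sketch; it assumes s3fs is installed with credentials configured for it, and the paths are placeholders):

# mount the bucket somewhere on the Red5 host
s3fs your-video-bucket /mnt/your-video-bucket -o passwd_file=~/.passwd-s3fs

# point the application's streams folder at it
ln -s /mnt/your-video-bucket /path/to/red5/webapps/yourapp/streams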
Sebastian
How do I configure Plupload properly so that it will upload files directly to Amazon S3?
In addition to conditions for bucket, key, and acl, the policy document must contain rules for name, Filename, and success_action_status. For instance:
["starts-with", "$name", ""],
["starts-with", "$Filename", ""],
["starts-with", "$success_action_status", ""],
Filename is a field that the Flash backend sends, but the HTML5 backend does not.
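Put together, the policy document you sign might look something like this (a sketch; the expiration and bucket name are placeholders):

{
    "expiration": "2015-01-01T00:00:00Z",
    "conditions": [
        {"bucket": "your-bucket"},
        {"acl": "private"},
        ["starts-with", "$key", ""],
        ["starts-with", "$name", ""],
        ["starts-with", "$Filename", ""],
        ["starts-with", "$success_action_status", ""]
    ]
}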
The multipart setting must be True, but that is the default these days.
The multipart_params setting must be a dictionary with the following fields:
key
AWSAccessKeyId
acl = 'private'
policy
signature
success_action_status = '201'
Setting success_action_status to 201 causes S3 to return an XML document with HTTP status code 201. This is necessary to make the Flash backend work. (The Flash upload stalls when the response is empty and the code is 200 or 204, and it results in an I/O error if the response is a redirect.)
S3 does not understand chunks, so remove the chunk_size config option.
unique_names can be either True or False, both work.
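Pulling those settings together, the client-side setup ends up looking roughly like this (a sketch; the bucket URL, access key, and the base64-encoded policy and signature are placeholders you generate server-side):

var uploader = new plupload.Uploader({
    runtimes: 'html5,flash',
    browse_button: 'pickfiles',
    url: 'https://your-bucket.s3.amazonaws.com/',
    flash_swf_url: 'js/plupload.flash.swf', // or Moxie.swf, depending on your Plupload version
    multipart: true,
    multipart_params: {
        'key': '${filename}',            // S3 substitutes the uploaded file's name here
        'AWSAccessKeyId': 'YOUR_ACCESS_KEY_ID',
        'acl': 'private',
        'policy': 'BASE64_ENCODED_POLICY_DOCUMENT',
        'signature': 'HMAC_SHA1_SIGNATURE_OF_THE_POLICY',
        'success_action_status': '201'
    }
});
uploader.init();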
The latest Plupload release includes an illustrative example that shows nicely how one might use Plupload to upload files to Amazon S3, using the Flash and Silverlight runtimes.
Here is the fresh write-up: Upload to Amazon S3
The official Plupload tutorial, much more detailed than the answers here: https://github.com/moxiecode/plupload/wiki/Upload-to-Amazon-S3
If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
I want to note: don't forget to upload crossdomain.xml to your S3 host, and if you have a success_action_redirect URL, you need to have a crossdomain.xml file on that domain too. I spent a day fighting with that problem before I finally found what was wrong. So next time, think about how Flash works under the hood.
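For reference, a wide-open crossdomain.xml looks like the following (you would normally restrict domain to your own sites rather than "*"):

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <allow-access-from domain="*" secure="false" />
</cross-domain-policy>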
Hope I saved someone some time.