How do I configure Plupload properly so that it will upload files directly to Amazon S3?
In addition to conditions for bucket, key, and acl, the policy document must contain rules for name, Filename, and success_action_status. For instance:
["starts-with", "$name", ""],
["starts-with", "$Filename", ""],
["starts-with", "$success_action_status", ""],
Filename is a field that the Flash backend sends, but the HTML5 backend does not.
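For reference, here is a rough server-side sketch in Node.js of how the policy document and signature can be built with those extra conditions included. The bucket name, key prefix, expiration date, and secret key are placeholders, not values from this thread:

var crypto = require('crypto');

// Build the POST policy document, including the extra conditions Plupload needs.
// 'my-bucket', the 'uploads/' prefix, and the expiration date are placeholders.
var policyDocument = {
  expiration: '2030-01-01T00:00:00.000Z',
  conditions: [
    { bucket: 'my-bucket' },
    ['starts-with', '$key', 'uploads/'],
    { acl: 'private' },
    ['starts-with', '$name', ''],
    ['starts-with', '$Filename', ''],
    ['starts-with', '$success_action_status', '']
  ]
};

// S3 expects the policy as base64 and the signature as a base64-encoded
// HMAC-SHA1 of that base64 string, keyed with your AWS secret key (placeholder).
var policy = Buffer.from(JSON.stringify(policyDocument)).toString('base64');
var signature = crypto.createHmac('sha1', 'YOUR_AWS_SECRET_KEY')
  .update(policy)
  .digest('base64');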
The multipart setting must be true, but that is the default these days.
The multipart_params setting must be a dictionary with the following fields:
key
AWSAccessKeyId
acl = 'private'
policy
signature
success_action_status = '201'
Setting success_action_status to 201 causes S3 to return an XML document with HTTP status code 201. This is necessary to make the Flash runtime work. (The Flash upload stalls when the response body is empty and the status code is 200 or 204, and it results in an I/O error if the response is a redirect.)
S3 does not understand chunks, so remove the chunk_size config option.
unique_names can be either true or false; both work.
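Putting the settings above together, a minimal Plupload setup could look something like this (an untested sketch; the bucket URL, element IDs, key prefix, and the policy/signature values your server hands over are placeholders):

// Minimal client-side sketch; bucket URL, IDs, and server-supplied values are placeholders.
var uploader = new plupload.Uploader({
  runtimes: 'flash,html5',
  browse_button: 'pickfiles',                     // id of the element that opens the file dialog
  url: 'https://my-bucket.s3.amazonaws.com/',     // placeholder bucket URL
  flash_swf_url: '/plupload/js/plupload.flash.swf',
  multipart: true,                                // required, and the default
  // no chunk_size here: S3 does not understand chunks
  multipart_params: {
    key: 'uploads/${filename}',                   // placeholder key pattern
    AWSAccessKeyId: 'YOUR_AWS_ACCESS_KEY_ID',     // placeholder
    acl: 'private',
    policy: 'BASE64_POLICY_FROM_SERVER',          // placeholder, generated server-side
    signature: 'SIGNATURE_FROM_SERVER',           // placeholder, generated server-side
    success_action_status: '201'
  }
});

uploader.init();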
The latest Plupload release includes an illustrative example that shows nicely how one might use Plupload to upload files to Amazon S3 using the Flash and Silverlight runtimes.
There is now a fresh official write-up, Upload to Amazon S3, which is much more detailed than the answers here: https://github.com/moxiecode/plupload/wiki/Upload-to-Amazon-S3
If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
One thing to note: don't forget to upload a crossdomain.xml file to your S3 host, and if you use a success_action_redirect URL, you need a crossdomain.xml file on that domain as well. I spent a day fighting with that problem before I finally found what was wrong, so keep in mind how Flash works under the hood.
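For reference, a minimal (and deliberately permissive) crossdomain.xml looks something like this; tighten the allowed domain for production:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Allows Flash content from any domain to talk to this host; restrict in production. -->
  <allow-access-from domain="*"/>
</cross-domain-policy>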
Hope I save someone some time.
I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed it, configured the S3 plugin (http://ajaxplorer.info/plugins/access/s3/), and can upload and download files, but the upload size is limited by my host's PHP limit, which is 64MB.
Is there a way to upload directly to S3 without going through my host, so I get better speed and only S3's limits rather than PHP's?
Thanks
I don't think that is possible, because the upload goes to the PHP script on the server first, which then transfers the file to the bucket.
Maybe. The only way around this is to use some jQuery or JavaScript that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS and creating a signed policy on the fly to allow your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO, is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
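As a rough starting point, an S3 CORS configuration for direct browser uploads looks something like this (the origin is a placeholder; adjust the allowed methods and headers to what your uploader actually sends):

<!-- Example S3 CORS configuration; https://www.example.com is a placeholder origin. -->
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>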
I've set up CloudFront with both a download and a streaming distribution, and I set both to private with signed URLs. I was able to get sample code working for the download distribution (signed URLs for images). I'm now trying to get the streaming distribution working with JW Player and a signed URL, but I'm having issues.
Here is my signed URL format:
rtmp://s1iq2cbtodqqky.cloudfront.net/2012-08-31_13-24-01_534.mp4?Expires=1359648770&Signature=Oi8RwL4Nf338NldW2uIsqFIv3zHnJkxXYbXIiVQh~J0Iq4kb00Ly5MLTgJw~87KmlUOmilmdRHy7p~UxeGYQxgkewPI11r27se0b~hTvpxq9y9Z5C-B-A58ZnngaCi9G2SHAujMzvss7ynLLEqUV3M6MVZl1qCxyfJbLdxCIEMY_&Key-Pair-Id=
Here is my JW Player code:
<script type="text/javascript" src="jwplayer/jwplayer.js"></script>
<div id="container">Loading the player ...</div>
<script type="text/javascript">
  jwplayer("container").setup({
    'flashplayer': 'jwplayer/jwplayer.flash.swf',
    'file': '<?= $canned_policy_stream_name ?>',
    'width': '480',
    'height': '270',
    'provider': 'rtmp',
    'streamer': 'rtmp://s1iq2cbtodqqky.cloudfront.net/cfx/st/'
  });
</script>
Anyone know what is wrong here? How can I test the URL by itself? Right now it's hard to tell whether the problem is the URL or the JW Player integration code.
-J
There are a lot of gotchas here. It took me a while to work through them when I got into it. Here are some steps I think might help a lot of people.
First here was the technology stack I went with:
Rails 3.x
Zencoder for encoding
Paperclip for file upload
jQuery Upload for client-side uploads
JWPlayer
If that's not your platform you can fill in some of the blanks but a lot of the learning will still be useful to you.
There are a bunch of articles on how to upload content into S3 from a user, so I will skip that part. The part that gets interesting is when you start the encoding process, and that is really where the issues start in terms of getting signed, streamed content to play in JW Player or Flowplayer.
First, file formats: I found that MP4 and M4A were the file formats I had the most success with. With Zencoder, I was able to use the out-of-the-box mp4 and m4a export formats and get those outputs to play just fine.
Add your Zencoder policy to the bucket before setting up the CloudFront distribution.
If you already have CloudFront configured, be careful how you add the Zencoder policy to the bucket, and make sure to merge it with whatever is already there. CloudFront also puts statements in the bucket policy, and you need both those and the Zencoder policy statements for things to work correctly.
Bucket policies only apply to files owned by the bucket owner, so be sure to talk to your encoding provider and make sure they use your access key when putting the files into the bucket behind CloudFront. If the user doing the signing is not the same as the user who owns the file in S3, it won't work, and you will spend hours wondering why.
Once you are sure your buckets are set up correctly, and before going further, use this tool to validate that your files will actually stream (start out without signed URLs, letting CloudFront stream files that aren't signed; if that's not working, you won't get far).
http://d1k5ny0m6d4zlj.cloudfront.net/diag/CFStreamingDiag.html
To use Amazon's tool, if your RTMP URL was:
rtmp://s3b78u0kbtx79q.cloudfront.net/cfx/st/content/myfile.png
then for the streaming URL you would enter:
s3b78u0kbtx79q.cloudfront.net
and for the video file name you would enter:
content/myfile.png (without the leading '/')
Once you can actually stream your file from Amazon through the diagnostics tool, go on to follow whatever steps you have from JW Player or Flowplayer.
A note on setting up JW Player (I had the fewest problems with it) for streaming: also install a debug version of Flash. During debugging you can then right-click on the Flash control and change logging to 'console', and any load errors from the Flash control will show up in Firebug, so it may be easiest to test with Firefox first. Typically, when reviewing those logs, you don't want any HTML control characters to be escaped; escaping will ruin your day.
I imagine a lot of people doing this will want their content secured with signed URLs (so that dubious others don't just rip your RTMP path and embed it directly into their site, getting the benefit while you pay for the bandwidth). I can't stress this enough: before starting down that path, make sure you first get your RTMP stream playing with publicly streamed files out of your CloudFront bucket, so you know the mechanism is working.
If you got this far, you are in a good place, but now is when all the bugs can hit you. If you followed the advice above, it will be a shorter day than if you missed one of the steps (like making sure your bucket policy includes your CloudFront origin access identity, and that your encoding provider writes the files with your canonical ID as the owner rather than theirs).
Now that you have your content streaming over RTMP to a player, you will want to get it working with signed URLs (again, so other sites can't just copy your RTMP path and play it back through their own site with jwplayer attached). In Rails, at least, the best way to generate a signed URL is to use this gem:
https://github.com/58bits/cloudfront-signer
Depending on how you embed the URL, you will need to use different types of escaping. If your URL doesn't play, try some of the following (not very precise, but if you are here and losing your hair you will try anything; if you have already tried to get this working, you probably know what I mean and won't need a haircut for a while):
<%= AWS::CF::Signer.sign_path 'path/to/my/content', :expires => Time.now + 600 %>
<%=raw AWS::CF::Signer.sign_path 'path/to/my/content', :expires => Time.now + 600 %>
<%= AWS::CF::Signer.sign_path_safe 'path/to/my/content', :expires => Time.now + 600 %>
<%=raw AWS::CF::Signer.sign_path_safe 'path/to/my/content', :expires => Time.now + 600 %>
I probably screwed around for an hour one time before I found that using raw would fix all my problems.
I am using Red5 for streaming videos in my project, and I am able to play videos from the local system that are saved in the default "streams" folder.
Now I want to customize the path and serve the videos from S3. How do I configure Red5 to work with S3, and is this a good practice?
I've got code using IStreamFilenameGenerator that works with S3; I'll warn you now that it may not work with the latest JetS3t library, but you'll get the idea of how it works by looking through the source. One problem/issue you must understand when using S3 is that you cannot "record" to the bucket on the fly; your FLV files can only be transferred to S3 once the file is finalized (there is an example upload call in the Application class). "Play" from S3, on the other hand, works as expected.
I added the S3 code to the red5-examples repo: https://github.com/Red5/red5-examples
Search for:
https://stackoverflow.com/search?q=IStreamFilenameGenerator
Or https://www.google.com.au/search?q=IStreamFilenameGenerator+example&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:de:official&client=firefox-a
and you will find some examples of how to modify the path(s).
Alternatively, you could of course simply mount a drive at the streams folder, or I guess a symbolic link would even work, but that is not as flexible as using IStreamFilenameGenerator, where you can generate exactly the path string you want.
Sebastian
I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk, but I am not sure how to stop formidable (or connect-form) from saving the file first. The Knox library, on the other hand, provides a way to read a file from disk and save it to Amazon S3.
1) Is there a way to hook into formidable's events (e.g. 'data') and send the stream to Knox, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that allow me to take the uploaded file and save it directly to Amazon S3 using node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any option to do otherwise. The Knox library then takes the saved file on disk and, using your Amazon S3 credentials, uploads it to Amazon.
Since I cannot save files locally on Heroku, I ended up using the Transloadit service. Though their authentication docs have a bit of a learning curve, I found the service useful.
For those who want to use Transloadit with node.js, the following code sample may help (the Transloadit page had only Ruby and PHP examples):
var crypto = require('crypto');

var signature = crypto.createHmac('sha1', 'auth secret')
  .update('some string')
  .digest('hex');

console.log(signature);
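For context, in Transloadit's signature authentication (as I understood it; double-check their current docs), the 'auth secret' is your account's auth secret and 'some string' is the JSON-encoded params field you post along with the upload. A rough sketch, with the auth key, expiry, and template id as placeholders:

var crypto = require('crypto');

// Hypothetical params object; the exact fields come from Transloadit's docs.
var params = JSON.stringify({
  auth: {
    key: 'YOUR_TRANSLOADIT_AUTH_KEY',        // placeholder
    expires: '2030/01/01 00:00:00+00:00'     // placeholder expiry
  },
  template_id: 'YOUR_TEMPLATE_ID'            // placeholder
});

var signature = crypto.createHmac('sha1', 'YOUR_TRANSLOADIT_AUTH_SECRET')
  .update(params)
  .digest('hex');

// Post both `params` and `signature` as fields of the upload form.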
this is Andy, creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads the files that were created by Express's bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)
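In the meantime, here is a rough, untested sketch of the general idea, using the classic formidable onPart API: handle each uploaded part yourself so it never touches disk, buffer it in memory (S3 needs an exact Content-Length, which is why this doesn't stream straight through), and PUT it with Knox. The credentials, bucket, and key naming are placeholders:

var formidable = require('formidable');
var knox = require('knox');

// Placeholder credentials and bucket.
var s3 = knox.createClient({
  key: 'YOUR_AWS_ACCESS_KEY_ID',
  secret: 'YOUR_AWS_SECRET_KEY',
  bucket: 'your-bucket'
});

function handleUpload(req, res) {
  var form = new formidable.IncomingForm();

  form.onPart = function(part) {
    if (!part.filename) {
      // Not a file: let formidable handle ordinary form fields as usual.
      form.handlePart(part);
      return;
    }

    // Buffer the file part in memory instead of writing it to disk.
    var chunks = [];
    part.on('data', function(chunk) { chunks.push(chunk); });
    part.on('end', function() {
      var body = Buffer.concat(chunks);
      var s3req = s3.put('/' + part.filename, {   // placeholder key naming
        'Content-Length': body.length,
        'Content-Type': part.mime
      });
      s3req.on('response', function(s3res) {
        res.writeHead(s3res.statusCode);
        res.end();
      });
      s3req.end(body);
    });
  };

  form.parse(req);
}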
I am trying to upload some static data to my AWS S3 account.
I am using the aws/s3 gem for this purpose.
I have a simple upload button on my web page which hits the controller, where it creates the AWS connection and tries to upload the data to S3.
The connection to AWS is successful; however, while trying to store data in S3, I always get the following error: Errno::EPIPE (Broken pipe).
I tried running the same piece of code from s3sh (the S3 shell) and I am able to execute all the calls properly.
Am I missing something here? I have been facing this issue for quite some time now.
My configuration: Ruby 1.8, Rails 3, Mongrel, S3 bucket in the US region.
Any help would be great.
I think the broken pipe error could mean a lot of things. I was experiencing it just now and it was because the bucket name in my s3.yml configuration file didn't match the name of the bucket I created on Amazon (typo).
So for people running into this answer in the future, it could be something as silly and simple as that.
In my case the problem was the file size. S3 puts a limit of 5GB on a single PUT, so chopping the file up into several 500MB files worked for me.
I also had this issue uploading my application.css, which had a compiled file size > 1.1MB. I set the fog region with:
config.fog_region = 'us-west-2'
and that seems to have fixed the issue for me...