Parallel Download from S3 to EC2

I was reading this blog entry about parallel upload to S3 using boto. Near the end it suggests a few tools for downloading using multiple connections (axel, aria2, and lftp). How can I use these with S3? I don't know how to pass the authentication keys to Amazon to access the file. I could make the file public temporarily, but that solution is not ideal.

Generate a signed URL using the AWS API and use that for your downloads. Only someone with the signed URL (which expires after the given timeout) can download the file.
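For example, with boto3 (the question mentions boto; the bucket name, key and expiry below are placeholders, so treat this as a sketch rather than a drop-in script) you can generate a presigned URL on the instance and hand it straight to aria2 or axel:

# Generate a time-limited presigned GET URL for a private object.
import boto3

s3 = boto3.client('s3')  # credentials come from the environment or the instance role

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'big-file.bin'},  # placeholders
    ExpiresIn=3600,  # the URL stops working after one hour
)
print(url)

# The URL needs no AWS credentials, so it can be fed to a parallel downloader:
#   aria2c -x 8 -s 8 "<url>"
#   axel -n 8 "<url>"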

Related

S3 to FTP server without downloading to EC2

Is there a way (programmatically) to transfer a file from an S3 bucket to an external FTP server without downloading it to an EC2 instance?
More details:
1) I have a Django server running on EC2 which serves an Angular web app.
2) A user uploads a file to an S3 bucket using the web app, and once the upload is complete the web app sends a POST request containing the file's S3 URL.
3) Upon receiving the POST request, the Django server may need to copy the file (uploaded to S3) to an external FTP server. The target FTP server may differ depending on the user who uploaded the file (each user group may have its own FTP server).
4) I understand that, upon receiving the POST request, the Django server can download the file from S3 and then upload it to the appropriate target FTP server.
My question is: can I reduce the overhead on my EC2 instance in step 4 by somehow initiating a transfer from S3 to the target FTP server and getting a callback/notification when that transfer completes (success or error)?
Thanks.
You can create a Lambda function to do this.
A complete implementation is discussed here:
https://pythonvibes.wordpress.com/2016/12/09/ftp-and-sftp-through-lambda/
Hope it helps.
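As a rough sketch of the idea (this is not the code from the linked post; the bucket name, FTP credentials and event fields below are placeholders), a Python Lambda handler can read the object from S3 and push it to the FTP server with ftplib, so the EC2 instance never handles the file:

import io
from ftplib import FTP

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # Assumed invocation payload: the Django server (or an S3 event) passes the
    # object key and the per-user-group FTP target.
    bucket = 'my-upload-bucket'             # placeholder
    key = event['key']
    ftp_host = event['ftp_host']

    # Read the object into memory; for large files a temp file or chunked
    # transfer would be more appropriate.
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()

    with FTP(ftp_host) as ftp:
        ftp.login(user='ftp-user', passwd='ftp-password')  # placeholders
        ftp.storbinary('STOR ' + key.split('/')[-1], io.BytesIO(body))

    # Return (or publish via SNS) the outcome so the caller gets its callback.
    return {'status': 'transferred', 'key': key}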

AWS FTP behavior

I'm having an issue with my AWS S3 bucket and vsftpd.
I've created a vsftpd instance and mounted an AWS S3 bucket. My issue is that every time I upload a file and the connection is disrupted, the retry from the FTP client appends to the existing file in the S3 bucket instead of overwriting it. What should I set in the S3 bucket policy to make it overwrite instead of append?
There are no Amazon S3 configuration settings that would impact this behaviour -- it is totally the result of the software you are using.
It's also worth mentioning that FTP is a rather old protocol and these days there are much better alternatives, such as uploads via the browser or Dropbox-like shared folders.
One of the easiest options is to have your users upload directly to Amazon S3 -- that way, you don't need to run any servers. This could be done by uploading via a browser, or by providing users with some software, such as Cloudberry Explorer or the AWS Command-Line Interface (CLI).
I highly encourage you to move away from FTP.

AWS S3 and AjaXplorer

I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed the SD, configured the plugin (http://ajaxplorer.info/plugins/access/s3/) and could upload and download files, but the upload size is limited by my host's PHP limit, which is 64 MB.
Is there a way I can upload directly to S3 without going through my host, to improve speed and be subject to S3's limits rather than PHP's?
Thanks
I think that is not possible, because the upload will first go through the PHP script on the server and only then be transferred to the bucket.
Maybe
The only way around this is to use some jQuery or JavaScript that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS on the bucket and creating a signed policy on the fly to allow your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
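On the server side, the signed policy mentioned above can be produced with boto3's generate_presigned_post. This is only a sketch (the bucket name, key prefix, content-type restriction and size limit are placeholders); the resulting url/fields are what the jQuery-File-Upload form posts to:

import boto3

s3 = boto3.client('s3')

post = s3.generate_presigned_post(
    Bucket='my-bucket',                       # placeholder
    Key='uploads/${filename}',
    Fields={'acl': 'private', 'Content-Type': 'image/jpeg'},
    Conditions=[
        {'acl': 'private'},
        ['starts-with', '$Content-Type', 'image/'],
        ['content-length-range', 0, 500 * 1024 * 1024],  # allow up to 500 MB
    ],
    ExpiresIn=3600,
)

# post['url'] is the form action; post['fields'] must be sent as hidden inputs
# alongside the file part. The bucket also needs a CORS rule allowing POST
# (and OPTIONS) from your site's origin.
print(post['url'])
print(post['fields'])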

How to upload files directly to Amazon S3 from a remote server?

Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, http://example.com/1.jpg serves the image. It doesn't do anything else, and I can't run code on this server.
Is it possible to have another server tell S3 to upload a file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests then, no, you can't do this. You will have to download the file to a server or computer that you own and upload from there.
You can see the operations you can perform on amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request. Meaning the server or computer that is initiating the web request needs to have the data.
I have had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3; the whole thing took under 2 seconds. After the upload to S3 completed, I wrote the bucket/key to a database.
Unfortunately there's no other way to do it.
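A minimal sketch of that download-then-reupload approach, using the requests and boto3 libraries (the URL, bucket and key are placeholders):

import boto3
import requests

s3 = boto3.client('s3')

# Fetch the file from the remote URL-based server.
resp = requests.get('http://example.com/1.jpg', timeout=30)
resp.raise_for_status()

# Push the bytes straight to S3; nothing is written to local disk.
s3.put_object(
    Bucket='my-bucket',                # placeholder
    Key='images/1.jpg',
    Body=resp.content,
    ContentType=resp.headers.get('Content-Type', 'image/jpeg'),
)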
I think the suggestion provided is quite good: you can SCP the file to the S3 bucket. Using the .pem file gives password-less authentication, and a PHP script can validate the extensions and pass the file as an argument to the scp command.
The only problem with this solution is that you must have your instance in AWS. You can't use this solution if your website is hosted with another hosting provider and you are trying to upload files straight to an S3 bucket.
Technically it's possible, using AWS Signature Version 4. Assuming your remote server plays the role of the customer, you could prepare a form on the main server and send the form fields to the remote server for it to curl. Detailed example here.
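Sketching the remote-server half of that approach: given the presigned POST url and fields prepared on the main server (for instance with generate_presigned_post, as in the earlier sketch), the remote machine only needs to make one multipart POST. The helper below is hypothetical and uses the requests library:

import requests

def push_to_s3(url, fields, path):
    # The signed policy fields must accompany the file part in the multipart body.
    with open(path, 'rb') as fh:
        resp = requests.post(url, data=fields, files={'file': fh})
    resp.raise_for_status()

# Example usage, assuming `post` came from generate_presigned_post:
# push_to_s3(post['url'], post['fields'], '1.jpg')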
You can use the scp command from a terminal.
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the ec2-... hostname you got from the EC2 website!! This is such an easy mistake to make!
3) Your file will be uploaded and the progress will be shown. When it is 100%, you are done!

How to receive an uploaded file using node.js formidable library and save it to Amazon S3 using knox?

I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to stop formidable (or connect-form) from saving the file first. The Knox library, on the other hand, provides a way to read a file from disk and save it to Amazon S3.
1) Is there a way to hook into formidable's events (e.g. its data events) to send the stream to Knox, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that would let me take the uploaded file and save it directly to Amazon S3 using node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any option to do otherwise. The knox library then takes the file saved on disk and uploads it to Amazon using your S3 credentials.
Since on Heroku I cannot save files locally, I ended up using transloadit service. Though their authentication docs have some learning curve, I found the service useful.
For those who want to use transloadit from node.js, the following code sample may help (the transloadit page had only Ruby and PHP examples):
// Node.js: the transloadit request signature is an HMAC-SHA1 hex digest,
// keyed with your transloadit auth secret.
var crypto = require('crypto');

var signature = crypto.createHmac('sha1', 'auth secret')
  .update('some string')          // 'some string' stands in for the encoded request params
  .digest('hex');

console.log(signature);
This is Andy, creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads files that were created by Express's bodyParser(), though as you say this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)