When to use an S3 Presigned URL vs Uploading through the Backend

I read "Amazon s3: direct upload vs presigned url" and was wondering when to upload from the backend to S3 versus using a presigned URL.
I understand that uploading through the backend requires extra bandwidth (user -> server -> S3), but I believe it's more secure. Do the bandwidth savings of a presigned URL justify the slight security drawback (i.e. with things like user messages)?
I am also checking file types on the backend (via magic numbers), which I think is incompatible with presigned URLs. Should this reason alone rule out presigned URLs?
In addition, I have a file size limit of 5 MB (not sure if this is considered large?). Would there be a significant difference in performance and scalability (i.e. thousands to millions of files sent per hour) between presigned URLs and uploading through the backend?

Your question sounds like you're asking for opinions, so mine is as follows:
It depends on how secure you need it to be and what you consider safe. I wondered about the same questions, and I believe that in my case everything is secured by SSL encryption anyway (which is enough for me), so I prefer to save my server's bandwidth and memory.
Again, it depends on your own system requirements. Either way, if an upload fails, S3 returns an error cause after the failed request. If checking the file type is a must and checking it on your backend is the only way to do it, you already have your answer.
In a scenario with millions of files (close to 5 MB each) being sent every hour, I would recommend uploading directly to S3 via presigned URLs, because receiving and re-sending every file through your own servers would cost a lot of bandwidth and RAM.
There are a few more advantages of uploading directly to S3, as you can read here.
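For what it's worth, a presigned POST can push the 5 MB limit into S3 itself, so oversized uploads are rejected without ever touching your backend. A minimal sketch with boto3 (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # S3 enforces the policy conditions itself: an upload outside
    # 0..5 MB is rejected before it ever reaches your application.
    post = s3.generate_presigned_post(
        Bucket="my-upload-bucket",  # placeholder bucket name
        Key="uploads/example-object",
        Conditions=[["content-length-range", 0, 5 * 1024 * 1024]],
        ExpiresIn=300,  # the form is valid for 5 minutes
    )
    # Hand post["url"] and post["fields"] to the client, which submits a
    # multipart/form-data POST directly to S3.

Note this only bounds the size; a magic-number check still has to read the bytes, so it would have to run after the upload (e.g. from an S3 event) rather than in the request path.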

Related

Why do I need Amazon S3 and CloudFront?

I've read a lot of articles stating that I should be using Amazon S3 in conjunction with the CloudFront CDN. I'm currently not doing this; I'm simply using CloudFront with my standard shared hosting package.
Is it OK to use CloudFront on its own with my standard shared hosting package? Surely there is no added benefit to using S3 as well, since the files are already located within CloudFront.
Any enlightenment on this is much appreciated.
Leigh
S3 allows you to do things like static website hosting, with logging and redirection, e.g. www.example.com redirecting to example.com. You can then use CloudFront to place your assets as close to the end user as possible ("nearest edge location"). An excellent guide on how to do this is in the AWS docs. Two main things to note are that S3 supports HTTPS, and changes to files in S3 are reflected instantly. Because CloudFront is a CDN, you have to manually expire files if you change them; otherwise it could take up to 24 hours for your changes to be reflected.
http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html
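To illustrate those two points, a minimal sketch with boto3 (bucket name, distribution ID, and paths are placeholders; the redirect assumes the bucket is named after the domain it serves, as in the docs linked above):

    import time
    import boto3

    # Redirect every request for www.example.com to example.com.
    boto3.client("s3").put_bucket_website(
        Bucket="www.example.com",
        WebsiteConfiguration={
            "RedirectAllRequestsTo": {"HostName": "example.com"},
        },
    )

    # Manually expire a changed file from CloudFront's edge caches
    # instead of waiting up to 24 hours for the TTL to lapse.
    boto3.client("cloudfront").create_invalidation(
        DistributionId="E1234EXAMPLE",  # placeholder distribution ID
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/css/site.css"]},
            "CallerReference": str(time.time()),  # any unique string
        },
    )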
A quick comparison between the two is given here:
http://www.bucketexplorer.com/documentation/cloudfront--amazon-s3-vs-amazon-cloudfront.html
There is no problem using CloudFront with your own origin server instead of an S3 origin.
There are some benefits of using S3:
Data transfer is faster between S3 and CloudFront
You don't need to worry about the stability and maintenance of the origin server
Multiple origin regions
There are also benefits if you use your own server:
Saving the cost of S3 hosting (though this depends on whether you pay for your own server anyway)
Easier customization should you need it
Data storage location for company/country regulation
So it all depends on your specific circumstances: how much you pay for your hosting package, whether you need low-level configuration of your origin server, and how sensitive your data is.
I would say that for the majority of small/medium projects, S3 is a perfect place to store data.

Dropbox API requests performance issues

The issue we are dealing with is moving or copying files from one folder on the Dropbox server to another folder on the Dropbox server.
The API requires a separate request for each file, which takes far too long.
Maybe you provide some kind of batch request so I could move more than one file per request?
I also know about the ability to move an entire folder's contents, but that doesn't work in our case, because we need to move only a subset of the files.
If we fire many requests at once over several connections, we get 'Server Unavailable' or 'File Locked' errors and have to repeat the requests.
TL;DR:
Moving 1000 files that are already on the Dropbox server takes over 30 minutes.
What possible solutions do you have to increase the performance?
The Dropbox API now provides a batch endpoint for moving files. You can find the documentation here:
https://www.dropbox.com/developers/documentation/http/documentation#files-move_batch
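A minimal sketch of that endpoint through the official Dropbox Python SDK (the token and paths are placeholders; I'm assuming the current SDK, which exposes the batch call as files_move_batch_v2):

    import time
    import dropbox
    from dropbox.files import RelocationPath

    dbx = dropbox.Dropbox("ACCESS_TOKEN")  # placeholder access token

    # One request moves many files; Dropbox runs the job asynchronously.
    entries = [
        RelocationPath(from_path="/inbox/a.txt", to_path="/archive/a.txt"),
        RelocationPath(from_path="/inbox/b.txt", to_path="/archive/b.txt"),
    ]
    launch = dbx.files_move_batch_v2(entries)

    # Poll until the batch job completes.
    if launch.is_async_job_id():
        job_id = launch.get_async_job_id()
        while dbx.files_move_batch_check_v2(job_id).is_in_progress():
            time.sleep(1)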

AWS S3 and AjaXplorer

I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed the SDK, configured the plugin (http://ajaxplorer.info/plugins/access/s3/), and could upload and download files, but the upload size is limited by my host's PHP limit, which is 64 MB.
Is there a way I can upload directly to S3 without going through my host, to improve speed and be bound by S3's limits rather than PHP's?
Thanks
I think that is not possible, because the server will first receive the file via the PHP script and then transfer it to the bucket.
The only way around this is to use some jQuery or JS that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS and creating a signed policy on the fly to allow your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO, is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
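The CORS half of that is a one-time bucket setting; a minimal sketch with boto3 (bucket name and origin are placeholders):

    import boto3

    # Allow the browser to PUT/POST to the bucket directly from your site.
    boto3.client("s3").put_bucket_cors(
        Bucket="my-upload-bucket",  # placeholder bucket name
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["https://www.example.com"],  # your site's origin
                "AllowedMethods": ["PUT", "POST"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # cache the preflight response
            }]
        },
    )

The signed policy itself is the same presigned-POST mechanism sketched in the first question above.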

Correct Server Schema to upload pictures in Amazon Web Services

I want to upload pictures to AWS S3 from the iPhone. Every user should be able to upload pictures, but the pictures must remain private to each of them.
My question is very simple. Since I have no real experience with servers, I was wondering which of the following two approaches is better.
1) Use some kind of token vending machine system to grant the user access to AWS S3 so they can upload directly.
2) Send the picture to an EC2 servlet and have the virtual server place it on S3 storage.
Edit: I would also need to retrieve the pictures. Should I do that directly or through the servlet?
Thanks in advance.
Personally, I don't think it's a good idea to use a token vending machine to upload the data directly from the iPhone, because it's much harder to control access privileges, etc. If you have the chance, use EC2 and a servlet, but that will add costs to your solution.
Also, when dealing with S3 you need to take into consideration that some files are not available right after you save them. Look at this answer from the S3 FAQ.
For retrieving data directly from S3 you will need to deal with the privileges issue again. Check the access model for S3, but again, it's probably easier to manage access to non-public files via the servlet. The good news is that there is no data transfer charge for data transferred between EC2 and S3 within the same region.
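For completeness, if you did go the token-vending route, the usual shape is a backend that hands out short-lived credentials scoped to a per-user prefix. A minimal sketch with boto3 (the bucket name and the users/<id>/ prefix scheme are assumptions of this example):

    import json
    import boto3

    sts = boto3.client("sts")

    def vend_upload_token(user_id):
        # Temporary credentials limited to this user's own S3 prefix,
        # so one user can never read another user's pictures.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": "arn:aws:s3:::my-photo-bucket/users/%s/*" % user_id,
            }],
        }
        resp = sts.get_federation_token(
            Name=user_id[:32],        # federated user name (32-char limit)
            Policy=json.dumps(policy),
            DurationSeconds=3600,     # credentials expire after an hour
        )
        return resp["Credentials"]    # AccessKeyId, SecretAccessKey, SessionToken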
Another important point in favor of the latter solution:
High performance in handling load and network speeds within the Amazon ecosystem. With direct uploads, the client would have to handle complex asynchronous operations such as multipart uploads instead of focusing on the presentation and rendering of the image.
A servlet hosted on EC2 would be far more capable than what you can do on the phone.

Stop an in-progress query-string-authorized download on Amazon S3?

With Amazon S3, can I stop a query-string-authorized download that is in progress?
Are there other file download services that provide such a feature?
I'm not aware of a built-in way to do this. If I understand your goal, you want to be able to stop an HTTP response mid-stream based on some custom rules of your own. Is that right?
If so, perhaps you could write a very thin proxy to S3 that encapsulates this logic. If you ran the proxy on EC2 you wouldn't incur any additional bandwidth fees.
The downside is that you would have to manage scaling the proxy (i.e. adding more EC2 nodes based on traffic), so depending on your scaling requirements this could take a bit of work. But the proxy script itself would probably be fairly trivial. Something like this (sketched here in Python with boto3):
    import boto3

    def stream_object(bucket, key, auth_is_valid):
        # Streaming GET request to S3: the body is read lazily, chunk by chunk.
        body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
        for chunk in body.iter_chunks(chunk_size=64 * 1024):
            if not auth_is_valid():  # check auth condition; break if no longer valid
                break
            yield chunk  # send the chunk on to the caller
I'm not aware of anyone that allows this. In general, the authentication is only checked once, when you begin downloading, but not thereafter.
Can you describe what you're trying to do more broadly?