Uploading a file directly from a URL to Azure Blob Storage - azure-storage

I have some large files (one of them is 10 GB) that I want to store directly in Windows Azure Blob Storage, instead of downloading them locally and then uploading them.
Is there a way to just provide the URL and have the file fetched into Azure Storage?
Any help would be really appreciated; if it takes a combination of services, that works fine too :)

Yes, you can do this with the Copy Blob operation. Gaurav has a great post about copying from S3, but the same approach works for any publicly accessible URL: http://gauravmantri.com/2012/06/14/how-to-copy-an-object-from-amazon-s3-to-windows-azure-blob-storage-using-copy-blob/
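For context, Copy Blob is a server-side copy: you give the service a source URL and it pulls the bytes itself, so the 10 GB never passes through your machine. A minimal sketch using the azure-storage Node SDK (account name, key, container, and source URL are placeholders; the copy runs asynchronously and can be tracked afterwards via the destination blob's copy status):

var azure = require('azure-storage');

var blobService = azure.createBlobService('myaccount', 'myaccountkey');
var sourceUrl = 'https://example.com/path/to/10gb-file.bin'; // must be publicly accessible

// Kick off a server-side copy; Azure fetches the source URL itself.
blobService.startCopyBlob(sourceUrl, 'mycontainer', 'copied-file.bin', function (err, result) {
  if (err) { return console.error('Copy failed to start:', err); }
  // The copy continues on the service side; poll getBlobProperties on the
  // destination blob to check its copy status until it reports success.
  console.log('Copy started:', result);
});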

Related

Red5 stream audio from Azure Storage or Amazon S3

I'm wondering if it's possible to stream audio in Red5 from files stored in Azure. I am aware of how to manipulate the playback path via a custom file name generator (IStreamFilenameGenerator); our legacy Red5 webapp uses one. It seems to me, though, that this path needs to be on the local Red5 server; is that correct?
I studied the example showing how to use Amazon S3 for file persistence and playback (https://goo.gl/7IIP28), and while the file recording + upload makes perfect sense, I'm just not seeing how the playback file name that is returned results in streaming from S3. Tracing the StringBuilder appends/inserts, it looks like the filename will end up being something like {BucketLocation}/{SessionID}/{FileKey} ... this led me to believe that bucket.getLocation() on Line 111 was returning an HTTP/S endpoint URL, and that Red5 would somehow be able to use it. I wrote a console app to test what bucket.getLocation() returns, and it only returns null for US buckets and EU for Europe. So I'm not even sure where/how this accesses S3 for direct playback. Am I missing something?
Again, my goal is to access files stored in Azure, but I figured the Amazon S3 example above would at least give me a hint.
I totally understand that you cannot record directly to Azure or S3; the store-locally-then-upload step makes sense. What I am failing to see is how to stream directly from blob/cloud storage. If anyone has suggestions, I would greatly appreciate it.
Have you tried Azure Media Services? I believe its documentation will be a good starting point for your scenario.
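Not a Red5-specific answer, but if progressive download straight from Blob Storage would satisfy the playback requirement, you can hand the player a time-limited read-only SAS URL for the blob. A rough sketch with the azure-storage Node SDK follows (the Red5 side is Java, so treat this only as an illustration of generating such a URL; account, container, and blob names are placeholders):

var azure = require('azure-storage');

var blobService = azure.createBlobService('myaccount', 'myaccountkey');

// Generate a read-only shared access signature valid for one hour.
var sasToken = blobService.generateSharedAccessSignature('recordings', 'session123.mp3', {
  AccessPolicy: {
    Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
    Expiry: azure.date.minutesFromNow(60)
  }
});

// Full URL the player can fetch the blob from directly.
var playbackUrl = blobService.getUrl('recordings', 'session123.mp3', sasToken);
console.log(playbackUrl);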

AWS S3 and AjaXplorer

I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed the SD, configured the plugin (http://ajaxplorer.info/plugins/access/s3/), and could upload and download files, but the upload size is limited by my host's PHP limit, which is 64 MB.
Is there a way I can upload directly to S3 without going through my host, so I get S3's speed and limits instead of PHP's?
Thanks
I think that is not possible, because the upload will first go through the PHP script on the server and only then be transferred to the bucket.
Maybe
The only way around this is to use some jQuery or JS that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS on the bucket and creating a signed policy on the fly to authorize your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO, is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
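To make the "sign on the server, upload from the browser" idea concrete, here is a rough sketch of the same principle using a pre-signed URL rather than a POST policy: the AWS credentials stay on the server, and the bytes go from the browser straight to S3. Bucket, key, and region are placeholders, and the bucket needs a CORS rule allowing PUT from your origin:

// Server side (Node + aws-sdk v2): hand the browser a short-lived pre-signed PUT URL.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

function getUploadUrl(filename, mimeType, callback) {
  s3.getSignedUrl('putObject', {
    Bucket: 'my-upload-bucket',   // placeholder bucket
    Key: 'uploads/' + filename,
    ContentType: mimeType,        // must match the Content-Type the browser sends
    Expires: 300                  // URL valid for 5 minutes
  }, callback);
}

// Browser side: PUT the file straight to S3, bypassing PHP entirely.
//   var xhr = new XMLHttpRequest();
//   xhr.open('PUT', signedUrlFromServer);
//   xhr.setRequestHeader('Content-Type', file.type);
//   xhr.send(file);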

Allowing users to download files as a batch from AWS s3 or Cloudfront

I have a website that allows users to search for music tracks and download the ones they select as MP3s.
I have the site on my server and all of the MP3s on S3, distributed via CloudFront. So far so good.
The client now wishes for users to be able to select a number of music tracks and then download them all in bulk, as a batch, instead of one at a time.
Usually I would place all the files in a zip and then present the user with a link to that new zip file. In this case, as the files are on S3, that would require me to first copy all the files from S3 to my web server, process them into a zip, and then serve the download from my server.
Is there any way I can create a zip on S3 or CloudFront, or is there some way to batch/group files into a zip?
Maybe I could set up an EC2 instance to handle this?
I would greatly appreciate some direction.
Best
Joe
I am afraid you won't be able to create the batches without additional processing. Firing up an EC2 instance might be an option, to create a batch per user.
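If you do go the EC2 route, the instance does not have to copy everything to local disk first; it can stream each object out of S3 straight into a zip and stream that zip back to the user. A rough sketch with aws-sdk v2 and the archiver npm package (bucket name, keys, and the Express-style response object are assumptions):

// Stream selected S3 objects into a zip on the fly (e.g. on an EC2 instance).
var AWS = require('aws-sdk');
var archiver = require('archiver');

var s3 = new AWS.S3();

function zipTracks(keys, res) {
  var archive = archiver('zip');
  res.setHeader('Content-Type', 'application/zip');
  res.setHeader('Content-Disposition', 'attachment; filename="tracks.zip"');
  archive.pipe(res); // the zip is streamed back to the client as it is built

  keys.forEach(function (key) {
    // Each object is read as a stream from S3 and appended to the archive.
    var stream = s3.getObject({ Bucket: 'my-music-bucket', Key: key }).createReadStream();
    archive.append(stream, { name: key.split('/').pop() });
  });

  archive.finalize();
}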
I am facing the exact same problem. So far the only thing I was able to find is the AWS CLI's s3 sync command:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
In my case, I am using Rails plus its Paperclip add-on, which means that I have no easy way to download all of a user's images in one go, because the files are scattered across a lot of subdirectories.
However, if you can group your user's files in a better way, say like this:
/users/<ID>/images/...
/users/<ID>/songs/...
...etc., then you can solve your problem right away with:
aws s3 sync s3://<your_bucket_name>/users/<user_id>/songs /cache/<user_id>
Do keep in mind that you'll have to give your server the proper credentials so the S3 CLI tools can work without prompting for them.
And that should sort you out.
Additional discussion here:
Downloading an entire S3 bucket?
S3 is based on single HTTP requests per object, so the way to achieve the same thing is to run the individual transfers in parallel with threads.
The Java API provides TransferManager for this:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
You can get great performance with multiple threads.
There is no bulk download, sorry.
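The same idea expressed in Node terms (since TransferManager is Java-only): issue the GET requests concurrently instead of one after another. A small sketch with aws-sdk v2; the bucket, keys, and /cache directory are placeholders:

// Download several objects concurrently, roughly what TransferManager's thread pool does in Java.
var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
var keys = ['songs/a.mp3', 'songs/b.mp3', 'songs/c.mp3']; // placeholder keys

Promise.all(keys.map(function (key) {
  return s3.getObject({ Bucket: 'my-music-bucket', Key: key }).promise()
    .then(function (data) {
      // data.Body is a Buffer containing the whole object.
      fs.writeFileSync('/cache/' + key.split('/').pop(), data.Body);
    });
})).then(function () {
  console.log('All downloads finished');
});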

How to receive an uploaded file using node.js formidable library and save it to Amazon S3 using knox?

I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to stop formidable (or connect-form) from saving the file to disk first. The Knox library, on the other hand, provides a way to read a file from disk and save it to Amazon S3.
1) Is there a way to hook into formidable's events (on data) so the stream is piped into Knox, letting me save the uploaded file directly to my Amazon S3 bucket?
2) Are there any libraries or code snippets that would let me take the uploaded file and save it directly to Amazon S3 using node.js?
There is a similar question here, but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library saves the uploaded file to disk; I could not find any option to do otherwise. The knox library takes the saved file on disk and, using your Amazon S3 credentials, uploads it to Amazon.
Since I cannot save files locally on Heroku, I ended up using the Transloadit service. Although their authentication docs have some learning curve, I found the service useful.
For those who want to use Transloadit from node.js, the following code sample may help (the Transloadit page had only Ruby and PHP examples):
var crypto = require('crypto');

// Sign the request parameters with your Transloadit auth secret (HMAC-SHA1, hex-encoded).
var signature = crypto.createHmac('sha1', 'auth secret')
  .update('some string')
  .digest('hex');

console.log(signature);
This is Andy, the creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads files that were created by Express's bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next (v0.3.0) version. For the moment though, take a look and see if it can help. :)
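For anyone landing here later: one way to avoid the disk without waiting for a streaming release is to override formidable's onPart handler, collect the file part in memory, and push the resulting buffer to S3 with knox's putBuffer. A rough sketch follows (credentials and bucket are placeholders; note this trades disk for RAM, so it only suits modest file sizes):

var formidable = require('formidable');
var knox = require('knox');

var s3 = knox.createClient({
  key: 'AWS_KEY',        // placeholder credentials
  secret: 'AWS_SECRET',
  bucket: 'my-bucket'
});

function handleUpload(req, res) {
  var form = new formidable.IncomingForm();

  form.onPart = function (part) {
    if (!part.filename) {
      // Let formidable handle ordinary form fields as usual.
      return form.handlePart(part);
    }
    // Collect the file part in memory instead of letting formidable write it to disk.
    var chunks = [];
    part.on('data', function (chunk) { chunks.push(chunk); });
    part.on('end', function () {
      var buffer = Buffer.concat(chunks);
      s3.putBuffer(buffer, '/' + part.filename, { 'Content-Type': part.mime }, function (err) {
        if (err) { res.writeHead(500); return res.end('Upload to S3 failed'); }
        res.end('Stored on S3 as /' + part.filename);
      });
    });
  };

  form.parse(req);
}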

Optimal storing structure for an EC2 instance?

I'm developing a website on EC2 and I have a LAMPP server in the default /opt/lampp folder. The thing is that I store all the images related to the website, including users' profile images, there (/opt/lampp/htdocs). I doubt this is the most efficient way. I have links to the images in my MySQL database.
I actually have no idea what Amazon EBS and Amazon S3 are; how can I utilize them?
EBS is like an external USB hard drive: you can easily access its content through the filesystem (e.g. mounted under /mnt/).
S3 is more like API-based cloud storage. You'll have to do much more work to integrate it into your system.
There is a pretty good summary here:
http://www.differencebetween.net/technology/internet/difference-between-amazon-s3-and-amazon-ebs/
Google has plenty more info about this.
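To give a concrete feel for the "API-based" part: storing a user's profile image on S3 is a single API call from code, and the MySQL row then holds the S3 (or CloudFront) URL instead of a local path under /opt/lampp/htdocs. A small illustration with Node and the knox client (the site in question is PHP, where the AWS SDK for PHP offers the equivalent call; credentials, bucket, and paths are placeholders):

var knox = require('knox');

var s3 = knox.createClient({
  key: 'AWS_KEY',            // placeholder credentials
  secret: 'AWS_SECRET',
  bucket: 'my-site-assets'   // placeholder bucket
});

var localPath = '/tmp/profile-123.jpg';   // image just received from the user
var remotePath = '/profiles/123.jpg';     // where it will live in the bucket

s3.putFile(localPath, remotePath, { 'Content-Type': 'image/jpeg' }, function (err, res) {
  if (err) { return console.error('Upload failed:', err); }
  // Store this URL (or a CloudFront URL) in MySQL instead of a local filesystem path.
  console.log('Image available at https://my-site-assets.s3.amazonaws.com' + remotePath);
});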