Red5 stream audio from Azure Storage or Amazon S3

I'm wondering if it's possible to stream audio in Red5 from files stored in Azure. I am aware of how to manipulate the playback path via a custom file name generator (IStreamFilenameGenerator); our legacy Red5 webapp uses one. It would seem to me, though, that this path needs to be on the local Red5 server. Is this correct?
I studied the example showing how to use Amazon S3 for file persistence and playback (https://goo.gl/7IIP28), and while the file recording + upload makes perfect sense, I'm just not seeing how the playback file name that is returned is streamed from S3. Tracing the StringBuilder appends/inserts, it looks like the filename ends up being something like {BucketLocation}/{SessionID}/{FileKey}. This led me to believe that bucket.getLocation() on line 111 was returning an HTTP/S endpoint URL, and that Red5 would somehow be able to use it. I wrote a console app to test what bucket.getLocation() returns, and it only returns null for US servers and "EU" for Europe. So I'm not even sure where or how this accesses S3 for direct playback. Am I missing something?
Again, my goal is to access files stored in Azure, but I figured the above Amazon S3 example would have given me a hint.
I totally understand that you cannot record directly to Azure or S3; storing locally and then uploading makes sense. What I am failing to see is how to stream directly from blob cloud storage. If anyone has suggestions, I would greatly appreciate it.

Have you tried using Azure Media Services? I believe looking at their documentation will be a good start for your scenario.

Related

AWS S3 and AjaXplorer

I'm using AjaXplorer to give my clients access to a shared directory stored in Amazon S3. I installed the SD, configured the plugin (http://ajaxplorer.info/plugins/access/s3/), and can upload and download files, but the upload size is limited by my host's PHP limit, which is 64 MB.
Is there a way I can upload directly to S3 without going through my host, to improve speed and be bound by S3's limits rather than PHP's?
Thanks
I think that is not possible, because the upload first goes to the server via the PHP script and only then is transferred to the bucket.
The only way around this is to use some jQuery or JavaScript that can bypass your server/PHP entirely and stream directly into S3. This involves enabling CORS on the bucket and creating a signed policy on the fly to authorize your uploads, but it can be done!
I ran into just this issue with some inordinately large media files for our website users that I no longer wanted to host on the web servers themselves.
The best place to start, IMHO, is here:
https://github.com/blueimp/jQuery-File-Upload
A demo is here:
https://blueimp.github.io/jQuery-File-Upload/
This was written to upload+write files to a variety of locations, including S3. The only tricky bits are getting your MIME type correct for each particular upload, and getting your bucket policy the way you need it.
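To flesh out the "signed policy on the fly" idea mentioned above: your server signs a short-lived request, and the browser then sends the file bytes straight to S3, so PHP never touches the payload. Below is a minimal server-side sketch using the AWS SDK for Java's pre-signed URL support, a close cousin of the POST-policy signing blueimp can use; the bucket name, key, content type, and expiry are all hypothetical, and the bucket still needs a CORS rule allowing PUT from your site's origin.

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class PresignedUploadUrl {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // URL is only valid for 15 minutes
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        // Hypothetical bucket and key
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-upload-bucket", "uploads/bigfile.mp3")
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);
        request.setContentType("audio/mpeg"); // must match what the browser sends
        URL url = s3.generatePresignedUrl(request);
        // Hand this URL to the JavaScript uploader; it PUTs the bytes directly to S3
        System.out.println(url);
    }
}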

Red5 with S3 (I want to customize the path for streaming videos)

I am using Red5 for streaming videos in my project, and I am able to play the videos from the local system that are saved in the default "streams" folder.
Now I want to customize the path and get the videos from S3. How do I configure Red5 to work with S3, and is this good practice?
I've got code using IStreamFilenameGenerator that works with S3; I'll warn you now that it may not work with the latest JetS3t library, but you'll get the point of how it works by looking through the source. One problem/issue that you must understand when using S3 is that you cannot "record" to the bucket on the fly; your FLV files can only be transferred to S3 once the file is finalized; there is an example upload call in the Application class. "Play" from S3, on the other hand, will work as expected.
I added the S3 code to the red5-examples repo: https://github.com/Red5/red5-examples
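For reference, the core of that approach is a custom IStreamFilenameGenerator. Here is a minimal sketch; the package names follow recent Red5 releases (older ones used org.red5.server.api.IScope), and the playback base URL and record path are hypothetical. Whether your Red5 version will actually stream from an HTTP/S3 location depends on its providers and configuration, so treat this as illustrative only.

import org.red5.server.api.scope.IScope;
import org.red5.server.api.stream.IStreamFilenameGenerator;

public class CloudFilenameGenerator implements IStreamFilenameGenerator {

    // Hypothetical: where the finished files end up after the upload step
    private String playbackBase = "https://my-bucket.s3.amazonaws.com/streams/";

    // Recordings still go to local disk first and are uploaded once finalized
    private String recordPath = "recordedStreams/";

    public String generateFilename(IScope scope, String name, GenerationType type) {
        return generateFilename(scope, name, null, type);
    }

    public String generateFilename(IScope scope, String name, String extension, GenerationType type) {
        String filename = (type == GenerationType.RECORD) ? recordPath + name : playbackBase + name;
        if (extension != null) {
            filename += extension;
        }
        return filename;
    }

    // true tells Red5 to use the returned path as-is instead of resolving it
    // against the webapp's local streams directory
    public boolean resolvesToAbsolutePath() {
        return true;
    }
}

The generator is typically declared as a Spring bean named "streamFilenameGenerator" in the webapp's red5-web.xml so Red5 picks it up.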
Search for:
https://stackoverflow.com/search?q=IStreamFilenameGenerator
Or https://www.google.com.au/search?q=IStreamFilenameGenerator+example&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:de:official&client=firefox-a
and you will find some examples of how to modify the path(s).
Alternatively, you could of course simply mount a drive into the streams folder, or I guess a symbolic link would even work. But that is not as flexible as IStreamFilenameGenerator, where you can generate exactly the path string you want.
Sebastian

Uploading a file directly from a URL to a Storage Blob

I have some large files (one of them is 10 GB) that I want to store directly in Windows Azure Blob Storage, instead of downloading them locally and then uploading them.
Is there a way to just provide the URL and have the file uploaded into Azure Storage?
Any help would be really appreciated; if it takes a combination of services, that also works fine :)
Yes, you can do this. Gaurav has a great post about copying from S3, but the same thing will work for any publicly-accessible URL: http://gauravmantri.com/2012/06/14/how-to-copy-an-object-from-amazon-s3-to-windows-azure-blob-storage-using-copy-blob/
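The feature behind that post is the Copy Blob operation: you give the blob service a source URL and it pulls the data server-side, so the 10 GB never passes through your machine. A rough sketch with the classic Azure Storage SDK for Java follows; the connection string, container, blob name, and source URL are hypothetical, and the source must be publicly readable (or a pre-signed/SAS URL).

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.CopyStatus;
import java.net.URI;

public class CopyFromUrl {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string pulled from the environment
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudBlobClient client = account.createCloudBlobClient();
        CloudBlobContainer container = client.getContainerReference("bigfiles");
        container.createIfNotExists();

        CloudBlockBlob target = container.getBlockBlobReference("bigfile.bin");
        // Kicks off an asynchronous server-side copy from the source URL
        target.startCopy(new URI("https://example.com/path/to/bigfile.bin"));

        // Poll until the copy finishes
        do {
            Thread.sleep(2000);
            target.downloadAttributes();
        } while (target.getCopyState().getStatus() == CopyStatus.PENDING);
        System.out.println("Copy finished with status: " + target.getCopyState().getStatus());
    }
}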

Correct server scheme to upload pictures to Amazon Web Services

I want to upload pictures to AWS S3 from the iPhone. Every user should be able to upload pictures, but the pictures must remain private to each user.
My question is very simple. Since I have no real experience with servers I was wondering which of the following two approaches is better.
1) Use some kind of token vending machine system to grant the user access to the AWS S3 bucket so the app can upload directly.
2) Send the picture to a servlet on EC2 and have the virtual server place it in S3 storage.
Edit: I would also need to retrieve the pictures; should I do it directly or through the servlet?
Thanks in advance.
Personally, I don't think it's a good idea to use a token vending machine to upload the data directly from the iPhone, because it's much harder to control access privileges, etc. If you have the chance, use EC2 and a servlet, but that will add costs to your solution.
Also, when dealing with S3 you need to take into consideration that some files are not available right away after you save them. Look at this answer from the S3 FAQ.
For retrieving data directly from S3 you will need to deal with the privileges issue again. Check the access model for S3, but again it's probably easier to manage access to non-public files via the servlet. The good news is that there is no data transfer charge for traffic between EC2 and S3 within the same region.
Another important point in favor of the latter solution: high performance in handling load and network speeds within the Amazon ecosystem. With direct uploads, the client would have to handle complex asynchronous operations such as multipart uploads instead of focusing on the presentation and rendering of the image.
The servlet hosted on EC2 would be way more powerful than what you can do on your phone.
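If you go the EC2 + servlet route, the server-side half is fairly small: the servlet receives the multipart upload and writes it to S3 under a per-user prefix, so the bucket itself never needs to be public. A rough sketch with the AWS SDK for Java and the Servlet 3.1 API; the bucket name, key scheme, and authentication call are hypothetical.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;
import java.io.IOException;

@MultipartConfig
public class PictureUploadServlet extends HttpServlet {

    private static final String BUCKET = "my-private-pictures"; // hypothetical
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            String userId = req.getRemoteUser();   // however you identify the user
            Part picture = req.getPart("picture"); // the uploaded file part

            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(picture.getSize());
            meta.setContentType(picture.getContentType());

            // Objects stay private; only the servlet (via its IAM role) reads them back
            s3.putObject(BUCKET, "users/" + userId + "/" + picture.getSubmittedFileName(),
                    picture.getInputStream(), meta);
            resp.setStatus(HttpServletResponse.SC_CREATED);
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}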

Allowing users to download files as a batch from AWS s3 or Cloudfront

I have a website that allows users to search for music tracks and download the ones they select as MP3s.
I have the site on my server and all of the MP3s on S3, distributed via CloudFront. So far, so good.
The client now wants users to be able to select a number of music tracks and then download them all in bulk, as a batch, instead of one at a time.
Usually I would place all the files in a zip and then present the user with a link to download that new zip file. In this case, as the files are on S3, that would require me to first copy all the files from S3 to my web server, process them into a zip, and then serve the download from my server.
Is there any way I can create a zip on S3 or CloudFront, or is there some way to batch/group files into a zip?
Maybe I could set up an EC2 instance to handle this?
I would greatly appreciate some direction.
Best
Joe
I am afraid you won't be able to create the batches without additional processing. Firing up an EC2 instance might be an option to create a batch per user.
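If you do fire up an EC2 instance (or any server-side worker) for this, the batch job itself boils down to streaming each selected object into a ZipOutputStream, uploading the zip back to S3, and handing the user that single link. A rough sketch with the AWS SDK for Java; the bucket, key scheme, and the Java 9+ transferTo call are assumptions on my part.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class BatchZipper {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Streams the selected tracks into a local zip, then uploads the zip back to S3
    public String buildBatch(String bucket, List<String> trackKeys, String batchId) throws Exception {
        File zipFile = File.createTempFile("batch-" + batchId, ".zip");
        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(zipFile))) {
            for (String key : trackKeys) {
                zip.putNextEntry(new ZipEntry(key.substring(key.lastIndexOf('/') + 1)));
                try (InputStream in = s3.getObject(bucket, key).getObjectContent()) {
                    in.transferTo(zip); // Java 9+; use a manual buffer loop on older JDKs
                }
                zip.closeEntry();
            }
        }
        String zipKey = "batches/" + batchId + ".zip"; // hypothetical key scheme
        s3.putObject(bucket, zipKey, zipFile);
        zipFile.delete();
        return zipKey; // serve via a pre-signed URL or through CloudFront
    }
}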
I am facing the exact same problem. So far the only thing I was able to find is the AWS CLI's s3 sync command:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
In my case, I am using Rails and its Paperclip add-on, which means I have no easy way to download all of a user's images in one go, because the files are scattered across a lot of subdirectories.
However, if you can group your user's files in a better way, say like this:
/users/<ID>/images/...
/users/<ID>/songs/...
...etc., then you can solve your problem right away with:
aws s3 sync s3://<your_bucket_name>/users/<user_id>/songs /cache/<user_id>
Do keep in mind that you'll have to give your server the proper credentials so the S3 CLI tools can work without prompting for usernames/passwords.
And that should sort you out.
Additional discussion here:
Downloading an entire S3 bucket?
S3 is based on a single HTTP request per object.
So the answer is to use threads to achieve the same thing.
The Java API offers TransferManager for this:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
You can get great performance with multiple threads.
There is no bulk download, sorry.
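If the files for one user already share a common key prefix (like the /users/<ID>/songs/ layout suggested above), TransferManager will parallelize the individual GET requests for you. A minimal sketch; the bucket name, prefix, and local destination are hypothetical.

import com.amazonaws.services.s3.transfer.MultipleFileDownload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import java.io.File;

public class PrefixDownload {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.defaultTransferManager();
        // Downloads every object under users/42/songs/ using a pool of threads
        MultipleFileDownload download = tm.downloadDirectory(
                "my-music-bucket", "users/42/songs/", new File("/cache/42"));
        download.waitForCompletion();
        tm.shutdownNow();
    }
}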