Is it advisable to use FFMpeg on my local server for video conversion? - amazon-s3

We are starting a video sharing website where users will be able to upload videos in their native formats. However, since video streaming on the web generally is in the FLV format, we need to convert the videos to FLV.
Also, the site will be hosted on Amazon EC2, with storage on S3.
Can I run FFmpeg on Amazon EC2? Is this the best way to go? Are there other alternatives to video encoding rather than doing the conversion on our own server? I also came across www.transloadit.com, which seems to do the same, but they are charging a bomb. Are there cheaper and more intelligent alternatives?
We are planning to make this website one of the top 10 biggest niche video streaming websites on the internet.

EC2 instances are just virtual machines so you can do whatever you like on them, including running ffmpeg.
Only you can work out the costs/benefits of doing the conversion on EC2, another server, or an encoding service like encoding.com (a Google search will turn up more services).
Some thoughts:
EC2
- Pay by the hour, and you can easily add new servers (although you need to design your process to support multiple servers)
- Fast (and free) transfer between EC2 and S3
Your own servers
- You pay for hardware up front
- Not as easy to scale quickly if needed
- You need to maintain the hardware
- Bandwidth charges between EC2/S3 and your servers
In both DIY solutions you need to deal with the notoriously error-prone process of converting videos from many different formats.
Video Encoding Service
- Probably more expensive (debatable if you factor in development time and support costs)
- Easiest way to scale quickly
- Up and running quickly
- They deal with the difficult conversions for you

I'm going to talk about a slightly different part of your comment. You said:
However, since video streaming on the web generally is in the FLV format...
This is false. You'll get far more portability and bang for your buck if you encode in MPEG-4/H.264.
Flash Player can play back H.264 content, so you can still use a Flash-based player for your website if you want to. However, if you ever decide to open up to mobile devices (iPhone, iPad, Android, webOS, BlackBerry 6), HTML5-compatible web browsers (Safari, Chrome, Firefox, Opera, IE9), or pretty much anything newer than 5 years ago, H.264 is definitely the way to go.
The site for Miro Video Converter even documents the FFmpeg setting they use, which may save you some time.
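As a starting point, here is a minimal Python sketch of the kind of ffmpeg invocation that produces H.264/AAC MP4 output. The file names are hypothetical, and you should tune `-preset` and `-crf` for your own quality/size trade-off:

```python
# Minimal sketch: build the ffmpeg command line for transcoding an
# upload to H.264/AAC in an MP4 container. File names are hypothetical;
# tune -preset and -crf for your own quality/size trade-off.
def build_h264_cmd(src, dst):
    return [
        "ffmpeg", "-y",
        "-i", src,                  # input in whatever format was uploaded
        "-c:v", "libx264",          # H.264 video
        "-preset", "medium",        # encoding speed vs. compression
        "-crf", "23",               # constant quality (lower = better/larger)
        "-c:a", "aac",              # AAC audio
        "-b:a", "128k",
        "-movflags", "+faststart",  # moov atom up front so playback starts fast
        dst,
    ]

# To actually run it:
#   import subprocess
#   subprocess.run(build_h264_cmd("upload.avi", "output.mp4"), check=True)
```

The `+faststart` flag matters for progressive playback in the browser: without it the index sits at the end of the file and the whole download must finish before playback starts.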

Converting video is a relatively processor-intensive process. Amazon charges for CPU time, and they also charge for data transfer. So it's more of a business tradeoff. Can EC2 run ffmpeg and do the video conversion? Yes, it can. But is it more cost-effective to pay for the CPU time on the EC2 instance or to convert on a local server and then transfer the data to the EC2? I don't know. The answer depends on the sizes of videos that you're working with, the cost of connectivity on your local server, and the pricing scheme on your EC2 virtual server instance.
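That trade-off can be put into a tiny back-of-the-envelope calculation. Every number below is a hypothetical placeholder, not a real AWS price; substitute current EC2/S3 rates and your own measured conversion throughput:

```python
# Back-of-the-envelope cost comparison. All numbers here are HYPOTHETICAL
# placeholders, not real AWS prices -- substitute current rates and your
# own measured conversion throughput.
def ec2_cost(videos, videos_per_hour, rate_per_hour):
    """Converting on EC2: you mostly pay for instance-hours."""
    return (videos / videos_per_hour) * rate_per_hour

def local_cost(videos, avg_size_gb, transfer_per_gb, fixed_monthly):
    """Converting locally: amortised hardware plus the bandwidth to
    ship the converted files up to EC2/S3."""
    return fixed_monthly + videos * avg_size_gb * transfer_per_gb

# e.g. 10,000 videos/month at 20 conversions/hour on a $0.50/hour instance,
# versus a $200/month server plus $0.09/GB outbound transfer:
ec2 = ec2_cost(10_000, 20, 0.50)            # 250.0
local = local_cost(10_000, 0.5, 0.09, 200)
```

The crossover point moves with your average video size and how fast your encoder runs, which is exactly why only you can answer the question for your workload.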

Related

Is it possible to save a video stream between two peers in webrtc in the server, realtime?

Suppose I have 2 peers exchanging video with WebRTC. Now I need both of the streams to be saved as video files on the central server. Is it possible to do this in real time? (Storing/uploading the video from the peers is not an option.)
I thought of making a 3-node WebRTC connection, with the 3rd node being the server. This way, I can screen-record the 3rd node's stream or save it some other way. But I am not sure about the reliability/feasibility of that implementation.
This is for a mobile application, and I would avoid any method that involves uploading/saving.
PS: I'm using Agora.io for the video conferencing.
In my opinion, you can do it like the MediaRecorder demo: https://webrtc.github.io/samples/src/content/getusermedia/record/.
Record each stream to Blobs and push them to your server over a WebSocket.
Then concatenate the Blobs into a WebM file, or just load them into a video element.
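On the server side, the concatenation step can be as simple as appending chunks in arrival order. Here is a minimal Python sketch; the WebSocket transport is omitted, `ChunkWriter` is a made-up name, and the result is only a valid WebM file when the chunks come from a single continuous MediaRecorder session:

```python
# Server-side sketch: append each binary chunk pushed by the browser
# (transport, e.g. a WebSocket, omitted) to one .webm file in arrival
# order. This only yields a valid WebM file when the chunks come from a
# single continuous MediaRecorder session. `ChunkWriter` is a made-up name.
import os

class ChunkWriter:
    def __init__(self, path):
        self.path = path
        open(self.path, "wb").close()  # start with an empty file

    def append(self, chunk: bytes):
        # Chunks from one MediaRecorder session are only valid in order,
        # so appending them reproduces the original recording.
        with open(self.path, "ab") as f:
            f.write(chunk)

    def size(self):
        return os.path.getsize(self.path)
```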
Agora doesn't offer on-premise recording out of the box, but they do provide the code for you to launch your own on-premise recording using your own server. Agora has the code and instructions to deploy on GitHub: https://github.com/AgoraIO/Basic-Recording
The way it works: once you have set up the Agora Recording SDK, the client triggers the recording to start, either via user interaction (a button tap) or some other event (e.g. peer-joined or stream-subscribed). This causes the recording service to join the channel and record the streams. The service outputs the video file once recording has stopped.
You need a WebRTC media server.
WebRTC media servers make it possible to support more complex scenarios. They are servers that act as WebRTC clients but run on the server side; they are termination points for the media where we'd like to take action. Popular tasks done on WebRTC media servers include:
- Group calling
- Recording
- Broadcast and live streaming
- Gateway to other networks/protocols
- Server-side machine learning
- Cloud rendering (gaming or 3D)
The adventurous and strong-hearted will go and develop their own WebRTC media server. Most will pick a commercial service or an open source one. For the latter, check out these tips for choosing a WebRTC open source media server framework.
In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide whether you'd go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.
For recording WebRTC sessions, you can do that on either the client side or the server side. In both cases you'll need a server, but what that server is and how it works will be very different in each case.
If it is broadcasting you're after, then you need to think about the broadcast size of your WebRTC session.
Link: https://bloggeek.me/webrtc-server/

Adobe Media Server Alternative for VideoChat

I currently have a video chat app working on the web (Flash) and on Android via Adobe AIR. It uses Adobe Media Server (RTMP) as the backend for video streaming and shared objects. My question is whether there is another server or solution that provides many-to-many live video broadcast, perhaps using the H.264 codec from Android and iOS, with some sort of user list and room list stored in a database or similar. I want to move away from Adobe, as it has many limitations on mobile devices.
Live video is crucial in 1 to many broadcasts that will have hundreds of viewers at the same time.
Thanks for reading!
Ulex.fr created an RTMP connector for Asterisk (the free PBX platform).
Used with the Asterisk Conference application, it allows you to create conference rooms in a one-to-many configuration, with audio and video. The only limitation is the power of your server. You can plan a scalable architecture in order to broadcast one video to many (where "many" could be unlimited). We developed a specific protocol to connect and manage the connection based on telephony events. I think we have also done a direct RTMP connection that skips this protocol.
Everything in the project by ulex.fr is free, open source and GPL.
Get the full project here : https://github.com/voximal/asterisk-rtmp
(a live demo is available)
We have already developed an RTMP stack for Android with video (using the camera), which allows you to create your own application without using AIR.
You can check Adobe Cirrus; it's still in the beta stage (actually, IMHO, Adobe has forgotten about it), but it works on web, desktop and mobile too. Check the Video Phone example; it can handle chat applications without a problem.
http://labs.adobe.com/technologies/cirrus/samples/
You could take a look at Red5 Media Server, which is an open source solution. There are other options like Wowza's solutions on AWS, but they come at a higher cost...
OK, as of today we have decided to manage the users, rooms and messages via the Google Firebase Realtime Database, and the live video stream using Ant Media Server.

Streaming webcam and mic inputs through browser

Short version:
I need an in-browser solution to deliver the webcam and mic streams to a server.
Long version:
I'm trying to create a live streaming application. So far I've only managed to figure out this workflow:
Client creates stream (some transcoder is probably required here)
Client sends (publishes?) the stream to the server (basically hosts an RTMP/other stream that should be accessible by my server)
Server transcodes, transrates, etc. and publishes the stream to a CDN
Viewers watch published stream
Ideally, I'd like a browser-based solution that requires minimal setup from the client's end (a Flash plugin download might be acceptable) and streams the webcam and mic inputs to the server. I'm either unaware of the precise keywords or am looking for the wrong thing, but I can't find an apt solution.
Solutions that involve using ffmpeg or vlc to publish a stream aren't really what I'm looking for, since they require additional download and setup, and aren't restricted to just webcam and mic inputs. WebRTC probably won't serve the same quality but if all else fails, I think it can get the job done, at least for some browsers.
I'm using Ubuntu for development and have just activated a trial license for Wowza streaming server and cloud.
Is ffmpeg/vlc et al. the only way out? Or is there something that can do the job in a single browser tab?
If you go the RTMP way, Adobe Flash Player supports H.264 encoding directly. Since you mentioned Wowza you can find an example and complete source code (including the fla) in the examples directory. There's also a demo here. There are many other open-source Flash capture plugins.
You can also use the aforementioned Flash recorder without Wowza. In this case you'll need a RTMP server, a notable example being the Nginx RTMP module which supports recording (to flv) and also offers callbacks that allow you to launch the transcoding once the recording is done.
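As an illustration, the callback endpoint for the module's record-done notification might look like the Python sketch below. The form field names (notably `path`) follow the nginx-rtmp-module documentation, so double-check them against your installed version; the handler name and port are made up:

```python
# Hypothetical callback target for nginx-rtmp's `on_record_done`
# notification: the module POSTs form-encoded fields (including `path`,
# the recorded .flv file), and we launch an ffmpeg transcode. Verify the
# field names against your installed nginx-rtmp-module version; the
# handler name and port are made up.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def transcode_cmd(flv_path):
    """ffmpeg command turning the recorded FLV into an H.264/AAC MP4."""
    mp4_path = flv_path.rsplit(".", 1)[0] + ".mp4"
    return ["ffmpeg", "-y", "-i", flv_path,
            "-c:v", "libx264", "-c:a", "aac", mp4_path]

class RecordDoneHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        path = fields.get("path", [""])[0]
        if path:
            subprocess.Popen(transcode_cmd(path))  # fire-and-forget transcode
        self.send_response(200)
        self.end_headers()

# To serve the callback:
#   HTTPServer(("127.0.0.1", 8090), RecordDoneHandler).serve_forever()
```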
With WebRTC you can record (getUserMedia, MediaStreamRecorder) small media chunks and send them to the server where they will get concatenated or using the peer-to-peer communications features of WebRTC (RTCPeerConnection). For a detailed overview see my answer here.
In both cases you'll have issues with devices/browsers that don't support Flash or WebRTC, e.g. iPhones and Safari. Plus, getUserMedia doesn't capture the same format across all browsers: Firefox captures audio/video in WebM, while Chrome captures audio in WAV and video in WebM.
For mobile devices you'll probably have to write apps.

Is Amazon S3 appropriate for serving videos? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm working on a website with a primary function of playing videos, typically one right after another.
Would it be appropriate to store the MP4 & WebM files on Amazon S3, then accomplish playback using HTML5/Flash?
Are there any speed repercussions with serving videos via Amazon S3? Or would I be better off serving the videos from the same Amazon EC2 server I'm using to run the site?
Really I'm looking for Pros/Cons. Thank you.
I can't imagine using Amazon for streaming. Honestly, their traffic costs are way too high for this kind of application.
Anyway, if you still want to use it, S3 doesn't seem to be a good option, because it's cluster storage designed for e.g. archiving rather than streaming; it has limits on the number of requests per second as well as on concurrency.
For streaming, you need the fastest possible storage, and all of the Amazon services are far from that: definitely S3, and EBS is not too fast either.
You can consider servers with SSD drives and normal bandwidth prices.
I myself have 10 streaming servers doing 100 TB of traffic per day, each with 8x SSD drives, a 10 Gbps interface, 64 GB of RAM and 16 cores.
I've used Amazon's CloudFront to stream content in the past without too much issue (http://aws.amazon.com/cloudfront/), but there are certainly faster methods out there.
However, I do believe it's a good place to start.
Amazon CloudFront has supported streaming since December 2009:
We’ve designed Amazon CloudFront to make streaming accessible for anyone with media content. Streaming with Amazon CloudFront is exceptionally easy: with only a few clicks on the AWS Management Console or a simple API call, you’ll be able to stream your content using a world-wide network of edge locations running Adobe’s Flash® Media Server. And, like all AWS services, Amazon CloudFront streaming requires no up-front commitments or long-term contracts. There are no additional charges for streaming with Amazon CloudFront; you simply pay normal rates for the data that you transfer using the service.
Recently Amazon CloudFront introduced Live Smooth Streaming:
We are excited to announce the launch of Live Smooth Streaming for Amazon CloudFront. Smooth Streaming is a feature of Internet Information Services (IIS) Media Services that enables adaptive streaming of live media to Microsoft Silverlight clients. You can also use this solution to deliver your live stream to Apple’s iOS devices using the Apple HTTP Live Streaming (HLS) format. And you can benefit from the scale and low latency offered by Amazon CloudFront when delivering your live Smooth Streams.

Video stream testing

I've got some distance-learning software; yesterday nearly 1000 users were watching video simultaneously. My client told me that the server went down and memory was exceeded.
It's an ASP.NET website and the server language is C#. I'm using Response.TransmitFile(...) for streaming videos. Is there any way to simulate 1000 simultaneous video streams to figure out what's going on? When I'm testing the website, everything works fine.
Do not ever use your web server as a video streaming server; there is no chance you can succeed.
I personally moved my videos to Amazon Web Services and I'm very satisfied with that.
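To answer the simulation part of the question: you can approximate N simultaneous viewers with a simple concurrent download script. Here is a Python sketch; the URL and viewer count are placeholders, and it won't reproduce every production condition, but it quickly exposes problems like buffering whole files in memory per request:

```python
# Rough load-test sketch: open N concurrent HTTP downloads of the video
# URL and count successes. The URL and viewer count are placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    """Download url in chunks; return bytes received, or -1 on error."""
    try:
        with urlopen(url) as resp:
            total = 0
            while True:
                chunk = resp.read(64 * 1024)  # stream; don't buffer it all
                if not chunk:
                    return total
                total += len(chunk)
    except Exception:
        return -1

def simulate_viewers(url, viewers):
    """Run `viewers` concurrent downloads; return (success count, results)."""
    with ThreadPoolExecutor(max_workers=viewers) as pool:
        results = list(pool.map(fetch, [url] * viewers))
    return sum(1 for r in results if r >= 0), results

# ok, results = simulate_viewers("http://your-server/video.mp4", 1000)
```

Run it against a staging copy of the site while watching the server's memory: if usage grows roughly linearly with concurrent requests, the serving code is buffering entire files rather than streaming them.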