Ant Media, Jitsi and Janus: which one is best to start a one-to-many live voice stream in mobile applications? - webrtc

I want to build a mobile application where only one of the participating users broadcasts audio at a time and the other participants can only listen. Between Ant Media, Jitsi and Janus, which one is best to start a one-to-many live voice stream in mobile applications?

In my opinion [as a stream engineer], before trying to choose a WebRTC SFU, decide why you need WebRTC technology at all.
For one-to-many streaming, HLS is better and cheaper, and it is easy to use with a third-party CDN.
Even if you need to publish the stream via WebRTC, you don't have to force the end users to watch it as WebRTC, because WebRTC playback has scale limitations.
For instance, roughly how many concurrent playback endpoints each server can handle:
Wowza: 750
Kurento: 200
Jitsi: 500
Janus: I tried it in 2017 and it was not stable at that time
Ant Media: 1,300
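To make that concrete: the playback side does not need WebRTC at all. A minimal listener-side sketch with hls.js (the playlist URL is just a placeholder for whatever your server or CDN exposes):

```typescript
import Hls from 'hls.js';

// Listener side: pull the HLS rendition instead of subscribing to a WebRTC track.
const audio = document.querySelector('audio')!;
const src = 'https://cdn.example.com/live/stream1.m3u8'; // placeholder URL

if (audio.canPlayType('application/vnd.apple.mpegurl')) {
  audio.src = src; // Safari / iOS play HLS natively
} else if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(audio); // other browsers go through MSE via hls.js
}
```

Every listener just pulls that playlist over plain HTTP, which is why it scales through a CDN so much more cheaply than per-viewer WebRTC connections.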
If I were you I'd prefer Ant Media; here is the reason. It supports these paths:
ingest WebRTC -> play WebRTC (ABR enabled)
ingest WebRTC -> play HLS (without any transcoding, since the stream is published with H.264)
ingest RTMP -> play HLS
ingest RTMP -> play WebRTC
Wowza can only do:
ingest WebRTC -> transcode (VP8/VP9 to H.264) -> HLS
RTMP to WebRTC is not supported by Wowza. Also, Ant Media Enterprise Edition is cheaper than Wowza.
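On the ingest side, publishing audio-only WebRTC looks the same in the browser or a WebView regardless of which server you choose; the vendors' SDKs (Ant Media's WebRTCAdaptor, Janus's janus.js, and so on) wrap roughly this kind of code. A rough sketch, with the server-specific signaling exchange reduced to a placeholder callback:

```typescript
// Broadcaster side: capture the microphone and publish an audio-only stream.
// `signaling` is a hypothetical helper that sends our SDP offer to the media
// server and returns its SDP answer; every server does this step differently.
async function publishAudio(signaling: (offerSdp: string) => Promise<string>) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  stream.getAudioTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const answerSdp = await signaling(pc.localDescription!.sdp);
  await pc.setRemoteDescription({ type: 'answer', sdp: answerSdp });
  return pc;
}
```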

Related

How to capture rtp stream from webrtc then convert it to hls to broadcast to client?

I want to receive RTP from WebRTC in the browser via a media server (e.g. Kurento) and then convert it to an HLS stream, so that users can play it from an hlsEndpoint.
WebRTC -> RTP -> HLS
What is the correct way to do this?
My aim is to create a live-stream app that supports push streams using WebRTC. I'm already working with RTMP, and I want WebRTC as an additional option.
Thanks all.
Just use a media server to convert WebRTC to a live-streaming protocol like RTMP, HTTP-FLV or HLS; please read this wiki.
WebRTC is not only RTP: you also need to transcode the audio from Opus to AAC, and there are things like the jitter buffer, NACK and out-of-order packets to handle.
For live streaming, RTMP is the de-facto standard in the industry, so if you convert WebRTC to RTMP you get everything else, such as transcoding with FFmpeg, forwarding to YouTube, or DVR to file.
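To illustrate that last point: once the media server has turned the WebRTC ingest into RTMP, repackaging it as HLS is a routine FFmpeg job. A sketch driven from Node, with a placeholder RTMP URL and output path:

```typescript
import { spawn } from 'node:child_process';

// Pull the RTMP output of the media server and repackage it as HLS segments.
const ffmpeg = spawn('ffmpeg', [
  '-i', 'rtmp://localhost/live/stream1', // RTMP output of the media server (placeholder)
  '-c:v', 'copy',                        // keep H.264 as-is
  '-c:a', 'aac',                         // HLS players expect AAC rather than Opus
  '-f', 'hls',
  '-hls_time', '4',                      // 4-second segments
  '-hls_list_size', '6',
  '-hls_flags', 'delete_segments',
  '/var/www/hls/stream1.m3u8',           // placeholder output path
], { stdio: 'inherit' });

ffmpeg.on('exit', (code) => console.log(`ffmpeg exited with code ${code}`));
```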
If you need to convert WebRTC to HLS or RTMP, you may also check Ant Media Server; the Community Edition provides this as well.

Streaming using media servers; what is the advantage of using RTMP vs WebRTC

We are about to start a streaming project and are considering our options. One option is to publish via RTMP from mobile (Android or iOS), broadcast through backend media servers (either Ant Media or Janus), and play it on mobile devices via RTMP, while web users would watch it via WebRTC (since RTMP playback only works in Flash).
What is the advantage of such an approach, or are there any pros and cons to it?
This is an alternative to the full WebRTC approach, in which mobile devices publish WebRTC to the media servers and the stream is played via WebRTC by both mobile and web users.
Are there any advantages or disadvantages to either approach?
(Sorry, I'm kind of new to the streaming world, and these questions are being raised by managers.)

Is it possible to deliver an RTSP stream via Kurento? WebRTC to RTSP

I want to use Kurento as a media server that takes WebRTC as input and provides an RTSP stream at a URL such as rtsp://kurento/streamName
Is this possible?
I saw the https://github.com/lulop-k/kurento-rtsp2webrtc/ project, which does the opposite thing.
My final goal is to deliver the stream to mobile browsers via JSMPEG.
This is not possible; as the Kurento team says: "We can consume it, but not produce it."
As a common workaround, you could stream from Kurento to a Wowza media server using an RtpEndpoint, and then re-stream RTSP from Wowza. In the KMS Google group there is a lot of content about integrating the two.
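A rough sketch of the Kurento half of that workaround, using the Node.js kurento-client (the KMS URL is a placeholder, and the SDP exchange with Wowza is only hinted at; configuring Wowza's RTP ingest and RTSP re-streaming is a separate task):

```typescript
import kurento from 'kurento-client';

// Browser publishes into a WebRtcEndpoint; Kurento pushes the media back out
// as plain RTP, which Wowza (or any RTP consumer) can ingest and re-stream.
async function bridgeWebRtcToRtp() {
  const client = await kurento('ws://localhost:8888/kurento'); // placeholder KMS URL
  const pipeline = await client.create('MediaPipeline');

  const webRtc = await pipeline.create('WebRtcEndpoint'); // WebRTC ingest from the browser
  const rtp = await pipeline.create('RtpEndpoint');       // RTP leg towards Wowza
  await webRtc.connect(rtp);

  // The offer describes where Kurento will exchange RTP; the answer has to come
  // back from the Wowza side (how you transport it is up to your application).
  const rtpOffer = await rtp.generateOffer();
  // ...send rtpOffer to Wowza, then: await rtp.processAnswer(wowzaAnswerSdp);

  return { pipeline, webRtc, rtp, rtpOffer };
}
```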

Can Kurento Media Server provide webcam broadcasting like Red5?

I am building a web-based project with a one-way webcam broadcasting part (a user can open their own cam, and viewers can join the room only to view and listen).
I have decided to use Kurento Media Server (KMS) because I have no experience with Flash.
The questions in my head:
Do I need anything extra besides KMS to let a user broadcast their webcam?
Can Kurento provide the live stream to a web page?
And what is the difference between using Red5 and Kurento?
Thanks in advance.
Do I need anything extra besides KMS to let a user broadcast their webcam?
You'll probably need a TURN server for users behind restrictive NATs or firewalls; see the sketch after this answer.
Can Kurento provide the live stream to a web page?
Sure! Check the tutorials and the documentation for a full list of features.
And what is the difference between using Red5 and Kurento?
Kurento is more than just a media server. It is a pluggable platform that offers computer vision and augmented reality capabilities on top of video and audio streaming, recording and playback. It also offers WebRTC out of the box, which is something Red5 can't do as of today.
Disclaimer: I'm part of the Kurento team.
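For the TURN point above, the client side only needs the TURN server listed in the RTCPeerConnection configuration; a minimal sketch (the TURN URL and credentials are placeholders, with coturn being a common choice of server):

```typescript
// Give the peer connection a TURN fallback so users behind restrictive
// NATs or firewalls can still connect. URL and credentials are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    { urls: 'turn:turn.example.com:3478', username: 'webrtc', credential: 'secret' },
  ],
});
```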

What's the difference between HLS (HTTP Live Streaming) and DSS (Darwin Streaming Server)?

I'm a beginner developer, and I don't speak English very well, sorry.
I want to broadcast live video from the iPhone camera, like an iPhone video call.
In this case, which should I choose: HLS or DSS?
In other words, what is the functional difference between HLS and DSS?
Can HLS broadcast live video from an iPhone camera to another iPhone?
Darwin Streaming Server is for RTSP streaming. HLS is a streaming technology based on using an HTTP server to host the content.
iPhone-to-iPhone video isn't well served by either technology. It is possible to use the iPhone camera to capture video, upload it to a server, package it for HLS and serve it to the viewing clients, but this has very high latency (around 10-30 seconds), so it's likely not suitable for you.
If you want 1-to-1 video calling, you're probably better off using a real-time system based on RTP, which is what FaceTime and other video-calling programs use.