How to keep streaming if initial client loses connection? - agora.io

I'm working on an app that streams out multiple presenters via the Agora Live Streaming protocol. Everything works great as long as the person who started the live stream stays connected; however, if they lose internet, the stream stops, even if other presenters are still online.
Is there a way to tell the live stream to keep going until "stop live streaming" is called (or all presenters are offline)? My code can handle updating the transcoding config (e.g. video layout) when they go offline.

After multiple discussions with Agora Support, it appears the answer is no if you are only using the Web SDK; however, they are introducing a new server-side feature to make this possible.
It's currently in beta, so you'll have to ask Agora Support to enable it for your account, but once you've done so you can create and update an RTMP converter via their server-side API instead of relying on the client SDK to manage the stream: https://docs-preprod.agora.io/en/Interactive%20Broadcast/streaming_restful
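For a rough idea of the shape of this, here is a minimal sketch (Node 18+, so global fetch is available). The endpoint path, body fields, and auth scheme are assumptions based on Agora's Media Push REST docs, not guaranteed values; verify everything against the page linked above.

```typescript
// Hedged sketch: creating a server-side RTMP converter so the CDN stream
// survives any single client disconnecting. Endpoint path and body fields
// are assumptions; check the linked docs for the real contract.
const APP_ID = "<your-app-id>";
const AUTH =
  "Basic " + Buffer.from("<customerKey>:<customerSecret>").toString("base64");

async function createRtmpConverter(): Promise<void> {
  const res = await fetch(
    `https://api.agora.io/v1/projects/${APP_ID}/rtmp-converters`, // assumed path
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: AUTH },
      body: JSON.stringify({
        converter: {
          rtcChannel: "my-channel",                      // channel to pull from
          rtmpUrl: "rtmp://cdn.example.com/live/stream", // your CDN ingest URL
          // transcoding/layout options would go here; send a later update
          // request when presenters join or leave
        },
      }),
    }
  );
  if (!res.ok) throw new Error(`converter create failed: ${res.status}`);
}
```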

I'm assuming you're using the startLiveStreaming method of the Agora Web SDK. You can attach event listeners on all hosts to watch the primary host's online status; if the primary host (the host that called the start method) goes offline, a secondary host can call the start (and transcoding) method.
You can also use Agora RTM to signal this status.
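As a concrete illustration, here is a minimal failover sketch against the Agora Web SDK 3.x names (peer-leave, setLiveTranscoding, startLiveStreaming). The primary host UID, the publish URL, and the buildTranscodingConfig helper are all hypothetical.

```typescript
import AgoraRTC from "agora-rtc-sdk"; // Web SDK 3.x

const PRIMARY_UID = 1; // assumed UID of the host that started the stream
const PUBLISH_URL = "rtmp://cdn.example.com/live/stream"; // hypothetical

const client = AgoraRTC.createClient({ mode: "live", codec: "h264" });

// Run this on every backup host. When the primary host drops, this client
// takes over publishing the CDN stream.
client.on("peer-leave", (evt: { uid: number | string }) => {
  if (evt.uid !== PRIMARY_UID) return;
  client.setLiveTranscoding(buildTranscodingConfig()); // your layout logic
  client.startLiveStreaming(PUBLISH_URL, true); // true = with transcoding
});

// Hypothetical helper: returns a LiveTranscoding config for the remaining hosts.
declare function buildTranscodingConfig(): any;
```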

Related

How do apps like WhatsApp or Telegram listen for incoming call/message events on Android?

I built a VoIP calling app which maintains a persistent connection with the server to listen to any incoming calls. I implemented a background service to do this.
But since Oreo, this code is broken because of the introduction of Background Execution Limits.
After looking into forums, I found that people are suggesting two things:
Convert the Service to a JobService and let Android schedule it. Doing so, my app won't be able to receive calls while it is stopped.
Run the operations in a foreground service. It is annoying for some users to see a constant notification in the notification bar.
So neither of the above options works for fixing my code on Oreo.
How does WhatsApp get the incoming (VOIP) call in Android (Oreo onwards) working around the Background Execution Limits?
(Sticky) foreground services are not affected by the restrictions, so you could use one of those as a replacement for background services on Oreo.
But foreground services have two disadvantages. First, they are less likely to be killed by the system to reclaim resources than background services are, which weakens the Android system's self-healing capability. Second, they require you to display a permanent notification, although users are able to suppress that notification, somewhat mitigating this disadvantage.
I am assuming that you are using SIP to establish the connection and initiate calls. Without a service constantly re-sending REGISTERs, the app doesn't receive INVITEs when the server sends them.
A workaround for this problem is the so-called "push notification strategy". It works as follows: when the server has an INVITE for your app, it also sends an FCM notification to the app. This wakes the app, which then sends a REGISTER to your server, and the server in turn forks the call to your app. There is a video that explains this strategy in more detail.
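For the server side of that strategy, a minimal sketch with the firebase-admin Node SDK could look like this; the token registry and the SIP wiring around it are hypothetical, only the FCM call itself is the library's real API.

```typescript
import * as admin from "firebase-admin";

admin.initializeApp(); // uses GOOGLE_APPLICATION_CREDENTIALS

// Called by your SIP server when an INVITE arrives for an app that has no
// live registration: wake the device first, fork the call once it re-REGISTERs.
async function wakeCallee(fcmToken: string, caller: string): Promise<void> {
  await admin.messaging().send({
    token: fcmToken,
    android: { priority: "high" },           // high priority to break through Doze
    data: { type: "incoming_call", caller }, // data-only message; the app
                                             // decides how to surface the call
  });
}
```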
There are two options:
use platform push services (APNS or FCM)
maintain a persistent socket connection and exclude the application from battery optimisations.

Is it possible to save a video stream between two peers in WebRTC on the server, in real time?

Suppose I have 2 peers exchanging video with WebRTC. Now I need both of the streams to be saved as video files on a central server. Is it possible to do this in real time? (Storing/uploading the video from the peers is not an option.)
I thought of making a 3-node WebRTC connection, with the 3rd node being the server. This way, I can screen-record the 3rd node's stream or save it some other way. But I am not sure about the reliability/feasibility of this implementation.
This is for a mobile application, and I would avoid any method that involves uploading/saving.
PS: I'm using Agora.io for the purpose of video-conference.
In my opinion, you can do it like the recording demo: https://webrtc.github.io/samples/src/content/getusermedia/record/.
Record each stream to Blobs and push them to your server over a WebSocket. Then convert the blobs into a WebM file on the server, or just feed them into a video element.
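A minimal browser-side sketch of that approach, using the standard MediaRecorder and WebSocket APIs; the server endpoint is hypothetical.

```typescript
function streamRecordingToServer(stream: MediaStream): MediaRecorder {
  const ws = new WebSocket("wss://example.com/recordings"); // hypothetical endpoint
  const recorder = new MediaRecorder(stream, {
    mimeType: "video/webm;codecs=vp8",
  });

  recorder.ondataavailable = (e: BlobEvent) => {
    // Each chunk is a continuation of one WebM file; the server just appends.
    if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
  };

  recorder.start(1000); // emit a chunk roughly every second
  return recorder;      // call .stop() when the session ends
}
```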
Agora doesn't offer on-premise recording out of the box, but they do provide the code for you to launch your own on-premise recording using your own server. Agora has the code and instructions to deploy on GitHub: https://github.com/AgoraIO/Basic-Recording
The way it works: once you have set up the Agora Recording SDK, the client triggers the recording to start, either via user interaction (a button tap) or some other event (e.g. peer-joined or stream-subscribed). This triggers the recording service to join the channel and record the streams. The service outputs the video file once recording has stopped.
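The trigger itself is just ordinary app logic. A hedged sketch, where the backend endpoint that spawns the Agora recorder process is hypothetical:

```typescript
// The client never talks to the Recording SDK directly: it asks your backend
// (hypothetical endpoint) to launch the recorder, which joins the channel as
// a silent peer and writes the files server-side.
declare const client: any; // your Agora Web SDK client instance

async function startRecording(channel: string): Promise<void> {
  await fetch("https://example.com/api/recordings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ channel }),
  });
}

// e.g. start recording as soon as the first remote stream appears:
client.on("stream-subscribed", () => startRecording("my-channel"));
```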
You need a WebRTC media server.
WebRTC media servers make it possible to support more complex scenarios. They are servers that act as WebRTC clients but run on the server side: termination points for the media where we'd like to take action. Popular tasks done on WebRTC media servers include:
Group calling
Recording
Broadcast and live streaming
Gateway to other networks/protocols
Server-side machine learning
Cloud rendering (gaming or 3D)
The adventurous and strong-hearted will go and develop their own WebRTC media server. Most will pick a commercial service or an open source one; for the latter, check out these tips for choosing a WebRTC open source media server framework.
In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide whether to go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.
For recording WebRTC sessions, you can do that on either the client side or the server side. In both cases you'll need a server, but what that server is and how it works will be very different in each case.
If it is broadcasting you're after, then you need to think about the broadcast size of your WebRTC session.
Link: https://bloggeek.me/webrtc-server/

Adobe Media Server Alternative for VideoChat

I currently have a video chat app working on the web (Flash) and on Android via Adobe AIR. It uses Adobe Media Server (RTMP) as the backend for video streaming and shared objects. My question is whether there is another server or solution that provides many-to-many live video broadcast, ideally using the H.264 codec, from Android and iOS, with some sort of user list and room list stored in a database or similar. I want to move away from Adobe as it has many limitations on mobile devices.
Live video is crucial in 1 to many broadcasts that will have hundreds of viewers at the same time.
Thanks for reading!
Ulex.fr created an RTMP connector for Asterisk (the free PBX platform).
Used with the Asterisk conference application, it allows you to create conference rooms for one-to-many configurations, with audio and video. The only limitation is the power of your server; you can plan a scalable architecture in order to broadcast one video to many (where "many" could be unlimited). We developed a specific protocol to connect and manage the connection based on telephony events, and I think we have already done a direct RTMP connection that skips this protocol too.
All the projects done by ulex.fr are free, open source, and GPL-licensed.
Get the full project here : https://github.com/voximal/asterisk-rtmp
(a live demo is available)
We have already developed an RTMP stack for Android with video (using the camera), which allows you to create your own application without using AIR.
You can check Adobe Cirrus; it's still in the beta stage (actually, IMHO, Adobe forgot about it), but it works on web, desktop, and mobile too. Check the Video Phone example; it can handle chat applications without a problem.
http://labs.adobe.com/technologies/cirrus/samples/
You could take a look at Red5 Media Server, which is an open source solution. There are other options like Wowza's solutions on AWS, but they come at a higher cost...
OK, as of today we have decided that we can manage the users, rooms, and messages via the Google Firebase Realtime Database, and the live video stream using Ant Media Server.

VoIP App development in Xamarin with XMPP Server

I want to develop a VoIP app with Xamarin and an XMPP server.
So far the only things that I have found are Openfire and "jitsi meet" for the server side, and Matrix for the client side. But Matrix has nothing to do with voice streaming and is just for text messaging, and "jitsi meet" doesn't have any SDK for the .NET client side.
I have also found red5pro, but it has client SDKs only for the native Android and iOS development platforms and nothing for Mono.
So what should I look for?
First, let's clarify some basics:
openfire is an XMPP server. Basically, this is all you need on the server side for basic VoIP support.
Alternatives include ejabberd and Prosody.
jitsi meet essentially already is a VoIP app, so if you want to develop your own, you don't really need that.
"Jitsi Videobridge", on the other hand, can be used to provide a relay server for video conferences. For your first steps with a simple VoIP app, you won't need that either, but if you want your users to be able to create video conferences with many participants, then it helps.
(Explanation: Normally, when you create a P2P video conference, you have two options. First, all users send their video data to all participants (everybody needs lots of bandwidth). Or, second, you pick one participant ("host") that receives the video streams of every participant and sends them on to every other participant. In the second case, a normal participant only has to upload their stream once and download n streams, whereas the host does most of the work, so only that one user needs high bandwidth. With five participants, for example, a full mesh means every client uploads 4 streams, while with a host each normal participant uploads only 1.
Jitsi Videobridge can run on a server and act as this conference host (a server usually has much better bandwidth than a home user), so that none of the participants has to act as the host.
In simple VoIP applications (without video), this may not be necessary, as audio streams are usually much smaller than video streams.)
Now, as I said above, in order to write a VoIP app, you basically only need an XMPP server (openfire, prosody, and ejabberd should all be sufficient for this use case), a client library that supports Jingle, and client libraries for the RTP media streams (transfer and display).
Jingle is the name of an XMPP protocol extension that enables the negotiation of the P2P data streams needed for a VoIP call.
The relevant protocol specifications:
XEP-0166: Jingle
XEP-0167: Jingle RTP Sessions
So what you need to find is an XMPP library with support for the Jingle protocol. The C# Matrix XMPP SDK (not to be confused with the "Matrix protocol", which is a different protocol and has nothing to do with XMPP except for having a common goal) is one example of such a library. According to their web site, there is support for Jingle, but I couldn't find any documentation about it.
However, as I mentioned above, Jingle is only about how to negotiate data streams, not about the data streams and VoIP itself.
So what that library probably helps you with is the parsing of the Jingle XMPP messages that are needed to set up an RTP data stream.
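To make that division of labour concrete, here is a hedged sketch of sending a bare Jingle session-initiate, built with the @xmpp/client Node library (I can't vouch for the C# API, but a .NET library would emit the equivalent stanza). The JIDs, ids, and omitted child elements are illustrative; see XEP-0166/0167 for the full payload.

```typescript
import { client, xml } from "@xmpp/client";

const xmpp = client({
  service: "xmpp://xmpp.example.com:5222", // hypothetical server
  username: "alice",
  password: "secret",
});

async function initiateCall(calleeJid: string): Promise<void> {
  await xmpp.start(); // in real code, connect once at startup, not per call
  await xmpp.send(
    xml(
      "iq",
      { to: calleeJid, type: "set", id: "call-1" },
      xml(
        "jingle",
        {
          xmlns: "urn:xmpp:jingle:1", // XEP-0166 namespace
          action: "session-initiate",
          initiator: "alice@example.com/phone",
          sid: "a73sjjvkla37jfea", // random session id
        },
        // A real call adds <description/> (XEP-0167, RTP params) and
        // <transport/> (e.g. ICE-UDP) children here, plus handling of the
        // peer's session-accept; omitted for brevity.
        xml("content", { creator: "initiator", name: "voice" })
      )
    )
  );
}
```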
For displaying and transferring the RTP stream, however, you need additional libraries. For that, have a look at the following SO questions and answers:
Open Source .net C# library for Real Time transport Protocol
Streaming Avi files from C# using RTP
I hope I could give you some useful hints...

Is it possible to use WebRTC to stream video from Server to Client?

In WebRTC, I always see implementations of peer-to-peer and how to get video streaming from one client to another client. What about server-to-client?
Is it possible for WebRTC to stream a video file from server to client?
(I am thinking about using the WebRTC native C++ API to create my own server application that connects to the current implementation in the Chrome or Firefox browser client.)
OK, if it is possible, will it be faster than many current video streaming services?
Yes, it is possible, as the server can be one of the peers in that peer-to-peer session.
If you respect the protocols and send the video in SRTP packets using VP8, the browser will play it. To help you build these components in other applications or servers, you can check this page and this project as a guide.
Now, comparing WebRTC with other streaming services... it will depend on several variables like the codec or the protocol. But, for instance, comparing WebRTC (SRTP over UDP with the VP8 codec) against Flash (RTMP over TCP with the H.264 codec), I would say that WebRTC wins.
The player will be Flash Player against the native <video> tag.
The transport would be TCP against UDP.
But of course, everything depends on what you are sending to the client.
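On the client, nothing changes compared to a normal call. Here is a minimal browser-side sketch with standard WebRTC APIs, where the HTTP signalling endpoint of the server peer is hypothetical:

```typescript
async function playFromServer(video: HTMLVideoElement): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" }); // we only receive
  pc.ontrack = (e: RTCTrackEvent) => {
    video.srcObject = e.streams[0];
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hypothetical signalling: POST our offer, receive the server peer's answer.
  const res = await fetch("https://example.com/webrtc/offer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pc.localDescription),
  });
  await pc.setRemoteDescription(await res.json());
}
```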
I have written some apps and plugins using the native WebRTC API, and there isn't a lot of information out there yet, but here are a few useful resources to get you started:
QT Example: http://research.edm.uhasselt.be/jori/qtwebrtc
Native to Browser example: http://sourcey.com/webrtc-native-to-browser-video-streaming-example/
I started with the WebRTC Native C++ to Browser Video Streaming Example, but it does not build anymore with the current WebRTC native code.
Then I made modifications, merging everything into a standalone process that handles:
management of the PeerConnection (the peerconnection_server)
access to Video4Linux capture (the peerconnection_client).
Removing the stream direction from the browser to the WebRTC Native C++ client gives a simple solution for accessing a Video4Linux device through a WebRTC browser; it is available on GitHub as webrtc-streamer.
Live Demo
We are attempting to replace MJPEGs with WebRTC for our server software and have a prototype module for doing this using a smattering of components tied to the OpenWebRTC project. It has been an absolute bear to do, and we have frequent ICE negotiation errors (even over a simple LAN), but it mostly works.
We also built a prototype with the Google WebRTC module, but it had many dependencies. I find it easier to work with the OpenWebRTC modules because Google's stuff is so tightly tied to general peer-to-peer scenarios in the browser.
I compiled the following from scratch:
libnice 0.1.14
gstreamer-sctp-1.0
usrsctp
Then I have to interact with libnice a bit directly to gather candidates, and I also have to write out the SDP files by hand. But the amount of control, being able to control the source of the pipeline, makes it worthwhile. The resulting pipeline serves two clients off one server source.
Of course. I'm writing a program using the native WebRTC API which can join the conference as a peer and record both video and audio.
See: How to stream audio from browser to WebRTC native C++ application.
You can definitely stream media from a native app. I'm sure you can use dummy_audio_file to stream audio from a local file, and you can find your own way to access the video streaming pipeline.
Yes, it is. We have developed a load-testing tool for Ant Media Server that can publish and play streams. This tool can broadcast a media file. We used the same native WebRTC library that is used in Ant Media Server.
Sure it's possible; a server can convert a live stream to WebRTC, for example:
OBS/FFmpeg ---RTMP---> Server ---WebRTC--> Chrome/Client
This scenario allows ultra-low-latency live streaming, about 600~800 ms, by playing the live stream over WebRTC. Please take a look at this demo.