Why does the bitrate decrease when streaming via WebRTCBin? - webrtc

I have been experimenting with WebRTC video streaming via GStreamer's WebRTCBin element to a web browser (Google Chrome). I noticed that the bitrate sometimes decreases, to the point of reaching zero, when the source video being streamed doesn't change.
In this example, between 60 and ~90 seconds the bitrate of the video being received drops and stays close to 0. In this time window the video at the source is a loading screen of a game, which doesn't change until the loading bar moves. Also, when the game finishes loading and the gameplay starts again, the bitrate goes back up and then begins to decrease again.
My pipeline uses NVENC as the encoder:
dxgiscreencapsrc cursor=true ! capsfilter caps="video/x-raw,framerate=60/1" ! queue ! nvh264enc bitrate=2250 rc-mode=vbr gop-size=-1 qos=true preset=low-latency-hq ! capsfilter caps="video/x-h264,profile=high" ! queue ! rtph264pay ! capsfilter caps="application/x-rtp,media=video,encoding-name=H264,width=1280,height=720,payload=123"
I was wondering if there is some kind of optimization, either in WebRTCBin, NVENC or the web browser, that prevents repeated bits from being sent over to the client browser. Is that what is happening? Which component is the culprit?
I did some searching online (for WebRTC and WebRTCBin related stuff) and could not find anything that explains the data I have been seeing.
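One way to narrow it down would be to take WebRTCBin and the browser out of the loop and watch what the encoder alone produces while the screen is static. Below is a minimal sketch of that test, assuming the gst1-java-core bindings (the same ones used by the gstwebrtc-demos Java example linked below); it writes the encoded stream to a file whose growth rate shows the encoder's output bitrate, and the rc-mode=cbr / gop-size=60 values are illustrative alternatives to compare against the vbr / gop-size=-1 run, not a verified fix:

    import org.freedesktop.gstreamer.Gst;
    import org.freedesktop.gstreamer.Pipeline;

    public class EncoderBitrateTest {
        public static void main(String[] args) {
            Gst.init("encoder-bitrate-test", args);
            // Same capture/encode chain as above, but ending in a filesink instead
            // of webrtcbin, so WebRTC congestion control cannot influence it.
            // Watching test.h264 grow while the screen is static shows how many
            // bits NVENC alone produces; switch rc-mode/gop-size back to vbr/-1
            // to compare the two rate-control behaviours.
            Pipeline pipe = (Pipeline) Gst.parseLaunch(
                "dxgiscreencapsrc cursor=true ! video/x-raw,framerate=60/1 ! queue"
                + " ! nvh264enc bitrate=2250 rc-mode=cbr gop-size=60 qos=true preset=low-latency-hq"
                + " ! video/x-h264,profile=high ! h264parse ! filesink location=test.h264");
            pipe.play();
            Gst.main();
        }
    }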

Related

Issues with WebRTC/GStreamer video quality

I'm pretty new to streaming and real-time communication. I need to work on a service to play back a camera feed in the browser (and will probably use GStreamer to process the video in the future).
So I followed the hello-world example here: https://github.com/centricular/gstwebrtc-demos/blob/master/sendrecv/gst-java/src/main/java/WebrtcSendRecv.java
This looked great and I got my camera video for the first 10 seconds or so. After 10 seconds, the video quality started to get worse.
By the way, here is my current GStreamer pipeline description (after WebRTCBin):
videoconvert ! queue max-size-buffers=1 leaky=downstream ! vp8enc deadline=1 ! rtpvp8pay mtu=1024 ! queue max-size-buffers=1 leaky=downstream ! capsfilter caps=application/x-rtp,media=video,encoding-name=VP8,payload=120
What could be the reason for that in WebRTC? Could it be latency or just network congestion? Any clue is appreciated!
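For what it's worth, pinning vp8enc's bitrate and keyframe interval explicitly would show whether the encoder or the network path is degrading the quality. A minimal sketch, assuming the gst1-java-core bindings, with autovideosrc standing in for the camera; vp8enc's default target bitrate is quite low (256 kbps, I believe), and the 1.5 Mbps / 60-frame values here are illustrative starting points rather than verified settings:

    import org.freedesktop.gstreamer.Gst;
    import org.freedesktop.gstreamer.Pipeline;

    public class Vp8EncTest {
        public static void main(String[] args) {
            Gst.init("vp8enc-test", args);
            // Same encode chain as above, but with an explicit target-bitrate and
            // keyframe-max-dist, ending in a fakesink so no WebRTC feedback can
            // adapt the encoder; if quality still degrades after 10 seconds here,
            // the encoder is at fault rather than the network.
            Pipeline pipe = (Pipeline) Gst.parseLaunch(
                "autovideosrc ! videoconvert"
                + " ! queue max-size-buffers=1 leaky=downstream"
                + " ! vp8enc deadline=1 target-bitrate=1500000 keyframe-max-dist=60"
                + " ! rtpvp8pay mtu=1024"
                + " ! application/x-rtp,media=video,encoding-name=VP8,payload=120"
                + " ! fakesink");
            pipe.play();
            Gst.main();
        }
    }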

Reduce FreeSWITCH video conference latency

We're experimenting with a FreeSWITCH-based multiparty video conferencing solution (Zoom-like). The users connect via WebRTC (Verto clients) and the streams are all muxed and displayed on the canvas (mod_conference in mux mode). It works OK, but we notice high media latency on the mixed output, which makes it very difficult to have a real-time dialogue. This is not load-related: even with only one caller watching himself on the canvas (the muxed conference output), it takes almost a second for a local move to be reflected on the screen (e.g. if I raise my hand, I see it happen on screen almost a second later). This is obviously the round-trip delay, but after discarding the intrinsic network latency (measured at about 100 ms round trip) there seem to be around 800-900 ms of added latency. There's no TURN relaying involved, so it seems this is being introduced along the buffering/transcoding/muxing pipeline.

Any suggestions on what to try to reduce the latency? What sort of latency should we expect? What's your experience; has anyone deployed FreeSWITCH video conferencing with acceptable latency for bidirectional, real-time conversations? Ultimately I'm trying to understand whether FreeSWITCH can be used for multiparty real-time video conversation, or whether I should give up and look for something else. Thanks!

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this fact, we assumed that it would be safe to “force” the player to play at a bigger delay (15 seconds, further off the live edge), this way achieving the same (constant) delay across all the devices.
We're using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We're constantly comparing the two clocks to check the current latency, pausing the player until it reaches the desired delay, then resuming playback.
The problem with this solution is that the latency can get worse (accumulating delay) over time, and we have no choice other than to restart playback and redo the steps described above whenever the delay steps over a specified threshold. Before restarting the player, we also try slightly increasing the playback speed until the stream reaches the specified delay.
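Roughly, the comparison and speed-nudge described above look like this; a sketch assuming ExoPlayer 2.x, where ntpNowMs() is a placeholder for our NTP-synced clock and the +/-500 ms band and 5% speed steps are illustrative values:

    import com.google.android.exoplayer2.C;
    import com.google.android.exoplayer2.PlaybackParameters;
    import com.google.android.exoplayer2.SimpleExoPlayer;
    import com.google.android.exoplayer2.Timeline;

    // Meant to live wherever the player is owned; call it periodically
    // (e.g. once per second) on the main thread.
    void adjustLiveLatency(SimpleExoPlayer player, long targetDelayMs) {
        Timeline timeline = player.getCurrentTimeline();
        if (timeline.isEmpty()) return;
        Timeline.Window window =
                timeline.getWindow(player.getCurrentWindowIndex(), new Timeline.Window());
        // For HLS, windowStartTimeMs is derived from EXT-X-PROGRAM-DATE-TIME.
        if (window.windowStartTimeMs == C.TIME_UNSET) return;
        long playingAtMs = window.windowStartTimeMs + player.getCurrentPosition();
        long latencyMs = ntpNowMs() - playingAtMs; // ntpNowMs(): placeholder NTP clock
        // Nudge the speed instead of pausing/restarting: slightly faster when we
        // have drifted behind the target delay, slightly slower when ahead of it.
        float speed = latencyMs > targetDelayMs + 500 ? 1.05f
                : latencyMs < targetDelayMs - 500 ? 0.95f
                : 1.0f;
        player.setPlaybackParameters(new PlaybackParameters(speed, 1f));
    }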
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory and DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be whether this is even achievable with a protocol like HLS. Why does the player drift, accumulating more and more delay?
Is there a better setup for the ExoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.

Realtime audio thread issues (iOS)

I have the following setup:
Core Audio Callback -> Circular Buffer -> libLAME -> Circular Buffer -> libShout
This all works fine until I start doing any intensive work, at which point the thread doing the encoding (libLAME) and the thread doing the streaming (libShout) are throttled and things go downhill very quickly (basically the server gets audio every 2-5 seconds rather than every 200 ms or less as it should, and the stream becomes garbled).
I have tried the following:
Encoding and Streaming on the one thread
Encoding and Streaming on their own threads
Setting the priority of the thread(s) to high
Setting the thread(s) to realtime threads (which appears to fix it for the most part except for the fact that everything else is then throttled way too much)
I am pretty much using the stock standard example code for libLAME and libShout, i.e. set them up for the audio format and server, then loop while data is available in the buffers.
What I don't understand is why the threads are being throttled when the CPU usage is maxing out at 80% and the threads aren't blocking whilst waiting on the other thread.
Any help with this would be greatly appreciated.

Streaming audio: how are 10 seconds of audio loaded in 2 seconds?

I don't know if this is the right place to post the question.
Out of curiosity, though: how are 10 seconds of audio loaded in 2 seconds? I could understand it if the audio were uploaded to a file server and the client downloaded it afterwards. But for a live stream that comes over RTSP, I have come up with two possible answers:
Either it loads content that has already been played,
or the internet live stream is behind the real stream...
Anyway, I would like to hear your answers and guidance on this topic. Thanks
It's the second option. If you streamed audio in "real time" without any delay, you would have serious problems whenever the connection is lost or data is delayed, for example by 100 ms: the user would hear nothing for 100 ms, which would be pretty annoying. This especially happens with mobile connections, which have much higher error rates and, while you are moving, a hard time keeping a stable connection.
Usually the actual playback is delayed and the next seconds are buffered. When the connection drops and comes back within the buffered time frame, the user doesn't notice that the connection was lost. In your example, the connection can be lost for up to 8 seconds without any problems.
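To make that concrete, here is a toy sketch of the idea in plain Java threads (chunk sizes and rates are illustrative): the server is behind the live edge, so it can burst already-available audio faster than real time, and the client only starts playing once its delay buffer is full.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BufferedPlayback {
        static final int CHUNK_MS = 200;      // one chunk = 200 ms of audio
        static final int DELAY_MS = 10_000;   // client buffers 10 s before playing

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(DELAY_MS / CHUNK_MS);

            // The "network": the server already has audio queued up, so it can send
            // about 5x faster than real time (10 s of audio arrives in ~2 s). put()
            // blocks once the client buffer is full, throttling the sender back
            // down to the playback rate, much like TCP backpressure would.
            Thread receiver = new Thread(() -> {
                try {
                    while (true) {
                        buffer.put(new byte[1024]); // stand-in for a network read
                        Thread.sleep(CHUNK_MS / 5);
                    }
                } catch (InterruptedException ignored) { }
            });
            receiver.setDaemon(true);
            receiver.start();

            // Playback starts only once the delay window is full; an outage shorter
            // than DELAY_MS just drains the queue without interrupting the listener.
            while (buffer.remainingCapacity() > 0) Thread.sleep(50);
            while (true) {
                buffer.take();          // stand-in for handing audio to the output
                Thread.sleep(CHUNK_MS); // consume at the real-time playback rate
            }
        }
    }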