Add delay with WebRTC?

Can you add delay with WebRTC? I want to set up video-conferencing with a constant and consistent delay, for example 4s each way. I'd want the client to buffer the video and audio so that if you say something and the other person replies instantly on hearing it, you hear their response exactly 8s after you first spoke (4s out plus 4s back).

Buffering is possible in WebRTC (see "Is buffering possible in WebRTC?"), but unfortunately not to the level that you want.
It might be possible to do this as an effect with WebAudio, but then you won't have video.
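For the audio half, a minimal sketch of that idea, assuming `remoteStream` came from an RTCPeerConnection "track" event (any audio element attached to the live stream should be muted so you don't hear both copies):

```typescript
// Delay the *audio* of a remote WebRTC stream by 4 s with the
// Web Audio API. The video still arrives live, as noted above.
const DELAY_SECONDS = 4;

function playDelayedAudio(remoteStream: MediaStream): AudioContext {
  const ctx = new AudioContext(); // may need a user gesture to start

  // Feed the remote stream into the audio graph.
  const source = ctx.createMediaStreamSource(remoteStream);

  // A DelayNode buffers samples; the argument sets maxDelayTime.
  const delay = ctx.createDelay(DELAY_SECONDS);
  delay.delayTime.value = DELAY_SECONDS;

  source.connect(delay);
  delay.connect(ctx.destination); // played 4 s behind the live feed
  return ctx;
}
```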

Related

Reduce Freeswitch video conference latency

We're experimenting with a Freeswitch-based multiparty video conferencing solution (Zoom-like). The users connect via WebRTC (Verto clients) and the streams are all muxed and displayed on the canvas (mod_conference in mux mode). It works OK, but we notice high media latency on the mixed output, and this makes it very difficult to have a real-time dialogue.

This is not load related: even with only 1 caller watching himself on the canvas (the muxed conference output), it takes almost 1 second for a local movement to be reflected on the screen (e.g. if I raise my hand, I see it happen on screen after almost 1 second). This is obviously the round-trip delay, but after discarding the intrinsic network latency (measured at about 100 ms round trip) there seem to be around 800-900 ms of added latency. There's no TURN relaying involved, so it seems this is being introduced along the buffering/transcoding/muxing pipeline.

Any suggestions on what to try to reduce the latency? What sort of latency should we expect? What's your experience - has anyone deployed a Freeswitch video conference with acceptable latency for bidirectional, real-time conversations? Ultimately I'm trying to understand whether Freeswitch can be used for multiparty real-time video conversation, or whether I should give up and look for something else. Thanks!

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all viewers of this stream at the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this, we assumed it would be safe to “force” the player to play at a larger delay (15 seconds, further from the live edge), thereby achieving the same (constant) delay across all devices.
We're using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We constantly compare the two clocks to check the current latency. We pause the player until it reaches the desired delay, then resume playback.
The problem with this solution is that the latency can get worse (accumulating delay) over time, and we have no choice but to restart playback and redo the steps above if the delay steps over a specified threshold. Before restarting the player we also try to slightly increase the playback speed until the stream is back at the specified delay.
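Roughly, our correction loop looks like the sketch below (written in TypeScript for brevity; the real code runs against ExoPlayer's Java API, and getNtpNow() plus the player interface here are simplified placeholders):

```typescript
const TARGET_DELAY_MS = 15_000; // desired distance from the live edge
const TOLERANCE_MS = 500;       // acceptable drift before correcting

// Placeholder: substitute a clock corrected against the NTP master.
const getNtpNow = (): number => Date.now();

interface DelayablePlayer {
  getProgramDateTime(): number; // EXT-X-PROGRAM-DATE-TIME of the current frame, ms
  setPlaybackSpeed(rate: number): void;
  pause(): void;
  play(): void;
}

function correctDrift(player: DelayablePlayer): void {
  const latencyMs = getNtpNow() - player.getProgramDateTime();
  const driftMs = latencyMs - TARGET_DELAY_MS;

  if (Math.abs(driftMs) <= TOLERANCE_MS) {
    player.setPlaybackSpeed(1.0);  // on target: play normally
  } else if (driftMs > 0) {
    player.setPlaybackSpeed(1.05); // too far behind: catch up gently
  } else {
    player.pause();                // too close to live: wait it out
    setTimeout(() => player.play(), -driftMs);
  }
}
```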
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question is whether this is even achievable with a protocol like HLS. Why does the player drift away, accumulating more and more delay?
Is there a better setup for the ExoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.

NAudio: Recording Audio-Card's Actual Output

I successfully use WasapiLoopbackCapture() to record audio played on the system, but I'm looking for a way to record what the user actually hears through the speakers.
I'll explain: if a certain application plays music, WASAPI loopback will capture the music samples even if the Windows main volume control is set to 0, i.e. even if no sound is actually heard through the audio card's output jack (speakers/headphones/etc.).
I'd like to intercept the audio actually "reaching" the output-jack (after ALL mixers on the audio-path have "done their job").
Is this possible using NAudio (or other infrastructure)?
A code sample, or a link to one, would come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
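As a rough illustration of that scaling step (sketched in TypeScript purely to show the math; in a real NAudio application this would run inside the C# DataAvailable handler, and the endpoint may apply volume on a non-linear curve, so a plain linear multiply is only an approximation):

```typescript
// Approximate "what the user hears" by scaling each loopback
// sample by the mixer's master volume (0.0 - 1.0).
function applyMasterVolume(
  samples: Float32Array, // one buffer of float PCM from the loopback
  masterVolume: number   // scalar volume read from the endpoint mixer
): Float32Array {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * masterVolume; // linear-gain approximation
  }
  return out;
}
```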

Streaming audio: how are 10 seconds of audio loaded in 2 seconds?

I don't know if this is the right place to post the question.
However, out of curiosity: how are 10 seconds of audio loaded in 2 seconds? I could understand it if the audio were uploaded to a file server and the client downloaded it afterwards. But for a live stream that comes over RTSP, I have heard two answers:
either it loads content that has already been played,
or the internet live stream is behind the real stream...
Anyway, I would like to hear your answers and guidance on this topic. Thanks
It's the second option. If you streamed audio in "real time" without any delay, you would have serious problems when the connection is lost or data is delayed, for example, for 100ms. Then the user wouldn't hear anything for 100ms, which would be pretty annoying. This especially happens with mobile connections, which have much higher error rates and, while you are moving, have a hard time keeping a stable connection.
Usually the actual playback is delayed and the next seconds are buffered. When the connection drops and comes back within the buffered time frame, the user doesn't notice that the connection was lost. In your example the connection can be lost for up to 8 seconds without any problems.
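The arithmetic behind that last sentence, as a tiny sketch:

```typescript
// How long can the connection drop before playback stalls?
function survivableOutageSeconds(
  secondsBuffered: number, // audio downloaded ahead of playback
  secondsPlayed: number    // audio already consumed from the buffer
): number {
  return Math.max(0, secondsBuffered - secondsPlayed);
}

// 10 s fetched during the first 2 s of playback: the stream can
// drop for up to 8 s without the listener noticing.
console.log(survivableOutageSeconds(10, 2)); // -> 8
```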

Flash media server delayed streaming

I have a RED5 server that I'm using to pass live streams between users' cameras.
What I need now is a way to create a delayed broadcast of the camera (intended delay) so that "super users" will be able to see it immediately and others will get it 10-15 seconds later.
If FMS is better for this, I'd be happy to know why and how, too.
Any help will be appreciated.
I found a way of doing this and posted it here:
http://rialog.blogspot.com/2010/04/delayed-stream-using-fms-for-semi-live.html
Please note that the code in that post is pseudocode only.
Decrease the bit rate of the output video depending on the speed of the connection.