Increase WebRTC frame rate to allow virtual desktop view

Is it possible to increase the WebRTC frame rate during screen sharing to allow usable viewing of a virtual desktop in VR? Current testing, with the request set to 30, shows the frame rate at about 20 fps for a desktop-to-desktop connection, 17 fps in A-Frame, and 13 fps when connecting to an Oculus Quest 2. At those speeds, the mouse, controlled by the source computer, lags behind its actual position in the shared screen view just enough to make it very difficult to use. Here is the current code used to try to set the frame rate:
var displayMediaOptions = {
  video: {
    frameRate: 30
  }
};
window.displayMediaStream = await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
I also tried minFrameRate and increasing the bit rate in the peer connection, per other posts, to no effect. Most of the posts discuss how to reduce the bit rate, and some, for example https://github.com/ant-media/Ant-Media-Server/wiki/How-to-improve-WebRTC-bit-rate%3F, recommend 10-20 fps as the optimal frame rate. Can the frame rate be forced higher if necessary without breaking everything, or is another solution needed? Other virtual desktop solutions require a native app and/or a cable link to the source computer - is that the solution?

WebRTC implements congestion control: it dynamically probes the network, and determines a rate that is safe to use. If the probed rate is too low, it will reduce the frame rate, reduce the resolution, or reduce the video quality.
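In browsers that report it, the sender's outbound-rtp stats show what the rate control is actually doing; a rough sketch of how to check whether the frame rate is being held down by bandwidth or CPU rather than by the capture constraint, assuming an existing RTCPeerConnection named pc (a hypothetical name, not from the question):
const stats = await pc.getStats();
stats.forEach(report => {
  if (report.type === 'outbound-rtp' && report.kind === 'video') {
    // qualityLimitationReason is 'none', 'bandwidth', 'cpu' or 'other'
    console.log('encoded fps:', report.framesPerSecond,
                'limited by:', report.qualityLimitationReason);
  }
});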
Short of using a faster network, there are three ways to increase the frame rate at the cost of resolution or quality (a combined sketch follows the list):
you may decrease the capture resolution by passing the video.height and video.width constraints to getDisplayMedia (or getUserMedia);
you may request that the video be downsampled by setting scaleResolutionDownBy in the encodings passed to RTCRtpSender.setParameters;
you may request that the rate control sacrifice resolution and detail rather than frame rate by setting the capture track's contentHint to 'motion' (MediaStreamTrack.contentHint).
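A rough sketch combining the three, assuming an existing, already-negotiated RTCPeerConnection named pc; the 720p capture size, the 2x downscale and the 30 fps cap are illustrative values, not recommendations:
// Capture the screen at a modest resolution but ask for 30 fps.
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: { width: 1280, height: 720, frameRate: 30 }
});
const [track] = stream.getVideoTracks();

// Hint that smooth motion matters more than per-frame detail,
// so the encoder drops resolution/quality before frame rate.
track.contentHint = 'motion';

const sender = pc.addTrack(track, stream);

// Encode at half the capture size and cap the frame rate; the saved
// bits give the congestion controller room to keep the frame rate up.
// (Assumes the sender already has at least one encoding.)
const params = sender.getParameters();
params.encodings[0].scaleResolutionDownBy = 2;
params.encodings[0].maxFramerate = 30;
await sender.setParameters(params);
Whether the encoder actually reaches 30 fps still depends on what the congestion controller decides the network can carry.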

Related

Reduce Freeswitch video conference latency

We're experimenting with a Freeswitch-based multiparty video conferencing solution (Zoom-like). The users connect via WebRTC (Verto clients) and the streams are all muxed and displayed on the canvas (mod_conference in mux mode). It works OK, but we notice high media latency on the mixed output, and this makes it very difficult to have a real-time dialogue. This is not load related: even with only 1 caller watching himself on the canvas (the muxed conference output), it takes almost 1 second to see a local move reflected on the screen (e.g. if I raise my hand I can see it happening on the screen after almost 1 second). This is obviously the round-trip delay, but after discounting the intrinsic network latency (measured to be about 100 ms round trip) there seems to be around 800-900 ms of added latency. There's no TURN relaying involved. It seems this is being introduced along the buffering/transcoding/muxing pipeline.
Any suggestions on what to try to reduce the latency? What sort of latency should we expect - what's your experience, has anyone deployed a Freeswitch video conference with acceptable latency for bidirectional, real-time conversations? Ultimately I'm trying to understand whether Freeswitch can be used for multiparty real-time video conversation or whether I should give up and look for something else. Thanks!

Stream html5 camera output

Does anyone know how to stream HTML5 camera output to other users?
If that's possible, should I use sockets and stream images to the users, or some other technology?
Is there any video tutorial where I can take a look at this?
Many thanks.
The two most common approaches now are most likely:
stream from the source to a server, and allow users to connect to the server to stream to their devices, typically using some form of Adaptive Bit Rate (ABR) streaming protocol - basically, multiple bit-rate versions of your content are created and chunked, so the client can choose the next chunk at the best bit rate for the device and current network conditions;
stream peer to peer, or via a conferencing hub, using WebRTC.
In general, the latter is more focused on real time: any delay should stay below the threshold that would interfere with audio and video conferences, usually less than 200 ms for audio. To achieve this it may sometimes have to sacrifice quality, especially video quality.
There are some good WebRTC samples available online (here at the time of writing): https://webrtc.github.io/samples/
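For the WebRTC route, a minimal sketch of the sending side (signalling is up to you; sendToPeer below is a placeholder for whatever channel, e.g. a WebSocket, you use to exchange the offer/answer and ICE candidates):
// Capture the camera and send it over a peer connection.
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

stream.getTracks().forEach(track => pc.addTrack(track, stream));

pc.onicecandidate = e => {
  if (e.candidate) sendToPeer({ candidate: e.candidate }); // via your signalling channel
};

const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
sendToPeer({ sdp: pc.localDescription }); // the remote peer answers and both sides set descriptions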

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this fact, we assumed that it would be safe to “force” the player to play at a bigger delay (15 seconds, further off the live edge), this way achieving the same (constant) delay across all the devices.
We’re using EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content and we also have a master clock with the current time (NTP). We’re constantly comparing the 2 clocks to check the current latency. We’re pausing the player until it reaches the desired delay, then we’re resuming the playback.
The problem with this solution is that the latency might get worse (accumulating delay) over time, and if the delay gets too big (steps over a specified threshold) we have no choice but to restart the playback and redo the steps described above. Before restarting the player we also try to slightly increase the playback speed until it reaches the specified delay.
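As a rough illustration only, the control loop described above might look like this in JavaScript (player, getNtpTimeMs and getProgramDateTimeMs are hypothetical placeholders, not ExoPlayer APIs; the 15-second target is from the question, the drift threshold and catch-up speed are made up):
// Hypothetical control loop: hold playback a fixed distance behind the live edge.
const TARGET_DELAY_MS = 15000;  // desired constant delay (from the question)
const MAX_DRIFT_MS = 2000;      // assumed threshold before correcting

setInterval(() => {
  // latency = wall-clock "now" minus the EXT-X-PROGRAM-DATE-TIME of the playing content
  const latencyMs = getNtpTimeMs() - getProgramDateTimeMs();

  if (latencyMs < TARGET_DELAY_MS) {
    player.pause();                 // fall further behind until the target delay is reached
  } else if (latencyMs > TARGET_DELAY_MS + MAX_DRIFT_MS) {
    player.setPlaybackSpeed(1.05);  // catch up gradually instead of restarting
    player.play();
  } else {
    player.setPlaybackSpeed(1.0);
    player.play();
  }
}, 1000);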
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be whether this is even achievable with a protocol like HLS. Why is the player drifting away, accumulating more and more delay?
Is there a better setup for the ExoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.

How to limit the frame rate in Vulkan

I know that the present mode of the swap chain can be used to sync the frame rate to the refresh rate of the screen (with VK_PRESENT_MODE_FIFO_KHR for example).
But is there a way of limiting the frame rate to a fraction of the monitor refresh rate? (e.g. I want my application to run at 30 FPS instead of 60.)
In other words, is there a way of emulating what wglSwapIntervalEXT(2) does for OpenGL?
Vulkan is a low-level API. It tries to give you the tools you need to build the functionality you want.
As such, when you present an image, the API assumes that you want the image presented as soon as possible (within the restrictions of the swapchain). If you want to delay presentation, then you delay presentation. That is, you don't present the image until it's near the time to present a new image, based on your own CPU timings.

WebRTC video and photo at same time

I'm working on an application that transmits video in low quality using WebRTC. Periodically I want to send a single high-resolution frame from the same camera.
When I try to acquire another stream using getUserMedia, I get the same low-quality one, and when I try to pass constraints to force a higher resolution, the operation fails with an OverconstrainedError (even though it works fine when there is no other stream).
Is it even possible to have multiple streams with different parameters from the same device at the same time? Or is it possible to acquire a high-resolution image without requesting a new stream?