Does WebRTC support Adaptive Bitrate Streaming for video?

I am using WebRTC for developing one of my applications.
It is not clear to me whether WebRTC natively supports adaptive bitrate streaming of video. Do VP8 / VP9 have adaptive bitrate encoding support? Is bitrate_controller WebRTC's implementation of ABR?
Can anyone shed more light on this? I can find no conclusive evidence that WebRTC natively supports adaptive streaming for video.

Based on the WebRTC documentation at https://hpbn.co/webrtc/#audio-opus-and-video-vp8-bitrates, I found this:
When requesting audio and video from the browser, pay careful attention to the size and quality of the streams. While the hardware may be capable of capturing HD quality streams, the CPU and bandwidth must be able to keep up! Current WebRTC implementations use Opus and VP8 codecs:
The Opus codec is used for audio and supports constant and variable bitrate encoding and requires 6–510 Kbit/s of bandwidth. The good news is that the codec can switch seamlessly and adapt to variable bandwidth.
The VP8 codec used for video encoding also requires 100–2,000+ Kbit/s of bandwidth, and the bitrate depends on the quality of the streams:
720p at 30 FPS: 1.0~2.0 Mbps
360p at 30 FPS: 0.5~1.0 Mbps
180p at 30 FPS: 0.1~0.5 Mbps
As a result, a single-party HD call can require up to 2.5+ Mbps of network bandwidth. Add a few more peers, and the quality must drop to account for the extra bandwidth and CPU, GPU, and memory processing requirements.
So, as far as I understand it, both codecs will adapt the audio and video streams to the available bandwidth. Hope this helps.
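To add to this: the adaptation happens inside the WebRTC stack itself (the bandwidth estimator / bitrate controller drives the VP8/VP9 encoder target), so you don't implement ABR yourself; you can only observe it and put a ceiling on it. Below is a minimal sketch, assuming an already-negotiated RTCPeerConnection named pc with one video sender; the 1 Mbit/s cap and the 2-second polling interval are arbitrary illustration values.

// Cap the video send bitrate; the built-in congestion control still adapts below the cap.
async function capVideoBitrate(pc, maxBitrateBps) {
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  if (!sender) return;
  const params = sender.getParameters();
  if (!params.encodings || !params.encodings.length) params.encodings = [{}];
  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}
capVideoBitrate(pc, 1000000); // e.g. never send more than ~1 Mbit/s of video

// Poll outbound-rtp stats to watch the actual send rate ramp up and down.
let lastBytes = 0;
let lastTime = 0;
setInterval(async () => {
  const stats = await pc.getStats();
  stats.forEach(report => {
    if (report.type === 'outbound-rtp' && report.kind === 'video') {
      if (lastTime) {
        const kbps = 8 * (report.bytesSent - lastBytes) / (report.timestamp - lastTime);
        console.log(`current video send rate ~${kbps.toFixed(0)} kbit/s`);
      }
      lastBytes = report.bytesSent;
      lastTime = report.timestamp;
    }
  });
}, 2000);

Watching that number react to network conditions (throttle the connection in devtools, for example) is the most direct way to see the built-in adaptation at work.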

Related

Does UE4/5 Leverage GPU for Video Rendering?

I want to generate many hours of video of a 3D board game. I have a powerful RTX 3090, and I want to utilize it as much as possible.
Can UE4/5 effectively leverage GPU resources to render video? If not, is there a better engine to effectively leverage GPUs for rendering?
Unreal Engine provides very advanced cinematic tools, including the Movie Render Queue module, which allows rendering the UE viewport to an image sequence or video (in a very limited number of formats). However, UE doesn't encode video on the GPU in that specific case. I write "that specific" for a reason: UE does use the NVIDIA GPU encoder (if available) for fast and efficient H.264 encoding when the Pixel Streaming feature is used to stream video over WebRTC, but that feature is for interactive streaming of the engine, not for video encoding.
So even though you can deploy UE on a remote render farm and try to encode a massive amount of video data, it won't be as fast as using a dedicated hardware encoder such as NVIDIA NVENC. Moreover, UE doesn't provide video encoding to H.264 at all; you would have to render to a JPG/PNG/BMP image sequence, then use a tool like FFmpeg to convert it to video.
I recently put an MP4 encoder plugin for Unreal Engine on GitHub, which I wrote for my own needs, but it also uses the CPU to perform the encoding.
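To illustrate the image-sequence route described above: once Movie Render Queue has written out the frames, FFmpeg can do the H.264 encode on the GPU via NVENC. Here is a minimal sketch written as a Node.js wrapper around the ffmpeg command line; it assumes ffmpeg is on your PATH and was built with NVENC support, and the file pattern, frame rate, and bitrate are placeholders to adjust for your project.

// Sketch: turning a rendered PNG sequence into H.264 using FFmpeg's NVENC encoder.
const { spawn } = require('child_process');

const args = [
  '-framerate', '30',              // frame rate the sequence was rendered at
  '-i', 'frames/frame_%05d.png',   // image sequence pattern exported from Movie Render Queue
  '-c:v', 'h264_nvenc',            // hardware H.264 encoding on the NVIDIA GPU
  '-b:v', '20M',                   // target bitrate; tune for your content
  '-pix_fmt', 'yuv420p',           // widest player compatibility
  'out.mp4',
];

const ffmpeg = spawn('ffmpeg', args, { stdio: 'inherit' });
ffmpeg.on('close', code => console.log(`ffmpeg exited with code ${code}`));

If your ffmpeg build lacks h264_nvenc, swapping in the software encoder libx264 produces the same result, only slower and on the CPU.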

How does playbackRate affect bandwidth consumption with videojs?

I'm using videojs to playback videos stored on AWS. My users will often play back the video at 4x, 8x, or 16x speed. I can control the playback speed using:
videojs('my-player', {playbackRates: [1, 4, 8, 16]})
How does this impact bandwidth usage? Does a video played at 4x speed consume 1/4 of the bandwidth?
Are there other web video frameworks that would be better suited to minimizing data transfer out when playback speed is increased?
Most (if not all) HTML5 video player libraries are just wrappers around the native HTML5 video element, so buffering etc. is handled by the browser according to standardized protocols. HLS/DASH features, on the other hand, require a custom implementation by the player.
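There is, in other words, no player option that makes 4x playback consume a quarter of the data; if anything, sustained 4x playback pulls roughly the same total bytes at about four times the real-time rate. What you can control from Video.js is how eagerly the element preloads. A minimal sketch, using standard Video.js options and the player id from the question:

// Avoid buffering anything until the user actually presses play, and log rate changes.
const player = videojs('my-player', {
  preload: 'none',                  // don't fetch media data up front
  playbackRates: [1, 4, 8, 16],
});

player.on('ratechange', () => {
  // Frames are still fetched and decoded at the higher rate, so expect the
  // download rate to rise with the playback rate rather than fall.
  console.log(`playback rate is now ${player.playbackRate()}x`);
});

If you move to HLS/DASH, a player-side ABR implementation can additionally pick a lower rendition, which is a more effective lever for cutting data transfer out of AWS.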

WebRTC OPUS codec: Minimum Bandwidth for good audio

In my WebRTC application, the Opus codec is used to compress the audio stream, and I was wondering what the minimum viable bandwidth is that should be allocated for the audio stream to avoid jitter?
For Opus voice encoding at a mono 16 kHz sample rate:
6 Kbit/s is the minimum at which voice is still recognizable
16 Kbit/s is a medium, good-enough setting
32 Kbit/s is effectively the maximum; you won't hear a big difference if you encode at a higher bitrate
From what I have tested, a few hundred Kbit/s (bits, not bytes), approximately 300-400 Kbit/s, should be enough for good audio quality, not only voice but music too. More important is the network latency, which should be under 20-25 ms.
For decent voice audio, a tenth of that (30-40 Kbit/s) should be enough, but this is for one peer only. The latency can be much higher; you'll hear small skips now and then, which should be acceptable for conversations.
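If you want to pin Opus near one of these figures rather than leave it entirely to the encoder, one common technique is to declare a receive cap in the SDP with the maxaveragebitrate fmtp parameter from RFC 7587, which the remote encoder is expected to honor. A minimal sketch, assuming an RTCPeerConnection named pc and your own signaling channel; 16000 is just the 16 Kbit/s figure mentioned above:

// Append maxaveragebitrate to the Opus fmtp line of an SDP blob.
function limitOpusBitrate(sdp, bps) {
  const match = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/); // find the Opus payload type
  if (!match) return sdp;
  const pt = match[1];
  return sdp.replace(
    new RegExp(`a=fmtp:${pt} (.*)`),
    (line, params) => `a=fmtp:${pt} ${params};maxaveragebitrate=${bps}`
  );
}

// Munge the local offer before it is set, then signal it as usual.
async function negotiateWithOpusCap(pc, bps) {
  const offer = await pc.createOffer();
  offer.sdp = limitOpusBitrate(offer.sdp, bps);
  await pc.setLocalDescription(offer);
  return pc.localDescription; // send this to the remote peer
}
// e.g. negotiateWithOpusCap(pc, 16000);

Jitter itself is a network property rather than a codec setting, but a lower audio bitrate leaves more headroom on a constrained link.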

Windows 8 low latency video streaming

My game is based on Flash and uses RTMP to deliver live video to players. Video should be streamed from a single location to many clients, not between clients.
It's an essential requirement that the end-to-end video stream have very low latency, less than 0.5 s.
Using many tweaks on the server and client, I was able to achieve approx. 0.2 s latency with RTMP and Adobe Flash Media Live Encoder over the loopback network interface.
Now the problem is to port the project to a Windows 8 Store app. Natively, Windows 8 offers the Smooth Streaming extensions for IIS, plus http://playerframework.codeplex.com/ for the player, plus a video encoder compatible with Live Smooth Streaming. As for the encoder, so far I have only tested Microsoft Expression Encoder 4, which supports Live Smooth Streaming.
Despite using the msRealTime property on the player side, the latency is huge, and I was unable to get it below 6-10 seconds by tweaking the encoder. Different sources state that [Live] Smooth Streaming is not an option for low-latency video streaming scenarios, and it seems that with Expression Encoder 4 it's impossible to achieve low latency with any combination of settings. There are hardware video encoders which support Smooth Streaming, like those from Envivio or Digital Rapids; however:
They are expensive.
I'm not at all sure they can significantly improve latency on the encoder side compared to Expression Encoder.
Even if they can eliminate the encoder's time, can the rest of the Smooth Streaming pipeline (the IIS side) support the required speed?
Questions:
What technology could be used to stream to Win8 clients with subsecond latency, if any?
Do you know of players compatible with Win8, or easily portable to Win8, that support RTMP?
Addition: the live broadcast of Build 2012 uses RTMP and Smooth Streaming in desktop mode. In Metro mode, it uses RTMP and Flash Player for Metro.
I can confirm that Smooth Streaming will not be your technology of choice here. Under the very best scenario with perfect conditions, the best you're going to get is a few seconds (absolute minimum latency would be the chunk length itself, even if everything else had 0 latency.)
I think RTSP/RTMP, or something similar using UDP, is most likely your best bet. I would be looking at video conferencing technologies rather than wide-audience streaming technologies. If I remember correctly, there are a few .NET components out there that handle RTSP H.264 and are meant for video conferencing; if I can find them later I will post them here.

What voice compression algorithm to use?

Which algorithm should be used for voice compression and decompression, e.g. for voice mail?
One of the best voice codecs out there is G.729, which Digium (http://www.digium.com), the makers of Asterisk, license commercially for use with Asterisk. It is an extremely effective low-loss audio codec intended for vocal frequencies (hence why it's used for VoIP telephony). It also handles jitter (the variation in latency over time) very well, and uses only 8 kbit/s.
These days everybody from Skype to Google to the phone companies does it with MP3. The minimum bitrate is a bit high, but you get excellent audio for that bitrate, and the tech is proven and solid.
But apart from MP3 there are other, typically old-school, low-bandwidth codecs which have been used to encode the human voice. The standards in telephony used to be µ-law (mu-law) and A-law. A fairly recent speech-specific codec that looks interesting is Speex.
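For completeness: the modern successor to Speex is Opus (the same codec discussed in the WebRTC answers above), and in a browser you can reach it without any codec library via MediaRecorder. A minimal sketch, assuming a Chromium- or Firefox-class browser that supports the audio/webm;codecs=opus MIME type; 16000 bits per second is just a voice-grade example value:

// Record five seconds of microphone audio as Opus at roughly 16 kbit/s.
async function recordVoice() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, {
    mimeType: 'audio/webm;codecs=opus',
    audioBitsPerSecond: 16000,
  });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'audio/webm' });
    console.log(`recorded ${blob.size} bytes of Opus-in-WebM`);
  };
  recorder.start();
  setTimeout(() => recorder.stop(), 5000);
}
recordVoice();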