I'm using videojs to play back videos stored on AWS. My users will often play back the video at 4x, 8x, or 16x speed. I can control the playback speed using:
videojs('my-player', {playbackRates: [1, 4, 8, 16]})
How does this impact bandwidth usage? Does a video played at 4x speed consume 1/4 of the bandwidth?
Are there other web video frameworks that would be better suited to minimizing data transfer out when playback speed is increased?
Most (if not all) HTML5 video player libraries are just wrappers around the native HTML5 video element, so buffering and fetching are handled by the browser according to standardized protocols. HLS/DASH features, on the other hand, require a custom implementation by the player.
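As a concrete illustration, here is the option from the question wired up with a rate-change listener; a minimal sketch assuming video.js is loaded and the page contains a video element with id "my-player". Note that, in general, playbackRate only changes how fast the already-fetched media is consumed: the browser still downloads the same bytes, just over a shorter wall-clock window, so 4x playback typically means roughly the same total transfer at a higher instantaneous rate, not 1/4 of the data.

    // Minimal sketch: offer 1x-16x playback and log rate changes.
    // Assumes video.js is loaded and a <video id="my-player"> element exists.
    var player = videojs('my-player', {
      playbackRates: [1, 4, 8, 16]   // speeds offered in the control bar
    });

    // 'ratechange' is a standard HTML5 media event that video.js re-emits.
    player.on('ratechange', function () {
      console.log('Now playing at ' + player.playbackRate() + 'x');
    });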
I want to generate many hours of video of a 3D board game. I have a powerful RTX 3090, and I want to utilize it as much as possible.
Can UE4/5 effectively leverage GPU resources to render video? If not, is there a better engine to effectively leverage GPUs for rendering?
Unreal Engine provides very advanced cinematic tools, including the Movie Render Queue module, which can render the UE viewport to an image sequence or to video (in a very limited number of formats). However, UE doesn't encode video on the GPU in that specific case. I write "that specific" for a reason: UE does use the NVIDIA GPU encoder (if available) for fast and efficient H.264 encoding when the Pixel Streaming feature is used to stream video over WebRTC, but that feature is for interactive streaming of the engine, not for offline video encoding.

So even if you deploy UE on a remote render farm and try to encode a massive amount of video data, it won't be as fast as using a dedicated hardware encoder such as NVIDIA NVENC. Moreover, the Movie Render Queue doesn't offer H.264 encoding at all; you would have to render to a JPG/PNG/BMP image sequence and then use a tool like FFmpeg to convert it to video. I recently put an MP4 encoder plugin for Unreal Engine on GitHub, which I wrote for my own needs, but it also uses the CPU to perform encoding.
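To make that last step concrete, here is a minimal sketch (the file names are hypothetical) that shells out from Node.js to FFmpeg and uses the h264_nvenc hardware encoder to turn a rendered PNG sequence into an MP4; it assumes an FFmpeg build with NVENC support is on the PATH and an NVIDIA GPU is present.

    // Minimal sketch: encode a Movie Render Queue PNG sequence to H.264 with NVENC.
    // Assumes ffmpeg with NVENC support is on the PATH; frame names are hypothetical.
    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
      '-framerate', '30',        // frame rate of the rendered sequence
      '-i', 'frame_%05d.png',    // input image sequence pattern
      '-c:v', 'h264_nvenc',      // NVIDIA hardware H.264 encoder
      '-b:v', '10M',             // target bitrate
      '-pix_fmt', 'yuv420p',     // widely compatible pixel format
      'output.mp4'
    ]);

    ffmpeg.stderr.pipe(process.stderr);   // FFmpeg writes its log to stderr
    ffmpeg.on('close', function (code) {
      console.log('ffmpeg exited with code ' + code);
    });

This keeps the GPU busy with encoding while the CPU only feeds it frames, which is usually much faster than a pure-CPU encode of the same sequence.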
I've been working on a Flutter-based project where the scope is to build live-streaming functionality. I have used the Flutter agora-rtc plugin for this, with video dimensions of 1080x1920 in the configuration.
It works decently on a high-speed connection, but when the speed drops the video starts to stutter and lag a lot.
Can you please suggest a dimension and video configuration that will work at 1 Mbps with decent quality?
Setting the video configuration based on the internet speed you mentioned is not the only factor here. You also need to consider the number of simultaneous streams that will be played. Generally, a live stream requires higher bandwidth than a regular video call. You can refer to this guide, which can help you select the configuration based on your internet speed: https://docs.agora.io/en/All/faq/video_profile
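The question uses the Flutter plugin, but to illustrate the idea in JavaScript, here is a sketch with Agora's Web SDK (agora-rtc-sdk-ng); the Flutter plugin exposes an equivalent VideoEncoderConfiguration. The exact numbers are assumptions for a ~1 Mbps uplink: dropping from 1080x1920 to something like 360p at 15 fps with a 400-600 Kbps cap leaves headroom for audio and network jitter.

    // Sketch (agora-rtc-sdk-ng): create a camera track sized for a ~1 Mbps link.
    // The values are illustrative, not a recommendation from Agora's docs.
    import AgoraRTC from 'agora-rtc-sdk-ng';

    async function createLowBandwidthTrack() {
      return AgoraRTC.createCameraVideoTrack({
        encoderConfig: {
          width: 640,
          height: 360,
          frameRate: 15,
          bitrateMin: 400,   // Kbps
          bitrateMax: 600,   // Kbps
        },
      });
    }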
I am using WebRTC to develop one of my applications.
There is no clarity on whether WebRTC natively supports adaptive bitrate streaming of video. Does VP8/VP9 have adaptive bitrate encoding support? Is bitrate_controller WebRTC's implementation of ABR?
Can anyone please shed more light on this? I find no conclusive evidence that WebRTC natively supports adaptive streaming for video.
Based on the WebRTC documentation found at https://hpbn.co/webrtc/#audio-opus-and-video-vp8-bitrates, I found this:
When requesting audio and video from the browser, pay careful attention to the size and quality of the streams. While the hardware may be capable of capturing HD quality streams, the CPU and bandwidth must be able to keep up! Current WebRTC implementations use Opus and VP8 codecs:

The Opus codec is used for audio and supports constant and variable bitrate encoding and requires 6–510 Kbit/s of bandwidth. The good news is that the codec can switch seamlessly and adapt to variable bandwidth.

The VP8 codec used for video encoding also requires 100–2,000+ Kbit/s of bandwidth, and the bitrate depends on the quality of the streams:

720p at 30 FPS: 1.0~2.0 Mbps
360p at 30 FPS: 0.5~1.0 Mbps
180p at 30 FPS: 0.1~0.5 Mbps

As a result, a single-party HD call can require up to 2.5+ Mbps of network bandwidth. Add a few more peers, and the quality must drop to account for the extra bandwidth and CPU, GPU, and memory processing requirements.
So as far as I understand it, both codecs will adapt the audio and video streams to the available bandwidth. Hope this helps.
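On top of that built-in adaptation, modern browsers also let you impose your own ceiling on the video bitrate through the standard RTCRtpSender.setParameters() API; a minimal sketch, assuming an existing, already-negotiated RTCPeerConnection named pc:

    // Cap the outgoing video bitrate on an existing RTCPeerConnection.
    // pc is assumed to be an already-negotiated RTCPeerConnection.
    async function capVideoBitrate(pc, maxBitrateBps) {
      const sender = pc.getSenders().find(
        (s) => s.track && s.track.kind === 'video'
      );
      const params = sender.getParameters();
      if (!params.encodings || params.encodings.length === 0) {
        params.encodings = [{}];   // some browsers start with an empty list
      }
      params.encodings[0].maxBitrate = maxBitrateBps;  // e.g. 500000 for 500 Kbit/s
      await sender.setParameters(params);
    }

The built-in congestion control still adapts below this ceiling; the cap just keeps a single video stream from consuming bandwidth you want to reserve for other peers.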
As I am developing a video chat application based on the vLine API (fantastic so far), I deal with a lot of high-latency, lower-bandwidth connections.
I know that a lot of this is abstracted away, with the browser doing the work behind the scenes, but I am trying to find out whether one can prioritize audio over video with regard to quality and bandwidth.
It's always better to be able to hear someone even if the video becomes poor. Are there any facilities for this in WebRTC, and in vLine in particular? Ideally, I would like to implement a slider control or a checkbox with pre-defined constraints.
Currently there is no way to prioritize audio over video via constraints in a video call. The browser will try to do the 'right thing' in this scenario.
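One manual workaround is to degrade the video track yourself so audio keeps more of the available bandwidth; MediaStreamTrack.applyConstraints() can be wired to the slider-style control mentioned in the question. A sketch, assuming an existing local MediaStream named localStream (a hypothetical name), with illustrative presets:

    // Manual workaround: shrink the local video track to leave audio more headroom.
    // localStream is assumed to be the MediaStream you are sending.
    const videoTrack = localStream.getVideoTracks()[0];

    // level: 0 (lowest quality) .. 2 (highest quality); presets are illustrative.
    async function setVideoQuality(level) {
      const presets = [
        { width: { ideal: 160 }, frameRate: { ideal: 5 } },
        { width: { ideal: 320 }, frameRate: { ideal: 15 } },
        { width: { ideal: 640 }, frameRate: { ideal: 30 } },
      ];
      await videoTrack.applyConstraints(presets[level]);
    }

    // e.g. wire it to <input type="range" id="quality" min="0" max="2">
    document.querySelector('#quality').oninput = (e) =>
      setVideoQuality(Number(e.target.value));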
My game is based on Flash and uses RTMP to deliver live video to players. Video should be streamed from a single location to many clients, not between clients.
It's an essential requirement that the end-to-end video stream have very low latency, less than 0.5 s.
With many tweaks on the server and client, I was able to achieve approximately 0.2 s latency with RTMP and Adobe Flash Media Live Encoder over the loopback network interface.
Now the problem is to port the project to a Windows 8 Store app. Natively, Windows 8 offers the Smooth Streaming extensions for IIS, plus http://playerframework.codeplex.com/ for the player, plus a video encoder compatible with Live Smooth Streaming. As for the encoder, so far I have tested only Microsoft Expression Encoder 4, which supports Live Smooth Streaming.
Despite using the msRealTime property on the player side, the latency is huge, and I was unable to get it below 6-10 seconds by tweaking the encoder. Different sources state that [Live] Smooth Streaming is not a good choice for low-latency streaming scenarios, and it seems that with Expression Encoder 4 it's impossible to achieve low latency with any combination of settings. There are hardware video encoders that support Smooth Streaming, like the ones from Envivio or Digital Rapids; however:
They are expensive
I'm not sure at all that they can significantly improve latency on the encoder side, compared to Expression Encoder
Even if they can eliminate the encoder's share of the delay, can the rest of the Smooth Streaming pipeline (the IIS side) keep up with the required speed?
Questions:
What technology could be used to stream to Win8 clients with subsecond latency, if any?
Do you know of any players compatible with Win8, or easily portable to Win8, that support RTMP?
Addition: the live broadcast of Build 2012 uses RTMP and Smooth Streaming in desktop mode; in Metro mode, it uses RTMP and Flash Player for Metro.
I can confirm that Smooth Streaming will not be your technology of choice here. Under the very best scenario with perfect conditions, the best you're going to get is a few seconds: the absolute minimum latency is the chunk length itself, even if everything else had zero latency. For example, with Smooth Streaming's default 2 s fragments and a player that buffers a few fragments before it starts, you are already at several seconds, which is consistent with the 6-10 s you observed.
I think RTSP/RTMP, or something similar using UDP, is most likely your best bet. I would look at video conferencing technologies rather than wide-audience streaming technologies. If I remember correctly, there are a few .NET components out there that handle RTSP/H.264 and are meant for video conferencing; if I can find them later I will post them here.