Is the bit rate of the black screen shown when video is muted the same as the original video's bit rate, or is it significantly less because it is just a black screen?
It is significantly less, since essentially no video information is being sent to the remote party. How much less depends on a lot of factors (connection quality, etc.).
I just did a quick test: the outgoing bit rate at 640x480 @ 27fps was around 900 kbps to 1 Mbps. Disabling the video track of the stream resulted in an outgoing bit rate of 30 kbps.
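For context, the "mute" in my test was done by disabling the video track in place; a minimal sketch, assuming `stream` is the local MediaStream attached to the peer connection:

    // Disabling the track makes the encoder send black frames, so the
    // outgoing bit rate collapses while the connection stays up.
    stream.getVideoTracks().forEach((track) => {
      track.enabled = false;
    });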
Please keep in mind that this was only a simple test. You can gather this kind of information yourself by evaluating the reports from peerConnection.getStats() (see the sketch after the links below).
Some documentation and libraries for getStats:
https://www.w3.org/TR/webrtc-stats
https://github.com/muaz-khan/getStats
https://www.callstats.io/2015/07/06/basics-webrtc-getstats-api/
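To reproduce the numbers above, here is a minimal sketch, assuming `pc` is an existing RTCPeerConnection that is sending video; it polls getStats() once a second and derives the outgoing video bit rate from the outbound-rtp report:

    let lastBytesSent = 0;
    let lastTimestamp = 0;

    setInterval(async () => {
      const stats = await pc.getStats();
      stats.forEach((report) => {
        if (report.type === 'outbound-rtp' && report.kind === 'video') {
          if (lastTimestamp > 0) {
            // bytesSent is cumulative, so diff against the previous sample.
            const bits = (report.bytesSent - lastBytesSent) * 8;
            const seconds = (report.timestamp - lastTimestamp) / 1000;
            console.log(`outgoing video: ${Math.round(bits / seconds / 1000)} kbps`);
          }
          lastBytesSent = report.bytesSent;
          lastTimestamp = report.timestamp;
        }
      });
    }, 1000);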
I also came across chrome://webrtc-internals, which has built-in tracking of bit rates and other useful features.
As seen in its graphs, the bit rate before the video was muted was ~150 kbps, which dropped to ~30 kbps on muting the video.
Related
I want to know why, when I transmit video at 320x240 resolution, my uplink traffic defaults to about 1.5 Mbps;
when I modify the SDP bandwidth limit, for example to 500 kbps, my video width and height are still 320x240 and the frame rate is not reduced;
so what exactly is being cut from this reduced uplink traffic?
WebRTC uses so-called "lossy perceptual video compression." That is, the video can be compressed into bit streams of various bandwidths; in your case, 1.5 Mbps and 0.5 Mbps. It's like JPEG's quality parameter: in JPEG, adjusting that parameter changes the size of the image file. In video compression, instead of a quality parameter, you request a bit rate.
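Incidentally, munging the SDP is not the only way to request a bit rate from the browser. A hedged sketch of the standardized alternative, RTCRtpSender.setParameters(), assuming `pc` is your existing RTCPeerConnection with a video track added:

    // Cap the outgoing video encoder at 500 kbps without touching the SDP.
    const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
    const params = sender.getParameters();
    if (!params.encodings || params.encodings.length === 0) {
      params.encodings = [{}];
    }
    params.encodings[0].maxBitrate = 500000; // bits per second
    await sender.setParameters(params);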
When a lower-bitrate video stream is decompressed, it's a less faithful representation of the original. If you know what to look for, you can see various compression artifacts in the decompressed imagery: blockiness, "mosquitoes" around objects, and so forth.
Streaming video services and DVD video programs (cinema) use high bandwidth to keep these effects below the threshold of perception at 1080p or 4K resolution.
In your SIF (320x240) resolution case, your decoded 0.5 Mbps video has more artifacts in it than your 1.5 Mbps video. But, because the resolution is relatively low, it will take some looking to find those artifacts. If they don't annoy you or your users, you can conclude that 0.5 Mbps is fine for your application. Long experience suggests that you should succeed just fine with that bit rate and resolution. You can even try 250 kbps.
Reducing the frame rate doesn't save bandwidth proportionally: most compressed video frames encode only the differences from the previous frame, and with fewer frames those differences get larger.
Lower bitrates are better for mobile devices; they save power and your users' data plans.
If you want to see exaggerated compression artifacts and what they look like, set the bit rate down to 125 kbps or lower.
We would like to be able to play music in another tab (say YouTube, Spotify, SoundCloud, etc.) and then stream that over a WebRTC connection to other peers.
We are doing this through screen sharing and it mostly works, but the music will sometimes cut in and out for the listeners, giving it a choppy sound. In other words, it sounds smooth to the person sending it (i.e. sharing it from the originating URL), but choppy to the people on the receiving side of the WebRTC connection.
Any thoughts on what might be causing this? Is this a buffering issue? If so, is it more likely buffering on the sending or the receiving side?
Thanks so much for any help!
WebRTC favors low latency over quality, with the goal of ensuring you can have normal speech communication. To do this, a lot of things happen to your audio:
Playback rate is constantly changed. If playback gets behind, the rate speeds up. If it's too far ahead, it slows down.
There is a very small buffer, creating more opportunities for the playback buffer to run dry.
If packets are lost, the audio for their time slice is simply discarded and skipped over. Playback won't stop to buffer a bit and then continue.
When audio is lost, a bit of a trail-off is synthesized. This is fine for speech, but sounds bad for music.
On the media capture end, there are also audio "enhancements" designed to deal with bad webcam microphones, and these can sometimes be applied to other media streams if configured incorrectly (see the sketch after this list). They include:
Echo cancellation
Noise reduction
Automatic gain control
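If you're capturing from a microphone or a virtual audio device, a minimal sketch for turning those off at the source, using the standard getUserMedia constraint names (browser support varies, and for a tab-capture stream the relevant knobs may differ):

    // Ask for unprocessed audio; speech-oriented processing tends to mangle music.
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: {
        echoCancellation: false,
        noiseSuppression: false,
        autoGainControl: false,
      },
    });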
Finally, audio bit rates are usually quite low by default. You'll generally have to munge the SDP if you want high-quality stereo audio.
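For example, here is a hedged sketch of one common munge for Opus: enabling stereo and raising the average bit rate via the fmtp line (the parameter names come from RFC 7587). Run the offer/answer SDP through this before handing it to setLocalDescription()/setRemoteDescription():

    // Append stereo and bitrate parameters to the Opus fmtp line.
    function preferHighQualityOpus(sdp) {
      const opus = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/);
      if (!opus) return sdp; // no Opus in this SDP; leave it alone
      const pt = opus[1];
      return sdp.replace(
        new RegExp(`a=fmtp:${pt} (.*)`),
        `a=fmtp:${pt} $1;stereo=1;sprop-stereo=1;maxaveragebitrate=510000`
      );
    }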
All this to say, WebRTC might not be the right choice for you if you are concerned with quality. I often resort to the MediaRecorder API.
I am using the Objective-C framework for WebRTC to build a screen-sharing app. The video is captured using CGDisplayStream. I have a working demo, but at 2580x1080 I get only 3-4 fps. My googAvgEncodeMs is around 100-300ms (it should ideally be <10ms), which explains why the screen sharing is far from fluid (30fps+). I also switched between codecs (H264/VP8/VP9), but with all of them I get the same slow experience. The contentType in WebRTC is set to screen (values: [screen, realtime]).
The CPU usage of my Mac is then between 80-100%. My guess is that there is some major optimisation (qpMax, hardware acceleration, etc.) in the C++ code of the codecs that I have missed. Unfortunately, my knowledge of codecs is limited.
Also interesting: even when I lower the resolution to 320x240, the googAvgEncodeMs is still in the range of 30-60ms.
I am running this on a 15-inch MacBook Pro from 2018. When running a random WebRTC demo inside Chrome/Firefox etc., I get smoother results than with the vanilla WebRTC framework.
WebRTC uses software encoding by default, and that is the real culprit: encoding 2580x1080 in software is not going to be practical. Try cutting the horizontal and vertical resolution in half; that will improve performance with some loss in quality. Also, if you are doing screen sharing and full-motion video is not critical, you can drop the frame rate to 10 frames per second. The logical long-term solution is to figure out how to incorporate hardware acceleration.
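For reference, this is what those two knobs look like on the browser side, as a hedged sketch (assuming `pc` is an existing RTCPeerConnection); recent versions of the native Objective-C/C++ framework expose equivalent encoding parameters on their RTCRtpSender:

    // Halve the encoded resolution and cap the frame rate on the video sender.
    const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
    const params = sender.getParameters();
    if (!params.encodings || params.encodings.length === 0) {
      params.encodings = [{}];
    }
    params.encodings[0].scaleResolutionDownBy = 2; // half H and V resolution
    params.encodings[0].maxFramerate = 10;         // enough for screen content
    await sender.setParameters(params);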
I'm not sure if it's allowed to ask these questions here, but it looks important for us web developers (even a bad dev like me :p).
The question is about video export settings in Premiere. I'm looking to make a ~30s background video like Airbnb's or PayPal's. Yesterday I checked PayPal's, and it's only 10-15 MB for more than a minute. How did they do that?
Obviously you want a low average bit rate. Things that can help with that are: keep the resolution low (you can scale it up a bit on the client); use H.264 High Profile (for the H.264 version); use 2-pass encoding; use variable bit rate. You can try increasing the GOP length too.
I assume there's no audio, so that shouldn't be an issue. (I can't remember whether Adobe has an option for no sound track, but you can set the audio to a very low bit rate, or post-process the file to strip the audio track, e.g. with ffmpeg: ffmpeg -i in.mp4 -c copy -an out.mp4.)
If you have any control over the video content, you can try to keep it compressible. For example, avoid video with lots of detail or rapid motion. You might be able to selectively blur parts in a way that doesn't look bad. If it doesn't move too fast, you might be able to decrease the frame rate.
If you really want to optimize, you'll probably need to experiment a lot.
I have a question about image capture with a PC camera (an integrated notebook camera or a webcam). I am developing a computer vision system in which high-quality image capture is the key issue. Most current methods use VFW or DirectShow to capture the video stream and snap one frame as an image.
However, this method cannot produce a high-quality image (or use the full capacity of the camera). For example, I have a 5-megapixel webcam, but the video stream maxes out at 720p (a USB bandwidth problem?). Video streaming wastes some of the camera's sensor resolution.
Could I stream video and take pictures independently? For example, capture and render a 640x480 video stream, then take a 1280x720 picture from the same camera? I guess this might be a hardware issue, like on the new HTC One X camera?
In short, is there a way for a PC system to take a picture that makes full use of the sensor capacity, rather than streaming video and capturing one frame? Is this a hardware-related problem, and do common webcams support it? Or is it a software problem, meaning I should learn DirectShow?
Thanks a lot.
I vaguely remember that (some) video sources offer both a capture pin and a still pin; the latter, I assume, would offer higher quality. You can easily test this in GraphEdit. If it works, then yes, you'll have to learn DirectShow. Or pay someone to code it for you.
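Not DirectShow, but the same "still pin" idea exists on the web platform: the ImageCapture API can take a still photo from a live video track, and the photo may use a higher sensor resolution than the stream itself (browser support varies). A minimal sketch:

    // Keep a (lower-resolution) preview stream running while grabbing a
    // still photo that can use more of the sensor.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const [track] = stream.getVideoTracks();
    const imageCapture = new ImageCapture(track);
    const photo = await imageCapture.takePhoto(); // resolves to a Blob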