How to choose the video resolution, frame rate and bitrate using agora SDK? - agora.io

I'm using the Agora Video SDK to build a video communication application. How do I change video profile settings such as resolution, frame rate, and bitrate?

Video parameters vary on a case-by-case basis. For example:
In a one-to-one online class, the video windows of the teacher and student are both large, which requires higher resolutions, frame rates, and bitrates.
In a one-to-four online class, the video windows of the teacher and students are smaller, so lower resolutions, frame rates, and bitrates are used to fit the available downstream bandwidth.
The recommended parameters for different cases are as follows:
One-to-one video call: 240p (320 x 240, 15 fps, 200 Kbps) or 360p (640 x 360, 15 fps, 400 Kbps).
One-to-many video call: 120p (160 x 120, 15 fps, 65 Kbps), 180p (320 x 180, 15 fps, 140 Kbps), or 240p (320 x 240, 15 fps, 200 Kbps).
You can also call the setVideoEncoderConfiguration method to set the video encoding parameters, such as by increasing the bitrate to ensure the video quality. Higher bitrates, frame rates, and resolutions improve the video quality but may cause jitter and increase costs.
Generally speaking, a live stream requires a higher bitrate to maintain high video quality. Agora recommends setting the bitrate of live interactive streaming to twice that of a voice/video call. See Set the bitrate.
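For reference, here is a minimal sketch of how those recommended profiles might be applied with the Agora Web SDK 4.x. The createCameraVideoTrack name and the encoderConfig shape are assumptions based on the Agora docs; check the API reference for your SDK version:

```javascript
// Recommended profiles from the answer above, keyed by scenario.
const VIDEO_PROFILES = {
  oneToOne240p:  { width: 320, height: 240, frameRate: 15, bitrateMax: 200 },
  oneToOne360p:  { width: 640, height: 360, frameRate: 15, bitrateMax: 400 },
  oneToMany180p: { width: 320, height: 180, frameRate: 15, bitrateMax: 140 },
};

// Not executed here: create a camera track with an explicit encoder config
// (Agora Web SDK 4.x style; the native SDKs use setVideoEncoderConfiguration).
async function createTrackForScenario(AgoraRTC, scenario) {
  return AgoraRTC.createCameraVideoTrack({
    encoderConfig: VIDEO_PROFILES[scenario],
  });
}
```

The profile table is just data, so you can swap scenarios at runtime without touching the call that creates the track.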

Related

WebRtc Bandwidth / resolution / frame rate control

I want to know why, when I transmit a 320x240 video by default, my uplink traffic is about 1.5 Mbps;
when I modify the SDP bandwidth limit, for example to 500 kbps, the video is still 320x240 and the frame rate is not reduced.
So what exactly is being reduced in the uplink traffic?
WebRTC uses so-called "lossy perceptual video compression." That is, the video is capable of being compressed into bit streams of various bandwidths... in your case 1.5mbps and 0.5mbps. It's like JPEG's quality parameter: in JPEG, adjusting that parameter changes the size of the image file. In video compression, instead of a quality parameter you request a bitrate.
When a lower-bitrate video stream is decompressed, it's a less faithful representation of the original. If you know what to look for, you can see various compression artifacts in the decompressed imagery: blockiness, "mosquitoes" around objects, and so forth.
Streaming video and DVD video programs (cinema) use high bandwidth to minimize these effects below the threshold of perception at 1080p or 4K resolution.
In your SIF (320x240) resolution case, your decoded 0.5mbps video has more artifacts in it than your 1.5mbps video. But, because the resolution is relatively low, it will take some looking to find those artifacts. If they don't annoy you or your users, you can conclude that 0.5mbps is fine for your application. Long experience suggests that you should succeed just fine with that bitrate and resolution. You can even try 250kbps.
Reducing the frame rate doesn't proportionally save bandwidth; most compressed video frames represent differences from the previous frame.
Lower bitrates are better for mobile devices; they save power and your users' data plans.
If you want to see exaggerated compression artifacts and what they look like, set the bitrate down to 125kbps or lower.
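Since the question mentions modifying the SDP bandwidth limit, here is a sketch of the usual approach: inserting a b=AS: line (kilobits per second) into the video media section before passing the SDP to setLocalDescription/setRemoteDescription. This assumes the b=AS convention; Firefox historically prefers b=TIAS, which is in bits per second.

```javascript
// Cap the video bandwidth by inserting "b=AS:<kbps>" into the m=video section.
function setVideoBandwidth(sdp, kbps) {
  const lines = sdp.split('\r\n');
  const out = [];
  let inVideo = false;
  for (const line of lines) {
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    // drop any existing bandwidth line in the video section
    if (inVideo && line.startsWith('b=AS:')) continue;
    out.push(line);
    // insert the cap right after the c= (connection) line of the video section
    if (inVideo && line.startsWith('c=')) out.push('b=AS:' + kbps);
  }
  return out.join('\r\n');
}
```

The encoder then targets that bitrate while keeping the requested resolution and frame rate, which is exactly why the video stays 320x240: only the compression quality drops.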

How to stop pixelated artifacts from mediarecorder?

We are using the browser's MediaRecorder to record the user's camera. I can't figure out why some cameras record artifacts somewhat frequently. It doesn't happen with all cameras.
Here are some screenshots of what happens...
Here is the normal image from the video
Then for a split second it gets pixelated and then goes back to looking like previous image.
Here is an excerpt of a video that has this artifact
https://drive.google.com/file/d/1SYMBRjMlvOTO-LnlG5HLX0ncN3NyUXcc/view?usp=sharing
recorder = new MediaRecorder(local_media_stream, {
  mimeType: encoding_options,
  audioBitsPerSecond: 96000,
  videoBitsPerSecond: 8000000,
});
recorder.ondataavailable = function(e) {
  that.save_blob(e.data, blob_index);
  blob_index++;
};
recorder.start(15000);
For the local_media_stream I'm just grabbing the local audio and video:
navigator.mediaDevices.getUserMedia({
  audio: {
    deviceId: { exact: currentMic.deviceId },
    channelCount: 1,
    noiseSuppression: options.echo_cancellation,
    echoCancellation: options.echo_cancellation,
  },
  video: {
    width: { ideal: $("#subscription_id").data("max-width") },
    height: { ideal: $("#subscription_id").data("max-height") },
    frameRate: 30,
    deviceId: { exact: currentCam.deviceId }
  }
})
It's usually not the camera that creates these video compression artifacts. They're in the nature of video compression, and somewhat dependent on the computer's power. Most frames of video are so-called "interframes", holding the compressed difference between one frame and the next. To decode an interframe requires decoding all the frames that came before it.
Those sequences of interframes start with "intraframes", also known as Instantaneous Decoder Refresh (IDR) frames. Intraframes can be decoded standing alone. But there's a tradeoff: they require more compressed video data.
When you ask for 30 frames/second you give an implicit limit on how much data can fit in each frame. And sometimes the intraframe needs to have its quality reduced so its data fits in the time interval.
My point: the blockiness you see comes from those intraframes.
What can you do about this? Honestly, MediaRecorder doesn't give much control over the way the video encoder works. So, the things you can tweak are fairly minor. They are:
Frame size: consider making it smaller. 1080p video (1920x1080) is probably too big. Try something smaller. Try 960x540, or even smaller than that.
Frame rate: a slower frame rate, like 15, allocates more bandwidth to each frame, so the intraframes get higher quality. Reduce your frame rate.
Bitrate: you're using 8 megabits/sec. That's very large for a browser-based encoder, possibly too large. It's faster than the maximum upload rate of many internet service providers. Try something smaller; start at 1 megabit. (Your high bitrate request may be overwhelming the compression hardware in some computers.)
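Putting those three tweaks together, here is a hedged sketch of the recorder setup with the more conservative settings (960x540, 15 fps, ~1 Mbps). The saveBlob callback stands in for the save_blob method in the question:

```javascript
// More conservative capture/encode settings than the question's 1080p @ 8 Mbps.
const CONSERVATIVE = {
  video: { width: { ideal: 960 }, height: { ideal: 540 }, frameRate: 15 },
  videoBitsPerSecond: 1000000, // 1 Mbps
  audioBitsPerSecond: 96000,
};

// Not executed here: wire the settings into getUserMedia + MediaRecorder.
async function recordConservatively(saveBlob) {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: CONSERVATIVE.video,
  });
  const recorder = new MediaRecorder(stream, {
    videoBitsPerSecond: CONSERVATIVE.videoBitsPerSecond,
    audioBitsPerSecond: CONSERVATIVE.audioBitsPerSecond,
  });
  recorder.ondataavailable = (e) => saveBlob(e.data);
  recorder.start(15000); // one blob every 15 s, as in the question
  return recorder;
}
```

Note that videoBitsPerSecond is a hint, not a guarantee; the browser's encoder may still deviate from it.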
Keep in mind that people who compress video for online streaming use big fat servers, and that it can take ten minutes or more on those servers to compress one minute of video. They often use ffmpeg, which has hundreds of ways of tweaking the compression. MediaRecorder does it in real time.
Also, webcam hardware is quite variable in quality. Bad webcams have Coke-bottle shards for lenses and crummy sensors. Good ones are very good. None are up to the standard of professional video equipment. But that equipment costs thousands.
And, keep in mind that compressed video is more JPEG-like than PNG-like. It's not perfect for encoding high-contrast static pictures.
High quality digital video recording is a well-developed subspeciality of computing. If you want "the best audio and video possible" you won't be using a web browser / Javascript / getUserMedia / MediaRecorder. You'll be using a native (downloaded and installed) app like OBS Studio. It gives you a lot more control over the capture and compression process.
If you use capture / compression technology embedded in browsers in early 2021 you'll be, practically, limited to a couple of megabits a second and some resolution constraints.

Why does the AudioUnit collect 940 bytes per frame with a Lightning earphone?

I am an iOS developer from China, and I developed a recording application based on AudioUnit. When I tested it on my iPhone 6s with a 3.5 mm earphone, it worked well and collected 1024 bytes per frame. But when I tested it on iPhones without a 3.5 mm jack, the AudioUnit collected 940 bytes per frame and reported an error.
I also tried my app on my iPhone 6s with a Lightning earphone, and it worked well too.
The iOS RemoteIO Audio Unit can change the amount of data delivered per callback from the requested duration, depending on OS state and the audio hardware in use.
The difference between 1024 and 940 samples in the callback buffer usually means the Audio Unit is resampling the data from a sample rate of 48000 to a sample rate of 44100 sps. That can happen because the hardware sample rate is different on different iPhone models (48000 on newer ones, 44100 on early models), and your app might be requesting a sample rate different from the device hardware sample rate.
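The arithmetic is consistent with that explanation: scaling a 1024-frame hardware buffer by the 44100/48000 rate ratio lands almost exactly on 940.

```javascript
// A 1024-frame buffer at 48000 Hz resampled to 44100 Hz yields ~940 frames.
const hardwareFrames = 1024;
const frames44k = Math.floor(hardwareFrames * 44100 / 48000); // 940.8 -> 940
console.log(frames44k); // 940
```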

WebRTC : Video black screen bitrate

Is the bit rate of black screen shown when video is muted same as the original video's bit rate or is it significantly less because it is just a black screen?
It is significantly less. Since there is essentially no video information being sent to the remote party. How much depends on a lot of factors (connection quality etc).
I just did a quick test and the outgoing bit rate at 640x480 @ 27 fps was around 900 kbps to 1 Mbps. Disabling the video track of the stream resulted in an outgoing bitrate of 30 kbps.
Please keep in mind that this was only a simple test I did. You can get this kind of information yourself by evaluating the reports in peerConnection.getStats
Some documentation and libraries for getStats
https://www.w3.org/TR/webrtc-stats
https://github.com/muaz-khan/getStats
https://www.callstats.io/2015/07/06/basics-webrtc-getstats-api/
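A sketch of measuring the outgoing video bitrate from two getStats() snapshots. This assumes the standard outbound-rtp stats from the webrtc-stats spec; in a real app each report comes from `await pc.getStats()` on an RTCPeerConnection:

```javascript
// Find the cumulative bytes sent for outgoing video in a stats report
// (accepts either an RTCStatsReport-like map or any iterable of stats).
function outboundVideoBytes(report) {
  for (const stat of report.values ? report.values() : report) {
    if (stat.type === 'outbound-rtp' && stat.kind === 'video') {
      return stat.bytesSent;
    }
  }
  return 0;
}

// Bitrate in kbps between two snapshots taken `seconds` apart.
function bitrateKbps(earlier, later, seconds) {
  return ((outboundVideoBytes(later) - outboundVideoBytes(earlier)) * 8) / 1000 / seconds;
}
```

Polling this once a second gives you the same kind of graph that chrome://webrtc-internals shows.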
Came across chrome://webrtc-internals, which has built-in tracking for bit rates and other useful features.
As seen in its graph, the bit rate before the video was muted was ~150 kbps, which dropped to ~30 kbps on muting the video.

HTML5 Video File Size

I'm wondering what size file is ideal to use with the HTML5 video tag. I'm building a site for a friend and they have given me a file which is 80 MB. Is this too large? What is a good size to aim for?
Thanks in advance.
A broadband speed of 2.5 Mbit/s or more is recommended for streaming movies, for example to an Apple TV, Google TV or a Sony TV Blu-ray Disc Player, 10 Mbit/s for High Definition content.
One hour of video encoded at 300 kbit/s (this is a typical broadband video as of 2005 and it is usually encoded in a 320 × 240 pixels window size) will be:
(3,600 s × 300,000 bit/s) / (8 × 1024 × 1024) ≈ 128.7 MB of storage.
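That calculation, spelled out as code (seconds × bitrate gives bits, then bits → bytes → MB):

```javascript
// Storage needed for a clip of the given duration and bitrate, in MB (MiB).
function videoSizeMB(seconds, bitsPerSecond) {
  return (seconds * bitsPerSecond) / (8 * 1024 * 1024);
}
console.log(videoSizeMB(3600, 300000).toFixed(1)); // 128.7 (one hour at 300 kbit/s)
```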
Hope this helps. :)
I do NOT think that 80 MB is too big. Note that this totally depends on your internet speed (the average is 2.5 Mbit/s). Mine is 3 Mbit/s, so most web streaming videos play quite nicely for me.
It depends on whether your video is HD or not. The file size doesn't always matter on its own, because the video buffers while playing; what mainly matters is the video's bitrate. As said above, if the video is SD, it will work well on most internet connections (the average is 2.5 Mbit/s, and some people have 3 Mbit/s like me), while HD video requires a much faster connection, something more like 10 Mbit/s (yeah, that's a lot!). Note that emerging technologies like the fiber optics in Google's new product, Google Fiber, allow much higher-definition videos to play with a lot of fluidity. (Google Fiber's connection speed is about 800 Mbit/s, I believe! Correct me if I'm wrong.)
Tell me if this helps. :)
And click the gray check next to my answer if it does. :)