Is buffering possible in WebRTC? - webrtc

Is it possible to buffer the video/audio in WebRTC (of course, having then a delay on the other side) to improve the quality?

WebRTC buffers automatically when necessary; you don't have to think about it.

There's no way to fully control this, but there are some settings in the Opus codec that can influence it, such as minptime, ptime, and maxptime. Please check https://datatracker.ietf.org/doc/html/draft-spittka-payload-rtp-opus-03#page-12 for more info.
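For example, if you wanted to nudge the sender toward larger (and therefore more buffer-friendly) Opus packets, you could munge the SDP before applying it. A minimal sketch, assuming Opus sits on payload type 111 as it usually does in Chrome (verify against your own SDP):

    // Sketch only: insert an "a=ptime" line after the Opus rtpmap entry.
    function requestOpusPtime(sdp: string, ptimeMs: number): string {
      return sdp.replace(
        /(a=rtpmap:111 opus\/48000\/2\r\n)/,
        `$1a=ptime:${ptimeMs}\r\n`
      );
    }

    // Usage: munge your local description before sending it to the peer,
    // since "a=ptime" expresses what you would like to receive.
    // const offer = await pc.createOffer();
    // await pc.setLocalDescription({ type: "offer", sdp: requestOpusPtime(offer.sdp!, 60) });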

Related

Streaming music over WebRTC cutting in and out

We would like to be able to play music in another tab (say YouTube, Spotify, Soundcloud, etc) and then stream that over a WebRTC connection to other peers.
We are doing this through the screenshare and it's mostly working, but the music will sometimes cut in and out for the listeners, giving it a choppy sound. In other words, it sounds smooth to the person sending it (i.e. sharing it from the originating URL), but it sounds choppy to the people on the receiving side of the WebRTC connection.
Any thoughts on what might be causing this? Is this a buffering issue? If so, is it more likely buffering on the sending or the receiving side?
Thanks so much for any help!
WebRTC favors low latency over quality, with the goal of ensuring you can have normal speech communication. To do this, a lot of things happen to your audio:
Playback rate is constantly changed. If playback gets behind, the rate speeds up. If it's too far ahead, it slows down.
There is a very small buffer, creating more opportunities for the playback buffer to run dry.
If packets are lost, the audio for their time is simply discarded... skipped over. Playback isn't going to pause, buffer a bit, and then continue.
When audio is lost, a bit of a trail-off is synthesized. This is fine for speech, but sounds bad for music.
On the media capture end, there are also audio "enhancements" designed for dealing with bad webcam microphones, which can sometimes be applied to other media streams if configured incorrectly. These include the following (see the sketch for disabling them after this list):
Echo cancellation
Noise reduction
Automatic gain control
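If your source is music rather than a webcam mic, you can ask getUserMedia to turn this processing off via constraints. A minimal sketch; support for each constraint varies by browser, so treat it as a starting point:

    // Sketch: request an audio stream with the speech-oriented processing disabled.
    async function getUnprocessedAudio(): Promise<MediaStream> {
      return navigator.mediaDevices.getUserMedia({
        audio: {
          echoCancellation: false, // don't treat the music itself as echo
          noiseSuppression: false, // don't strip detail that looks like noise
          autoGainControl: false,  // keep the original dynamics
        },
      });
    }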
Finally, audio bitrates are usually quite low by default. You'll typically have to munge the SDP if you want high-quality stereo audio.
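A minimal sketch of that munging, again assuming Opus on payload type 111 (stereo=1 and maxaveragebitrate are standard Opus fmtp parameters):

    // Sketch: append stereo and a higher average bitrate to the Opus fmtp line.
    function preferHighQualityOpus(sdp: string): string {
      return sdp.replace(
        /(a=fmtp:111 [^\r\n]*)/,
        "$1;stereo=1;maxaveragebitrate=510000"
      );
    }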
All this to say, WebRTC might not be the right choice for you if you are concerned with quality. I often resort to the MediaRecorder API.
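For completeness, a sketch of that fallback: record the stream with MediaRecorder and ship the encoded chunks over whatever transport you like (the mimeType here is an assumption; check it with MediaRecorder.isTypeSupported first):

    // Sketch: capture encoded audio chunks you can forward yourself.
    function recordStream(stream: MediaStream, onChunk: (chunk: Blob) => void): MediaRecorder {
      const recorder = new MediaRecorder(stream, { mimeType: "audio/webm;codecs=opus" });
      recorder.ondataavailable = (e) => { if (e.data.size > 0) onChunk(e.data); };
      recorder.start(1000); // emit a chunk roughly every second
      return recorder;
    }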

Premiere export settings for background video

I'm not sure if it's allowed to ask these questions here, but it seems important for us web developers (even a bad dev like me :p ).
The question is about export settings for videos in Premiere. I'm looking to make a ~30s background video like the ones on Airbnb or PayPal. Yesterday I checked PayPal's, and it's only 10-15 MB for more than a minute. How did they do it?
Obviously you want a low average bit rate. Things that can help with that are: keep the resolution low (you can scale it up a bit on the client); use H.264 High Profile (for the H.264 version); use 2-pass encoding; use variable bit rate. You can try increasing the GOP length too.
I assume there's no audio, so that shouldn't be an issue. (Can't remember if Adobe has an option for no sound track, but you can set the audio to a very low bit rate, or post-process it with ffmpeg or something to remove the audio track.)
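If Premiere won't export without a sound track, stripping it afterwards is easy. A sketch calling ffmpeg from Node (the file names are placeholders):

    // Sketch: copy the video stream untouched (-c:v copy) and drop the audio (-an).
    import { execFileSync } from "node:child_process";

    execFileSync("ffmpeg", ["-i", "background.mp4", "-c:v", "copy", "-an", "background-mute.mp4"]);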
If you have any control over the video content, you can try to keep it compressible. For example, avoid video with lots of detail or rapid motion. You might be able to selectively blur parts in a way that doesn't look bad. If it doesn't move too fast, you might be able to decrease the frame rate.
If you really want to optimize, you'll probably need to experiment a lot.

WebRTC - Peerconnection constraints

I've been working on a WebRTC videoconferencing app which is working great, taking into account the current state of WebRTC.
However, I have been exploring the possibilities of adding constraints to the video and audio streams being sent over the PeerConnection, more specifically to improve the performance of the video.
When videoconferencing on old (slow) laptops, we noticed that the image quality is really high but the frames per second are low, so the stream is choppy.
As for the audio quality, we'd give it an 8.5 on Chrome but only a 5.5 to 6 on Firefox.
I am not really interested in applying constraints to getUserMedia, since this stream is shown to the user as well, and we don't want to change anything about this local output (unless there isn't another way).
I have found a lot of information in the W3C's drafts on MediaStreams and WebRTC itself.
These define certain constraints like default fps, minfps, minwidth, and minheight of the image. There is also a lot of information available on webrtc.org, such as choosing a codec, etc.
But these settings can only be made "under the hood". It seems they cannot be addressed from the RTCPeerConnection API level?
Certain examples on the net manipulate the SDP strings in the offer/answer part of the WebRTC handshake; is this the way to go?
TL;DR: How do you apply - and what is the best way to apply - constraints on WebRTC like minfps, maxfps, default fps, minwidth, maxwidth, image dpi, video and audio bandwidth, audio sample rate (kHz), or anything else that improves the performance or quality of the stream(s)?
Big thanks in advance!
Right now, most of those can't be set in Firefox or Chrome. A few can be adjusted (with care/pain) in the SDP, but even if there's an SDP option defined for something it doesn't mean that the browsers look at it.
Both Mozilla and Google are looking to improve CPU overload detection and reaction (reduce frame size dynamically, etc). Right now, this effectively isn't being done. Upcoming releases of FF (FF24) will adapt to the capture resolution (as a maximum), but we don't have constraints for that yet, just about:config prefs (see media.*). That would allow you to set a different default resolution for Firefox.
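For the few knobs that are reachable through the SDP, bandwidth is the classic example: a "b=" line in the video m-section caps the video bitrate. A minimal sketch; browsers differ in which form (b=AS vs. b=TIAS) they honor, so test both:

    // Sketch: cap video bandwidth (in kbps) by inserting a "b=AS" line after
    // the "c=" line of the video m-section, before applying the description.
    function capVideoBandwidth(sdp: string, kbps: number): string {
      return sdp.replace(
        /(m=video[\s\S]*?c=IN [^\r\n]*\r\n)/,
        `$1b=AS:${kbps}\r\n`
      );
    }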

normalize volume in audiofile

I'm creating a video with the FFmpeg library, and I receive an audio file by recording the user's voice.
How can I normalize the sound? Maybe FFmpeg has tools for this, or can you recommend an algorithm for normalization?
The canonical tool for normalizing audio signals is called... wait for it... normalize. I recommend you use it, either by calling it or by studying its source and doing something similar yourself. Normalizing isn't difficult: you just decide on a maximum safe amplitude and then scale every sample according to that.
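If you'd rather roll it yourself, the core of peak normalization is just a scan and a scale. A sketch over float samples (the 0.9 target is an arbitrary "safe" choice):

    // Sketch: find the peak, then scale every sample so the peak hits the target.
    function normalize(samples: Float32Array, target = 0.9): Float32Array {
      let peak = 0;
      for (const s of samples) peak = Math.max(peak, Math.abs(s));
      if (peak === 0) return samples;      // pure silence: nothing to scale
      const gain = target / peak;
      return samples.map((s) => s * gain); // one uniform gain for all samples
    }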

Is there a way to stream audio from MIC and play that stream in Silverlight

So I want to stream audio from a mic using NAudio and then pass that stream to WCF, which a Silverlight app can consume to broadcast the live audio. I want the latency to be as low as possible.
Any suggestions? Or if someone has already done it, please point me to the source. Thanks in advance!
What you are asking is certainly possible, but it will be a fair amount of work.
NAudio can handle capturing the microphone audio.
At the Silverlight end you can play custom audio formats (in this case PCM) using a custom media element streaming source. See this one: http://code.msdn.microsoft.com/wavmss
I suspect latency would not be very good. You can reduce it by keeping the buffer sizes small. Also bear in mind that WAV is not a very efficient format to be sending over the network.
To keep latency as low as possible, you should use the netTcpBinding and stream your audio in binary format. I would use a MemoryStream for this and play with the buffer size to figure out what gives the best performance. Also, try different audio formats; the best choice also depends on the audio quality you expect.