WebRTC - Reduce streamed audio volume

Suppose we get an audio stream using the getUserMedia (gUM) API, and I am broadcasting this stream to other users using WebRTC.
Is it possible to reduce the volume of the audio being streamed?
Note: I am not looking to reduce the device mic volume, because I understand that cannot be controlled through the browser, and I don't want to.

Try a GainNode from the Web Audio API. But the best option would be to do this on the receiving end.
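A minimal sketch of the GainNode approach on the sending side: route the gUM stream through a GainNode into a MediaStreamAudioDestinationNode, then send that destination's stream over the peer connection instead of the raw one. The function names here are illustrative, not from any particular library.

```javascript
// Clamp the requested volume into the [0, 1] range used for attenuation here.
function clampVolume(v) {
  return Math.min(1, Math.max(0, v));
}

// Sketch: produce an attenuated copy of the mic stream (browser context assumed).
async function getAttenuatedStream(volume) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const gain = ctx.createGain();
  gain.gain.value = clampVolume(volume); // e.g. 0.5 = half volume
  const dest = ctx.createMediaStreamDestination();
  source.connect(gain).connect(dest);
  return dest.stream; // add this stream's tracks to your RTCPeerConnection
}
```

The tracks from `dest.stream` are what you pass to `RTCPeerConnection.addTrack`, so remote peers receive the attenuated audio.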

Related

How does audio and video in a webrtc peerconnection stay in sync?

How does audio and video in a WebRTC peer connection stay in sync? I am using an API which publishes audio and video (I assume as one peer connection) to a media server. The audio can occasionally go out of sync by up to 200ms. I am attributing this to the possibility that the audio and video are separate streams, which would account for why the sync can drift.
In addition to Sean's answer:
The WebRTC player in browsers has a very low tolerance for timestamp differences between arriving audio and video samples. Your audio and video streams must be aligned (interleaved) precisely, i.e. the timestamp of the last audio sample received from the network should be within roughly ±200ms of the timestamp of the last video frame received from the network. Otherwise the WebRTC player will stop using NTP timestamps and will play the streams individually. This is because the WebRTC player tries to keep latency at a minimum; I am not sure that was a good decision by the WebRTC team. If your bandwidth is insufficient, or if the live encoder produces streams that are not timestamp-aligned, then you will get out-of-sync playback. In my opinion, the WebRTC player could have a setting for whether to use that tolerance value or always play in sync using NTP timestamps, at the expense of latency.
RTP/RTCP (which WebRTC uses) traditionally uses the RTCP Sender Report, which allows each SSRC stream to be synced on an NTP timestamp. Browsers do use these today, so things should work.
Are you doing any protocol bridging, or anything that could be RTP-only? Which media server are you using?
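One way to observe the skew described above is to compare the receive timestamps of the inbound audio and video RTP streams via `RTCPeerConnection.getStats()`. A rough sketch (the tolerance check is a plain helper; `pc` is a hypothetical peer connection):

```javascript
// Flag streams whose last-packet timestamps differ by more than the
// ~200ms tolerance discussed above.
function isOutOfSync(audioTs, videoTs, toleranceMs = 200) {
  return Math.abs(audioTs - videoTs) > toleranceMs;
}

// Sketch: pull lastPacketReceivedTimestamp for each inbound-rtp stream.
async function checkSync(pc) {
  const stats = await pc.getStats();
  let audioTs, videoTs;
  stats.forEach((report) => {
    if (report.type === "inbound-rtp") {
      if (report.kind === "audio") audioTs = report.lastPacketReceivedTimestamp;
      if (report.kind === "video") videoTs = report.lastPacketReceivedTimestamp;
    }
  });
  return isOutOfSync(audioTs, videoTs);
}
```

This only measures arrival skew, not playout alignment, but a consistently large gap here points at the encoder or the bridge rather than the browser.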

Stream html5 camera output

Does anyone know how to stream HTML5 camera output to other users?
If that's possible, should I use sockets and stream images to the users, or some other technology?
Is there any video tutorial where I can take a look at this?
Many thanks.
The two most common approaches now are most likely:
Stream from the source to a server, and allow users to connect to the server to stream to their devices, typically using some form of Adaptive Bit Rate (ABR) streaming protocol. ABR basically creates multiple bit-rate versions of your content and chunks them, so the client can choose each next chunk from the best bit rate for the device and current network conditions.
Stream peer to peer, or via a conferencing hub, using WebRTC
In general, the latter is more focused on real time: any delay should stay below the threshold that would interfere with audio and video conferencing, usually less than 200ms for audio, for example. To achieve this it may have to sacrifice quality at times, especially video quality.
There are some good WebRTC samples available online (here at the time of writing): https://webrtc.github.io/samples/
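For the WebRTC option, the core flow is small enough to sketch: capture with getUserMedia, add the tracks to an RTCPeerConnection, and exchange an offer/answer. The loopback below wires two in-page connections directly to each other; a real app would carry the offer, answer, and ICE candidates over a signalling channel (e.g. a WebSocket), which is omitted here.

```javascript
// Sketch: minimal in-page WebRTC loopback (browser context assumed).
async function loopbackDemo(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();

  // In a real app these candidates travel over your signalling channel.
  pc1.onicecandidate = (e) => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = (e) => e.candidate && pc1.addIceCandidate(e.candidate);

  // Render whatever arrives on the receiving side.
  pc2.ontrack = (e) => { videoElement.srcObject = e.streams[0]; };

  stream.getTracks().forEach((t) => pc1.addTrack(t, stream));

  const offer = await pc1.createOffer();
  await pc1.setLocalDescription(offer);
  await pc2.setRemoteDescription(offer);
  const answer = await pc2.createAnswer();
  await pc2.setLocalDescription(answer);
  await pc1.setRemoteDescription(answer);
}
```

The official samples linked above walk through exactly this pattern, plus the signalling piece.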

WebRtc stream without loss of quality

My web application records video streams on the server side using WebRTC and the Kurento media server. It just writes the raw stream received from the client to disk. But I found that the video quality drops dramatically, all because of codecs and compression. Is it possible to send video without compression at all? The number of FPS is not important to me; 5 FPS is quite enough for my purpose. The main criterion is 100% quality, or close to it. How can I achieve this? Is there any codec that compresses without loss of video quality?
The server side of my app is written in Spring (Java).

Is it possible to access the raw audio data in webRTC?

I am looking for a way to use the audio channel to deliver some random data that will then be processed on the client side, which would only be possible if I can access the raw audio. Is there a way to do it?
I know that the alternative (and proper) way to do this is to use the data channel, but I think the latency would be higher than with the audio channel.
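Raw samples on the receiving side can be tapped with an AudioWorklet; a hedged sketch is below (`remoteStream` is a hypothetical MediaStream from an `ontrack` handler). One caveat worth stating up front: WebRTC audio passes through a lossy codec (usually Opus), so data encoded directly into sample values will not survive the trip intact, which undercuts this approach for arbitrary data delivery.

```javascript
// Worklet code is inlined as a string and loaded via a Blob URL so the
// sketch stays self-contained.
const workletSource = `
  class TapProcessor extends AudioWorkletProcessor {
    process(inputs) {
      // inputs[0][0] is a Float32Array of raw samples for channel 0.
      if (inputs[0][0]) this.port.postMessage(inputs[0][0]);
      return true;
    }
  }
  registerProcessor("tap", TapProcessor);
`;

// Sketch: feed raw PCM chunks from a received stream to a callback.
async function tapRawAudio(remoteStream, onSamples) {
  const ctx = new AudioContext();
  const url = URL.createObjectURL(
    new Blob([workletSource], { type: "application/javascript" })
  );
  await ctx.audioWorklet.addModule(url);
  const source = ctx.createMediaStreamSource(remoteStream);
  const tap = new AudioWorkletNode(ctx, "tap");
  tap.port.onmessage = (e) => onSamples(e.data); // Float32Array per block
  source.connect(tap);
}
```

So raw access is possible, but for reliable data the data channel remains the right tool; its latency over an established connection is typically small.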

WebRTC video and photo at same time

I'm working on an application that transmits video in low quality using WebRTC. Periodically I want to send a single high-resolution frame from the same camera.
When I try to acquire another stream using getUserMedia, I get the same low-quality one, and when I try to pass constraints to force a higher resolution, the operation fails with an OverconstrainedError (even though it works fine when no other stream is open).
Is it even possible to have many streams with different parameters from the same device at the same time? Or is it possible to acquire a high-resolution image without requesting a new stream?
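One workaround to consider, sketched below under the assumption that the browser supports it (it is not available everywhere): keep the low-quality stream for the call and use the ImageCapture API to take a single photo from the same video track, without opening a second stream at all.

```javascript
// Sketch: grab a high-resolution still from the track already in use.
async function captureHighResFrame(stream) {
  const track = stream.getVideoTracks()[0];
  const capture = new ImageCapture(track);
  // takePhoto() can use the camera's photo resolution, independent of the
  // constrained video resolution used for the WebRTC stream.
  const blob = await capture.takePhoto();
  return blob; // e.g. upload it, or show it via URL.createObjectURL(blob)
}
```

Whether `takePhoto()` actually yields a higher resolution than the video track depends on the device and browser, so this needs testing on your target hardware.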