Reuse ICE candidates during WebRTC re-negotiation

This is regarding the delay in WebRTC call setup caused by ICE negotiation. I start an audio call and later enable video. A new video stream is added to the existing peer connection, ICE candidates are gathered for both the audio and video tracks, and re-negotiation starts. The SDP in the RE-INVITE carries new ICE candidates for both tracks.
Why can't we reuse the ICE candidates for the audio track that were already gathered during the initial audio call? Why gather them again?

Compare to this sample:
https://webrtc.github.io/samples/src/content/peerconnection/upgrade/
Do you get new candidates for sdpMLineIndex 0 (audio)? If you only get new ones for sdpMLineIndex 1 (video) and support BUNDLE, set the bundle policy to 'max-bundle' when constructing the peer connection.
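For example, a minimal sketch (the STUN server entry is illustrative):

```typescript
// Force all m-lines onto a single transport so that adding video later
// reuses the audio ICE candidates instead of gathering a fresh set.
const pc = new RTCPeerConnection({
  bundlePolicy: 'max-bundle',
  iceServers: [{ urls: 'stun:stun.example.org' }], // illustrative STUN server
});
```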

Related

Record remote video stream with audio using WebRTC for Mac

I need a way to record the audio and video of remote peer connections.
We're using the native version of webrtc for macOS.
In the current WebRTC API, there is no way to access the audio of remote connections.
I haven't tried anything in particular, because I can't find any reference for this.

Disabling local stream on remote side after call is connected via WebRTC in Android

I'm trying to separate the video and audio feeds. I can control the video feed from the caller side, but I'm unable to turn off the local video stream on the remote side, since it's an audio call. Any suggestions on how to isolate the video and audio feeds? Simply removing the streams obtained via getStream doesn't work.

Live streaming audio with WebRTC browser => server

I'm trying to send an audio stream from my browser to a server (UDP; I also tried WebSockets).
I'm recording the audio stream with WebRTC, but I have problems transmitting the data from a Node.js client to my server.
Any ideas? Is it possible to send an audio stream to the server using WebRTC (OpenWebRTC)?
To get audio from the browser to the server, you have a few different possibilities.
Web Sockets
Simply send the audio data over a binary web socket to your server. You can use the Web Audio API with a ScriptProcessorNode to capture raw PCM and send it losslessly. Or, you can use the MediaRecorder to record the MediaStream and encode it with a codec like Opus, which you can then stream over the Web Socket.
There is a sample for doing this with video over on Facebook's GitHub repo. Streaming audio only is conceptually the same thing, so you should be able to adapt the example.
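A minimal sketch of the raw-PCM variant, assuming a WebSocket server at ws://example.com/audio (the URL and buffer size are illustrative):

```typescript
// Capture microphone audio as raw Float32 PCM and stream it over a binary
// WebSocket. ScriptProcessorNode is deprecated but still widely supported.
async function streamMicOverWebSocket(): Promise<void> {
  const ws = new WebSocket('ws://example.com/audio'); // illustrative endpoint
  ws.binaryType = 'arraybuffer';

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // mono in/out

  processor.onaudioprocess = (event) => {
    // Copy the samples; the underlying buffer is reused between callbacks.
    const samples = event.inputBuffer.getChannelData(0);
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(new Float32Array(samples).buffer);
    }
  };

  source.connect(processor);
  processor.connect(ctx.destination); // processing only runs while connected
}
```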
HTTP (future)
In the near future, you'll be able to use a WritableStream as the request body with the Fetch API, allowing you to make a normal HTTP PUT with a stream source from a browser. This is essentially the same as what you would do with a Web Socket, just without the Web Socket layer.
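A hedged sketch of how such a streaming upload could look; it assumes a browser that accepts a stream as the request body (in implementations that later shipped, this is a ReadableStream plus a 'duplex' option), and the endpoint URL is illustrative:

```typescript
// Pipe audio chunks into a single long-lived HTTP PUT.
// Requires a browser with streaming request-body support.
async function streamOverHttp(chunks: AsyncIterable<Uint8Array>): Promise<void> {
  const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>();

  // Feed encoded audio into the request body as it is produced.
  const writer = writable.getWriter();
  (async () => {
    for await (const chunk of chunks) await writer.write(chunk);
    await writer.close();
  })();

  await fetch('https://example.com/audio', { // illustrative endpoint
    method: 'PUT',
    body: readable,
    duplex: 'half', // required for streaming uploads where supported
  } as RequestInit);
}
```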
WebRTC (data channel)
With a WebRTC connection and the server as a "peer", you can open a data channel and send that exact same PCM or encoded audio that you would have sent over Web Sockets or HTTP.
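A minimal sketch, assuming offer/answer signaling with the server-side peer is already handled elsewhere:

```typescript
// Send the same binary audio frames over a WebRTC data channel instead of a
// WebSocket. Signaling and connection setup are assumed to exist already.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel('audio', { ordered: true });
channel.binaryType = 'arraybuffer';

function sendAudioChunk(chunk: ArrayBuffer): void {
  if (channel.readyState === 'open') {
    channel.send(chunk);
  }
}
```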
There's a ton of complexity added to this with no real benefit. Don't use this method.
WebRTC (media streams)
WebRTC calls support direct handling of MediaStreams. You can attach a stream and let the WebRTC stack take care of negotiating a codec, adapting for bandwidth changes, dropping data that doesn't arrive, maintaining synchronization, and negotiating connectivity around restrictive firewall environments. While this makes things easier on the surface, that's a lot of complexity as well. There aren't any packages for Node.js that expose the MediaStreams to you, so you're stuck dealing with other software... none of it as easy to integrate as it could be.
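The browser half of this is the easy part; a sketch of it, assuming the server-side "peer" implements the rest:

```typescript
// Attach the microphone to a peer connection and let the WebRTC stack
// negotiate codec, bitrate, and transport with the server-side peer.
async function attachMicToPeerConnection(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
}
```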
Most folks going this route will execute GStreamer as an RTP server to handle the media component. I'm not convinced this is the best way, but it's the best way I know of at the moment.

Kurento media server not recording remote audio

I have extended the one-to-one call tutorial to support recording.
Original: http://doc-kurento.readthedocs.io/en/stable/tutorials.html#webrtc-one-to-one-video-call
Extended: https://github.com/gaikwad411/kurento-tutorial-node
Everything works except recording the remote audio.
When the caller and callee videos are recorded, the callee's voice is absent from the caller's recording, and vice versa.
I have searched the Kurento docs and mailing lists but did not find a solution.
The workarounds I have in mind:
1. Use ffmpeg to combine the two videos.
2. Use composite recording; I will also need to combine the remote audio stream.
My questions are:
1) Why is this happening? I can hear the remote audio during the ongoing call, but not in the recording; in the recording I can hear only my own voice.
2) Is there another solution apart from composite recording?
This is perfectly normal behaviour. When you connect a WebRtcEndpoint to a RecorderEndpoint, you only get the media that the endpoint is pushing into the pipeline. As the endpoint is one peer of a WebRTC connection between the browser and the media server, the media that the endpoint pushes into the pipeline is whatever it receives from the browser that has negotiated that WebRTC connection.
The only options you have, as you have stated already, are post-processing or composite mixing.
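For the composite route, a hedged sketch with the kurento-client Node.js API (the output URI and variable names are illustrative):

```typescript
// Mix caller and callee media in a Composite hub, then record the mixed
// stream with a single RecorderEndpoint.
async function recordMixedCall(
  pipeline: any, // kurento MediaPipeline
  callerEp: any, // caller's WebRtcEndpoint
  calleeEp: any, // callee's WebRtcEndpoint
): Promise<void> {
  const composite = await pipeline.create('Composite');
  const callerPort = await composite.createHubPort();
  const calleePort = await composite.createHubPort();
  const outputPort = await composite.createHubPort();

  const recorder = await pipeline.create('RecorderEndpoint', {
    uri: 'file:///tmp/call-mixed.webm', // illustrative output path
  });

  await callerEp.connect(callerPort); // caller's media into the mixer
  await calleeEp.connect(calleePort); // callee's media into the mixer
  await outputPort.connect(recorder); // mixed output to the recorder
  await recorder.record();
}
```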

How does WebRTC implement synchronization of its audio and video streams from a remote peer?

WebRTC implements PeerConnection as in https://apprtc.appspot.com/
How does WebRTC synchronize the audio and video streams coming from a remote peer?
Normal RTP A/V sync is done using RTCP SR/RR reports and the timestamps in each SRTP packet.
See any VoIP application.
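Concretely, each RTCP Sender Report pairs an RTP timestamp with an NTP wall-clock time, which lets the receiver place both streams on a common clock. A sketch of the mapping (field names are illustrative):

```typescript
// Map an RTP timestamp to NTP wall-clock time using the most recent RTCP
// Sender Report. The receiver does this for audio and video separately,
// then delays whichever stream is ahead until the NTP times line up.
interface SenderReport {
  ntpSeconds: number;   // wall-clock time carried in the SR, in seconds
  rtpTimestamp: number; // RTP timestamp sampled at that same instant
}

function rtpToNtpSeconds(
  rtpTs: number,
  sr: SenderReport,
  clockRateHz: number,
): number {
  // Real code must also handle 32-bit RTP timestamp wraparound.
  return sr.ntpSeconds + (rtpTs - sr.rtpTimestamp) / clockRateHz;
}

// Example: audio RTP typically runs at 48000 Hz and video at 90000 Hz;
// rtpToNtpSeconds(audioTs, audioSr, 48000) and
// rtpToNtpSeconds(videoTs, videoSr, 90000) land on the same NTP timeline.
```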