An issue sharing streams with multiple peers in WebRTC?

Using the latest Chrome. As far as I can tell, everything sets up correctly: offer/answer, candidates, everything I expected.
However, I noticed one strange issue, and when I googled it I found exactly the same issue I am currently seeing:
https://stackoverflow.com/questions/44157738/webrtc-sharing-one-stream-with-multiple-peers
I also have three peers. What I want is that A sees B and C, B sees A and C, and C sees A and B.
Only one peer can see the other two peers, while the other two peers each see only one.
BTW, I confirmed that each peer received the onaddstream event twice, which is what I expect.
Here is what I did:
Once I get the stream, I store it in window.localStream.
Whenever a new peer connection comes in (since I support multiple peers, I manage them in a dictionary), I add the local stream with peerConnection.addStream(window.localStream).
In peerConnection.onaddstream, I attach the incoming stream to a video tag.
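A simplified sketch of the setup described above (the iceServers config and the per-peer video element ids are placeholders, and it uses the legacy addStream/onaddstream API mentioned in the question):

    // peer connections keyed by remote peer id
    const peers = {};

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then(stream => { window.localStream = stream; });

    function createPeerConnection(peerId) {
      // assumes getUserMedia() has already resolved by the time a peer joins
      const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
      pc.addStream(window.localStream);      // legacy API: send the one local stream to this peer
      pc.onaddstream = event => {
        // each remote peer gets its own video element, e.g. <video id="video-peer1">
        document.getElementById('video-' + peerId).srcObject = event.stream;
      };
      peers[peerId] = pc;
      return pc;
    }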
Is it the case that once a MediaStream is active and being transmitted, the same stream cannot be transmitted to another peer at the same time?
Any help would be greatly appreciated.
Thanks,

Sending the same stream to multiple peers should work. Compare your code to https://webrtc.github.io/samples/src/content/peerconnection/multiple/ which shows how to achieve this. Your issue sounds like you might not be setting the answer on the right peer connection. Inspecting each connection's signalingState and iceConnectionState may provide further insight.
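For example, something like this (assuming the connections are kept in a dictionary as described in the question) quickly shows which connection is stuck:

    // dump the state of every peer connection to spot the one that never completed negotiation
    for (const [peerId, pc] of Object.entries(peers)) {
      console.log(peerId, 'signalingState:', pc.signalingState,
                  'iceConnectionState:', pc.iceConnectionState);
    }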

I ran into the same problem and finally found out that it was because the SDP and ICE candidates of the third client kept getting overwritten, so only a single peer connection of the third client kept working.
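In other words, each incoming SDP and candidate has to be routed to the connection for that particular sender. A rough sketch (the { from, sdp, candidate } message shape, signalingChannel, and the createPeerConnection helper are assumptions about the surrounding code):

    signalingChannel.onmessage = async ({ data }) => {
      const msg = JSON.parse(data);
      // look up (or create) the connection for this specific sender,
      // instead of writing into a single shared variable
      const pc = peers[msg.from] || createPeerConnection(msg.from);
      if (msg.sdp) {
        await pc.setRemoteDescription(msg.sdp);
        if (msg.sdp.type === 'offer') {
          await pc.setLocalDescription(await pc.createAnswer());
          signalingChannel.send(JSON.stringify({ to: msg.from, sdp: pc.localDescription }));
        }
      } else if (msg.candidate) {
        await pc.addIceCandidate(msg.candidate);
      }
    };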

Related

Should I send a WebRTC answer before my side's localMedia tracks have been added?

I'm building a video calling app using WebRTC which allows one peer to call another by selecting someone in the lobby. When peer A sends a call request, the other peer B can accept. At this point, WebRTC signaling starts:
Both peers get their local media using MediaDevices.getUserMedia()
Both peers create an RTCPeerConnection and attach event listeners
Both peers call RTCPeerConnection.addTrack() to add their local media
One peer A (the impolite user) creates an offer, calls RTCPeerConnection.setLocalDescription() to set that offer as the local description, and sends it to the WebSocket server, which forwards it to the other peer B.
The other peer B receives this offer and calls RTCPeerConnection.setRemoteDescription() to record it as the remote description
The other peer B then creates an answer and transmits it back to the first peer A.
(Steps based on https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Connectivity)
This flow works well most of the time, but in roughly 1 out of 10 calls I receive no video/audio from one of the peers (while both peers have working local video). In such a case, I have noticed that the answer SDP contains a=recvonly where it should contain a=sendrecv under normal circumstances.
I have further determined that by the time the other peer receives the offer and needs to reply with an answer, the localMedia of this side has sometimes not yet been added, because MediaDevices.getUserMedia can take a while to complete. I have also confirmed this order of operations by logging and observing that the offer sometimes arrives before local tracks were added.
I'm assuming that I shouldn't send an answer before the local media has been added?
I'm thinking of two ways to fix this, but I am not sure which option is best, if any:
Create the RTCPeerConnection only after MediaDevices.getUserMedia() completes. In the meantime, when receiving an offer, there is no peer connection yet, so we save offers in a buffer to process them later once the RTCPeerConnection is created (see the sketch below).
When receiving an offer, and there are no localMedia tracks yet, hold off on creating the answer until the localMedia tracks have been added.
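For reference, a rough sketch of the first option (signalingSocket and config are placeholders for my actual signaling and configuration):

    let pc = null;
    const pendingOffers = [];

    async function startCall() {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
      pc = new RTCPeerConnection(config);
      stream.getTracks().forEach(track => pc.addTrack(track, stream));
      // drain any offers that arrived while getUserMedia() was still pending
      for (const offer of pendingOffers.splice(0)) {
        await handleOffer(offer);
      }
    }

    async function handleOffer(offer) {
      if (!pc) {                       // local media not ready yet: park the offer
        pendingOffers.push(offer);
        return;
      }
      await pc.setRemoteDescription(offer);
      await pc.setLocalDescription(await pc.createAnswer());
      signalingSocket.send(JSON.stringify({ sdp: pc.localDescription }));
    }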
I am having difficulty deciding which solution (or another one) best matches the "Perfect Negotiation" pattern.
Thanks in advance!
Yes, it is good to add the stream before creating an offer if you do it 'statically', but the best way is to do it from the onnegotiationneeded event, because calling addTrack() triggers a negotiationneeded event. So add the stream's tracks first and then call createOffer() inside onnegotiationneeded. As for the answer, you can do it before with no issues, but remember that a well-established connection will let you add/remove tracks with no problems (even after the SDP has been set). You didn't post any code, but remember that you also MUST exchange ICE candidates.
One last piece of advice: remember that all of the above IS asynchronous! So use promises, and await until the description is set; only THEN create an offer/answer.
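A minimal sketch of that pattern (signaling.send and localStream are placeholders for your own signaling and media code):

    pc.onnegotiationneeded = async () => {
      try {
        const offer = await pc.createOffer();
        await pc.setLocalDescription(offer);   // wait until the description is actually set
        signaling.send(JSON.stringify({ sdp: pc.localDescription }));
      } catch (err) {
        console.error('negotiation failed', err);
      }
    };

    // adding the tracks is what fires negotiationneeded in the first place
    localStream.getTracks().forEach(track => pc.addTrack(track, localStream));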
Hopefully, this will help

In peer connection negotiation phase, is there ever a counter offer?

So far, everything I've read on webrtc peer connections says that an "offer" is sent, and it is responded to with an "answer". Then the connection starts and all is well.
In my understanding, the offer is like "Hey, let's use this codec and encryption". Given that the answer always leads to a connection, it seems the answer is always "okay, let's use that!". Can there be a counter offer like "No, let's use this codec instead!". Who ultimately decides which settings are used?
The offer contains a list of one side's acceptable codecs (prioritized).
The answer contains the subset of those codecs, listing only the ones that both sides can do - possibly in a different order.
So: no, the answer shouldn't contain a codec that wasn't in the offer.
But... Once Offer/Answer has happened, either side can send a second offer (this is typically used to add video to an existing audio-only session) and receive a new answer.
This means you could send an answer with no codecs and then send a second offer with a different set of codecs, but there is no reason to expect that the other side will change its mind (unless there was some resource exhaustion).
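For illustration, a second offer that adds video to an existing audio-only session could look roughly like this (signaling is a placeholder; the remote side simply answers again, picking from whatever codecs this new offer lists):

    async function addVideo(pc, signaling) {
      const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
      pc.addTrack(camStream.getVideoTracks()[0], camStream);  // new video m-line in the next offer
      const offer = await pc.createOffer();                   // the "second offer"
      await pc.setLocalDescription(offer);
      signaling.send(JSON.stringify({ sdp: pc.localDescription }));
    }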

ICE connection state: completed vs connected

Can someone please clarify the difference between iceConnectionState: completed and iceConnectionState: connected?
When I connect two browsers with WebRTC I am able to exchange data using a data channel, but for some reason the iceConnectionState on the browser that made the offer remains 'completed', whereas on the browser that accepted the offer it changes to 'connected'.
Any idea if this is normal?
In short:
connected: Found a working candidate pair, but still performing connectivity checks to find a better one.
completed: Found a working candidate pair and done performing connectivity checks.
For most purposes, you can probably treat the connected/completed states as the same thing.
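For example, a handler that treats them identically (showCallAsActive is just a placeholder for whatever your app does):

    pc.oniceconnectionstatechange = () => {
      const state = pc.iceConnectionState;
      if (state === 'connected' || state === 'completed') {
        showCallAsActive();   // the media path is usable in either state
      }
    };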
Note that, as mentioned by Ajay, there are some notable differences between how the standard defines the states and how they're implemented in Chrome. The main ones that come to mind:
There's no "end-of-candidates" signaling, so none of those parts of the candidate state definitions are implemented. This means if a remote candidate arrives late, it's possible to go from "completed" back to "connected" without an ICE restart. Though I assume this is rare in practice.
The ICE state is actually a combined ICE+DTLS state (see: https://bugs.chromium.org/p/webrtc/issues/detail?id=6145). This is because it was implemented before there was such a thing as "RTCPeerConnectionState". This can lead to confusion if there's actually a DTLS-level issue, since the only way to really notice is to look in a native Chrome log.
We definitely plan on fixing all the discrepancies. But for a while we held off on it because the standard was still in flux. And right now our priority is more on implementing unified plan SDP and the RtpSender/RtpReceiver APIs.
ICE connection state transitions are a bit tricky; the summary below should give you a clear idea of the possible transitions.
In simple words:
new/checking: Not yet connected
connected/completed: Media path is available
disconnected/failed: Media path is not available (Whatever data you are sending on data channel won't reach other end)
Read full summary here
The WebRTC team is still working hard to make it stable and spec compliant.
The current Chrome behavior is confusing, so I filed a bug; you can star it to get notified.

WebRTC: Why "CreateAnswer can't be called before SetRemoteDescription"?

Browser: Chrome
I am trying to debug a WebRTC application which works fine on three out of four fronts! I cannot get video from the receiver to the caller. I can get video and audio from the caller to the receiver, and audio from the receiver to the caller. The problem is that the receiver does not fire a video (sdpMid="video") ICE candidate. While desperately trying to solve this problem, I tried to call pc.createAnswer() before calling pc.setRemoteDescription() and it gave the error quoted in the title.
My question is to understand the reason behind this. An answer SDP would just be the SDP based upon the getUserMedia settings/constraints. So why do we have to wait for setRemoteDescription? I thought that createAnswer() would start the gathering of ICE candidates, and that this could be done earlier without waiting to set the remote description. That is not the case. Why?
Offers and answers aren't independent, they're part of an inherently asymmetric exchange.
An answer is a direct response to a specific offer (hence the name "answer"). Therefore the peer cannot answer before it has an offer, which you set with setRemoteDescription.
An offer contains specific limitations, or envelope (like m-lines), that an answer has to abide by/answer to/stay within. Another way to say it is that the answer is an iteration of the offer.
For instance, an offer created with offer options offerToReceiveVideo: false can only be answered with recvonly for video (meaning receive video from offerer to answerer only), never sendrecv.
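In code, the order that avoids the quoted error is roughly this (signaling is a placeholder for however the answer gets sent back):

    async function onOfferReceived(pc, offer, signaling) {
      await pc.setRemoteDescription(offer);    // must happen before createAnswer()
      const answer = await pc.createAnswer();  // the answer stays within the offer's envelope
      await pc.setLocalDescription(answer);    // ICE candidate gathering for the answerer starts here
      signaling.send(JSON.stringify({ sdp: pc.localDescription }));
    }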

WebRTC iceGatheringChanged with state 'complete' takes far too long to fire when using TURN (~minute)

Scenario:
I'm using WebRTC (Google's libjingle) on iOS, and the PeerConnection is set up using a TURN server; I wait for all candidates to gather before I send them to the peer (I'm using SIP). The problem is that although all candidates are gathered in around 1-3 seconds (I can see it in the logs), the iceGatheringChanged() callback is not called with state GatheringComplete until around a whole minute later!
Any idea why that happens?
After analyzing the traffic using Google's AppRTCDemo for iOS, it seems that for GatheringComplete to fire, the client needs to have already received the candidates from the remote side, apparently because it needs to set up TURN allocations and add permissions on the new allocation so that data can be exchanged with the peer. Is that the case? If so, why?
Best regards
Are you exchanging the candidates for both parties in real time? You are right: the TURN client requires the other party's candidates to create permissions on the TURN server and also to build the check lists to start ICE processing.
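Exchanging them in real time (trickle ICE) usually avoids waiting for gathering to complete at all; in the browser API that looks roughly like this (signaling is a placeholder), and the same idea applies to the native iOS API:

    pc.onicecandidate = event => {
      if (event.candidate) {
        // send each candidate to the remote side as soon as it appears
        signaling.send(JSON.stringify({ candidate: event.candidate }));
      }
      // a null candidate just means gathering has finished
    };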