WebRTC: Why "CreateAnswer can't be called before SetRemoteDescription"? - webrtc

Browser: Chrome
I am trying to debug a WebRTC application which works fine on three out of four fronts! I cannot get video from the receiver to the caller. I can get video and audio from the caller to the receiver, and audio from the receiver to the caller. The problem is that the receiver does not fire a video (sdpMid="video") ICE candidate. While desperately trying to solve this problem, I tried to call pc.createAnswer before calling pc.setRemoteDescription, and it gave the error quoted in the title.
My question is to understand the reason behind this. An answer SDP would just be the SDP based upon the getUserMedia settings/constraints. So why do we have to wait for setRemoteDescription? I thought that createAnswer would kick off the gathering of ICE candidates, and that this could happen earlier without waiting for setRemoteDescription. That is not the case. Why?

Offers and answers aren't independent, they're part of an inherently asymmetric exchange.
An answer is a direct response to a specific offer (hence the name "answer"). Therefore the peer cannot answer before it has an offer, which you set with setRemoteDescription.
An offer contains specific limitations, an envelope (like m-lines), that an answer has to abide by and stay within. Another way to say it is that the answer is an iteration of the offer.
For instance, an offer created with offer options offerToReceiveVideo: false can only be answered with recvonly for video (meaning receive video from offerer to answerer only), never sendrecv.
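The "envelope" constraint can be illustrated with a helper that lists which direction attributes a valid answer m-line may use for a given offer m-line direction. This is a sketch of the SDP offer/answer rules, not browser API code:

```javascript
// Directions an answer m-line may use, given the offer m-line's direction.
// The answer may only send where the offer is willing to receive, and may
// only receive where the offer sends.
function allowedAnswerDirections(offerDirection) {
  switch (offerDirection) {
    case "sendrecv": return ["sendrecv", "sendonly", "recvonly", "inactive"];
    case "sendonly": return ["recvonly", "inactive"];
    case "recvonly": return ["sendonly", "inactive"];
    case "inactive": return ["inactive"];
    default: throw new Error(`unknown direction: ${offerDirection}`);
  }
}
```

So an offer whose video m-line is sendonly (the offerer sends video but, per offerToReceiveVideo: false, won't receive any) can only be answered recvonly or inactive, never sendrecv.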

Related

Should I send a WebRTC answer before my side's localMedia tracks were added?

I'm building a video calling app using WebRTC which allows one peer to call another by selecting someone in the lobby. When peer A sends a call request, the other peer B can accept. At this point, WebRTC signaling starts:
Both peers get their local media using MediaDevices.getUserMedia()
Both peers create an RTCPeerConnection and attach event listeners
Both peers call RTCPeerConnection.addTrack() to add their local media
One peer A (the impolite user) creates an offer, calls RTCPeerConnection.setLocalDescription() to set that offer as the local description, and sends it to the WebSocket server, which forwards it to the other peer B.
The other peer B receives this offer and calls RTCPeerConnection.setRemoteDescription() to record it as the remote description
The other peer B then creates an answer and transmits it back to the first peer A.
(Steps based on https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Connectivity)
This flow almost works well: in about 1 out of 10 calls, I receive no video/audio from one of the peers (while both peers have working local video). In such a case, I have noticed that the answer SDP contains a=recvonly where it should be a=sendrecv under normal circumstances.
I have further determined that by the time the other peer receives the offer and needs to reply with an answer, the local media on that side has sometimes not yet been added, because MediaDevices.getUserMedia() can take a while to complete. I have also confirmed this order of operations by logging and observing that the offer sometimes arrives before the local tracks were added.
I'm assuming that I shouldn't send an answer before the local media has been added?
I'm thinking of two ways to fix this, but I am not sure which option is best, if any:
Create the RTCPeerConnection only after MediaDevices.getUserMedia() completes. In the meantime, when receiving an offer, there is no peer connection yet, so we save offers in a buffer to process them later once the RTCPeerConnection is created.
When receiving an offer, and there are no localMedia tracks yet, hold off on creating the answer until the localMedia tracks have been added.
I am having difficulty deciding which solution (or another) matches best with the "Perfect Negotiation" pattern.
Thanks in advance!
Yes, it is good to add the stream before creating an offer if you do it 'statically', but the best way is to create the offer inside the negotiationneeded event handler, because calling addTrack() triggers a negotiationneeded event. So you should add the stream and then call createOffer inside onnegotiationneeded. As for the answer, you can create it beforehand with no issues, but remember that a well-established connection will let you add/remove tracks with no problems (even after the SDP has been set). You didn't post any code, but remember that you also MUST exchange ICE candidates.
One last piece of advice: remember that all of the above IS asynchronous! So you should use promises and await until the description is set; only THEN create an offer/answer.
Hopefully, this will help
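To make the "wait before answering" idea concrete, here is a sketch of option 2 from the question: gating incoming offers on local media being ready. The `getMedia`, `pc`, and `send` names are injected stand-ins (assumptions, not standard APIs) so the pattern reads the same in a browser, where you would pass navigator.mediaDevices.getUserMedia, an RTCPeerConnection, and your signaling transport's send function:

```javascript
// Sketch of option 2: gate incoming offers on local media being ready.
function makeAnswerer({ getMedia, pc, send }) {
  // Start acquiring local media once, up front, and remember the promise.
  const localMediaReady = (async () => {
    const stream = await getMedia({ audio: true, video: true });
    for (const track of stream.getTracks()) pc.addTrack(track, stream);
  })();

  return async function onOfferReceived(offer) {
    await localMediaReady;                  // wait for local tracks first
    await pc.setRemoteDescription(offer);   // the offer must be set first
    const answer = await pc.createAnswer(); // now sendrecv, not recvonly
    await pc.setLocalDescription(answer);
    send({ description: answer });          // hypothetical signaling call
  };
}
```

Because the answer is only created after the shared promise resolves, a=recvonly can no longer sneak in when getUserMedia is slow.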

In peer connection negotiation phase, is there ever a counter offer?

So far, everything I've read on webrtc peer connections says that an "offer" is sent, and it is responded to with an "answer". Then the connection starts and all is well.
In my understanding, the offer is like "Hey, let's use this codec and encryption". Given that the answer always leads to a connection, it seems the answer is always "okay, let's use that!". Can there be a counter offer like "No, let's use this codec instead!". Who ultimately decides which settings are used?
The offer contains a list of one side's acceptable codecs (prioritized).
The answer contains the subset of those codecs, listing only the ones that both sides can do - possibly in a different order.
So: no, the answer shouldn't contain a codec that wasn't in the offer.
But... Once Offer/Answer has happened, either side can send a second offer (this is typically used to add video to an existing audio-only session) and receive a new answer.
This means you could send an answer with no codecs and then send a second offer with a different set of codecs, but there is no reason to expect that the other side will change its mind (unless there was some resource exhaustion).
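The subset rule can be sketched as a pure function: the answer lists only codecs that appear in the offer and that the answerer also supports, in the answerer's own preference order. This illustrates the rule; it is not how browsers literally assemble SDP:

```javascript
// Answer codec list = codecs from the offer that we also support,
// ordered by our own preference. The answer may reorder the list but
// never introduce a codec absent from the offer.
function answerCodecs(offeredCodecs, ourPreferredCodecs) {
  const offered = new Set(offeredCodecs);
  return ourPreferredCodecs.filter((codec) => offered.has(codec));
}
```

For example, `answerCodecs(["VP8", "H264", "AV1"], ["H264", "VP8"])` returns `["H264", "VP8"]`: a reordered subset, never a new codec.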

ICE connection state, "completed" vs "connected"

Can someone please clarify the difference between iceConnectionState: "completed" vs iceConnectionState: "connected"?
When I connect two browsers with WebRTC I am able to exchange data using a datachannel, but for some reason the iceConnectionState on the browser that made the offer remains "completed", whereas on the browser that accepted the offer it changes to "connected".
Any idea if this is normal?
In short:
connected: Found a working candidate pair, but still performing connectivity checks to find a better one.
completed: Found a working candidate pair and done performing connectivity checks.
For most purposes, you can probably treat the connected/completed states as the same thing.
Note that, as mentioned by Ajay, there are some notable differences between how the standard defines the states and how they're implemented in Chrome. The main ones that come to mind:
There's no "end-of-candidates" signaling, so none of those parts of the candidate state definitions are implemented. This means if a remote candidate arrives late, it's possible to go from "completed" back to "connected" without an ICE restart. Though I assume this is rare in practice.
The ICE state is actually a combined ICE+DTLS state (see: https://bugs.chromium.org/p/webrtc/issues/detail?id=6145). This is because it was implemented before there was such a thing as "RTCPeerConnectionState". This can lead to confusion if there's actually a DTLS-level issue, since the only way to really notice is to look at a native Chrome log.
We definitely plan on fixing all the discrepancies. But for a while we held off on it because the standard was still in flux. And right now our priority is more on implementing unified plan SDP and the RtpSender/RtpReceiver APIs.
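The "treat them the same" advice can be expressed as a small helper that collapses the fine-grained ICE states into what most applications care about. The summary strings are made up for illustration; only the state names come from the spec's RTCIceConnectionState:

```javascript
// Collapse the fine-grained ICE states into coarse app-level buckets.
function iceStateSummary(state) {
  switch (state) {
    case "connected":
    case "completed":
      return "media-path-available";   // treat these two the same
    case "disconnected":
    case "failed":
      return "media-path-lost";
    case "new":
    case "checking":
      return "not-yet-connected";
    case "closed":
      return "closed";
    default:
      throw new Error(`unknown ICE state: ${state}`);
  }
}

// In a browser you might hook it up like this (pc is a hypothetical
// RTCPeerConnection):
// pc.oniceconnectionstatechange = () =>
//   console.log(iceStateSummary(pc.iceConnectionState));
```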
ICE connection state transitions are a bit tricky; with the flow diagram below you can get a clear idea of the possible transitions.
In simple words:
new/checking: Not yet connected
connected/completed: Media path is available
disconnected/failed: Media path is not available (Whatever data you are sending on data channel won't reach other end)
Read full summary here
The WebRTC team is still working hard to make it stable & spec-compliant.
Current Chrome behavior is confusing, so I filed a bug; you can star it to get notified.

WebRTC: removeStream and then re- addStream after negotiation: Safe?

After a WebRTC session has been established, I would like to stop sending the stream to the peer at some point and then resume sending later.
If I call removeStream, it does indeed stop sending data. If I then call addStream (with the previously removed stream), it resumes. Great!
However, is this safe? Or do I need to re-negotiate/exchange the SDP after the removal and/or after the re-add? I've seen it mentioned in several places that the SDP needs to be re-negotiated after such changes, but I'm wondering if it is ok in this simple case where the same stream is being removed and re-added?
PS: Just in case anyone wants to suggest it: I don't want to change the enabled state of the track since I still need the local stream playing, even while it is not being sent to the peer.
It will only work in Chrome, and is non-spec, so it's neither web compatible nor future-proof.
The spec has pivoted from streams to tracks, sporting addTrack and removeTrack instead of addStream and removeStream. As a result the latter isn't even implemented in Firefox.
Unfortunately, because Chrome hasn't caught up, this means renegotiation currently works differently in different browsers. It is possible to do, however, with some effort.
The new model has a cleaner separation between streams and what's sent in RTCPeerConnection.
Instead of renegotiating, setting track.enabled = false is a good idea. You can still play a local view of your video by cloning the track.
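A sketch of that track-based approach: pause what the peer receives without any renegotiation, while keeping a live local preview via a cloned track. Here `pc` is an existing RTCPeerConnection, and the helper's name and shape are illustrative, not a standard API:

```javascript
// Pause/resume what the peer receives without renegotiation, keeping a
// live local preview by playing a clone of the track.
function setupMutableVideo(track, pc, stream) {
  pc.addTrack(track, stream);            // this track is what the peer receives
  const preview = track.clone();         // local copy, unaffected by pause()
  return {
    preview,                             // attach via new MediaStream([preview])
    pause() { track.enabled = false; },  // peer now receives black frames
    resume() { track.enabled = true; },  // resumes without any SDP exchange
  };
}
```

Because only `track.enabled` changes, no new offer/answer round-trip is needed, and the cloned preview keeps playing locally the whole time.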

What does the WebRTC peer negotiation workflow look like?

I need to develop a custom WebRTC peer (I need to establish an audio and/or data connection between a web browser and a non-browser). However, I struggle to find a proper, clear description of the handshake phase.
Answers to questions such as How to create data channel in WebRTC peer connection? are not entirely helpful, as they are not too detailed. Specifically, they say nothing about SDP contents.
Can anyone explain this or recommend any good documentation?
Here is a page with some graphs showing how the signaling process works. Basically, you set up some client-side things first:
PeerConnectionFactory; to generate PeerConnections,
PeerConnection; one for every connection to another peer you want (usually 1),
MediaStream; to hook up the audio and video from your client device.
Then you generate an SDP offer
peerConnection.createOffer();
on the caller side, set it as the caller's local description
peerConnection.setLocalDescription(insert-the-offer-here);
and send it to the callee. The callee sets this offer as its remote description
peerConnection.setRemoteDescription(insert-the-offer-here);
generates an SDP answer
peerConnection.createAnswer();
sets that answer as its own local description, and sends it back to the caller. The caller receives this answer and sets it as its remote description.
peerConnection.setRemoteDescription(insert-the-answer-here);
Both the caller and callee get a call to
onAddStream() {...} //needs to be implemented in your code
The callee gets it when the caller's offer is set, and the caller when the callee's answer is set. This callback signals the beginning of the connection.
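The steps above can be sketched as three small functions. Here `signaling` stands for whatever transport you use between the peers (e.g. a WebSocket wrapper); its `send()` method is an assumed name, not part of WebRTC itself:

```javascript
// Caller: create the offer, set it locally, send it over.
async function startCall(pc, signaling) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);    // set locally before sending
  signaling.send({ type: "offer", sdp: offer });
}

// Callee: apply the offer, then create, set, and send the answer.
async function handleOffer(pc, signaling, offer) {
  await pc.setRemoteDescription(offer);   // createAnswer requires this first
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send({ type: "answer", sdp: answer });
}

// Caller, on receiving the answer: apply it, completing the handshake.
async function handleAnswer(pc, answer) {
  await pc.setRemoteDescription(answer);
}
```

Note the strict ordering on the callee side: setRemoteDescription must succeed before createAnswer is called, which is exactly why "CreateAnswer can't be called before SetRemoteDescription".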
You will also need to exchange ICE candidates; STUN/TURN servers are strictly optional (host candidates may suffice on a local network), but you need them to get through firewalls and NATs, so in production code you probably want them anyway.
Note: WebRTC documentation is scarce and subject to change; take everything you read about WebRTC (at least anything written as of now) with a grain of salt...