I am working on setting up group calls involving up to 8 peers using WebRTC.
Let's say a peer needs to set up 7 RTCPeerConnections to join a group call. Instead of relying on the onicecandidate event for every single RTCPeerConnection, I was wondering if I can track the client's ICE candidates in a central location and reuse them for each new RTCPeerConnection (e.g., the signaling server keeps track of a peer's full set of ICE candidates and shares them with other peers as soon as they need them).
I am unsure what the average number of ICE candidates each client will have, but with the trickle ICE process, it seems that many duplicate HTTP or WebSocket calls will need to be made to the signaling server in order to exchange ICE candidates between any two peers.
So I was wondering if I could just "accumulate" ICE candidates locally and reuse them whenever a new RTCPeerConnection needs to be made with a new peer.
You cannot. ICE candidates are associated with the peer connection and its ICE username fragment and password.
There is a feature called ICE forking that would allow what you ask for, but it is not implemented yet. https://bugs.chromium.org/p/webrtc/issues/detail?id=11252#c3 has some details.
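You can see the reason directly in the SDP: every connection gets fresh ICE credentials, and candidates are only valid against those credentials. A minimal browser sketch (names are illustrative) that prints the ice-ufrag of two separate connections:

    // Each RTCPeerConnection generates its own ICE username fragment and
    // password, so candidates gathered for one are useless to another.
    async function showUfrag(label) {
      const pc = new RTCPeerConnection();
      pc.createDataChannel('probe');            // ensure there is something to negotiate
      const offer = await pc.createOffer();
      const ufrag = offer.sdp.match(/a=ice-ufrag:(\S+)/)[1];
      console.log(label, 'ice-ufrag:', ufrag);  // differs for every connection
      pc.close();
    }
    showUfrag('pc1');
    showUfrag('pc2');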
I am using simple-peer to build a WebRTC application. To establish a connection, we need to first send the offer and receive the answer. After that, the onicecandidate event gets triggered, generating a candidate, and we are required to send the candidate data to the remote peer. The remote peer then calls addIceCandidate and sends back its own candidate data, which needs to be added on the local peer using addIceCandidate, and the connection gets established.
I want to understand how simple-peer handles the transfer of candidate data. The SDP data for the OFFER and ANSWER has to be transferred through a server in between; in one of the examples socket.io is used. But how does the candidate data get transferred?
In simple-peer the signal from peer.on('signal', data => {}) contains all the WebRTC signaling data. If you print out the value of the signal you'll see it contains the SDP offers/answers as well as the ICE candidates, each labeled so you can identify which is which.
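Roughly, the wiring looks like this; the socket event name 'signal' and the server URL are made up for illustration. Note that the same handler carries offers, answers, and candidates:

    const Peer = require('simple-peer')
    const socket = io('https://your-signaling-server.example') // socket.io client

    const peer = new Peer({ initiator: true, trickle: true })

    // Fires for the offer/answer AND for each ICE candidate; offers and
    // answers carry data.type ('offer'/'answer'), candidates carry data.candidate.
    peer.on('signal', data => socket.emit('signal', data))

    // Whatever arrives from the remote side is fed straight back in;
    // simple-peer dispatches SDP vs. candidate data internally.
    socket.on('signal', data => peer.signal(data))

So the candidate data travels over the exact same server channel as the offer/answer; simple-peer just hands you every piece through the one 'signal' event.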
Does anyone know a workaround, idea, or solution for the following WebRTC SDP exchange issue? (I'm using an async messaging service to send my SDP candidates from A <-> B.)
At the same time:
A <- OFFER -> B (both peers create and send their own SDP offer to the other peer).
The problem is that after the SDP arrives at the destination it can't be set (in order to create an answer), because the destination is in the "have-local-offer" signaling state (which waits for an answer).
You cannot have both sides create an offer and expect things to work. One side needs to offer, the other side needs to answer.
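A common way to enforce that is a deterministic tie-break, so both sides run the same rule and exactly one of them ever calls createOffer(). A sketch, assuming each peer knows its own and the remote peer's ID; myPeerId, remotePeerId, and sendToRemote() are hypothetical names standing in for your messaging service:

    const amOfferer = myPeerId < remotePeerId;  // any agreed-upon total order works
    const pc = new RTCPeerConnection();
    pc.createDataChannel('chat');

    async function start() {
      if (amOfferer) {
        await pc.setLocalDescription(await pc.createOffer());
        sendToRemote(pc.localDescription);      // via your async messaging service
      }                                         // the other side just waits
    }

    async function onRemoteDescription(desc) {
      await pc.setRemoteDescription(desc);
      if (desc.type === 'offer') {              // we lost the tie-break: answer
        await pc.setLocalDescription(await pc.createAnswer());
        sendToRemote(pc.localDescription);
      }
    }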
In nearly all tutorials on WebRTC, the candidates from the onicecandidate callback are sent to the peer via the signalling server as soon as they arrive, separately from the offer/answer created by createOffer(). The peer then adds each candidate via addIceCandidate().
However, it is also possible to signal the offer/answer with the ICE candidates already built in. This can be accomplished by simply waiting for the null candidate in the onicecandidate callback before sending the offer/answer.
Are there any disadvantages to always sending the candidates via the offer/answer?
Gathering all candidates before signalling, instead of using trickle ICE, has severe latency implications (several seconds). This webrtchacks post is still a good description of the topic.
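For comparison, the non-trickle variant the question describes looks roughly like this; the several seconds of latency hide inside waitForGathering(), a helper name made up here:

    const pc = new RTCPeerConnection();
    pc.createDataChannel('chat');

    function waitForGathering(pc) {
      return new Promise(resolve => {
        if (pc.iceGatheringState === 'complete') return resolve();
        pc.addEventListener('icegatheringstatechange', () => {
          if (pc.iceGatheringState === 'complete') resolve();
        });
      });
    }

    async function makeNonTrickleOffer() {
      await pc.setLocalDescription(await pc.createOffer());
      await waitForGathering(pc);    // waits until the null candidate would fire
      return pc.localDescription;    // the SDP now has all candidates built in
    }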
I recently wrote a simple chat application, but I didn't really understand the background of ICE candidates.
When peers create a connection, they get ICE candidates, exchange them, and finally add them to the peer connection.
So my question is: where do the ICE candidates come from, how are they used, and are they all really used?
I have noticed that my colleague got fewer candidates when he executed the application on his machine. What could be the reason for a different number of candidates?
The answer from #Ichigo is correct, but there is a little more to it. Every ICE contains 'a node' of your network, until it has reached the outside. You send these ICEs to the other peer, so they know through what connection points they can reach you.
See it as a large building: one person is inside the building and needs to tell another (who is not familiar with it) how to walk through it. Same here: if I have a lot of network devices, the incoming connection somehow needs to find the right way to my computer.
By providing all nodes, the RTC connection finds the shortest route itself. So when you connect to the computer next to you, which is connected to the same router/switch/whatever, it uses all the ICEs and determines the shortest route, and that is directly through that point. That your colleague got fewer ICE candidates has to do with the amount of devices the connection has to go through.
Please note that every network adapter inside your computer that has an IP address (I have a vEthernet switch from Hyper-V) also gets an ICE candidate created for it.
ICE stands for Interactive Connectivity Establishment. It's a technique used for NAT (network address translator) traversal, establishing communication for VoIP, peer-to-peer, instant messaging, and other kinds of interactive media.
Typically an ICE candidate provides the information about the IP address and port from which the data is going to be exchanged.
Its format is something like the following:
a=candidate:1 1 UDP 2130706431 192.168.1.102 1816 typ host
Here UDP specifies the protocol to be used; typ host specifies which type of ICE candidate it is, and host means the candidate was generated within the firewall.
If you use Wireshark to monitor the traffic, you can see that the ports used for data transfer are the same as the ones present in the ICE candidates.
Another type is relay, which denotes that the candidate can be used when communication has to be done outside the firewall.
A candidate may contain more information depending on the browser you are using.
Many times I have seen 8-12 ICE candidates generated by the browser.
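If you want to see the numbers for your own machine, here is a quick diagnostic sketch (the STUN server URL is just an example) that tallies the gathered candidates by type:

    const pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });
    pc.createDataChannel('probe');
    const counts = {};                           // e.g. { host: 6, srflx: 2 }

    pc.onicecandidate = ({ candidate }) => {
      if (!candidate) {                          // null candidate: gathering done
        console.log('candidates by type:', counts);
        return;
      }
      const type = candidate.type || 'unknown';  // 'host', 'srflx', 'prflx', 'relay'
      counts[type] = (counts[type] || 0) + 1;
    };
    pc.createOffer().then(offer => pc.setLocalDescription(offer));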
Ichigo has a good answer, but doesn't emphasise how each candidate is used. I think MarijnS95's answer is plain wrong:
Every ICE contains 'a node' of your network, until it has reached the outside
By providing all nodes, the RTC connection finds the shortest route itself.
First, he means ICE candidate, but that part is fine. Maybe I'm misinterpreting him, but by saying 'until it has reached the outside', he makes it seem like a client (the initiating peer) is the innermost layer of an onion, and suggests the ICE candidate helps you peel the layers until you get to the 'internet', where you can reach the responding peer, perhaps peeling another onion to get to it. This is just not true. If an initiating peer fails to reach a responding peer through a transport address, it discards that candidate and tries a different one. It does not store any nodes anywhere in the candidate; the ICE candidates are generated before any communication with the responding peer. An ICE candidate does not help you peel the proverbial NAT onion. As for the second sentence I quoted from his answer, he makes it seem like ICE runs a shortest-path algorithm, yet 'shortest' does not appear in the ICE RFC at all.
From the RFC 8445 terminology list:
ICE allows the agents to discover enough information about their topologies to potentially find one or more paths by which they can establish a data session.
The purpose of ICE is to discover which pairs of addresses will work. The way that ICE does this is to systematically try all possible pairs (in a carefully sorted order) until it finds one or more that work.
Candidate, Candidate Information: A transport address that is a potential point of contact for receipt of data. Candidates also have properties -- their type (server reflexive, relayed, or host), priority, foundation, and base.
Transport Address: The combination of an IP address and the transport protocol (such as UDP or TCP) port.
So there you have it: (ICE) candidate was defined (an IP address and port that could potentially be an address that receives data, which might not work), and the selection process was explained (the first transport address pair that works). Note that it is not a list of nodes or onion peels.
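If you want to see which pair actually won on a live connection, you can ask getStats(); a sketch, with the caveat that the stats fields still vary a little between browsers:

    // Given a connected RTCPeerConnection `pc`, look up the selected pair.
    async function selectedPair(pc) {
      const stats = await pc.getStats();
      let transport;
      stats.forEach(s => { if (s.type === 'transport') transport = s; });
      if (!transport) return null;                // e.g. not connected yet
      const pair = stats.get(transport.selectedCandidatePairId);
      return {
        local: stats.get(pair.localCandidateId),  // your transport address
        remote: stats.get(pair.remoteCandidateId) // the peer's transport address
      };
    }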
Different users may have different ICE candidates because of the process of "gathering candidates". There are different types of candidates, and some are obtained from the local interfaces. If you have an extra virtual interface on your device, then an extra ICE candidate will be generated (I did not test this!). If you want to know how ICE candidates are 'gathered', read Section 2.1, Gathering Candidates, of the RFC.