If a client is behind NAT, when does STUN/TURN come into play?
1. After Peer connection object is created?
2. After setting the local SDP and sending it to the other client?
3. Before sending ice candidates?
(2) -- setting the local description kicks off the ICE process, which includes gathering server-reflexive (srflx) candidates from the STUN server and relay candidates from the TURN server.
It is possible to kick this off at RTCPeerConnection creation -- see iceCandidatePoolSize in the specification.
This sample page illustrates the process.
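Roughly, with the browser API (a minimal TypeScript sketch; the server URLs and the sendToSignalingServer helper are placeholders, not part of any real API):

    // Candidate gathering normally starts at setLocalDescription(); a non-zero
    // iceCandidatePoolSize lets the browser pre-gather as soon as the
    // RTCPeerConnection is constructed.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: "stun:stun.example.org:3478" },
        { urls: "turn:turn.example.org:3478", username: "user", credential: "pass" },
      ],
      iceCandidatePoolSize: 1,
    });

    const sendToSignalingServer = (c: RTCIceCandidate) => { /* app-specific, e.g. a WebSocket */ };

    pc.onicecandidate = (ev) => {
      // ev.candidate is a host, srflx (STUN) or relay (TURN) candidate; null means gathering is done.
      if (ev.candidate) sendToSignalingServer(ev.candidate);
    };

    async function start() {
      pc.createDataChannel("probe");           // give the offer something to negotiate
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);     // without a pool, gathering kicks off here
    }
    start();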
Is anyone aware of a way I could force WebRTC to only try ICE connection establishment via the host candidate?
Right now, I'm looking at non-intrusive ways, such as filtering outgoing traffic to block the STUN/TURN servers during ICE candidate gathering. However, this is causing the gathering process to take quite a long time, as this particular stack does not support trickle ICE. If I can do this with only a network change, that would be ideal (whereby any device behind this network must use the host candidate).
Without trickle ICE, I'm wondering if there is a way to filter out the STUN/TURN addresses while also setting the ICE candidate gathering timeout to a low value. This would cause the STUN/TURN candidates to end up in a 'failed' state, and then only the host candidate would inevitably be sent over.
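If you do control the application code (not just the network), one way to approximate this is to configure no ICE servers at all and drop every non-host candidate before it is signaled. A rough TypeScript sketch of the browser side (sendToSignalingServer is a placeholder for your own signaling):

    // With no iceServers configured, only host (and possibly mDNS-obfuscated) candidates
    // are gathered; the type check below additionally filters anything that is not a host candidate.
    const pc = new RTCPeerConnection({ iceServers: [] });

    const sendToSignalingServer = (c: RTCIceCandidate) => { /* app-specific signaling */ };

    pc.onicecandidate = (ev) => {
      if (ev.candidate && ev.candidate.type === "host") {
        sendToSignalingServer(ev.candidate);   // never forward srflx/relay candidates
      }
    };

This does not help if the goal is strictly a network-only change, but it avoids the long gathering timeout entirely.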
I have started to look into WebRTC a bit and I am using it to build a simple peer to peer chat application using the data channel. I have the following questions:
Do I need to establish an RTCPeerConnection to each peer I want to talk to? So if there are three peers, each of them needs two RTCPeerConnections (unless I use one of the peers as a sort of ad-hoc server).
If peer A sends out a candidate and SDP when creating an offer to peer B, can peer B connect to peer A using that info and send its answer (with its candidate and SDP) over the RTCPeerConnection, i.e. using the RTCPeerConnection (before it's been completely established) as a signaling channel? I would assume that when the offer is created by peer A it starts to listen for connections on some port.
My understanding of WebRTC is a bit limited, so if I've misunderstood some concept of WebRTC in my questions above please point it out!
Yes, as a direct P2P protocol everybody must be directly connected to everybody else if they want to communicate; unless you create some kind of mesh network in which one peer forwards messages to other peers.
No, the SDP offer and answer and ICE candidates all need to be exchanged through a signalling server; the connection cannot be established until both peers have actually agreed on a specific session configuration and ICE route to use, so you cannot send the SDP answer over a connection which isn't complete yet.
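For the first point, a mesh in practice means one RTCPeerConnection (and one data channel) per remote peer, with all offers, answers and candidates exchanged over your signaling channel. A rough TypeScript sketch (signaling is a placeholder for e.g. a WebSocket wrapper to your server):

    declare const signaling: { send(msg: unknown): void };   // hypothetical signaling wrapper

    const peers = new Map<string, RTCPeerConnection>();

    async function connectTo(peerId: string) {
      const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });
      peers.set(peerId, pc);

      const channel = pc.createDataChannel("chat");
      channel.onmessage = (ev) => console.log(peerId, "says:", ev.data);

      pc.onicecandidate = (ev) => {
        if (ev.candidate) signaling.send({ to: peerId, candidate: ev.candidate });
      };

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signaling.send({ to: peerId, sdp: pc.localDescription });  // the answer comes back the same way
    }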
Especially for a simple text-only chat, going through a server is often easier than using P2P; the processing and bandwidth requirements are so minimal that the complications of P2P connections are probably not worth it. And you need a signalling server anyway. P2P only becomes really interesting once you start sending large files or audio/video streams.
In principle it is possible to establish a WebRTC connection without a signalling server, but that requires an out of band exchange of session tokens between the peers. I.e. the user would have to copy a token from the application, somehow send it to another user and the other user would have to paste it.
Additionally those tokens cannot be reused, so this procedure would have to be repeated every time peers want to establish a connection.
So while theoretically possible, WebRTC is not distributed in practical terms.
There is some noise about specifying support for incoming connections and reusable peer contacts, but the progress on that is unclear.
I'm going to implement a Java VoIP server to work with WebRTC. Implementing a browser-to-browser P2P connection is really straightforward. A server-to-client connection is slightly trickier.
After a quick look at the RFCs, I wrote down what should be done to make the Java server act as a browser. Kindly help me complete the list below.
Implement a STUN server. The server should be able to respond to binding requests and keep-alive pings.
Implement the DTLS protocol along with the DTLS handshake. After the DTLS handshake, the shared secret will be used as keying material for SRTP and SRTCP.
Support multiplexing of the SRTP and SRTCP streams. SRTP and SRTCP use the same port to address NAT issues.
Not sure whether I should implement SRTCP. I believe the connection will not be broken if the server does not send SRTCP reports to the client.
Decode the SRTP stream to RTP.
Questions:
Is there anything else that should be done on the server side?
How does WebRTC handle SRTCP reports? Does it adjust the sample rate/bitrate depending on the SRTCP reports?
WebRTC claims that the following issues will be addressed:
packet loss concealment
echo cancellation
bandwidth adaptivity
dynamic jitter buffering
automatic gain control
noise reduction and suppression
Is it WebRTC internals or codec (Opus) internals? Do I need to do anything on the server side to handle these issues, for example variable bitrate, etc.?
The first step would be to implement Interactive Connectivity Establishment (RFC 5245). Whether you make use of a STUN/TURN server or not is irrelevant; your code needs to issue connectivity checks (which use STUN messages) to the browser and respond to the browser's connectivity checks. ICE is a fairly complex state machine, but it's doable.
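To give an idea of scale, the connectivity checks are STUN Binding Requests; a bare request header (RFC 5389) looks roughly like the TypeScript/Node sketch below. A real ICE check additionally carries USERNAME, PRIORITY, ICE-CONTROLLING/ICE-CONTROLLED and MESSAGE-INTEGRITY attributes, which are omitted here:

    import { randomBytes } from "crypto";

    // 20-byte STUN header for a Binding Request with no attributes.
    function stunBindingRequest(): Buffer {
      const header = Buffer.alloc(20);
      header.writeUInt16BE(0x0001, 0);       // message type: Binding Request
      header.writeUInt16BE(0, 2);            // message length (attributes only; none in this sketch)
      header.writeUInt32BE(0x2112a442, 4);   // magic cookie, fixed by RFC 5389
      randomBytes(12).copy(header, 8);       // 96-bit transaction ID
      return header;
    }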
You don't have to reinvent the wheel. STUN/TURN servers are external components; use them as they are. The WebRTC source code is available, which you can use in your application code, calling the related methods.
Please refer to this similar post: Server as WebRTC data channel peer
I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535) and to the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds on doing a recv() on broadcasted data. But the client fails on doing a recv() of the server’s response which is addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived to the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
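For reference, the simplified flow looks roughly like this (sketched with Node.js dgram in TypeScript rather than Winsock; the ports 47000 and 52000 are arbitrary examples):

    import * as dgram from "dgram";

    // Client: send a datagram from the discovery socket to the broadcast address;
    // the payload is irrelevant, the server only needs the source address/port.
    const client = dgram.createSocket("udp4");
    client.bind(() => {
      client.setBroadcast(true);
      client.send(Buffer.from("discover"), 47000, "255.255.255.255");
    });
    client.on("message", (msg, rinfo) => {
      console.log("server", rinfo.address, "is listening on TCP port", msg.readUInt16BE(0));
    });

    // Server: recvfrom() (the 'message' event) says where the broadcast came from,
    // so just reply directly to that address and port with our TCP listening port.
    const server = dgram.createSocket("udp4");
    server.on("message", (_msg, rinfo) => {
      const reply = Buffer.alloc(2);
      reply.writeUInt16BE(52000, 0);
      server.send(reply, rinfo.port, rinfo.address);
    });
    server.bind(47000);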
Well, it was one of those stupid situations: Windows Firewall was active, besides the other firewall, and silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I got to my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.
This morning, there were big problems at work because an SNMP trap didn't "go through" because SNMP is run over UDP. I remember from the networking class in college that UDP isn't guaranteed delivery like TCP/IP. And Wikipedia says that SNMP can be run over TCP/IP, but UDP is more common.
I get that some of the advantages of UDP over TCP/IP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability. Particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason to prefer TCP/IP over UDP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy networks (or congested networks). TCP is far better at transferring large quantities of data, but when the network fails it's more likely that UDP will get through. (in fact, I recently did a study testing this and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly). Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at 33% (ish) and UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs: yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus if you want to know that the notification receiver got the message, please use INFORMs. Note that TCP does not solve this problem, as it only provides a transport-level acknowledgement that the message was received; there is no assurance that the notification receiver's application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK indicates the receiver got it.
If systems sent SNMP traps via TCP they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, it could use up the available sockets on the system and the system would lock up. With UDP that is not an issue because it is stateless. A similar problem took out BitBucket in January although it was syslog protocol rather than SNMP--basically, they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can direct UDP to a broadcast address, and then field them with multiple management stations on that subnet.
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is resent, up to a maximum number of tries.
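The retry itself is nothing more than resending the same datagram after a timeout. A generic sketch of that pattern in TypeScript/Node (raw dgram, not a real SNMP encoder; the timeout and retry counts are just examples):

    import * as dgram from "dgram";

    function requestWithRetry(payload: Buffer, host: string, port: number,
                              timeoutMs = 1000, maxTries = 3): Promise<Buffer> {
      return new Promise((resolve, reject) => {
        const sock = dgram.createSocket("udp4");
        let tries = 0;
        let timer: NodeJS.Timeout | undefined;

        const attempt = () => {
          if (++tries > maxTries) { sock.close(); return reject(new Error("timed out")); }
          sock.send(payload, port, host);             // resend the identical request
          timer = setTimeout(attempt, timeoutMs);     // schedule the next try
        };

        sock.on("message", (msg) => { clearTimeout(timer); sock.close(); resolve(msg); });
        attempt();
      });
    }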
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds since it's on the same subnet, or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide: (a) can we get by with a missed packet? If so, who cares, just move on; (b) do we need to make sure the message is sent? Simple: just redo the whole thing -- client sends request to server, server sends response to client. The application can include a number with the request so that, if the recipient receives both messages, it knows it's really the same message being sent again.
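A rough sketch of tagging requests with such a number, in TypeScript (computeReply stands in for whatever the application actually does with a request):

    declare function computeReply(body: Buffer): Buffer;   // hypothetical application handler

    const seen = new Map<number, Buffer>();                // sequence number -> cached reply

    function handleRequest(seq: number, body: Buffer): Buffer {
      const cached = seen.get(seq);
      if (cached !== undefined) return cached;             // a retransmission: reuse the earlier reply
      const reply = computeReply(body);
      seen.set(seq, reply);
      return reply;
    }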
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.