WebRTC: Does it matter where my TURN server is?

Context: I am going to be setting up a support app for global customers. All my customers are going to be connecting to our support agents who are in one location.
Question: Is there any advantage to also having TURN servers closer to the customers, or is it just as well to have all my TURN servers near my support agents?
Current Understanding: If I understand it right, the connection between Peer 1 and Peer 2 is linear through the TURN servers, i.e.
Peer 1 -> TURN 1 -> TURN 2 -> Peer 2
It seems that it wouldn't matter which leg of that journey is longer unless there's some non-linear behavior leading to a lengthier round-trip leg.

It depends on whether the customer is using TURN/TCP on the Peer 1 -> TURN 1 leg. Doing that on high-latency global links is probably going to result in slightly worse quality than having a UDP TURN 1 -> TURN 2 link.
See this blog post for TURN deployment considerations and metrics to track.
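For what it's worth, the transport used on the Peer 1 -> TURN 1 leg is something the client configuration can influence. A minimal sketch, assuming a hypothetical turn.example.com relay with placeholder credentials, that offers both transports and lets ICE prefer UDP where the network allows it:

```typescript
// Sketch: offer TURN over both UDP and TCP in the RTCPeerConnection config.
// The server hostname and credentials are placeholders, not real values.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      // ICE candidate priorities favor UDP; TCP stays available as a
      // fallback for networks that block UDP, at the quality cost noted above.
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:3478?transport=tcp",
      ],
      username: "user",     // placeholder
      credential: "secret", // placeholder
    },
  ],
});
```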

Related

TURN servers: always or never needed for a given network, or needed unpredictably?

I am currently just using a STUN server and am wondering whether TURN is necessary for an MVP. The number of users accessing the website from a workplace with super secure firewalls should be near-zero.
Let's say 4 people are testing WebRTC connection reliability. Sometimes they all successfully connect and can see/hear one another, but other times they cannot see/hear someone and refresh the page to try again.
Does the fact that they can sometimes all see/hear each other rule out the possibility that a TURN server would make a difference?
In other words, is it possible for a STUN server to identify my IP so I can connect one second, but fail if I try again a few seconds later? Or is it purely network-based, so that if STUN doesn't work for me on my network now, it 100% will not work in 5 minutes either?
If the latter (and a TURN is either always or never needed for a given network), I guess that tells me the problem in my code is elsewhere...
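One way to gather data rather than guess: log the type of each ICE candidate as it is gathered ("host", "srflx" via STUN, "relay" via TURN). If srflx candidates show up but connections still intermittently fail, that points toward needing TURN rather than a bug in your signaling code. A minimal sketch using only standard APIs (the STUN URL is just a commonly used public server):

```typescript
// Sketch: log gathered ICE candidate types to see whether STUN is
// producing server-reflexive (srflx) candidates on this network.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log("gathered candidate type:", event.candidate.type);
  }
};

// Gathering only starts once a local description is set.
pc.createDataChannel("probe");
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```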

ICE connection state: completed vs connected

Can someone please clarify the difference between iceConnectionState: completed vs iceConnectionState: connected?
When I connect two browsers with WebRTC, I am able to exchange data using a data channel, but for some reason the iceConnectionState on the browser that made the offer remains at completed, whereas on the browser that accepted the offer it changes to connected.
Any idea if this is normal?
In short:
connected: Found a working candidate pair, but still performing connectivity checks to find a better one.
completed: Found a working candidate pair and done performing connectivity checks.
For most purposes, you can probably treat the connected/completed states as the same thing.
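If all you need is a "connection is up" signal, a sketch of a handler that collapses the two states might look like this (assuming pc is your RTCPeerConnection):

```typescript
// Sketch: treat "connected" and "completed" as one "media path is up" state.
pc.oniceconnectionstatechange = () => {
  const state = pc.iceConnectionState;
  if (state === "connected" || state === "completed") {
    console.log("ICE is up; media and data channel traffic should flow");
  } else if (state === "disconnected" || state === "failed") {
    console.log("ICE is down; data sent now will not reach the other end");
  }
};
```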
Note that, as mentioned by Ajay, there are some notable differences between how the standard defines the states and how they're implemented in Chrome. The main ones that come to mind:
There's no "end-of-candidates" signaling, so none of those parts of the candidate state definitions are implemented. This means if a remote candidate arrives late, it's possible to go from "completed" back to "connected" without an ICE restart. Though I assume this is rare in practice.
The ICE state is actually a combined ICE+DTLS state (see: https://bugs.chromium.org/p/webrtc/issues/detail?id=6145). This is because it was implemented before there was such a thing as "RTCPeerConnectionState". This can lead to confusion if there's actually a DTLS-level issue, since the only way to really notice is to look in a native Chrome log.
We definitely plan on fixing all the discrepancies. But for a while we held off on it because the standard was still in flux. And right now our priority is more on implementing unified plan SDP and the RtpSender/RtpReceiver APIs.
ICE connection state transitions are a bit tricky; the state transition diagram in the spec gives a clear picture of the possible transitions.
In simple words:
new/checking: Not yet connected
connected/completed: Media path is available
disconnected/failed: Media path is not available (whatever data you send on the data channel won't reach the other end)
Read the full summary here.
The WebRTC team is still working hard to make it stable and spec compliant.
Current Chrome behavior is confusing, so I filed a bug; you can star it to get notified.

Peer to peer cluster for (ideally anonymous) threshold voting

As an introduction to peer to peer networking and/or blockchains, I want to make a little project, but I need to know the limitations of cryptography and what combinations of features are possible. Here's what the ideal (if it were backed by a traditional server) application would feature:
A peer must be initially invited by another peer, with the exception of a seed peer.
Peers are allowed to cast a vote for another peer at a specific rate, and a vote expires either when the peer casts another vote or after a configured (as in unchangeable) TTL.
Votes are anonymous.
If a peer reaches a threshold of "alive" votes, that threshold being a portion of peers that have connected in the past 30 days, it is granted a "point," which can be cryptographically proven to be valid (as in, proven to contain a certain number of valid votes and proven to meet the threshold). At the very least peers must come to a consensus as to the validity of the votes before the point is granted.
Peers that have joined in the past are able to join the cluster without invitation, but points cannot be issued without the threshold.
Is this possible? If so, what technologies should I pursue? I took an initial look at Raft for a consensus protocol but the TTL and time based nature of the votes makes me doubtful a consensus algorithm will be more useful than a blockchain.
For time-locked crypto you can read more here:
https://crypto.stackexchange.com/questions/606/time-capsule-cryptography
Once you understand it, you will find that you can only limit things by the number of calculations required, which is a rough proxy for the time required and thus not guaranteed.
You will have to build a peer-to-peer system in which peers compete to "solve" one another's votes and thereby invalidate them. Although this won't ensure a fixed 30 days, it can ensure a roughly equal time-to-live for all votes in the network within a given election round.
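To make "limit by number of calculations" concrete, here is a toy sketch of the simplest variant, a hash chain: producing the answer requires n strictly sequential hash steps, so n is a rough, hardware-dependent proxy for elapsed time, not a guarantee. (Rivest-style time capsules use repeated squaring with a trapdoor instead; this is purely illustrative.)

```typescript
// Toy time-lock via a hash chain (Node.js crypto). Each step depends on the
// previous output, so the work cannot be parallelized; solving costs roughly
// n * (time per hash) on whatever hardware the solver has.
import { createHash } from "crypto";

function solveTimeLock(seed: string, n: number): string {
  let value = seed;
  for (let i = 0; i < n; i++) {
    value = createHash("sha256").update(value).digest("hex");
  }
  return value;
}

// A vote could publish (seed, n); a peer that later presents
// solveTimeLock(seed, n) proves the "expiry" work was actually done.
console.log(solveTimeLock("vote-nonce", 1_000_000));
```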

Choosing port number for UDP hole-punching

I have a weird problem. I have a successfully working C++ (Boost.Asio) P2P application that works behind most NATs. The problem is that when I give the initial start port number as 1000, it checks whether 1000 is free, otherwise increments by one, chooses a port, and starts handshaking. But when I use 10000, 20000, or any other large port number, the hole punching doesn't work on port-restricted cone NATs.
How is that possible? I am pretty sure it has nothing to do with the code. And recently it stopped working on one of my friends' full cone NATs as well, though it has worked on many other full cone NATs. What could be the reason? Is there something I am missing about how a NAT behaves?
In many NAT implementations, there are protection rules in place that prevent one host from tying up a large percentage of ports on the WAN interface, e.g. as described here.
Depending on the router, the NAT table entries have different lifetimes, and there are always limits on how many ports can be allocated to a single client (I've seen numbers from 128 to 4096).
So I think when you get to the point where you need to use high ports, the NAT table for your source IP address is already full (or almost full) with entries from old connections, or connections from other apps, so the router either decides to decline or can't fit the new NAT entry for your port.
However, to be sure, I would try to repeat that on a controlled environment collecting Wireshark dumps on both sides of the NAT and analyze the packets. If possible, it would also be helpful to enable router logs and peek into them.
I understand this is not a "magic bullet", but hope it somehow helps you.
Don't try to choose the port number yourself. The operating system can do this faster and better than your code can.
Bind your socket to port 0 and let the OS choose an available port number for you. You didn't specify what programming language, but it usually involves a call to getsockname() after the bind() call is made to discover what local port is going to be used. Java and .NET have equivalent APIs for doing the same thing.
Then follow all the other steps here:
https://stackoverflow.com/a/8524609/104458
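As a concrete illustration of the bind-to-port-0 approach (Node.js here, since the question's language wasn't specified; Boost.Asio has the same pattern via local_endpoint() after binding to port 0):

```typescript
// Sketch: bind to port 0 so the OS picks a free ephemeral port, then read
// back the assignment -- the dgram equivalent of bind() + getsockname().
import dgram from "dgram";

const socket = dgram.createSocket("udp4");
socket.bind(0, () => {
  const { port } = socket.address(); // the OS-assigned local port
  console.log(`hole punching from local port ${port}`);
});
```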
Not sure if this'll help, but have you tried having one instance of the client application start at 1001 and the other start at 1000, with both then incrementing by 1?
While 1000 will fail on client B, client A has already tried 1001 and so punched that hole, so hopefully it'll work, right? In theory, it sounds OK in my head.

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
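For reference, the multicast option is only a few lines in practice. A minimal sketch (Node.js dgram), where the group address from 239.255/16 and the port are placeholders every client would have to agree on:

```typescript
// Sketch of the multicast option: join a well-known group at startup,
// print every status update heard, and announce our own status.
// GROUP and PORT are arbitrary placeholder values.
import dgram from "dgram";

const GROUP = "239.255.77.77"; // hypothetical well-known group address
const PORT = 41234;            // hypothetical well-known port

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (msg, rinfo) => {
  console.log(`${rinfo.address} says: ${msg.toString()}`);
});

socket.bind(PORT, () => {
  socket.addMembership(GROUP);      // start receiving group traffic
  const status = Buffer.from("building=North Annex");
  socket.send(status, PORT, GROUP); // announce our status to the group
});
```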