How many allocations can a TURN server create? - webrtc

I am talking about NAT traversal. There is a limit to the number of UDP ports on a server, so I think a TURN server can create at most about 64k allocations in total, which means only about 64k clients can communicate with other peers through a TURN server at a time.
Is there a way to break this limit and let more clients communicate without deploying more TURN servers?
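For a rough sense of the numbers: the port ceiling applies per relay IP address, so one common way to raise capacity short of adding whole servers is to give the TURN server additional relay addresses. A back-of-the-envelope sketch, assuming each allocation pins one UDP port on one relay IP:

```python
# Back-of-the-envelope TURN allocation capacity.
# Assumption: each allocation consumes one UDP port on one relay IP,
# so the ceiling scales with the number of relay addresses configured.

def max_allocations(min_port: int, max_port: int, relay_ips: int) -> int:
    """Upper bound on concurrent allocations a TURN server can hand out."""
    ports_per_ip = max_port - min_port + 1
    return ports_per_ip * relay_ips

# One relay IP with the RFC 5766 default dynamic range:
print(max_allocations(49152, 65535, relay_ips=1))   # 16384
# Four relay IPs quadruple the ceiling:
print(max_allocations(49152, 65535, relay_ips=4))   # 65536
```

In practice other resources (memory, bandwidth, CPU) usually bite before the port count does.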

Related

Winsock2 receiving data from multiple clients without iterating over each client

I use winsock2 with C++ to connect thousands of clients to my server using the TCP protocol.
The clients will rarely send packets to the server.... there is about 1 minute between 2 packets from a client.
Right now I iterate over each client with non-blocking sockets to check if the client has sent anything to the server.
But I think a much better design of the server would be to wait until it receives a packet from the thousands of clients and then ask for the client's socket. That would be better because the server wouldn't have to iterate over each client, where 99.99% of the time nothing happens. And with a million clients, a complete iteration might take seconds....
Is there a way to receive data from each client without iterating over each client's socket?
What you want is to use I/O completion ports, I think. See https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports. Microsoft even has an example on GitHub for this: https://github.com/microsoft/Windows-classic-samples/blob/main/Samples/Win7Samples/netds/winsock/iocp/server/IocpServer.Cpp
BTW, do not expect to serve a million connections on a single machine. There are always limits: available memory (both user space and kernel space), handle limits, etc. If you are careful, you can probably serve tens of thousands of connections per process.
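IOCP itself is Windows-only, but the underlying idea (block once, get back only the sockets that actually have pending data) is portable. Here is a minimal cross-platform sketch using Python's `selectors` module, which wraps epoll/kqueue/select; it is a model of the design, not Winsock code:

```python
import selectors
import socket

# Readiness-based I/O, standing in for IOCP: register every client
# socket once, then make a single blocking call that returns only the
# sockets with data waiting -- no iteration over the idle 99.99%.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]  # 3 fake "clients"
for server_end, _client_end in pairs:
    server_end.setblocking(False)
    sel.register(server_end, selectors.EVENT_READ)

pairs[1][1].sendall(b"hello")  # only client #1 sends anything

events = sel.select(timeout=1)                 # one wait for all clients
ready = [key.fileobj for key, _ in events]     # exactly the ready sockets
data = ready[0].recv(1024)
print(len(ready), data)  # 1 b'hello'

for server_end, client_end in pairs:
    sel.unregister(server_end)
    server_end.close()
    client_end.close()
sel.close()
```

With a million registered sockets, the cost of one `select()` call is proportional to the number of ready sockets (on epoll/kqueue backends), not to the total client count.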

What are the trade-offs of limiting relay ports for a TURN server?

By default coturn uses the UDP range 49152-65535 as relay ports. Is there any reason to use the full range? Can't one UDP port handle infinite connections? What's the point of having all of these open? Are there any security risks? Are there any trade-offs to using fewer UDP ports?
Coturn uses the 49152-65535 range by default because this is what is specified in RFC 5766, Section 6.2, which describes how the TURN server should react when it receives an allocation request. This paragraph is of particular interest for your question:
In all cases, the server SHOULD only allocate ports from the range 49152 - 65535 (the Dynamic and/or Private Port range [Port-Numbers]), unless the TURN server application knows, through some means not specified here, that other applications running on the same host as the TURN server application will not be impacted by allocating ports outside this range. This condition can often be satisfied by running the TURN server application on a dedicated machine and/or by arranging that any other applications on the machine allocate ports before the TURN server application starts. In any case, the TURN server SHOULD NOT allocate ports in the range 0 - 1023 (the Well-Known Port range) to discourage clients from using TURN to run standard services.
The Dynamic and/or Private Port range is described in RFC 6335, Section 6:
the Dynamic Ports, also known as the Private or Ephemeral Ports, from 49152-65535 (never assigned)
So, to try and answer your questions:
Is there any reason to use the full range?
Yes, if the default range works for you and your application.
Can't one UDP port handle infinite connections?
Yes, you could configure coturn to use only one port, as system resources allow, of course.
What's the point of having all of these open?
It is the default range for dynamically assigned port numbers as defined by the IANA.
Are there any security risks?
Not beyond any other normal security risk involved with running a service like coturn.
Are there any trade-offs to using fewer UDP ports?
As far as I know, there are no technical trade-offs. I have run coturn with a much smaller range outside of the dynamic range, and it works just fine.
When faced with firewalls or port-number restrictions on networks trying to reach a TURN server, a smaller range may be seen as a benefit by some network administrators, but other administrators may question the use of a port range outside of the IANA-assigned dynamic range. I have encountered both mindsets, and it is not possible to declare one approach clearly better than the other (when choosing between the default port range and a smaller one). You just have to find what works for your application and usage.
@bradley-t-hughes provides a good answer; to add a point of view on that:
Defining a default range is a strategy to ensure that applications that are run without customised configuration (hence using default settings) don't clash with each other.
As for other applications that dynamically allocate UDP ports for real time communications, the configured port range represents an upper limit for concurrent sessions. The smaller the range, the fewer concurrent sessions can be established.
There are cases where hosts are dedicated to applications like TURN servers, and using wide ranges ensures that the maximum capacity is limited by other factors, like bandwidth or CPU usage.
If you know in advance that the number of concurrent sessions will be small, e.g. because the server is just being used for testing functionality and not to provide a live service, then you can restrict that range and leave the other ports available for other applications.
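As a concrete illustration (option names from coturn's `turnserver.conf`; verify against your installed version), restricting the relay range looks like this, and directly caps concurrent allocations per relay IP:

```
# turnserver.conf (coturn)
# Relay ports restricted to a narrow band: at most
# 51000 - 50000 + 1 = 1001 concurrent allocations per relay IP.
min-port=50000
max-port=51000
```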

Slower data transfer with parallel TCP connections

I used TCP AsyncSocket to transfer a large file from one machine to another over a local connection (using the local IP address as the host).
First, I set up a single TCP socket connection and felt the data transfer rate was slow, around 1 MB/sec.
To make it faster I created 10 TCP sockets (connecting on separate ports on separate threads) and started reading partitions of the file simultaneously. But it didn't make any difference; the transfer rate was almost the same as with a single TCP socket connection (or even slower).
Any idea why multiple TCP sockets are not transferring data in parallel? Any suggestions for transferring a file faster over TCP?
Parallelizing an I/O operation only helps if the I/O channel isn't saturated and the task is single-core bound.
Quite likely, adding additional I/O channels will actually slow things down as you now have multiple clients competing for a scarce resource.
What you need to figure out is where your bottleneck is. Only once you've quantified the actual cause of your performance issue will you be able to fix it.
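One way to start quantifying it is to measure a baseline transfer rate in isolation. A rough sketch (Python rather than the AsyncSocket API mentioned in the question, and loopback only, so it measures protocol-stack and copy overhead rather than the NIC or disk):

```python
import socket
import threading
import time

PAYLOAD = b"x" * (8 * 1024 * 1024)  # 8 MiB test blob

def sender(port):
    # Connects back to the listener and streams the whole payload.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(PAYLOAD)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=sender, args=(port,), daemon=True).start()

conn, _ = srv.accept()
received = 0
start = time.perf_counter()
while received < len(PAYLOAD):
    chunk = conn.recv(64 * 1024)
    if not chunk:
        break
    received += len(chunk)
elapsed = time.perf_counter() - start
conn.close()
srv.close()

print(f"{received} bytes in {elapsed:.3f}s "
      f"({received / elapsed / 1e6:.1f} MB/s)")
```

Comparing this loopback number with the observed 1 MB/s over the real link tells you whether the bottleneck is the network path or the application's read/write pattern.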

Do ping requests put a load on a server?

I have a lot of clients (around 4000).
Each client pings my server every 2 seconds.
Can these ping requests put a load on the server and slow it down?
How can I monitor this load?
Right now the server responds slowly, but the processor is almost idle and free memory is OK.
I'm running Apache on Ubuntu.
Assuming you mean a UDP/ICMP ping just to see if the host is alive, 4000 hosts probably isn't much load, and it is fairly easy to calculate. CPU- and memory-wise, ping is handled by your kernel and should be optimized not to take many resources. So you need to look at network resources. The most critical point is if you have a half-duplex link: because all of your hosts are chatty, you'll cause a lot of collisions and retransmissions (and dropped pings). If the links are all full duplex, let's calculate the actual amount of bandwidth required at the server.
4000 clients / 2 seconds = 2000 pings per second.
Each ping is 74 bytes on the wire (32 bytes data + 8 bytes ICMP header + 20 bytes IP header + 14 bytes Ethernet header). You might have some additional overhead if you use VLAN tagging or UDP-based pings.
If we can assume the pings are randomly distributed, we would have 2000 pings per second × 74 bytes = 148,000 bytes per second.
Multiply by 8 to get bits per second: 1,184,000 bps, or about 1.2 Mbps.
On a 100 Mbps LAN, this would be about 1.2% utilization just for the pings.
If this is a LAN environment, I'd say this is basically no load at all; if it's going across a T1, then it's an immense amount of load. So you should run the same calculation on any other network links that may be a bottleneck.
Lastly, if you're not using ICMP pings to check the host but have an application-level ping, you will have all the overhead of whatever protocol you are using, the ping will need to go all the way up the protocol stack, and your application needs to respond. Again, this could be a very minimal load or it could be immense, depending on the implementation details and the network speed. If the host is idle, I doubt this is a problem for you.
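The arithmetic above, written out (assuming the classic 32-byte ICMP echo payload and plain Ethernet framing):

```python
# Ping bandwidth estimate: 4000 clients, one ping every 2 seconds.
clients = 4000
interval_s = 2
frame_bytes = 32 + 8 + 20 + 14      # payload + ICMP + IP + Ethernet = 74

pings_per_sec = clients / interval_s            # 2000
bytes_per_sec = pings_per_sec * frame_bytes     # 148,000
bits_per_sec = bytes_per_sec * 8                # 1,184,000
utilization = bits_per_sec / 100e6 * 100        # % of a 100 Mbps link

print(f"{bits_per_sec / 1e6:.2f} Mbps, {utilization:.2f}% of 100 Mbps")
```

Swap in your own payload size (or an application-level request size) to redo the estimate for your setup.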
Yes, they can. A ping request does not put much load on the server, but it certainly takes up bandwidth and a nominal amount of CPU.
If you want to monitor this, you might use either tcpdump or wireshark, or perhaps set up a firewall rule and monitor the number of packets it matches.
The other problem apart from bandwidth is the CPU. If a ping is directed up to the CPU for processing, thousands of these can cause a load on any CPU. It's worth monitoring - but as you said yours is almost idle so it's probably going to be able to cope. Worth keeping in mind though.
Depending on the clients, ping packets can be different sizes - their payload could be just "aaaaaaaaa" but some may be "thequickbrownfoxjumpedoverthelazydog" - which is obviously further bandwidth requirements again.

Is there a limit to the number of SSL connections?

We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd. Is there a limit?
If you are on Linux, your operating system will have a limit on the number of open files.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed, allowing the figure of 1062 (this last bit is a guess).
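On Linux you can inspect (and, up to the hard limit, raise) this limit from inside the process; a sketch using Python's `resource` module:

```python
# Each TCP/SSL session consumes at least one file descriptor, so the
# per-process soft limit on open files caps concurrent connections.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# The soft limit can be raised up to the hard limit without root:
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
soft2, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"raised soft limit to: {soft2}")
```

If the soft limit reported here is 1024, that matches the failure around the 1062nd session almost exactly.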
Yes, everything has a limit. As far as I'm aware, there is no inherent limit in "SSL"; it is, after all, just a protocol.
But, there is a limited amount of memory, ports, CPU on the machine you are connected to, from and every single one in between.
The actually server you are connected to may have an arbitrary limit set too.
This question doesn't have enough information to answer beyond "YES".
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, only some of which (16,384 in the IANA dynamic range, 49152-65535) are available for dynamic client connections. This limits the number of active connections a single client can make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.