Is there a limit on the number of SSL connections?

We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd connection. Is there a limit?

Your operating system will have a limit on the number of open files if you are on Linux.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed already, allowing the figure of 1062 (this last bit is a guess).
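If it is easier to check from code than from a shell, here is a minimal Python sketch (POSIX-only; the 4096 target is just an example value, not a recommendation):

# Inspect, and optionally raise, the per-process open-file limit that caps
# how many sockets a single process can hold. Not available on Windows.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")  # soft is commonly 1024

target = 4096  # example value; pick whatever your workload actually needs
if hard == resource.RLIM_INFINITY or hard >= target:
    # An unprivileged process may raise its soft limit up to the hard limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))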

Yes, everything has a limit. As far as I'm aware, there is no inherent limit in "SSL" itself; it is, after all, just a protocol.
But there is a limited amount of memory, ports, and CPU on the machine you are connecting to, the machine you are connecting from, and every single one in between.
The actual server you are connected to may have an arbitrary limit set, too.
This question doesn't have enough information to answer beyond "YES".

SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, and only a portion of those ports (the dynamic range 49152-65535, roughly 16,000 ports) is typically used for outgoing client connections. This limits the number of active connections a single client can make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
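If you want to find out which of these limits you are actually hitting, a crude experiment is to open TLS connections in a loop until the OS refuses. Here is a rough Python sketch; HOST and PORT are placeholders, and you should only point this at a server you control, since it deliberately exhausts resources:

# Crude probe: open TLS connections until something pushes back, then report
# which error ended the run (EMFILE = file descriptor limit,
# EADDRNOTAVAIL = out of ephemeral ports, ECONNREFUSED = server gave up).
import socket
import ssl

HOST, PORT = "test.example.com", 443  # placeholders; use a server you control
ctx = ssl.create_default_context()
conns = []
try:
    while True:
        raw = socket.create_connection((HOST, PORT), timeout=5)
        conns.append(ctx.wrap_socket(raw, server_hostname=HOST))
except OSError as exc:
    print(f"stopped after {len(conns)} connections: {exc}")
finally:
    for c in conns:
        c.close()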

Related

What are the trade-offs of limiting relay ports for a TURN server?

By default coturn uses the UDP range 49152-65535 as relay ports. Is there any reason to use the full range? Can't one UDP port handle an unlimited number of connections? What's the point of having all of these open? Are there any security risks? Are there any trade-offs to using fewer UDP ports?
Coturn uses the 49152-65535 range by default because this is what is specified in RFC 5766, Section 6.2, which describes how the TURN server should react when it receives an allocation request. This paragraph is of particular interest for your question:
In all cases, the server SHOULD only allocate ports from the range 49152 - 65535 (the Dynamic and/or Private Port range [Port-Numbers]), unless the TURN server application knows, through some means not specified here, that other applications running on the same host as the TURN server application will not be impacted by allocating ports outside this range. This condition can often be satisfied by running the TURN server application on a dedicated machine and/or by arranging that any other applications on the machine allocate ports before the TURN server application starts. In any case, the TURN server SHOULD NOT allocate ports in the range 0 - 1023 (the Well-Known Port range) to discourage clients from using TURN to run standard services.
The Dynamic and/or Private Port range is described in RFC 6335, Section 6:
the Dynamic Ports, also known as the Private or Ephemeral Ports, from 49152-65535 (never assigned)
So, to try and answer your questions:
Is there any reason to use the full range?
Yes, if the default range works for you and your application.
Can't one UDP port handle an unlimited number of connections?
Yes, you could configure coturn to only use one port, as system resources allow, of course.
What's the point of having all of these open?
It is the default range for dynamically assigned port numbers as defined by the IANA.
Are there any security risks?
Not beyond any other normal security risk involved with running a service like coturn.
Are there any trade-offs to using fewer UDP ports?
As far as I know, there are no technical trade-offs. I have run coturn with a much smaller range outside of the dynamic range, and it works just fine.
When faced with firewalls or port-number restrictions on networks trying to reach a TURN server, a smaller range may be seen as a benefit by some network administrators, but at the same time other administrators may question the use of a port range outside of the IANA-assigned dynamic range. I have encountered both mindsets, and it is not possible to declare one approach clearly better than the other (when choosing between the default port range and a smaller one). You just have to find what works for your application and usage.
Bradley T. Hughes provides a good answer; to add a point of view to it:
Defining a default range is a strategy to ensure that applications that are run without customised configuration (hence using default settings) don't clash with each other.
As with other applications that dynamically allocate UDP ports for real-time communications, the configured port range represents an upper limit on the number of concurrent sessions. The smaller the range, the fewer concurrent sessions can be established.
There are cases where hosts are dedicated to applications like TURN servers, and using wide ranges ensures that the maximum capacity is limited by other factors, like bandwidth or CPU usage.
If you know in advance that the number of concurrent sessions will be small, e.g. because the server is just being used for testing functionality and not to provide a live service, then you can restrict that range and leave the other ports available for other applications.
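To make the capacity ceiling concrete, here is a trivial Python sketch; the smaller range shown is an arbitrary example, not a recommendation:

# Upper bound on concurrent TURN allocations imposed by the relay port range
# alone; real capacity is also limited by bandwidth, CPU and memory.
def max_relay_sessions(min_port: int, max_port: int) -> int:
    return max_port - min_port + 1

print(max_relay_sessions(49152, 65535))  # IANA dynamic range: 16384 ports
print(max_relay_sessions(50000, 50999))  # arbitrary smaller range: 1000 ports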

What may be the expected percentage of connections that will fall back to TURN?

Say I have built a WebRTC video chat website. After the handshake (ICE candidate exchange), some connections will go directly peer-to-peer, some will use the STUN server, and some will use the "last resort", the TURN server, to establish the connection. A TURN-based connection is very expensive compared to the direct and STUN-assisted connections (which are free), because all traffic must actually go through the TURN server.
How can we estimate the percentage of connections from random users that will need to go via TURN? Imagine we know very little about the expected audience, except that the majority is in the US. I believe it must be difficult to figure out, but my current estimate is somewhere between 1% and 99%, which is just too wide. Can this at least be narrowed down?
https://medium.com/the-making-of-appear-in/what-kind-of-turn-server-is-being-used-d67dbfc2ff5d has some numbers from appear.in which show around 20%. Those are global statistics; the stats for the US might be different.

Parse LiveQuery + Redis Scalability

I want to use Live Query on a separate server on Heroku. I am looking at the Redis add-on and its number of connections. Can someone explain how the number of connections relates to how many users can subscribe to the Live Query?
The actual use case would be announcing to users who is active online in the app. The add-on runs $200 per month to support 1024 connections. That sounds expensive. I don't understand: does that mean 1024 users subscribing to a class, or is there some kind of sharing going on between the 1024 connections and the number of users?
Lastly, what would happen if I exceed the connection limit? Would it just time out with a Parse timeout error?
Thanks
The Redis connections will only be used to connect your Parse Servers together with the LiveQuery servers. Usually you would have them on the same instance, listening on the same port. So let's say you have 10 dynos: you need 20 connections, i.e. 1 per publisher (parse-server) plus 1 per subscriber (LiveQuery server).
Calculating how many users can be connected to a single dyno is another story in itself, but you can probably look into the websocket + nodejs + heroku literature available on the internet. It's unlikely you'll need 1024 Redis connections, unless you plan on having as many dynos.
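As a sketch of that rule of thumb in Python (the one-publisher-plus-one-subscriber-per-dyno accounting comes from the answer above, not from any Parse API):

# Each dyno runs a parse-server publisher and a LiveQuery subscriber,
# and each of those holds one Redis connection (assumption from above).
def redis_connections_needed(dynos: int) -> int:
    publishers = dynos   # one parse-server publisher per dyno
    subscribers = dynos  # one LiveQuery subscriber per dyno
    return publishers + subscribers

print(redis_connections_needed(10))  # 20, nowhere near a 1024-connection plan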

Erlang: getting the exact size in memory of an SSL connection

Is there a way in Erlang to get exactly how much memory an SSL connection takes?
Right now I'm kind of guessing by dividing the whole beam.smp size in memory (minus the init size) by the number of connected clients...
I'm using R15B01.
The SSL connection is handled by a gen_server. Doing
process_info(spawn(Fun), memory).
gives me, after a garbage collection:
{memory,2108}
This clearly does not include the memory held by the SSL socket connection.
The thing is that even to handle a single SSL connection, Erlang starts several separate processes (certificate db, ssl manager, ssl session, etc.), and each of those processes might have separate storage for its data. Thus it is hard to give a definitive answer on how much memory each connection takes, as there are quite a few places that keep bookkeeping information about the connection.
If you need an estimate, I would do the following:
1. Start an SSL server and an SSL client as described at http://pdincau.wordpress.com/2011/06/22/a-brief-introduction-to-ssl-with-erlang/
2. Save TotalMemory1 = proplists:get_value(total, memory()). in the server session.
3. Open 99 more client connections from a separate client session.
4. Calculate TotalMemory2 = proplists:get_value(total, memory()). in the server session.
5. Find the amortized amount of memory a single connection takes by dividing: (TotalMemory2 - TotalMemory1) / 99.

Downside to using persistent connections?

I have heard in the past that persistent connections are not good to use on a high-traffic web server. Is this true, or does it only apply to Apache's prefork mode? Would CGI mode have this problem?
This involves PHP, Apache, and Postgresql.
Are PHP persistent connections evil? (In the context of PHP and MySQL.)
The reason for using persistent connections is, of course, to reduce the number of connects, which are rather expensive, even though they are much faster with MySQL than with most other databases.
The first problem with persistent connections...
If you're establishing thousands of connections per second, you normally do not keep them open for a long time, but the operating system does. According to the TCP/IP protocol, ports can't be recycled instantly: they have to spend some time in the TIME_WAIT state before they can be reused.
The second problem... using too many MySQL server connections.
Some people simply do not realize that you can increase the max_connections variable and get over 100 concurrent connections with MySQL; others were bitten by older Linux problems of not being able to have more than 1024 connections with MySQL.
Now let's talk about why persistent connections were disabled in the mysqli extension. Even though you could misuse persistent connections and get poor performance, that was not the reason. The real reason is that you could run into many more problems with them.
Persistent connections were added to PHP in the days of MySQL 3.22/3.23, when MySQL was simple enough that you could recycle connections easily without any problems. In later versions, however, a number of problems arose: if you recycle a connection that has uncommitted transactions, you run into trouble. If you happen to recycle connections with custom character-set settings, you're in trouble again, not to mention possibly changed per-session variables.
One problem with using persistent connections is that they don't really scale that well. If you have 5000 people connected, you need 5000 persistent connections. If you take away the need for persistence, you might be able to serve 10000 people with the same number of connections, because those people are able to share the connections while they're not using them.
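To illustrate the sharing argument, here is a minimal pooling sketch in Python (the language choice is just for illustration; make_conn is a stand-in for a real database connect call, and the pool size of 50 is arbitrary):

# Why pooling beats one persistent connection per user: a small shared pool
# is borrowed only for the duration of a query, so idle users cost nothing.
import queue

POOL_SIZE = 50  # arbitrary; can serve far more than 50 users if usage is bursty

def make_conn():
    return object()  # placeholder for a real database handle

pool = queue.Queue()
for _ in range(POOL_SIZE):
    pool.put(make_conn())

def run_query(sql):
    conn = pool.get()       # borrow a connection; blocks if all 50 are busy
    try:
        pass                # execute sql on conn here
    finally:
        pool.put(conn)      # hand it back for the next request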