Erlang getting the exact size in memory of an SSL connection - ssl

Is there a way in Erlang to get exactly how much memory an SSL connection takes?
Right now I'm kinda guessing by dividing the whole beam.smp size in memory (minus the initial size) by the number of connected clients...
I'm using R15B01.
The SSL connection is handled by a gen_server. Calling
process_info(spawn(Fun), memory).
after a garbage collection gives me:
{memory,2108}
This clearly does not include the size of the SSL socket connection.

The thing is that even to handle a single SSL connection, Erlang starts several separate processes (certificate db, ssl manager, ssl session, etc.), and each of those processes may have separate storage for its data. Thus it is hard to give a definitive answer to how much memory each connection takes, as there are quite a few places that keep bookkeeping information about the connection.
If you need an estimate, I would do the following:
1. Start an SSL server and an SSL client as described at http://pdincau.wordpress.com/2011/06/22/a-brief-introduction-to-ssl-with-erlang/
2. Save TotalMemory1 = proplists:get_value(total, memory()). in the server session.
3. Open 99 more client connections from a separate client session.
4. Calculate TotalMemory2 = proplists:get_value(total, memory()).
5. Find the amortized amount of memory a single connection takes by computing (TotalMemory2 - TotalMemory1)/99.

Related

How to apply TLS session tickets from a previous TLS session to a new connection in OpenSSL?

I've got a short-lived client process that talks to a server over SSL. The process only runs for a short time (typically less than 1 second) and is intended to be used as part of a shell script that performs larger tasks, so it may be invoked pretty frequently.
The SSL handshaking it performs each time it starts up is showing up as a significant performance bottleneck in my tests, and I'd like to reduce this if possible.
One thing that comes to mind is taking the session id and storing it somewhere (kind of like a cookie), and then re-using it on the next invocation; however, this makes me uneasy as I think there would be some security concerns around doing it.
So, I've got a couple of questions:
Is this a bad idea?
Is this even possible using OpenSSL?
Are there any better ways to speed up the SSL handshaking process?
After the handshake, you can get the SSL session information from your connection with SSL_get_session(). You can then use i2d_SSL_SESSION() to serialise it into a form that can be written to disk.
When you next want to connect to the same server, you can load the session information from disk, then unserialise it with d2i_SSL_SESSION() and use SSL_set_session() to set it (prior to SSL_connect()).
The on-disk SSL session should be readable only by the user that the tool runs as, and stale sessions should be overwritten and removed frequently.
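In code, the save/restore cycle might look roughly like this (a minimal sketch: error handling, file permissions and buffer sizing are simplified, and the two helper function names are just illustrative, not OpenSSL API):

// Sketch: persist an established session to disk and offer it again on the
// next run. Assumes an SSL* client connection; the cache path is chosen by
// the caller and should be readable only by that user.
#include <openssl/ssl.h>
#include <cstdio>
#include <vector>

// After SSL_connect() has succeeded: serialise the current session.
static void save_session(SSL *ssl, const char *path) {
    SSL_SESSION *sess = SSL_get_session(ssl);     // borrowed reference
    int len = i2d_SSL_SESSION(sess, nullptr);     // ask for the encoded size
    if (len <= 0) return;
    std::vector<unsigned char> der(len);
    unsigned char *p = der.data();
    i2d_SSL_SESSION(sess, &p);                    // DER-encode the session
    if (FILE *f = std::fopen(path, "wb")) {       // should be created mode 0600
        std::fwrite(der.data(), 1, der.size(), f);
        std::fclose(f);
    }
}

// Before the next SSL_connect(): load the cached session and offer it.
static void load_session(SSL *ssl, const char *path) {
    FILE *f = std::fopen(path, "rb");
    if (!f) return;
    unsigned char buf[16384];
    size_t len = std::fread(buf, 1, sizeof buf, f);
    std::fclose(f);
    const unsigned char *p = buf;
    if (SSL_SESSION *sess = d2i_SSL_SESSION(nullptr, &p, (long)len)) {
        SSL_set_session(ssl, sess);   // propose resumption of this session
        SSL_SESSION_free(sess);       // SSL_set_session keeps its own reference
    }
}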
You should be able to use a session cache securely (which OpenSSL supports), see the documentation on SSL_CTX_set_session_cache_mode, SSL_set_session and SSL_session_reused for more information on how this is achieved.
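For example (a short sketch; the SSL_CTX and SSL handles are assumed to have been created and connected elsewhere):

#include <openssl/ssl.h>

// Turn on OpenSSL's built-in client-side session cache for this context.
static void enable_client_session_cache(SSL_CTX *ctx) {
    SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_CLIENT);
}

// After the handshake, check whether an earlier session was actually reused
// (abbreviated handshake) rather than a full handshake being performed.
static bool handshake_was_resumed(SSL *ssl) {
    return SSL_session_reused(ssl) == 1;
}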
Could you perhaps use a persistent connection, so the setup is a one-time cost?
You could abstract away the connection logic so your client code still thinks it's doing a connect/process/disconnect cycle.
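A rough sketch of that idea (the class and the RealSslChannel type are illustrative stand-ins, not any particular library):

#include <memory>
#include <string>

struct RealSslChannel {
    // Stand-in for the actual SSL transport; the full handshake happens here.
    explicit RealSslChannel(const std::string &host) { (void)host; }
    void send(const std::string &msg) { (void)msg; }
};

class PersistentClient {
public:
    explicit PersistentClient(std::string host) : host_(std::move(host)) {}

    // "connect" only pays the handshake cost the first time it is called.
    void connect() {
        if (!chan_) chan_ = std::make_unique<RealSslChannel>(host_);
    }

    void process(const std::string &msg) { chan_->send(msg); }

    // "disconnect" is a no-op: the underlying connection stays open, so the
    // next connect/process/disconnect cycle reuses it.
    void disconnect() {}

private:
    std::string host_;
    std::unique_ptr<RealSslChannel> chan_;
};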
Interestingly enough, I encountered an issue with OpenSSL handshakes just today. The implementation of RAND_poll on Windows uses the Windows heap APIs as a source of random entropy.
Unfortunately, due to a "bug fix" in Windows 7 (and Server 2008), the heap enumeration APIs (which are debugging APIs, after all) can now take over a second per call once the heap is full of allocations. This means that both SSL connects and accepts can take anywhere from one second to more than a few minutes.
The ticket contains some good suggestions on how to patch OpenSSL to achieve far, far faster handshakes.

Winsock2 receiving data from multiple clients without iterating over each client

I use Winsock2 with C++ to connect thousands of clients to my server using the TCP protocol.
The clients rarely send packets to the server... there is about 1 minute between 2 packets from a client.
Right now I iterate over each client with non-blocking sockets to check whether the client has sent anything to the server.
But I think a much better design would be for the server to wait until it receives a packet from any of the thousands of clients and then look up that client's socket. That would be better because the server wouldn't have to iterate over every client when 99.99% of the time nothing happens. And with a million clients, a complete iteration might take seconds...
Is there a way to receive data from each client without iterating over each client's socket?
What you want is to use IO completion ports, I think. See https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports. Microsoft even has an example at GitHub for this: https://github.com/microsoft/Windows-classic-samples/blob/main/Samples/Win7Samples/netds/winsock/iocp/server/IocpServer.Cpp
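The core of an IOCP-based receiver looks roughly like this (a heavily trimmed sketch: accepting sockets, error handling and buffer lifetime are omitted; see the linked IocpServer sample for the full picture):

#include <winsock2.h>
#include <windows.h>

// Per-socket state handed to the completion port as the completion key.
struct PerSocketContext {
    SOCKET     sock;
    WSABUF     wsabuf;
    char       buf[4096];
    OVERLAPPED ov;
};

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // One completion port shared by every client socket.
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);

    // For each accepted client (not shown):
    //   CreateIoCompletionPort((HANDLE)ctx->sock, iocp, (ULONG_PTR)ctx, 0);
    //   DWORD flags = 0;
    //   WSARecv(ctx->sock, &ctx->wsabuf, 1, nullptr, &flags, &ctx->ov, nullptr);

    // Worker loop: blocks until *some* socket has data; no per-client polling.
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = nullptr;
        if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
            PerSocketContext *ctx = reinterpret_cast<PerSocketContext *>(key);
            // "bytes" bytes of data are now in ctx->buf; handle them, then
            // post another WSARecv on ctx->sock to keep receiving.
            (void)ctx;
        }
    }
}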
BTW, do not expect to serve a million connections on a single machine. Generally, there are limits, like available memory (both user space and kernel space), handle limits, etc. If you are careful, you can probably serve tens of thousands of connections per process.

Pooling, Client Checked out, idleTimeoutMillis

This is my understanding after reading the docs:
Pooling: as with many other DBs, only a limited number of connections is allowed, so everyone lines up and waits for a free connection to be returned to the pool (a connection is like a token, in a sense).
At any given time, the number of active and/or available connections is kept in the range 0-max.
idleTimeoutMillis is described as "milliseconds a client must sit idle in the pool and not be checked out before it is disconnected from the backend and discarded." I'm not clear on this. Generally, suppose a client (say, a web app) has done its CRUD but has not voluntarily returned the connection; it is then considered idle. node-postgres starts the clock, and once it reaches the given number of milliseconds it takes the connection back into the pool for the next client. So what does "not be checked out before it is disconnected from the backend and discarded" mean?
Say idleTimeoutMillis: 100. Does it mean this connection will literally be disconnected (logged out) after being idle for 100 milliseconds? If so, then it is not returned to the pool, and this will result in frequent re-connections, as the doc says below:
Connecting a new client to the PostgreSQL server requires a handshake
which can take 20-30 milliseconds. During this time passwords are
negotiated, SSL may be established, and configuration information is
shared with the client & server. Incurring this cost every time we
want to execute a query would substantially slow down our application.
Thanks in advance for the stupid questions.
Sorry this question went unanswered for so long, but I recently came across a bug that challenged my understanding of this library too.
Essentially, when you're pooling you're telling the library it can have a maximum of X connections to the database open simultaneously. Every request that comes into a CRUD API, for example, checks out a connection from the pool, so at most X requests can hold connections at once. While a request has a connection checked out, no other request can use that connection.
So in order to "reuse" the same connection, when one request is done with it you have to release it back to the pool, marking it as ready to be checked out again. When another request comes in, it can then use this connection to run its query.
idleTimeoutMillis was very confusing to me and took a while to get my head around. When an open DB connection has been released back to the pool, it is in an IDLE state, which means any incoming request can check it out, as it is not being used. This variable says how long we wait while a connection sits IDLE before we close it for real. This matters for a few reasons: open DB connections consume memory and other resources, so closing them can be beneficial. Also, when autoscaling - say you've been at max requests per second and you're using all DB connections - it is useful to keep IDLE connections open for a bit. However, if the timeout is too long and you then scale down, each lingering IDLE connection keeps consuming memory. (See the sketch after the next paragraph.)
The benefit of keeping a connection open is that when you just send a query with it you don't need to re-authenticate with the DB; it's already authenticated and ready to go.
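To make the idle bookkeeping concrete, here is a rough sketch (written in C++ here, not node-postgres code; all names are illustrative) of what a pool like this has to track for idleTimeoutMillis:

#include <chrono>
#include <vector>

using Clock = std::chrono::steady_clock;

struct PooledConn {
    int  handle;                    // stand-in for a real DB connection
    bool checkedOut = false;        // currently lent out to a request?
    Clock::time_point idleSince{};  // when it was last released
};

struct Pool {
    std::vector<PooledConn> conns;
    std::chrono::milliseconds idleTimeout{10000};  // "idleTimeoutMillis"

    // A request takes a connection: it is no longer idle.
    PooledConn *acquire() {
        for (auto &c : conns)
            if (!c.checkedOut) { c.checkedOut = true; return &c; }
        return nullptr;  // pool exhausted; caller waits for a release
    }

    // The request is done: the connection goes back to the pool and the
    // idle clock starts ticking.
    void release(PooledConn &c) {
        c.checkedOut = false;
        c.idleSince = Clock::now();
    }

    // Run periodically: any connection idle longer than the timeout is
    // really disconnected from the backend and discarded, so a later
    // request pays the 20-30 ms handshake again for a fresh connection.
    void reapIdle() {
        auto now = Clock::now();
        for (auto &c : conns) {
            if (!c.checkedOut && now - c.idleSince > idleTimeout) {
                // closeBackendConnection(c.handle);  // hypothetical helper
            }
        }
    }
};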

What may be the expected percentage of connections that will fallback to TURN?

Say I have built a WebRTC video chat website. Some connections, after the handshake (ICE candidates), will go directly P2P, some will use the STUN server, and some will use the "last resort", the TURN server, to establish the connection. A TURN-based connection is very expensive compared to the direct connection and the STUN connection (which are free), because all traffic must actually go through the TURN server.
How can we estimate the percentage of connections from random users that will need to go via TURN? Imagine we know very little about the expected audience, except that the majority is in the US. I believe it must be difficult to figure out, but my current estimate is somewhere between 1% and 99%, which is just too wide; can this at least be narrowed down?
https://medium.com/the-making-of-appear-in/what-kind-of-turn-server-is-being-used-d67dbfc2ff5d has some numbers from appear.in which show around 20%. That is global statistics, the stats for the US might be different.

Is there a limit with the number of SSL connections?

Is there a limit with the number of SSL connections?
We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd. Is there a limit?
If you are on Linux, your operating system will have a limit on the number of open files.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed already, allowing the figure of 1062 (this last bit is a guess).
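If you want to check that limit from code rather than the shell, something like this works on POSIX systems (a small sketch):

#include <sys/resource.h>
#include <cstdio>

int main() {
    // RLIMIT_NOFILE is the per-process open-file-descriptor limit, the same
    // number "ulimit -n" reports for the soft limit.
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
        std::printf("soft fd limit: %llu, hard fd limit: %llu\n",
                    (unsigned long long)rl.rlim_cur,
                    (unsigned long long)rl.rlim_max);
    }
    return 0;
}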
Yes, everything has a limit. As far as I'm aware, there is no inherent limit in "SSL"... it is, after all, just a protocol.
But there is a limited amount of memory, ports, and CPU on the machine you are connecting to, the machine you are connecting from, and every machine in between.
The actual server you are connected to may have an arbitrary limit set, too.
This question doesn't have enough information to answer beyond "YES".
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, only some of which (around 16,000) are used for dynamic client connections. This would limit the number of active connections a single client could make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.