Single-threaded OpenSSL - OK to read/write on several ports?

When using OpenSSL in a multithreaded program, one needs to implement certain locking callbacks.
When using a single-threaded program with non-blocking sockets, do I need to think about this? I mean, is it a problem if multiple ports are doing SSL_read/SSL_write and connect at the same time? Contrast that with a single-threaded program with blocking sockets, where one operation would have to finish before the next one starts.
But with my non-blocking app, one connection could try SSL_read, have to call it again, and before retrying, another connection would also call SSL_read...

It's not a problem to use multiple non-blocking sockets in parallel and to do TCP accept, connect, SSL handshake, read and write on all of them concurrently. I've been doing this for years and it is very stable. And since a single thread can only ever perform one SSL operation at a time, you don't need any kind of locking.
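As a rough illustration of that pattern, here is a minimal single-threaded sketch: each connection keeps its own SSL object, and one select() loop services whichever sockets are ready. The Connection struct, the event_loop name and the commented-out process() call are assumptions made up for the example, not anything prescribed by OpenSSL.

    #include <openssl/ssl.h>
    #include <sys/select.h>
    #include <vector>

    // One SSL object per non-blocking connection; no locking needed in a single thread.
    struct Connection {
        int  fd;
        SSL *ssl;
    };

    void event_loop(std::vector<Connection> &conns) {
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            int maxfd = -1;
            for (const Connection &c : conns) {
                FD_SET(c.fd, &readable);
                if (c.fd > maxfd) maxfd = c.fd;
            }
            if (select(maxfd + 1, &readable, nullptr, nullptr, nullptr) <= 0)
                continue;
            for (Connection &c : conns) {
                if (!FD_ISSET(c.fd, &readable)) continue;
                char buf[4096];
                int n = SSL_read(c.ssl, buf, sizeof(buf));
                if (n > 0) {
                    // process(c, buf, n);  // hypothetical application handler
                } else {
                    int err = SSL_get_error(c.ssl, n);
                    if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE) {
                        // Fatal error or clean shutdown; a real program would close c here.
                    }
                    // On WANT_READ/WANT_WRITE: just wait for the next socket event and retry.
                }
            }
        }
    }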

Related

Redis Lettuce Connection and BLPOP

Lettuce uses a single shared native connection under the hood.
Is it safe to use the blocking BLPOP command with this design - will it block the shared native connection and affect other clients? I didn't find a concrete explanation of this in the Lettuce docs.
BLPOP/BLMOVE and similar commands block the connection for the duration of the response or until the timeout expires. If you are using the synchronous API, the calling thread is also blocked on this I/O. Meanwhile, other threads can continue issuing commands through other client connections without being impacted.
If the blocked connection is shared with other threads, commands from those threads are queued behind BLPOP/BLMOVE. As a side effect, all other threads sharing the blocked connection will experience delays until Redis responds to the first BLPOP/BLMOVE command, after which the connection is automatically unblocked and all queued commands are executed FIFO. This is a classic head-of-line blocking pattern and will occur whenever you use blocking commands on a shared connection.
For your specific use case, it is advisable not to use a shared connection for issuing blocking commands. The same rule applies to transactions and to disabling auto-flush for batched commands. This is one of the rare cases where Lettuce connections should not be shared.

Event-driven TLS server

I'm working on a server-side software that receives requests from clients via TLS (over TCP). For better performance and user experience, I'd like to avoid a full handshake for every request. Ideally, the client can just establish a TLS session with the server for hours, although most of the time the session might be idle. At the same time, high throughput is also required.
One easy way to do it is to dedicate a thread to each session and use a big thread pool to boost throughput. But the overhead of this method could be huge if I want, say, tens of thousands of concurrent sessions.
The requirement of high throughput leads me to the event-driven model. The idea is that when a connection is idle (i.e., there is no I/O event on the underlying socket), the TLS server can switch context to serve other connections. One of the challenges is to somehow freeze the entire TLS session context while the socket is idle and restore it when the socket becomes readable/writable.
I'm wondering if there is already support in TLS for this kind of feature? Both session caching and session tickets seem relevant. Also, I'm wondering if people have implemented this idea.
You are talking about SSL Session resumption, and it is already implemented in both OpenSSL and JSSE, and no doubt every other SSL API you would be using. SSL sessions already survive connections. So there is nothing for you to do to get this.
The part about 'freezing the SSL session context' is completely pointless.
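For a rough idea of what resumption looks like at the OpenSSL API level, here is a minimal client-side sketch. It assumes ctx is an already configured SSL_CTX * and fd a freshly connected TCP socket; open_tls is just a name invented for the example.

    #include <openssl/ssl.h>

    static SSL_SESSION *cached_session = nullptr;  // survives individual connections

    SSL *open_tls(SSL_CTX *ctx, int fd) {
        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);
        if (cached_session)
            SSL_set_session(ssl, cached_session);  // offer the previous session for resumption
        if (SSL_connect(ssl) != 1) {               // full or abbreviated handshake, as negotiated
            SSL_free(ssl);
            return nullptr;
        }
        if (cached_session)
            SSL_SESSION_free(cached_session);
        cached_session = SSL_get1_session(ssl);    // keep a reference for the next connection
        return ssl;
    }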

TCP socket used in a TLS OpenSSL connection becomes readable after OpenSSL call returned WANT_WRITE

I'm trying to create a generic TLS over TCP socket in C++, using OpenSSL. The socket would be used in programs running a select loop and utilizing non-blocking I/O.
I'm concerned about the case where the underlying TCP socket becomes readable after the previous SSL_get_error call returned SSL_ERROR_WANT_WRITE. I can think of two situations where this may occur:
The local application and remote application simultaneously decide to send large amounts of data. Both applications call SSL_write simultaneously and subsequent SSL_get_error calls on both applications return SSL_ERROR_WANT_WRITE. The TCP packets sent from both applications cross on the wire. The local application's TCP socket is now readable after the previous SSL_get_error call returned SSL_ERROR_WANT_WRITE.
As above, except the remote OpenSSL library decides to perform SSL re-negotiation in the SSL_write call, prior to writing any application data. This simply changes the meaning of the data received on the local application's TCP socket from encrypted application data to session re-negotiation data.
How should the local application handle this data? Should it:
call SSL_write as it is currently mid-write?
call SSL_read as would happen if the socket is idle?
SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE can be caused by (re)negotiations or full socket buffers, and they can occur not only within SSL_read and SSL_write but also within SSL_connect and SSL_accept on non-blocking sockets. All you have to do is wait for the wanted socket state (i.e., readable or writable) and then repeat the same operation. E.g. if you get SSL_ERROR_WANT_READ from SSL_write, you wait until the socket becomes readable (with select, poll or similar) and then call SSL_write again. Same with SSL_read.
It might also be useful to call SSL_CTX_set_mode with SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER | SSL_MODE_ENABLE_PARTIAL_WRITE.
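A minimal sketch of that retry pattern, assuming ssl sits on a non-blocking socket and the caller runs a select/poll loop; the Want enum and the ssl_write_some name are made up for the example. The key point is to repeat SSL_write with the same buffer once the requested socket state arrives.

    #include <openssl/ssl.h>

    enum class Want { none, read, write, fatal };

    // Try to write; on WANT_READ/WANT_WRITE, tell the caller which socket event
    // to wait for before calling this again with the same buffer.
    Want ssl_write_some(SSL *ssl, const char *buf, int len, int &written) {
        written = 0;
        int n = SSL_write(ssl, buf, len);
        if (n > 0) { written = n; return Want::none; }
        switch (SSL_get_error(ssl, n)) {
            case SSL_ERROR_WANT_READ:  return Want::read;   // e.g. a renegotiation needs incoming data
            case SSL_ERROR_WANT_WRITE: return Want::write;  // socket send buffer is full
            default:                   return Want::fatal;  // real error, tear the connection down
        }
    }

With SSL_MODE_ENABLE_PARTIAL_WRITE set on the context, written may be less than len, and the remainder has to be resubmitted later.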

Is it possible to thread pool IMAP connections?

From what I understand, IMAP requires a connection per user. I'm writing an IMAP client (currently just Gmail) that supports many (100s, 1000s, maybe 10000s+) users at a time. Obviously, cutting down the number of open connections would be great. I'm wondering if it's possible to use thread pooling on my side to connect to Gmail via IMAP, or if that simply isn't supported by the IMAP protocol.
IMAP typically uses SSL over TCP/IP. And a TCP/IP connection will need to be maintained per IMAP client connection, meaning that there will be many simultaneous open connections.
These multiple simultaneous connections can easily be maintained by a non-threaded (single-threaded) implementation without affecting the state of the TCP connections. You'll need some sort of flow concept per IMAP TCP/IP connection, and you can store all of the flows in a container (a C++ STL map, for instance) using the TCP/IP five-tuple (or socket fd) as the key. For each data packet received, look up the flow and handle the packet accordingly. There is nothing about this approach that will affect the TCP or IMAP connections.
Considering that this works in a single-threaded environment, adding a thread pool will only increase the throughput of the application, since you can handle data packets for several flows simultaneously (assuming it's a multi-core CPU). You will just need to make sure that two threads don't handle data packets for the same flow at the same time, which could cause the packets to be handled out of order. One approach could be to have a group of flows per thread, maybe using IP pools or something similar.
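A minimal sketch of that per-connection flow lookup, keyed by socket fd. Flow's fields and the commented-out handle_imap_data() call are placeholders invented for the example; only the container pattern is the point.

    #include <cstddef>
    #include <map>
    #include <string>

    struct Flow {
        int         fd;
        std::string username;      // which IMAP account this connection serves
        std::string parse_buffer;  // bytes received but not yet parsed
    };

    std::map<int, Flow> flows;     // one entry per open IMAP/TLS connection

    void on_readable(int fd, const char *data, std::size_t len) {
        auto it = flows.find(fd);  // look the flow up by its socket fd
        if (it == flows.end()) return;
        it->second.parse_buffer.append(data, len);
        // handle_imap_data(it->second);  // placeholder: parse complete IMAP responses
    }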

Is Apache blocking I/O?

Is Apache blocking I/O or non-blocking I/O?
It forks a process for each connection, so it is probably blocking (unless it watches for timeouts on the same thread as the socket I/O?).
To be sure, you should probably look for socket creation calls in the source and follow accesses to the socket descriptors... I'm not even sure Apache has to use the forking mode; maybe it has an asynchronous mode too.
Edit
Right, there are a bunch of "Multi-Processing Modules", which decide how to handle multiple HTTP requests.
Apache supports both. By default it is blocking; there is a non-blocking module that uses NIO events.
Which method to use is a performance-based tuning decision.
http://hc.apache.org/
For serving static content it's better to use non-blocking, but for use with a servlet container it's better to use blocking (thread locals).
Apache is blocking I/O, AFAIK. nginx uses an event-based, non-blocking single thread, and its memory usage is relatively much lower than Apache's. Apache uses one thread per connection, and that is how it handles multiple connections.