Lettuce uses a single shared native connection under the hood.
Is it safe to use the blocking BLPOP command with this design - will it block this shared native connection and affect other clients? I couldn't find a concrete explanation of this in the Lettuce docs.
Thanks in advance,
BLPOP/BLMOVE and similar commands block the connection for the duration of the response or until the timeout expires. If you are using the synchronous API, the calling thread is also blocked on this I/O. Meanwhile, other threads can continue issuing commands through other client connections without being impacted.
If the blocked connection is shared with other threads, commands from those threads are queued behind the BLPOP/BLMOVE. As a side effect, all threads sharing the blocked connection will experience delays until Redis responds to the first BLPOP/BLMOVE command, after which the connection is automatically unblocked and the queued commands are executed in FIFO order. This is a classic head-of-line blocking pattern, and it will occur whenever you use blocking commands on a shared connection.
In your specific use case, it is advisable not to use a shared connection for issuing blocking commands. The same rule applies to transactions and to disabling auto-flush for batched commands. This is one of the rare use cases where Lettuce connections should not be shared.
Related
We know that the KEYS command blocks the Redis server and that we need to use the *SCAN commands instead.
As I understand it, a Redis server can handle a lot of pub/sub connections. So if I call the PUBSUB CHANNELS command on such a server, can it still handle the pub/sub connections or other commands while this command is executing?
Redis is single-threaded. It can have any number of clients, but commands are executed one by one on a single thread.
With pub/sub, a client subscribes to a channel and holds its connection to the server open.
When you publish a message, it is delivered to all subscribers of that channel within the single PUBLISH call itself. So if you have many clients (say a million) subscribed to a single channel, it will take some time to deliver the message to all of them, and during that time the server is blocked. Also note that this blocking happens only during the publish action.
Hope this answers your question.
When using OpenSSL in a multithreaded program one needs to implement certain locking callbacks.
In a single-threaded program with non-blocking sockets, do I need to think about this? I mean, is it a problem if multiple connections are doing SSL_read/SSL_write and connect at the same time? Contrast that with a single-threaded program with blocking sockets, where one operation would have to finish before the next one starts.
But with my non-blocking app, one connection could try SSL_read, have to call it again later, and before that retry another connection would also call SSL_read...
It's not a problem to use multiple non-blocking sockets in parallel and do TCP accept, connect, SSL handshake, read and write all in parallel. I've been doing this for years and it is very stable. And since only a single SSL operation can be in progress at any one time in a single thread, you don't need any kind of locking.
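To illustrate why no locking is needed in a single thread: each connection owns its own SSL object, and a non-blocking SSL_read that cannot complete simply reports SSL_ERROR_WANT_READ (or SSL_ERROR_WANT_WRITE during renegotiation) and can be retried later, independently of what any other connection is doing. A minimal sketch, where Conn and handle_readable are made-up names for illustration:

    #include <openssl/ssl.h>

    // One of these per connection; each connection has its own SSL object,
    // so nothing is shared between connections and no locking is needed
    // as long as only one thread touches OpenSSL.
    struct Conn {
        SSL *ssl;          // created with SSL_new() on a non-blocking socket
        bool want_read  = false;
        bool want_write = false;
    };

    // Call this whenever select()/poll() reports the connection's fd readable.
    // Returns false if the connection should be torn down.
    bool handle_readable(Conn &c) {
        char buf[4096];
        int n = SSL_read(c.ssl, buf, sizeof(buf));
        if (n > 0) {
            // ... hand n bytes of application data to the protocol layer ...
            return true;
        }
        switch (SSL_get_error(c.ssl, n)) {
        case SSL_ERROR_WANT_READ:    // retry SSL_read when the fd is readable again
            c.want_read = true;
            return true;
        case SSL_ERROR_WANT_WRITE:   // e.g. renegotiation: retry when fd is writable
            c.want_write = true;
            return true;
        default:                     // real error or clean shutdown
            return false;
        }
    }

Interleaving such calls across many connections is safe because each SSL object carries its own state; the locking callbacks only matter when two threads call into OpenSSL concurrently.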
From what I understand, IMAP requires a connection per user. I'm writing an IMAP client (currently just Gmail) that supports many (100s, 1000s, maybe 10000s+) users at a time. Obviously, cutting down the number of open connections would be great. I'm wondering if it's possible to use thread pooling on my side to connect to Gmail via IMAP, or if that simply isn't supported by the IMAP protocol.
IMAP typically uses SSL over TCP/IP, and a TCP/IP connection needs to be maintained per IMAP client connection, meaning there will be many simultaneous open connections.
These multiple simultaneous connections can easily be maintained in a non-threaded (single-thread) implementation without affecting the state of the TCP connections. You'll need some sort of a flow concept per IMAP TCP/IP connection and a container holding all of the flows (a C++ STL map, for instance), using the TCP/IP five-tuple (or socket fd) as a key. For each data packet received, look up the flow and handle the packet accordingly. Nothing about this approach will affect the TCP or IMAP connections.
Given that this works in a single-thread environment, adding a thread pool will only increase the throughput of the application, since you can handle data packets for several flows simultaneously (assuming it's a multi-core CPU). You will just need to make sure that two threads don't handle data packets for the same flow at the same time, which could cause the packets to be handled out of order. One approach is to assign a group of flows to each thread, maybe using IP pools or something similar.
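To make the flow-map idea concrete, here is a minimal single-threaded sketch. Flow, handle_data and poll_once are made-up names, and the TLS layer that Gmail requires is omitted for brevity; it only illustrates the keyed-by-socket-fd structure described above.

    #include <map>
    #include <vector>
    #include <poll.h>
    #include <unistd.h>

    // Per-connection state ("flow"): one per IMAP/TCP connection, i.e. per user.
    struct Flow {
        int fd;                   // socket descriptor, also used as the map key
        std::vector<char> inbuf;  // partially received IMAP data for this user
        // ... parser state, mailbox state, etc. ...
    };

    std::map<int, Flow> flows;    // all active flows, keyed by socket fd

    void handle_data(Flow &flow, const char *data, size_t len) {
        flow.inbuf.insert(flow.inbuf.end(), data, data + len);
        // ... parse complete IMAP responses out of flow.inbuf ...
    }

    // One iteration of the single-threaded event loop.
    void poll_once() {
        std::vector<pollfd> pfds;
        for (const auto &kv : flows)
            pfds.push_back({kv.first, POLLIN, 0});

        if (poll(pfds.data(), static_cast<nfds_t>(pfds.size()), 1000) <= 0)
            return;

        for (const auto &p : pfds) {
            if (!(p.revents & POLLIN))
                continue;
            char buf[4096];
            ssize_t n = read(p.fd, buf, sizeof(buf));
            if (n <= 0) {             // connection closed or error
                close(p.fd);
                flows.erase(p.fd);
            } else {
                handle_data(flows[p.fd], buf, static_cast<size_t>(n));
            }
        }
    }

For the thread-pool variant, you would shard the flows (for example by fd modulo the number of workers) so that a given flow is always handled by the same thread, keeping its packets in order.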
I have the following scenario:
The server should be a daemon.
Other apps should be clients.
Many clients should be able to communicate with the server at the same time to get their tasks done.
These are tasks such as copy file, delete file, etc.
My solution:
The server has 5 worker threads, each owning a named pipe. Each pipe's availability status is kept in a shared-memory structure. When a client wants to communicate with the server, it checks in shared memory which pipe is available, opens that pipe, and sends its message on it; the corresponding worker thread of the server serves this client request. That worker thread then sends the request status (success/failure) back on the pipe so that the client knows the outcome of the last operation.
As far as I know, pipes on Mac OS X are unidirectional, and they lack the ability to create unlimited instances the way named pipes on Windows can.
What mechanism would be best suited for this kind of communication?
Thanks,
Vaibhav.
As far as I know, pipes on Mac OS X are unidirectional, and they lack the ability to create unlimited instances the way named pipes on Windows can.
Pipes are one-directional, but Unix domain sockets are not. They are probably what you are after if you want to directly port your code to OS X.
However, there are probably better ways to do what you want, including things like Distributed Objects, which I admit I have never used. Even if you stick with a socket interface, I think a single socket would be easier: have one thread monitoring the socket with listen and accept, handing off work to worker threads as it arrives. Better still, put the work on an NSOperationQueue or a dispatch queue, and the OS will handle the task of optimising the thread count.
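To make that concrete, here is a minimal sketch of the single-listening-socket approach using a Unix domain socket and a GCD dispatch queue. The socket path and serve_client are placeholders and most error handling is omitted; it only illustrates the accept-and-dispatch structure.

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <dispatch/dispatch.h>
    #include <cstring>
    #include <cstdint>

    // Placeholder worker: read one request (e.g. "copy file X"), perform it,
    // and write a success/failure status back on the same connection.
    static void serve_client(void *context) {
        int client_fd = (int)(intptr_t)context;
        char request[1024];
        ssize_t n = read(client_fd, request, sizeof(request) - 1);
        if (n > 0) {
            request[n] = '\0';
            // ... perform the requested task ...
            const char *status = "OK\n";
            write(client_fd, status, strlen(status));
        }
        close(client_fd);
    }

    int main() {
        const char *path = "/tmp/mydaemon.sock";   // assumed socket path
        unlink(path);

        int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        bind(listen_fd, (sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 16);

        // A single thread accepts; each accepted connection is handed to a
        // global dispatch queue, so the OS decides how many workers to run.
        dispatch_queue_t workers =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        for (;;) {
            int client_fd = accept(listen_fd, nullptr, nullptr);
            if (client_fd < 0)
                continue;
            dispatch_async_f(workers, (void *)(intptr_t)client_fd, serve_client);
        }
    }

Every client connects to the same socket path, so there is no need for a pool of five named pipes or for tracking pipe availability in shared memory.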
Is Apache blocking I/O or non-blocking I/O?
It forks a process for each connection, so it probably is blocking (unless it watches for timeouts on the same thread as the socket I/O?).
To be sure, you should probably look for socket-creation calls in the source and follow accesses to the socket descriptors... I'm not even sure that Apache has to use the forking mode; maybe it has an asynchronous mode too.
Edit
Right, there are a bunch of "Multi-Processing Modules", which decide how to handle multiple HTTP requests.
Apache supports both. By default it is blocking; there is also a non-blocking module that uses NIO events.
Deciding which method to use is a matter of performance tuning.
http://hc.apache.org/
For serving static content it's better to use non-blocking I/O, but for use with a servlet container it's better to use blocking I/O (because of thread locals).
Apache uses blocking I/O, AFAIK. nginx uses an event-based, non-blocking single thread, and its memory usage is much lower than Apache's. Apache uses one thread per connection, and that is how it handles multiple connections.