Is the PUBSUB CHANNELS command blocking the Redis server?

We know that the KEYS command blocks the Redis server and that the SCAN family of commands should be used instead.
As I understand it, a Redis server can handle many pub/sub connections. So if I call the PUBSUB CHANNELS command on such a server, can it still serve those pub/sub connections and handle other commands during the execution of this command?

Redis is single-threaded. It can have any number of clients, but commands are executed one at a time.
With Pub/Sub, a client subscribes to a channel and keeps its connection to the server open.
When you publish a message it is delivered to every client subscribed to that channel within that single call, so all the delivery work happens inside the one PUBLISH command. If you have many clients (say a million) subscribed to a single channel, publishing to all of them takes time, and during that time the server is blocked. Also note that this blocking happens only during the publish action.
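As an illustration, here is a minimal sketch using the Lettuce Java client (the client choice and the local redis://localhost:6379 address are assumptions, not from the question) that subscribes on one connection and then runs PUBSUB CHANNELS on another:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

import java.util.List;

public class PubSubChannelsDemo {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Dedicated connection that just subscribes and sits idle.
        StatefulRedisPubSubConnection<String, String> pubsub = client.connectPubSub();
        pubsub.sync().subscribe("news");

        // Ordinary connection issuing PUBSUB CHANNELS; this command runs
        // on the single Redis thread like any other command, and is O(N)
        // in the number of active channels, not the number of subscribers.
        RedisCommands<String, String> commands = client.connect().sync();
        List<String> channels = commands.pubsubChannels();
        System.out.println("active channels: " + channels);

        pubsub.close();
        client.shutdown();
    }
}
```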
Hope this answers your question.

Related

Redis Lettuce Connection and BLPOP

Lettuce uses a single shared native connection under the hood.
Is it safe to use the blocking BLPOP command with this design? Will it block the shared native connection and affect other clients? I couldn't find a concrete explanation of this in the Lettuce docs.
Thanks in advance,
BLPOP/BLMOVE and similar commands block the connection until a response arrives or the timeout expires. If you are using the synchronous API, the calling thread is also blocked on this I/O. Meanwhile, other threads can continue issuing commands through other client connections without being impacted.
If the blocked connection is shared with other threads, commands from those threads are queued behind BLPOP/BLMOVE. As a side effect, all other threads sharing the blocked connection will experience delays until Redis responds to the first BLPOP/BLMOVE command, after which the connection is automatically unblocked and all queued commands are executed FIFO. This is a classic head-of-line blocking pattern and will occur whenever you use blocking commands on a shared connection.
For your specific use case, it is advisable not to use a shared connection for issuing blocking commands. The same rule applies to transactions and to disabling auto-flush for batched commands. This is one of the rare use cases where Lettuce connections should not be shared.
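To sidestep that head-of-line blocking, give blocking commands their own connection. A minimal sketch with Lettuce, assuming a local Redis and an illustrative list key named jobs:

```java
import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

import java.time.Duration;

public class DedicatedBlockingConnection {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Shared connection for ordinary, non-blocking commands.
        StatefulRedisConnection<String, String> shared = client.connect();

        // Dedicated connection reserved for blocking commands, so that
        // BLPOP cannot head-of-line block other threads' commands.
        StatefulRedisConnection<String, String> blocking = client.connect();
        // Make the client-side timeout longer than the BLPOP timeout,
        // otherwise Lettuce may time the command out before Redis replies.
        blocking.setTimeout(Duration.ofSeconds(35));

        // Blocks this thread (and only this connection) for up to 30 s.
        KeyValue<String, String> job = blocking.sync().blpop(30, "jobs");
        if (job != null && job.hasValue()) {
            System.out.println("popped " + job.getValue() + " from " + job.getKey());
        }

        // Other threads can keep using `shared` concurrently without delay.
        shared.sync().set("heartbeat", "ok");

        blocking.close();
        shared.close();
        client.shutdown();
    }
}
```

The point of the design is simply that blpop() parks only the dedicated connection; traffic on the shared one is unaffected.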

ServiceStack Redis Mq: is eventual consistency an issue?

I'm looking at turning a monolithic application into a microservice-oriented application, and in doing so I will need a robust messaging system for inter-process communication. The idea is for the microservice processes to run on a cluster of servers for HA, with requests to be processed added to a message queue that all the applications can access. I'm looking at using Redis both as a KV store for transient data and as a message broker via the ServiceStack framework for .NET, but I worry that the eventual consistency applied by Redis will make processing of the requests unreliable. This is how I understand Redis to function with regard to MQ:
Client 1 posts a request to a queue on node 1
Node 1 will inform all listeners on that queue, using pub/sub, of the existence of the request, and will also push the request to node 2 asynchronously.
The listeners on node 1 will pull the request from the node; only one of them will obtain it, as intended. An update recording the removal of the request is sent to node 2 asynchronously but will take some time to arrive.
The initial request is received by node 2 (assuming some RTT delay), which will go ahead and inform the listeners connected to it using pub/sub. Before the update from node 1 about the removal of the request arrives, a listener on node 2 may also pull the request. The result is that two listeners end up processing the same request, which would cause havoc in our system.
Is there anything in Redis or the implementation of ServiceStack Redis Mq that would prevent the scenario described to occur? Or is there something else regarding replication in Redis that I have misunderstood? Or should I abandon the Redis/SS approach for Mq and use something like RabbitMQ instead that I have understood to be ACID-compliant?
It's not possible for the same message to be processed twice in Redis MQ: a message worker pops the message off the Redis List that backs the MQ, and because all Redis operations are atomic, no other message worker can access a message that has already been removed from the List.
ServiceStack.Redis (which Redis MQ uses) only supports Redis Sentinel for HA. Although Redis supports multiple replicas, they contain only a read-only view of the master dataset, so all write operations, such as List add/remove, can only happen on the single master instance.
One notable difference between Redis MQ and a purpose-built MQ like RabbitMQ is that Redis doesn't support ACKs: if the message worker process that pops a message off the MQ crashes, that message is lost. In RabbitMQ, by contrast, if the stateful connection holding an un-ACK'd message dies, the RabbitMQ server restores the message to the queue.
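For concreteness, here is a sketch of the underlying list-based pattern using the Lettuce Java client rather than ServiceStack.Redis (the client, the key names, and the RPOPLPUSH-based recovery step are illustrative assumptions, not how Redis MQ is actually implemented):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class AtomicQueueWorker {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        // Producer side: push a request onto the list-backed queue.
        redis.lpush("mq:orders", "order-42");

        // Worker side: RPOPLPUSH atomically moves the message to a
        // per-worker "processing" list. No two workers can receive the
        // same message, and a crashed worker's in-flight message can be
        // recovered from its processing list.
        String msg = redis.rpoplpush("mq:orders", "mq:orders:processing");
        if (msg != null) {
            // ... process the message ...
            // Remove it from the processing list once done (the "ACK").
            redis.lrem("mq:orders:processing", 1, msg);
        }

        client.shutdown();
    }
}
```

The per-worker processing list is one common workaround for the missing ACKs described above: a supervisor can re-queue messages left behind in a dead worker's processing list.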

Redis Pub/Sub persistence

I am working on Redis SMQ persistence. My question is this: while the publisher is publishing messages, the consumer stops suddenly. When the consumer connects again, is it possible for it to consume the messages from where it stopped?
No - Redis' Pub/Sub has no persistence, and once a message has been published, it is sent only to the connected subscribed clients. Afterwards, the message is gone forever.
With standard Pub/Sub you can use a Lua script to persist your messages: check whether the channel has any listeners, and if it doesn't, store the message in Redis under a key derived from the channel name. When the subscriber comes back, it checks whether anything was stored for it under that key. A second option is to use Redis Streams. Check this gist.
Alternatively, use two Redis connections: one for Pub/Sub and a second for LPOP/RPOP, as sketched below.
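Here is a rough sketch combining both suggestions, using the Lettuce Java client (the client, the news channel, and the pending:<channel> key convention are all assumptions): a Lua script publishes and falls back to persisting when there are no receivers, and a returning subscriber drains the backlog with LPOP:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.api.sync.RedisCommands;

public class PublishOrPersist {
    // Publish the message; if nobody received it, park it in a list
    // keyed off the channel name so a returning subscriber can drain it.
    private static final String SCRIPT =
        "local receivers = redis.call('PUBLISH', KEYS[1], ARGV[1]) " +
        "if receivers == 0 then " +
        "  redis.call('RPUSH', 'pending:' .. KEYS[1], ARGV[1]) " +
        "end " +
        "return receivers";

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        Long receivers = redis.eval(SCRIPT, ScriptOutputType.INTEGER,
                new String[] { "news" }, "hello");
        System.out.println("delivered live to " + receivers + " subscribers");

        // A returning subscriber first drains anything parked for it:
        String backlog;
        while ((backlog = redis.lpop("pending:news")) != null) {
            System.out.println("replayed " + backlog);
        }

        client.shutdown();
    }
}
```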

In RabbitMQ, do we need to manage Connections and Channels in a separate thread?

I am new to the world of message queues and I am currently evaluating RabbitMQ, ActiveMQ, and Kafka. I see that in RabbitMQ, the producer creates a Connection to the RabbitMQ server, and the thread holding the Connection remains active until the connection is closed. This leads me to believe that there MUST be a separate thread which delivers information to the RMQ producer thread, which simply publishes the message to the queue and keeps looping until the connection to the RMQ server is closed. Is this assumption correct? Any thoughts/inputs would be appreciated.
Thanks!
P.S: This isn't the behaviour with Kafka. [ Apache Kafka: Java Producer reusability ]
In general, you should have a single RMQ connection per application instance. That connection can be opened as soon as your application starts.
Having a connection does not yet give you the ability to publish or consume messages, though.
To do that, you need to create a channel.
The general best practice is one channel per thread in your application. Need to publish a message from this thread? Create a channel for the thread. Done with publishing, and not doing any other RMQ work on this channel? Close the channel.
Unlike connections, channels are cheap and easy to create. They work over the existing RMQ connection and take very few resources to create.
You can create thousands of channels in a single connection (though you might want to limit that number for performance reasons).
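A minimal sketch of this pattern with the official RabbitMQ Java client (the localhost broker and the tasks queue name are assumptions): one connection for the whole process, one short-lived channel per publishing thread:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class OneConnectionManyChannels {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address

        // One connection per application instance, opened at startup.
        try (Connection connection = factory.newConnection()) {
            Runnable publisher = () -> {
                // One channel per thread: cheap to create, never shared.
                try (Channel channel = connection.createChannel()) {
                    channel.queueDeclare("tasks", true, false, false, null);
                    byte[] body = ("hello from " + Thread.currentThread().getName())
                            .getBytes(StandardCharsets.UTF_8);
                    channel.basicPublish("", "tasks", null, body);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            };

            Thread t1 = new Thread(publisher);
            Thread t2 = new Thread(publisher);
            t1.start(); t2.start();
            t1.join(); t2.join();
        }
    }
}
```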

Advice on disconnected messages with WCF through firewalls

All,
I'm looking for advice over the following scenario:
I have a component running in one part of the corporate network that sends messages to an application-logic component for processing. These components might reside on the same server, on different servers in the same network (LAN or WAN), or live outside in the cloud. The application server should be scalable and resilient.
The messages are related in that the sequence in which they arrive is important. They are time-stamped with the client timestamp.
My thinking is that I'll have the clients use the WCF basicHttpBinding (some are based on .NET CF, which only has basic bindings) to send messages to the application server (because we can guarantee that ports 80/443 will be open for outgoing connections). The server accepts these and writes them into a queue. This queue can be scaled out over multiple machines if needed.
I'm hesitant to use MSMQ for the queue, though, as scaling out properly would mean installing separate private queues on each application server and monitoring the queues round-robin. I'm concerned that we could lose a message on a server that's gone down until that server is restored, and that we could end up processing a later message from a different server and so disrupt the sequence.
What I'd prefer is a central queue (e.g. a database table) that all application servers monitor.
With this in mind, what I'd like to do is create a custom WCF binding, similar to netMsmqBinding, but one that uses the DB table instead. I'm confused as to whether I can simply create a custom transport or need a full binding, and whether the binding will allow the client to send over HTTP. I've looked around the internet but I'm a little confused as to where to start.
I could skip the custom WCF binding, but it seems a good way to introduce scalability if I do need to separate the servers.
Any suggestions, including alternatives, would be helpful.
Many thanks
I would start with MSMQ because it is designed exactly for this purpose. Use a single transactional queue on a clustered machine and let the application servers take messages for processing from this queue. Each message's processing has to be part of a distributed transaction (MSDTC).
This scenario will ensure:
A clustered queue host ensures that if one cluster node fails, another can still handle requests.
Sending each message as recoverable means the message is persisted to hard drive (not only held in memory), so even in a critical failure of the whole cluster you will still have all messages.
A transactional queue ensures that all message transport operations are atomic: moving a message from the outgoing queue to the destination queue is processed as a transaction, so the original message is kept in the outgoing queue until an ack arrives from the destination queue. Transactional processing can also ensure in-order delivery.
A distributed transaction allows the application servers to consume messages transactionally: a message is not deleted from the queue until the application server commits the transaction (or the transaction times out).
MSMQ is also available on .NET CF, so you can send messages directly to the queue without an intermediate, non-reliable web service layer.
It should be possible to configure MSMQ over HTTP (but I have never used that, so I'm not sure how it interacts with the features mentioned above).
Your proposed solution will be pretty hard to build; you would end up re-creating BizTalk's MessageBox. But if you really want to do it, check Omar's post about building a database queue table.
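If you do go down the database-queue-table route, the crucial piece is an atomic claim of the next row, so that two application servers never take the same message. A rough sketch in Java with JDBC, assuming PostgreSQL and an illustrative table queue_messages(id BIGSERIAL PRIMARY KEY, body TEXT) (SQL Server's READPAST hint plays a role analogous to SKIP LOCKED):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbQueueWorker {
    public static void main(String[] args) throws Exception {
        // Connection string, credentials, and table are illustrative.
        try (Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "app", "secret")) {
            db.setAutoCommit(false);

            // SKIP LOCKED lets many application servers poll the same
            // table: each claims a different row, so a message is never
            // processed twice, and rows are taken in FIFO (id) order.
            String claim = "SELECT id, body FROM queue_messages "
                    + "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED";
            try (PreparedStatement ps = db.prepareStatement(claim);
                 ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    long id = rs.getLong("id");
                    String body = rs.getString("body");
                    // ... process body ...
                    try (PreparedStatement done = db.prepareStatement(
                            "DELETE FROM queue_messages WHERE id = ?")) {
                        done.setLong(1, id);
                        done.executeUpdate();
                    }
                    db.commit(); // row disappears only after success
                } else {
                    db.rollback(); // queue is empty
                }
            }
        }
    }
}
```

If a server dies mid-processing, its transaction rolls back and the row becomes visible to the other servers again, which addresses the lost-message concern with per-server MSMQ queues.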