Once the Redis server is restarted, how do I restart everything that was running against the Redis instance?
In my application I can see that the Redis instance is re-created, but none of the subscriptions it held are restored, so the application no longer receives new messages from the event bus / Redis bus.
Your application needs to capture the disconnect event and, once the database is back online, reconnect to it and resubscribe to the relevant channels.
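
A minimal sketch of one way to do that (assuming ServiceStack.Redis, Redis on localhost:6379, and an illustrative channel name "events"): wrap the blocking subscription in a retry loop so the client reconnects and resubscribes whenever the connection drops.

using System;
using System.Threading;
using ServiceStack.Redis;

class ResilientSubscriber
{
    static void Main()
    {
        var clientsManager = new PooledRedisClientManager("localhost:6379");

        while (true)
        {
            try
            {
                using (var redis = clientsManager.GetClient())
                using (var subscription = redis.CreateSubscription())
                {
                    subscription.OnMessage = (channel, msg) =>
                        Console.WriteLine($"{channel}: {msg}");

                    // Blocks until the subscription is cancelled or the connection drops
                    subscription.SubscribeToChannels("events");
                }
            }
            catch (Exception ex)
            {
                // Connection lost (e.g. Redis restarted): wait, then reconnect and resubscribe
                Console.WriteLine($"Subscription dropped: {ex.Message}, retrying...");
                Thread.Sleep(1000);
            }
        }
    }
}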
I'm looking at turning a monolith application into a microservice-oriented application, and in doing so I will need a robust messaging system for interprocess communication. The idea is for the microservice processes to run on a cluster of servers for HA, with requests to be processed added to a message queue that all the applications can access. I'm looking at using Redis both as a KV store for transient data and as a message broker via the ServiceStack framework for .NET, but I worry that the eventual consistency applied by Redis will make processing of the requests unreliable. This is how I understand Redis to function with regard to MQ:
Client 1 posts a request to a queue on node 1
Node 1 will inform all listeners on that queue, using pub/sub, of the existence of the request, and will also push the request to node 2 asynchronously.
The listeners on node 1 will pull the request from the node; only one of them will obtain it, as intended. An update about the removal of the request is sent to node 2 asynchronously, but it will take some time to arrive.
The initial request is received by node 2 (assuming some delay in RTT), which will go ahead and inform the listeners connected to it using pub/sub. Before the update from node 1 about the removal of the request arrives, a listener on node 2 may also pull the request. The result is that two listeners end up processing the same request, which would cause havoc in our system.
Is there anything in Redis or in the ServiceStack Redis MQ implementation that would prevent the described scenario from occurring? Or is there something else about replication in Redis that I have misunderstood? Or should I abandon the Redis/ServiceStack approach for MQ and use something like RabbitMQ instead, which I understand to be ACID-compliant?
It's not possible for the same message to be processed twice in Redis MQ: the message worker pops the message off the Redis-List-backed MQ, and since all Redis operations are atomic, no other message worker can access a message that has already been removed from the List.
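
A small illustration of that atomicity (not Redis MQ's actual internals; assumes ServiceStack.Redis, a local Redis, and an illustrative list name): two connections race to RPOP the same list entry, and only one of them can ever get it.

using System;
using ServiceStack.Redis;

class AtomicPopDemo
{
    static void Main()
    {
        var clientsManager = new PooledRedisClientManager("localhost:6379");

        using (var redis = clientsManager.GetClient())
            redis.AddItemToList("mq:demo", "request-1");    // enqueue one request

        using (var worker1 = clientsManager.GetClient())
        using (var worker2 = clientsManager.GetClient())
        {
            var a = worker1.RemoveEndFromList("mq:demo");   // RPOP: returns "request-1"
            var b = worker2.RemoveEndFromList("mq:demo");   // RPOP on the now-empty list: returns null

            Console.WriteLine($"worker1 got: {a ?? "(nothing)"}");
            Console.WriteLine($"worker2 got: {b ?? "(nothing)"}");
        }
    }
}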
ServiceStack.Redis (which Redis MQ uses) only supports Redis Sentinel for HA. Although Redis supports multiple replicas, they only contain a read-only view of the master dataset, so all write operations, such as List add/remove operations, can only happen on the single master instance.
One notable difference between Redis MQ and a purpose-built MQ like RabbitMQ is that Redis doesn't support ACKs: if the message-worker process that pops a message off the MQ crashes, its message is lost. With RabbitMQ, if the stateful connection holding an un-ack'd message dies, the RabbitMQ server restores the message back to the queue.
I am working on Redis SMQ persistence. My question is: while the publisher is publishing messages, the consumer stops suddenly. When the consumer connects again, is it possible to resume consuming messages from where it stopped?
No - Redis' Pub/Sub has no persistence, and once a message has been published, it is sent only to the connected subscribed clients. Afterwards, the message is gone forever.
With standard Pub/Sub you can use Lua scripts to persist your messages: check whether the channel has a listener, and if not, store the message in Redis under the channel key. When the subscriber comes back, it checks whether anything is waiting for it under that channel key. A second option is to use Redis Streams. Check this gist.
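
As an illustration of the Streams option, here is a hedged sketch using the StackExchange.Redis client (chosen here because it has first-class Stream APIs; it is not the ServiceStack client used elsewhere in this thread, and the stream/field names are illustrative): the producer appends with XADD, and the consumer reads everything after the last ID it processed, so it can resume after a crash.

using System;
using StackExchange.Redis;

class StreamResumeDemo
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        var db = redis.GetDatabase();

        // Producer: append messages; Redis assigns monotonically increasing IDs
        db.StreamAdd("orders", "payload", "order-1001");
        db.StreamAdd("orders", "payload", "order-1002");

        // Consumer: read everything after the last ID it processed.
        // "0-0" means "from the beginning"; a real consumer would persist the
        // last-seen ID somewhere durable and pass it here after a restart.
        RedisValue lastSeenId = "0-0";
        var entries = db.StreamRead("orders", lastSeenId);
        foreach (var entry in entries)
        {
            foreach (var field in entry.Values)
                Console.WriteLine($"{entry.Id} {field.Name} = {field.Value}");
            lastSeenId = entry.Id;   // remember progress so a restart can resume from here
        }
    }
}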
Please use two Redis connections: one for Pub/Sub and a second for LPOP/RPOP.
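
A sketch of that two-connection pattern (assuming ServiceStack.Redis and illustrative key/channel names): the payload is pushed onto a list so it survives even if no subscriber is listening, and Pub/Sub is only used as a wake-up signal. A consumer that reconnects first drains the list, then subscribes.

using System;
using ServiceStack.Redis;

class ListPlusPubSub
{
    static void Main()
    {
        var clientsManager = new PooledRedisClientManager("localhost:6379");

        // Publisher: persist the message first, then notify any live subscribers
        using (var publisher = clientsManager.GetClient())
        {
            publisher.AddItemToList("queue:orders", "order-1001");
            publisher.PublishMessage("channel:orders", "new-item");
        }

        // Consumer: drain whatever accumulated while it was offline...
        using (var consumer = clientsManager.GetClient())
        {
            string item;
            while ((item = consumer.RemoveStartFromList("queue:orders")) != null)
                Console.WriteLine($"processed {item}");
        }
        // ...then open a second connection with CreateSubscription() and, on each
        // published notification, pop the list again (as in the earlier subscriber sketch)
    }
}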
I am using ServiceStack 5.0.2 and Redis 3.2.100 on Windows.
I have several nodes with active Pub/Sub subscriptions and a few publishes per second.
I noticed that if the Redis service restarts while there is no physical network connection (so one of the clients cannot reach the Redis service), that client stops receiving any messages even after the network recovers. Let's call it a "zombie subscriber": it thinks it is still operational but never actually receives a message; the client believes it has a connection, while the same connection on the server is closed.
The problem is that no exception is thrown in RedisSubscription.SubscribeToChannels, so I am not able to detect the issue in order to resubscribe.
I have also analyzed RedisPubSubServer and I think I have discovered the problem: in the described case RedisPubSubServer tries to restart (sending the stop command CTRL), but the "zombie subscriber" never receives it, so no resubscription is made.
Can I change the node name for a specific queue from the RabbitMQ Management Console? I tried, but I think the queue was created on that node when I started my app. Can I change it afterwards? My queue is on node RabbitMQ1 and my connection is on node RabbitMQ2, so I cannot read messages from that queue. Maybe I can change my connection's node instead?
The node name is not just a label; it's where the queue is physically located. In fact, by default queues are not distributed/mirrored but are created on the server the application connected to, as you correctly guessed.
However, you can make your queue mirrored using policies, so you can consume messages from both servers.
https://www.rabbitmq.com/ha.html
You can change the policy for the queues using the rabbitmqctl command or from the management console, under Admin -> Policies.
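For example (the policy name and queue pattern here are illustrative; adjust them to match your own queues), a policy that mirrors matching queues across all nodes and synchronizes them automatically can be created with:
rabbitmqctl set_policy ha-all "^orders\." '{"ha-mode":"all","ha-sync-mode":"automatic"}'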
You need to synchronize the queue in order to clone the old messages to the mirrored queue:
rabbitmqctl sync_queue <queue_name>
Newly published messages will end up in both copies of the queue and can be consumed from either (the same message won't be consumed from both).
According to the Git commit messages, ServiceStack has recently added failover support. I initially assumed this meant that I could pull down one of my Redis instances and the pooled client manager would handle the failover gracefully, trying to connect to one of my alternate Redis instances. Unfortunately, my code just bugs out and says that it can't connect to the initial Redis instance.
I am currently running instances of Redis 2.6.12 on Windows, with the master at port 6379 and a slave at 6380, and sentinels set up to automatically promote the slave to master if the master goes down. I am instantiating my client manager like this:
PooledRedisClientManager pooledClientManager =
new PooledRedisClientManager(new string[1] { "localhost:6379"},
new string[1] {"localhost:6380"});
where the first array is read-write hosts (for the master), and the second array is read-only hosts (for the slave).
When I terminate the master at port 6379, the sentinels promote the slave to a master. Now, when I try to run my C# code, instead of failing over to port 6380, it simply breaks and returns the error "could not connect to redis Instance at localhost:6379".
Is there a way around this, or will failover simply not work the way I want it to?
PooledRedisClientManager.FailoverTo allows you to reset which hosts are read/write vs. read-only and restarts the factory. This allows for a quick transition without needing to recreate clients.
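
A hedged sketch of using it (assuming the two-argument FailoverTo overload, and that you detect the failover yourself, e.g. via a Sentinel notification or a failed health check):

using ServiceStack.Redis;

var pooledClientManager = new PooledRedisClientManager(
    new[] { "localhost:6379" },    // read-write hosts (original master)
    new[] { "localhost:6380" });   // read-only hosts (slave)

// ... later, after Sentinel promotes localhost:6380 to master ...
pooledClientManager.FailoverTo(
    new[] { "localhost:6380" },    // new read-write hosts
    new[] { "localhost:6380" });   // new read-only hosts

Clients taken from the pool after the call connect to the new hosts, so the manager itself doesn't need to be recreated.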