Redis cluster error "ClusterDown Hash slot not served"

After running Redis 3.2 on Windows for over 4 years without issue, I am now getting the intermittent error "ClusterDown Hash slot not served" when connecting from my client application. The error occurs on roughly 50% of the calls to Redis.
The client is written in C# and uses StackExchange.Redis.
We have 2 Redis servers set up with masters on ports 7000, 7001, and 7002, and replicas on ports 7003, 7004, and 7005.
Viewing the cluster in RedisInsight, we sometimes see the error below:
"The seed nodes have different cluster configuration. This means that your application may be reading from two different clusters."
We have inspected all the configurations and tried failovers, with no success in fixing the issue.
Any ideas on what to look at would be much appreciated.
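The RedisInsight warning suggests the seed nodes disagree about cluster membership, which would also explain intermittent CLUSTERDOWN errors (requests landing on a node that does not serve the hashed slot). Below is a minimal diagnostic sketch using StackExchange.Redis, with hypothetical host names, that asks each seed node for its own view of the cluster so the outputs can be compared:

```csharp
using System;
using StackExchange.Redis;

class ClusterCheck
{
    static void Main()
    {
        // Hypothetical host names; substitute your two servers.
        var seeds = new[]
        {
            "redis-a:7000", "redis-a:7001", "redis-a:7002",
            "redis-b:7003", "redis-b:7004", "redis-b:7005"
        };

        foreach (var seed in seeds)
        {
            // Connect to each node individually so we see *its* view of the
            // cluster rather than the multiplexer's merged view.
            using var muxer = ConnectionMultiplexer.Connect(seed + ",allowAdmin=true");
            var server = muxer.GetServer(seed);
            Console.WriteLine($"--- {seed} ---");
            // Every node should print the same node list and slot ranges;
            // any disagreement means the seeds belong to different clusters.
            Console.WriteLine(server.Execute("CLUSTER", "NODES"));
        }
    }
}
```

If the node lists differ between seeds, the two servers are effectively running separate clusters and the stray nodes need to be re-joined; slot ranges missing from every node's output would likewise produce "Hash slot not served".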

Related

Is StackExchange.Redis a "smart client" when using Redis Cluster?

A smart client for Redis cluster will "take persistent connections to many nodes, will cache hashslot -> node info, and will update the table when they receive a -MOVED error".
I checked numerous documents but can't find a definitive answer on whether StackExchange.Redis is a smart client. Can anyone advise? Thanks.
I am using the StackExchange.Redis client in my web application to connect to a Redis cluster with 6 Redis server instances. The StackExchange.Redis client works perfectly with Redis Cluster, and we did not get -MOVED errors.
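That behavior matches the definition quoted above: StackExchange.Redis keeps connections to the cluster nodes, maintains the slot-to-node map internally, and refreshes it when a -MOVED redirect arrives, so application code never sees the redirect. A minimal sketch (endpoints hypothetical):

```csharp
using System;
using StackExchange.Redis;

class SmartClientDemo
{
    static void Main()
    {
        // Listing a few seed nodes is enough; the multiplexer discovers
        // the rest of the cluster topology on connect.
        var muxer = ConnectionMultiplexer.Connect("redis-a:7000,redis-b:7003");

        // One IDatabase handle spans the whole cluster: each key is hashed
        // to a slot and the command is routed to the node owning that slot.
        var db = muxer.GetDatabase();
        db.StringSet("user:42", "hello");
        Console.WriteLine(db.StringGet("user:42"));
    }
}
```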

One Mule app server in a cluster polling most of the messages from MQ

My Mule application consists of 2 nodes running in a cluster, and it listens to an IBM MQ cluster (basically connecting to 2 queue managers). In some situations one Mule node pulls more than 80% of the messages from the MQ cluster while the other node picks up the remaining 20%. This is causing CPU performance issues.
We have double-checked that all load balancing is configured properly, and we only occasionally hit the CPU performance problem. Can anybody give some ideas on what the possible reason could be?
Example: in the last occurrence there were 200,000 messages in the queue, and the node 2 Mule server picked up 92% of them within a few minutes.
This issue has been fixed now. The root cause: our Mule application running on MULE_NODE01 reads from and writes to WMQ_NODE01, and likewise for node 2. One of the Mule nodes (let's say MULE_NODE02) reads from the Linux/Windows file system and puts huge messages onto its corresponding WMQ_NODE02. IBM MQ then tries to push the excess load to the other WMQ node to balance the workload. That's why MULE_NODE01 ends up reading all those large messages from WMQ_NODE01, causing the CPU usage alerts.
@JoshMc, your clue helped a lot in understanding the issue; thanks a lot for helping.
It is the WMQ node in a cluster that tries to push maximum load to the other WMQ node; it seems this is how MQ works internally.
To solve this, we now connect our Mule nodes to an MQ gateway rather than using 1-to-1 connectivity.
This could be solved by avoiding the race condition caused by multiple listeners: configure the listener in the cluster to run on the primary node only, republish each message to a persistent VM queue, and move the processing logic to another flow triggered by a VM listener, letting the Mule cluster do the load balancing.

Redis Sentinel with 2 App Servers and 1 Additional Sentinel Node Setup

We have 2 app/web servers running an HA application, and we need to set up Redis with high availability/replication to support it.
The minimum Sentinel setup requires 3 nodes.
We plan to put the Redis master and one Sentinel on the first app server, the Redis replica and a second Sentinel on the second app server, and add one additional server to host the third Sentinel, achieving a Sentinel setup with a quorum of 2.
Is this a valid setup? What could be the risks?
Thanks.
It looks like it is not recommended to put the Redis nodes on the app servers (whereas putting the Sentinel nodes there is recommended).
We ended up with a setup based on KeyDB (a fork of Redis), which claims to be faster and to support high availability/replication (and much more), running two nodes on the app servers.
Of course, we had to make small changes on the client side to support some advanced Lua scripts (some binary-serialized data was not getting replicated to the other node).
But after some effort, it worked as expected.
Hope this helps.
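For anyone who stays on Redis Sentinel rather than KeyDB: with the three-Sentinel layout from the question, a StackExchange.Redis (2.1+) client can connect through the Sentinels and follow failovers. A sketch with hypothetical host names:

```csharp
using System;
using StackExchange.Redis;

class SentinelDemo
{
    static void Main()
    {
        // One Sentinel per app server plus the extra quorum host; with
        // serviceName set, the client asks the Sentinels for the current
        // master and reconnects automatically after a failover.
        var muxer = ConnectionMultiplexer.Connect(
            "app1:26379,app2:26379,quorum1:26379,serviceName=mymaster");

        var db = muxer.GetDatabase();
        db.StringSet("health-check", DateTime.UtcNow.ToString("O"));
        Console.WriteLine(db.StringGet("health-check"));
    }
}
```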

Redis cluster node failure not detected on MISCONF

We currently have a Redis cache cluster with 3 masters and 3 slaves hosted on 3 Windows servers (1 master/slave pair per server). We are using StackExchange.Redis as our client.
We have RDB disabled but AOF enabled, and we are experiencing some problems with the cluster in the following situation:
One of our servers ran out of disk space, and the Redis node on that server was unable to write to the AOF file (the error returned to the client was "MISCONF Errors writing to the AOF file: No space left on device").
The cluster did not detect that the node was failing and so did not exclude it from the cluster.
All cache operations were blocked until we freed up some space on the server.
We know that we don't need the AOF, so we disabled it after the incident.
But we would like to confirm or refute our view of Redis clustering: our expectation is that if a node experiences a failure, the cluster redirects all requests to another node. We have tested that when a master node is stopped a slave is promoted to master, so we are confident that our cluster is working; but we are not sure why, in our case, the node was not marked as failing.
Is the cluster capable of detecting a node failure when the failure only manifests when a client makes a request to the cluster?
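One client-side mitigation, sketched below under the assumption that the failing node stays in the slot map, is to catch the MISCONF error explicitly and turn it into an operational alert rather than a blocked cache path (host names hypothetical):

```csharp
using System;
using StackExchange.Redis;

class MisconfGuard
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect("redis-a:7000,redis-b:7003");
        var db = muxer.GetDatabase();
        try
        {
            db.StringSet("order:1001", "pending");
        }
        catch (RedisServerException ex) when (ex.Message.StartsWith("MISCONF"))
        {
            // The node still answers the cluster's node-to-node health checks,
            // so no failover is triggered; the error only surfaces on writes.
            // Raise an alert here instead of letting cache traffic stall.
            Console.Error.WriteLine($"AOF write failure on shard: {ex.Message}");
        }
    }
}
```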

Apache Ignite Force Server Mode

We are trying to prevent our application startups from just spinning if we cannot reach the remote cluster. From what I've read, force server mode states:
"In this case, discovery will happen as if all the nodes in topology were server nodes."
What I want to know is:
Does this client then permanently act as a server, which would run computes and store caching data?
If the connection to the cluster does not happen at first, can a later connection to an established cluster cause issues with consistency? What would be the expected behavior with a topology version mismatch? Is there potential for a split-brain scenario?
No, it's still a client node, but it behaves as a server at the discovery protocol level. For example, it can start without any server nodes running.
A client node can never cause data inconsistency, as it never stores data. This does not depend on the forceServerMode flag.
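A sketch of that setup with the Ignite.NET API: ClientMode, the discovery SPI, and the static IP finder are standard IgniteConfiguration members, while the ForceServerMode property is an assumption that Ignite.NET mirrors the Java TcpDiscoverySpi.setForceServerMode option, so verify it against your Ignite version:

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Discovery.Tcp;
using Apache.Ignite.Core.Discovery.Tcp.Static;

class IgniteClientStartup
{
    static void Main()
    {
        var cfg = new IgniteConfiguration
        {
            ClientMode = true, // still a client: no data storage, no computes
            DiscoverySpi = new TcpDiscoverySpi
            {
                // Assumption: mirrors Java's TcpDiscoverySpi.setForceServerMode,
                // letting the client join discovery as if it were a server.
                ForceServerMode = true,
                IpFinder = new TcpDiscoveryStaticIpFinder
                {
                    // Hypothetical remote cluster address and port range.
                    Endpoints = new[] { "remote-cluster-host:47500..47509" }
                }
            }
        };

        // With force server mode the client participates in discovery like a
        // server node, so startup does not block waiting for servers to appear.
        using var ignite = Ignition.Start(cfg);
    }
}
```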