Redis slave node continually requires auth

I have deployed a Redis service in master-slave mode with 3 nodes on Kubernetes, and I am using Sentinel to keep it highly available. Every node has requirepass and masterauth set.
When I connect to any slave node and execute the AUTH command, then do nothing for a few seconds (about 5-15 seconds), Redis requires auth again.
As far as I know, Redis has no setting that expires authentication, so I am curious whether this is a Redis mechanism I don't know about, or whether there is a problem with my Redis service.

I guess that your Redis server has its timeout config set to N seconds.
The timeout option controls whether to close the connection after a client is idle for N seconds (0 to disable) (quoted from redis.conf).
You connect to Redis with redis-cli and send the AUTH command. If you then send no other command for N seconds, Redis closes your connection. When you type your next command, redis-cli transparently creates a new connection and sends the command over it. Since no AUTH command has been sent on this new connection, the command fails and Redis asks you for authentication again.
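If this is the cause, you can confirm and change it on a live server; a minimal check with redis-cli (host and password are placeholders):

redis-cli -h <slave-host> -a <password> CONFIG GET timeout
redis-cli -h <slave-host> -a <password> CONFIG SET timeout 0

Setting timeout to 0 disables the idle disconnect; to make it permanent across restarts, set timeout 0 in redis.conf on each node.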

Related

Moleculer JS "Redis-pub client is disconnected" every 10 minutes

My application (Node.js) uses moleculer for microservices and Redis as the transporter. However, I find that the application logs Redis-pub client is disconnected every 10 minutes, then reconnects a few seconds later with the log Redis-pub client is connected. This is a problem because if a client sends a moleculer action during that window, it fails.
Any idea what is causing this? Let me know if more information is needed.
Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that allows them to send Redis PING commands to the server automatically and periodically. However, when using client libraries without this type of setting, applications themselves are responsible for keeping the connection alive.
More info: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout
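If your library has no such setting, you can keep the connection warm yourself. A minimal sketch of the idea, assuming ioredis (which moleculer's Redis transporter builds on); the host, key, and 4-minute interval are placeholders, and moleculer's own internal connections would need the same treatment:

import Redis from "ioredis";

// Long-lived connection that must not sit idle past Azure's 10-minute limit.
const redis = new Redis({
  host: "mycache.redis.cache.windows.net", // placeholder
  port: 6380,
  password: "<access-key>",
  tls: {},
});

// PING well inside the 10-minute window so the connection never looks idle.
setInterval(() => {
  redis.ping().catch((err) => console.error("keep-alive ping failed:", err));
}, 4 * 60 * 1000);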

Redis node restarts when active connections reach limit

I have multiple Redis clusters running in cluster mode. Each node has the default limit of 10k client connections. One behavior I have observed is that when the number of client connections reaches this limit, the node restarts.
My expectation is that new client connections should be refused when this limit is reached, rather than the node restarting.
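For reference, the limit involved is the maxclients setting, and Redis's documented behavior at that limit is to refuse the new connection with an error rather than to restart (host and password are placeholders):

redis-cli -h <node-host> -a <password> CONFIG GET maxclients

A client connecting beyond the limit should receive an error like "ERR max number of clients reached", so a restart points at something else (e.g. the node being killed for memory or liveness reasons) rather than the connection limit itself.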

Failover and client timeout

I am using ServiceStack 5.0.2 with Redis Sentinel (3 + 3) and am having issues in the case of a failover: commands issued during or after a failover fail with a timeout.
I have come up with the idea of implementing a retry pattern via a custom IRedisClient, but perhaps there is a better strategy to employ in this case.
The answer given in the post How does ServiceStack PooledRedisClientManager failover work? does not seem to be the right way to go.
Redis Clients wrap a TCP connection to a Redis server. A Redis Client that was connected to the instance that failed over will fail, but any new Redis Clients retrieved from the pool after the failover will be connected to the new failed-over instance.

Redis - Tomcat Session Manager: Read from Slave

I am using Redis (version 3.1) as the session store for Tomcat 7. To ensure high availability, there is a Sentinel setup and two Redis server instances (master and slave). The slave is configured as read-only. After running a few tests and verifying the statistics, I observed that no read requests are sent to the slave; all read requests are processed by the master alone.
Could you please let me know how I can make the slave serve read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It allows you to control which type of node is used for read operations (master, slave, or both master and slave), and it works perfectly in Sentinel/Cluster modes.
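As a sketch, the Manager is declared in Tomcat's context.xml and the read node is selected via readMode in the Redisson config it points to (the addresses, master name, and paths are placeholders; SLAVE routes reads to slaves, MASTER_SLAVE to both):

<Manager className="org.redisson.tomcat.RedissonSessionManager"
         configPath="${catalina.base}/conf/redisson.yaml"/>

And in redisson.yaml:

sentinelServersConfig:
  sentinelAddresses:
    - "redis://127.0.0.1:26379"
  masterName: "mymaster"
  readMode: "SLAVE"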

ActiveMQ failover protocol not reconnecting to master after restarting

I am using ActiveMQ version 5.4 and I have a pure master-slave configuration. My slave is configured so that it starts its network transport connectors in the event of a failure. My clients are configured using the failover protocol, just like the docs say:
failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false
When my master dies, the clients fail over to the slave perfectly. The problem is that after I recover (i.e. stop the slave, copy over the data, restart the master, then restart the slave), the clients are still trying to connect to the slave (which does not have any open network connectors at that point). Thus, the clients never reconnect to the master after restarting it. Is this how it's supposed to work?
I've seen this as well. If you're using the PooledConnectionFactory, set an expiry timeout on the pooled connections via setExpiryTimeout. The API documentation here suggests that this will force reconnection to the master broker:
allow connections to expire, irrespective of load or idle time. This is useful with failover to force a reconnect from the pool, to reestablish load balancing or use of the master post recovery
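For instance, if the pool is wired up in Spring, the timeout can be set as a bean property; a sketch (bean names and the 30-second value are illustrative):

<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false"/>
</bean>

<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
  <property name="connectionFactory" ref="jmsConnectionFactory"/>
  <!-- expire pooled connections so clients drift back to the master after recovery -->
  <property name="expiryTimeout" value="30000"/>
</bean>

With this in place, connections older than 30 seconds are retired by the pool, and their replacements go through the failover transport again, which can then reach the recovered master.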