One physical server, five Redis instances, and a requirement to cluster

We have one physical server to install Redis on. The requirement is that Redis serves five different applications, each accessible through its own port opened on the firewall.
My understanding is that if we create five individual Redis configs and scripts such as:
Application 1 = redis.port.6301.config (and associated Redis script in /etc/init.d)
Application 2 = redis.port.6302.config (and associated Redis script in /etc/init.d)
Application 3 = redis.port.6303.config (and associated Redis script in /etc/init.d)
Application 4 = redis.port.6304.config (and associated Redis script in /etc/init.d)
Application 5 = redis.port.6305.config (and associated Redis script in /etc/init.d)
this will allow five separate instances of Redis to be started. The next part of the requirement is that Redis be clustered, which led me to the Redis Cluster specification and the Redis Cluster tutorial.
When the script to create a Redis cluster is run, it creates a minimum of six Redis nodes, three masters and three slaves, and each of these requires a port in its own Redis config. So for redis.port.6301.config, there would be an additional five ports (e.g. 6311, 6321, 6331, 6341, 6351) listening for the other nodes in that cluster.
Following that logic, 30 ports would be listening on the server.
Is my understanding correct?
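For reference, here is a minimal sketch of what one of those per-instance, cluster-enabled config files could look like, followed by the cluster-creation command as it exists in Redis 5+ (older releases used redis-trib.rb instead). The file paths and loopback addresses are assumptions for illustration; the ports are the ones listed above for application 1.

    # /etc/redis/redis.port.6301.config - one such file per instance
    port 6301                               # client port for this instance
    cluster-enabled yes                     # run this instance in cluster mode
    cluster-config-file nodes-6301.conf     # maintained by Redis, must be unique per instance
    cluster-node-timeout 5000
    daemonize yes
    pidfile /var/run/redis_6301.pid
    logfile /var/log/redis/redis_6301.log
    # Each cluster-enabled node also listens on a cluster bus port,
    # which defaults to the client port + 10000 (16301 here).

    # Assemble six such instances into one cluster (three masters, three slaves):
    redis-cli --cluster create 127.0.0.1:6301 127.0.0.1:6311 127.0.0.1:6321 \
        127.0.0.1:6331 127.0.0.1:6341 127.0.0.1:6351 --cluster-replicas 1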

Related

Deploying a six-node Redis cluster across three machines

Three servers are used to deploy a Redis cluster with three masters and three slaves in a cross-deployment layout: each server hosts one master plus a slave of a master running on a different server. The cluster works fine in normal use and the masters and slaves stay crossed. However, if a server is shut down and then reconnected, there may end up being two masters on one server, or two slaves on one server. Is there any way to avoid this?
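For context, this is roughly how such a cross-deployment is created and inspected (IPs and ports below are placeholders). The create command with --cluster-replicas 1 tries to place each replica on a different host than its master, and CLUSTER NODES shows the placement that results after failovers.

    # Initial cross-deployment: one master and one foreign replica per server
    redis-cli --cluster create 10.0.0.1:7000 10.0.0.2:7000 10.0.0.3:7000 \
        10.0.0.1:7001 10.0.0.2:7001 10.0.0.3:7001 --cluster-replicas 1

    # After a server is shut down and rejoins, check who is master of what:
    redis-cli -h 10.0.0.1 -p 7000 cluster nodes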

Average_ttl is 0 on one of the Redis cluster nodes

I have a Drupal cluster of 3 servers that use HAProxy (TCP) to handle communication with a Redis cluster of 3 nodes (used for caching), on which the Sentinel service is active as well.
The Redis cluster has 1 main (master) node and 2 secondary (slave) nodes in replication mode.
I recently noticed that the avg_ttl is zero on one of the secondary (slave) nodes.
It's weird; the data is synced between these nodes, so they should have the same keys.
I checked and they have almost the same configuration in the redis.conf file.
Any idea what this could mean?
Thanks!
[Screenshots: avg_ttl (INFO Keyspace) and Replication Info output]
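For reference, avg_ttl is reported per database in the Keyspace section of INFO, so a quick way to compare the nodes is something like the following (hostnames are placeholders):

    # Run against the master and each replica and compare the avg_ttl field
    redis-cli -h redis-node1 -p 6379 info keyspace
    # example output: db0:keys=12345,expires=6789,avg_ttl=3600000

    # Confirm which role each node currently has
    redis-cli -h redis-node1 -p 6379 info replication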

Is a load balancer required in front of a Redis cluster?

I am using Redis Cluster on 3 Linux servers (CentOS 7). I have the standard configuration, i.e. 6 nodes: 3 master instances and 3 slave instances (each master has one slave), distributed across these 3 Linux servers. I am using this setup for my web application for data caching and HTTP response caching. My aim is to read from the primary and write to the secondary, i.e. read operations should not fail or be delayed.
Now I would like to ask: is it necessary to configure a load balancer in front of my 3 Linux servers so that my web application's requests to the Redis cluster instances are distributed properly across these Redis servers? Or is the Redis cluster itself able to handle the load distribution?
If yes, then please mention a reference link for configuring this. I have checked the official Redis Cluster documentation, but it does not specify anything regarding a load balancer setup.
If you're running Redis in "Cluster Mode" you don't need a load balancer. Your Redis client (assuming it's any good) should contact Redis for a list of which slots are on which nodes when your application starts up. It will hash keys locally (in your application) and send requests directly to the node that owns the slot for that key (which avoids the extra call to Redis that results in a MOVED response).
You should be able to configure your client to do reads on slaves and writes on masters, or to do both reads and writes only on masters. In addition to configuring your client, if you want to do reads on slaves, check out the READONLY command: https://redis.io/commands/readonly .
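As a concrete sketch of the above, using redis-py's cluster client (the node address is a placeholder, and read_from_replicas is that client's name for the replica-read option; check your own client's documentation for its equivalent):

    from redis.cluster import RedisCluster

    # Connecting to any one node is enough: the client discovers the rest of
    # the cluster and the slot-to-node mapping at startup.
    rc = RedisCluster(host="10.0.0.1", port=7000, read_from_replicas=True)

    # Writes are hashed locally and sent straight to the master that owns the slot.
    rc.set("response-cache:/home", "<html>...</html>")

    # With read_from_replicas=True the client issues READONLY on replica
    # connections and may serve reads from them.
    cached = rc.get("response-cache:/home")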

Redis Sentinel with 2 App Servers and 1 Additional Sentinel Node Setup

We have 2 app/web servers running an HA application, and we need to set up Redis with high availability/replication to support our app.
Considering the minimum Sentinel setup requirement of 3 nodes, we are planning to run the Redis master and 1 Sentinel on the first app server, and the Redis slave and 1 Sentinel on the second app server. We plan to add one additional server to hold the third Sentinel node, achieving a quorum-2 Sentinel setup.
Is this a valid setup? What could be the risks?
Thanks.
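For what it's worth, a minimal sentinel.conf sketch for the quorum-2 layout described above (the master IP, the name mymaster, and the timeouts are placeholders):

    # Same configuration on all three Sentinel nodes (two app servers + the extra box)
    port 26379
    sentinel monitor mymaster 10.0.0.1 6379 2       # the trailing 2 is the quorum
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1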
Well, it looks like it's not recommended to put the Redis nodes on the app servers (whereas it is recommended to put the Sentinel nodes there).
We ended up with a setup using KeyDB (a fork of Redis), which claims to be faster and to support high availability/replication (and much more), creating two nodes within the app servers.
Of course, we had to modify the client side a little to support some advanced Lua scripts (there was some binary-serialized data not getting replicated to the other node).
But after some effort, it worked as expected.
Hope this helps ...

Task queue on Redis Cluster

I have a setup with a single Redis node, and I use a list to push and consume tasks through various clients connected to this single Redis node.
I want to move this setup to Redis Cluster eventually. Should the task queue/list from the above setup be split across all the nodes of the Redis cluster, or should it fit in only one of the hash slots, in other words on one of the Redis nodes in the cluster?
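For context, a sketch of the same push/consume pattern against a cluster, using redis-py's cluster client (the queue name and address are placeholders). Because a list is a single key, it hashes to a single slot and therefore lives on exactly one master of the cluster; spreading work over several nodes would require several list keys.

    from redis.cluster import RedisCluster

    rc = RedisCluster(host="10.0.0.1", port=7000)

    # Producer: the key "tasks" maps to one hash slot, so the whole list
    # sits on whichever master owns that slot.
    rc.lpush("tasks", "job-payload-1")

    # Consumer: blocking pop from the same single key (and therefore the same node).
    task = rc.brpop("tasks", timeout=5)    # returns (key, value) or None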