I have a use case where a Redis cluster needs to be deployed on cloud compute servers. The cluster must be accessible both from the local network and from the internet.
When creating the cluster, I provided the internal IPs of the nodes. When I connect to it using Jedis, it returns those internal IPs no matter where I access it from, even if I declare all the public IPs at initialisation.
Is there a way to make a Redis cluster handle both internal and public IPs?
I have already tried CLUSTER MEET <IP> <PORT>. It simply replaces the private IPs with the public ones and vice versa. I need both to work simultaneously.
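For reference, a minimal sketch (placeholder addresses, using the Jedis API) of the kind of initialisation described above:

    import java.util.HashSet;
    import java.util.Set;

    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.JedisCluster;

    public class ClusterClient {
        public static void main(String[] args) {
            // Seed nodes passed at initialisation (placeholder public IPs).
            Set<HostAndPort> seeds = new HashSet<>();
            seeds.add(new HostAndPort("203.0.113.10", 7000));
            seeds.add(new HostAndPort("203.0.113.11", 7000));

            // Jedis only uses the seeds for first contact; it then rebuilds
            // its routing table from the addresses the cluster nodes announce
            // about themselves (here, the internal IPs).
            try (JedisCluster cluster = new JedisCluster(seeds)) {
                cluster.set("greeting", "hello");
                System.out.println(cluster.get("greeting"));
            }
        }
    }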
I am using Apache Ignite with the default configuration. I have two development servers, A and B, each running the same code. I have 3 Ignite nodes started on each server: 3 on A and 3 on B.
I have created an Ignite cache named "ignite-bridg". I expected that the nodes on each server would create the cache and partition the data among themselves, and since the two servers are isolated, nothing would cross between them.
However, I see that both servers form a single cluster and all 6 nodes join it. This is highly problematic for me. I think it is happening because both servers are accidentally in the same multicast group.
How can I resolve this problem? I need to rectify it quickly.
By default, Ignite uses the multicast IP finder (TcpDiscoveryMulticastIpFinder) for node discovery; in your case you should use the static IP finder (TcpDiscoveryVmIpFinder) instead. With it, you can specify a different list of IP addresses for each server and form two clusters instead of one.
Here is more information regarding Static IP Finder configuration:
https://www.gridgain.com/docs/latest/developers-guide/clustering/tcp-ip-discovery#static-ip-finder
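A minimal sketch of that configuration in Java (the address range is a placeholder; each server would list only its own nodes):

    import java.util.Arrays;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class ServerANode {
        public static void main(String[] args) {
            // Static IP finder: discovery only tries the addresses listed
            // here, so server A's nodes never find server B's nodes.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

            TcpDiscoverySpi discovery = new TcpDiscoverySpi();
            discovery.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discovery);

            Ignite ignite = Ignition.start(cfg);
        }
    }

Server B would use the same code with its own address list, so the two groups of nodes discover only each other.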
Why do Redis clients use multiple addresses to create a connection in cluster mode? Is this to switch between addresses when one of them has failed?
Thanks.
Redis clients take multiple addresses so the application can be set up with all the master and slave nodes available in the Redis cluster. The client never switches addresses on its own; it is the cluster's responsibility to promote a slave to master if a master fails. After that, subsequent requests can be served directly by that node.
More details here: https://redis.io/topics/cluster-tutorial
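You can see the node table a client builds its routing map from with CLUSTER NODES; a minimal sketch using plain Jedis (placeholder address):

    import redis.clients.jedis.Jedis;

    public class TopologyDump {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("127.0.0.1", 7000)) {
                // One line per node: id, address, role (master/slave),
                // and the hash slots it serves.
                System.out.println(jedis.clusterNodes());
            }
        }
    }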
I am trying to set up a Redis cluster on Kubernetes. One of my requirements is that the cluster should be resilient to a Kubernetes cluster restart (due to issues like power failure).
I have tried both a Kubernetes StatefulSet and a Deployment.
With a StatefulSet, a new set of IP addresses is assigned to the Pods on reboot, and since Redis cluster works on IP addresses, the nodes are not able to reconnect to each other and form the cluster again.
With Services exposing a static IP over each individual Redis Deployment, Redis still stores the Pod IPs, even though I created the cluster using the static Service IPs, so on reboot it is again unable to reconnect to the other Redis instances and form the cluster.
My redis-cluster StatefulSet config
My redis-cluster Deployment config
Redis 4.0.0 solved this problem by adding support for announcing a node's IP and port to the cluster.
Set cluster-announce-ip to the static IP of the Service in front of each Redis instance's Deployment.
Link to setup instructions: https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md
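A sketch of the relevant redis.conf directives, assuming a Service static IP of 10.0.0.5 (the cluster bus port defaults to the client port + 10000):

    cluster-enabled yes
    cluster-announce-ip 10.0.0.5
    cluster-announce-port 6379
    cluster-announce-bus-port 16379

With these set, the node gossips the announced address instead of its Pod IP, so the cluster can re-form after a restart.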
Are you able to use DNS names instead of IP addresses? I think that is the preferred way to route your traffic to individual nodes in a StatefulSet:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
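For example, a headless Service (a sketch; names are placeholders) gives each StatefulSet Pod a stable DNS name such as redis-cluster-0.redis-cluster.default.svc.cluster.local:

    apiVersion: v1
    kind: Service
    metadata:
      name: redis-cluster
    spec:
      clusterIP: None          # headless: per-Pod DNS instead of a single VIP
      selector:
        app: redis-cluster
      ports:
        - name: client
          port: 6379
        - name: gossip
          port: 16379

The StatefulSet must reference it via serviceName: redis-cluster. Whether Redis itself will announce those names is a separate question; the snippet only covers the Kubernetes side.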
I have a Redis Cluster that clients connect to via HAProxy with a virtual IP. The cluster has three nodes (each node shares its server with a running Sentinel instance).
My question is: when a client gets a MOVED error/message from a cluster node after sending a request, does it bypass HAProxy on the second connection, since the MOVED message provides an IP:port? If not, how does HAProxy know to send the second request to the correct node?
I just need to understand how this works under the hood.
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up an HAProxy instance for each master/slave pair, wire up something to update HAProxy when a failover happens, and probably also intercept the topology-related commands so the virtual IPs are inserted in place of the IPs the nodes report about themselves in the topology responses.
Customize HAProxy to teach it how to be the cluster-aware Redis client, so the actual client doesn't know about the cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster, the client must be able to access every node in the cluster. Of the two options above, option 2 is the "easier" one, but at this point I wouldn't recommend either.
Conceivably you could use the VIP as a "first place to get the topology info" address, but I suspect you would develop serious issues, since that original IP would not be one of the addresses properly reported as a node handling data. You could instead use round-robin DNS to avoid that problem, or pass the built-in list of cluster IPs (or names?) in the initial connection configuration.
Your simplest, and least likely to be problematic, route is to go "full native": give your clients full, direct access to every node in the cluster, and don't use HAProxy at all.
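To answer the "under the hood" part: a minimal sketch (placeholder addresses) of a client following a MOVED redirect itself, which is what cluster-aware clients do internally. The redirect target is the node's own announced IP:port, so the retry bypasses whatever endpoint (such as a VIP) served the first attempt:

    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.exceptions.JedisMovedDataException;

    public class MovedDemo {
        public static String get(String key) {
            // First attempt goes to whatever endpoint we know (e.g. the VIP).
            try (Jedis seed = new Jedis("10.0.0.100", 6379)) {
                return seed.get(key);
            } catch (JedisMovedDataException moved) {
                // MOVED carries the slot owner's IP:port; reconnect
                // straight to that node, bypassing the original endpoint.
                HostAndPort owner = moved.getTargetNode();
                try (Jedis direct = new Jedis(owner.getHost(), owner.getPort())) {
                    return direct.get(key);
                }
            }
        }
    }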
Is it possible for a Redis master instance to initiate a connection for replication to a slave?
What I need is for the master to issue a MASTEROF to the slave instead of the slave issuing a SLAVEOF.
Use case: the Redis master is on a private network IP address, and I want to create a replica on a server with an externally/publicly accessible IP address. This would be useful when slaves cannot see the master, which sits on a private 192.168.x.x address.
I'm afraid not. Since Redis allows replication from a single master to many slaves, the command you're looking for wouldn't make a lot of sense.
For your situation, you might consider one of several options:
Change your network setup so that the Redis master is publicly visible.
Add a slave that has a public address but is still within your network (i.e., it can still see the master), then set up your external replica to replicate from that intermediate slave.
Use SSH to create a tunnel from your slave to your master and forward the appropriate port along (i.e., use the -L and -N options). By adding the public key from your slave machine to your master machine's ~/.ssh/authorized_keys file, you won't have to use a password to log in.
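A sketch of that tunnel (placeholder addresses; run on the slave, assuming the master's SSH port is reachable):

    # Forward local port 6380 to the master's Redis port 6379;
    # -N opens the tunnel without running a remote command.
    ssh -N -L 6380:127.0.0.1:6379 user@192.168.1.10 &

    # Point the local Redis slave at the tunnelled endpoint.
    redis-cli SLAVEOF 127.0.0.1 6380

Here 127.0.0.1:6379 is resolved on the master's side of the tunnel, and local port 6380 is used so it does not collide with the slave's own Redis port.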