We have a 4-node Patroni PostgreSQL cluster in production, and we are trying to remove 2 nodes from it.
How can we do this?
Regards.
Just stop Patroni!
https://github.com/zalando/patroni/issues/1377
But we want to remove only one node from the Patroni cluster.
sudo systemctl stop patroni.service
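Concretely, on each node you want to remove, stopping Patroni is enough: once its heartbeat stops, the member's key expires from the DCS and the node drops out of the cluster. A minimal sketch, assuming the systemd unit name used above (disabling the unit keeps the node from rejoining on reboot):

```shell
# On each node being removed:
sudo systemctl stop patroni.service      # member key will expire from the DCS
sudo systemctl disable patroni.service   # don't rejoin automatically on reboot
```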
I've installed Redis 3.2.12 on one CentOS 7 node of a cluster managed by Cloudera Manager 6.3, and my Redis never stops.
Everything is at its defaults; I only added a password, but that has no effect because I can't restart. The daemonize option is set to no.
My installation was:
sudo yum -y install redis
sudo service redis start
When I type redis-cli, the CLI starts normally at 127.0.0.1:6379. When I try shutdown, the console shows 'not connected', but with lsof -i :6379 I can see that some processes die and come back with another PID.
If I try to kill the Redis processes, they always come back with another PID.
service redis stop returns 'Redirecting to /bin/systemctl stop redis.service' but has no effect.
If I try service redis restart then service redis status it returns:
redis.service: main process exited, code=exited, status=1/FAILURE
Unit redis.service entered failed state.
Can someone please help me find a way to debug or understand what is happening? It's my first time with Redis.
Not sure how this is related to Celery...
CentOS 7 uses systemd, so I would recommend you stop using the service tool and start using systemctl. The first thing you should try is systemctl status redis to check the status of the Redis service. If it shows that the service is down for whatever reason, then you should either check the Redis logs or use the journalctl tool to look for system log entries created by Redis.
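Put together, the debugging steps above might look like this (assuming the unit is named redis; on some installs it is redis-server):

```shell
systemctl status redis                 # is the service up, and why did it exit?
journalctl -u redis --no-pager -n 100  # last 100 log lines from the unit
```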
I have seen that some installations use redis as the service name while others use redis-server. So, please try whichever of these commands matches your Redis package:
sudo service redis-server restart
# OR
sudo service redis restart
If you have a newer CentOS with systemctl installed, then try one of these:
sudo systemctl restart redis-server
# OR
sudo systemctl restart redis
I have a Redis cluster of 6 nodes, running in my Kubernetes cluster as a StatefulSet. Since it was for testing and not yet in production, all of the Redis nodes ran on the same machine. Of course, the machine failed, and all of the Redis nodes crashed immediately.
When the machine came back up, the pods were recreated and were given different cluster IPs, so they couldn't reconnect with each other.
I need to find a solution for a disaster case such as this. Let's assume all the nodes were reassigned different IPs; how can I point the nodes at each other's new IPs?
The slaves are easy to reset with the CLUSTER RESET command, but the masters contain slots and data that shouldn't be deleted.
Should I manually rewrite nodes.conf? I'm afraid that will make it even worse. Is there a known method to deal with this?
Thanks!
Found a solution:
The first step is to update the pod's current IP in nodes.conf when the pod starts. You can achieve that with this script:
#!/bin/sh
CLUSTER_CONFIG="/data/nodes.conf"
if [ -f ${CLUSTER_CONFIG} ]; then
    if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
    fi
    echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
    sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
fi
exec "$@"
Start every pod through this script, passing it the original redis-server start command as arguments.
Now every pod in the cluster has the correct IP of itself set.
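The sed substitution in the script can be sanity-checked locally against a sample nodes.conf line (the node ID and IPs below are made up for illustration):

```shell
# Rewrite the IP on the "myself" line, as the startup script does.
POD_IP="10.244.1.7"
line='abc123 10.0.0.5:6379 myself,master - 0 0 1 connected 0-5460'
echo "$line" | sed -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/"
```

Only lines containing myself are touched, so entries for the other cluster members are left alone.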
Make sure the cluster's pods are stable and don't crash.
Manually edit nodes.conf in one of the pods, replacing the stale IPs with the correct ones.
Restart the pod you've edited with redis-cli shutdown. Kubernetes will set up a new pod for it, and the new pod's IP will be set by the script I added above.
In my opinion you shouldn't rely on the Pods' internal IP addresses at all when referencing your Redis cluster anywhere in your application. Pods are mortal: they are designed to be replaceable, so when a node dies they are destroyed too, and when the node comes back they are recreated with new IP addresses.
The proper way to target your Pods is via their DNS names (as explained here), provided you created your Redis cluster as a stateful application.
Suppose I have a redis cluster with nodes 10.0.0.1, 10.0.0.2, 10.0.0.3 and 10.0.0.4, which I'm using as a cache.
Then, for whatever reason, node 10.0.0.4 fails and goes down. This brings down the entire cluster:
2713:M 13 Apr 21:07:52.415 * FAIL message received from [id1] about [id2]
2713:M 13 Apr 21:07:52.415 # Cluster state changed: fail
This causes every query to be rejected with "CLUSTERDOWN The cluster is down".
However, since I'm using the cluster as a cache, I don't really care if a node goes down. A key can get resharded to a different node and lose its contents without affecting my application.
Is there a way to set up such an automated resharding?
I found something close enough to what I need.
By setting cluster-require-full-coverage to "no", the rest of the cluster will continue to respond to queries, although the client needs to handle the possibility of being redirected to a failing node.
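For reference, the setting can be changed at runtime on a live node (and should also be added to redis.conf so it survives restarts); the address here is the one from the question:

```shell
# Keep serving the slots that are still covered while some slots have no live owner.
redis-cli -h 10.0.0.1 -p 6379 config set cluster-require-full-coverage no
```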
Then I can replace the broken node by running:
redis-trib.rb call 10.0.0.1:6379 cluster forget [broken_node_id]
redis-trib.rb add-node 10.0.0.5:6379 10.0.0.1:6379
redis-trib.rb fix 10.0.0.1:6379
Where 10.0.0.5:6379 is the node that will replace the broken one.
Assuming you have only master nodes in your current cluster, you will definitely get a cluster-down error: the failed master has no replica, so Redis considers the cluster unsafe and raises the error.
Solution
Create a new node (create a redis.conf with the desired parameters).
Join that node to the cluster:
redis-trib.rb add-node 127.0.0.1:6379 EXISTING_MASTER_IP:EXISTING_MASTER_PORT
Make the new node a slave of 10.0.0.4:
redis-cli -p 6379 cluster replicate NODE_ID_OF_TARGET_MASTER
To Test
First, be sure the cluster is in good shape (all slots are covered and the nodes agree about the configuration):
redis-trib.rb check 127.0.0.1:6379 (On any master)
Kill the process of 10.0.0.4.
Wait for the slave to become the new master. (It happens quickly; the slots assigned to 10.0.0.4 will fail over to the slave automatically.)
Check the cluster and be sure all slots have moved to the new master:
redis-trib.rb check 127.0.0.1:6379 (On any master)
No manual actions are needed. Additionally, if you have more slaves in the cluster, they may be promoted as new masters for other failed masters as well. (e.g. in a setup of 3 masters and 3 slaves: Master1 goes down and Slave1 becomes the new master; if the new master later goes down, another replica can be promoted in the same way.)
I have a master-slave configuration of RabbitMQ, running as two Docker containers with dynamic internal IPs (they change on every restart).
Clustering works fine on a clean run, but if one of the servers gets restarted it cannot reconnect to the cluster:
rabbitmqctl join_cluster --ram rabbit@master
Clustering node 'rabbit@slave' with 'rabbit@master' ...
Error: {ok,already_member}
And the following:
rabbitmqctl cluster_status
Cluster status of node 'rabbit@slave' ...
[{nodes,[{disc,['rabbit@slave']}]}]
says that the node is not in a cluster.
The only way I found is to remove this node and only then rejoin the cluster, like:
rabbitmqctl -n rabbit@master forget_cluster_node rabbit@slave
rabbitmqctl join_cluster --ram rabbit@master
That works, but it doesn't look right to me. I believe there should be a better way to rejoin a cluster than forgetting the node and joining it again. I see there is also an update_cluster_nodes command, but that seems to be something different; I'm not sure if it could help.
What is the correct way to rejoin the cluster on container restart?
I realize that this has been open for a year, but I thought I would answer just in case it might help someone.
I believe that this issue has been resolved in a recent RabbitMQ release.
I implemented a Dockerized RabbitMQ cluster using the RabbitMQ management 3.6.5 image, and my nodes are able to auto-rejoin the cluster on container or Docker host restart.
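For anyone who cannot upgrade: a commonly used alternative to forgetting the node from the master is to reset the restarted node itself before rejoining. Note that reset wipes the node's local state, so this only suits nodes whose data can be rebuilt (such as the RAM node here). A sketch, run on the restarted slave:

```shell
rabbitmqctl stop_app                          # stop the rabbit application, keep the runtime
rabbitmqctl reset                             # clear stale cluster membership and local state
rabbitmqctl join_cluster --ram rabbit@master  # rejoin via the master
rabbitmqctl start_app
```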
Is it possible to create a Redis cluster with 2 nodes, one acting as the master and the other as a slave?
I get the following error if I try with 2 nodes (one as master, the other as slave):
>>> Creating cluster
Connecting to node 127.0.0.1:6379: OK
Connecting to node 192.168.40.159:6379: OK
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 2 nodes and 1 replicas per node.
*** At least 6 nodes are required.
Yes. The requirement of at least 3 master nodes is imposed by the ruby script, but it is not a hard limit of the cluster itself.
The first thing you need to do is send a cluster command covering all 16384 slots, like
cluster addslots 0 1 2 3 ... 16383
to the node. Since there are too many arguments to type manually into redis-cli, I suggest writing a program to do it: open a TCP socket to the Redis node, convert the previous command into a Redis protocol string, and write it to the socket.
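A simpler route than a raw TCP socket, if redis-cli is available, is to build the argument list with seq and pipe the command in (host and port here are assumptions; point them at your node):

```shell
# Assign all 16384 slots (0..16383) to the single node in one command.
slots=$(seq -s ' ' 0 16383)
echo "cluster addslots $slots" | redis-cli -h 127.0.0.1 -p 6379
```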
The single-node cluster will come online a few seconds after you send the command. Then connect to the other node with redis-cli and type the following commands to make it a slave:
cluster meet MASTER_HOST MASTER_PORT
cluster replicate MASTER_ID
where MASTER_HOST:MASTER_PORT is the address of the first node, and MASTER_ID is the ID of that node, which you can retrieve via the cluster nodes command.
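One way to fetch that ID non-interactively is to take the first field of the "myself" line in the cluster nodes output. The sample line below is canned for illustration; in practice you would pipe redis-cli cluster nodes into the awk instead:

```shell
sample='015bec64d631990b83ad63736d906cda257a762c 127.0.0.1:6379 myself,master - 0 0 1 connected'
echo "$sample" | awk '/myself/ {print $1}'
```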
For convenience, I've written a Python tool for this kind of Redis cluster management. You can install it with
pip install redis-trib
For more detail please go to https://github.com/HunanTV/redis-trib.py/
Redis Cluster is not a fit for your use case.
For your use case, you need to configure one server (the master), then configure a second server and add the slaveof directive pointing it at the master. How you handle failover is up to your scenario, but I would recommend using redis-sentinel.
For a more detailed walkthrough, see the Redis Replication page.
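For example, the second server's redis.conf needs only one extra line beyond the defaults (the master address here is an assumption):

```
slaveof 10.0.0.1 6379
```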
Nope, it is not possible to create a Redis cluster with just 1 master node; as suggested here, setting up a Redis cluster requires at least 3 master nodes.
The redis-trib.rb create command requires at least 3 nodes.
This way you can make a 1-master, 1-slave Redis cluster.
Using redis-trib
$ redis-server 5001/redis.conf
$ redis-trib.rb fix 127.0.0.1:5001
so many messages ...
$ redis-server 5002/redis.conf
$ redis-trib.rb add-node --slave 127.0.0.1:5002 127.0.0.1:5001
>>> Adding node 127.0.0.1:5002 to cluster 127.0.0.1:5001
Connecting to node 127.0.0.1:5001: OK
>>> Performing Cluster Check (using node 127.0.0.1:5001)
M: 015bec64d631990b83ad63736d906cda257a762c 127.0.0.1:5001
slots:0-16383 (16384 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:5001
Connecting to node 127.0.0.1:5002: OK
>>> Send CLUSTER MEET to node 127.0.0.1:5002 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:5001.
[OK] New node added correctly.
Using cluster commands
$ redis-server 5001/redis.conf
Using Ruby: addslots
$ echo '(0..16383).each{|x| puts "cluster addslots "+x.to_s}' | ruby | redis-cli -c -p 5001 > /dev/null
$ redis-server 5002/redis.conf
$ redis-cli -c -p 5002
127.0.0.1:5002> cluster meet 127.0.0.1 5001
OK
127.0.0.1:5002> cluster replicate 7c38d2e5e76fc4857fe238e34b4096fc9c9f12a5
(node ID of the 5001 instance)
OK