I have a two-server RabbitMQ cluster behind a load balancer, but right now only the first node seems to be fielding traffic.
When I do:
> rabbitmqadmin list nodes name type running uptime
+----------------------+------+---------+------------+
|         name         | type | running |   uptime   |
+----------------------+------+---------+------------+
| rabbit@n2-rabbitmq-1 | disc | True    | 3899164848 |
| rabbit@n2-rabbitmq-2 | disc | True    |            |
+----------------------+------+---------+------------+
The second node shows no uptime. A cluster_status shows:
> sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@n2-rabbitmq-1' ...
[{nodes,[{disc,['rabbit@n2-rabbitmq-1','rabbit@n2-rabbitmq-2']}]},
 {running_nodes,['rabbit@n2-rabbitmq-2','rabbit@n2-rabbitmq-1']},
 {cluster_name,<<"rabbit@n2-rabbitmq-1">>},
 {partitions,[]}]
...done.
What am I doing wrong or what should I look for?
Maybe one of your nodes went down for some reason, and when it came back up it did not sync with the 'master' node (the one currently running). To handle this automatically, set the following in the configuration file /etc/rabbitmq/rabbitmq.config:
{cluster_partition_handling, autoheal}
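For context, in the classic Erlang-term format of rabbitmq.config this setting has to sit inside the rabbit application section; a minimal sketch of the whole file (merge it with whatever your config already contains):
[
  {rabbit, [
    {cluster_partition_handling, autoheal}
  ]}
].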
I also recommend enabling the web management plugin for better visibility:
$ rabbitmq-plugins enable rabbitmq_management
From the main (overview) page you can see the status of each node in the cluster (connected, partitioned, ...).
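If you prefer to stay on the command line, the management plugin also exposes the same node information over its HTTP API; a quick sketch, assuming the default guest account and default port 15672:
$ curl -u guest:guest http://localhost:15672/api/nodes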
In any case, it would help to post the exact steps you used to configure your cluster. If my guess above is wrong, please share what you see in the web interface.
I am trying to connect my RedisInsight v2 client to a cluster of Redis instances.
When a Redis instance hasn't joined the cluster yet, RedisInsight is able to make a connection.
After the cluster is created, however, new connections from the GUI just fail.
I have 3 shards with 1 replica each:
redis-cli -h 10.9.9.4 -p 7001 --cluster create 10.9.9.4:7001 10.9.9.5:7002 10.9.9.6:7003 10.9.9.4:7004 10.9.9.5:7005 10.9.9.6:7006 --cluster-replicas 1 -a Password
The cluster gets successfully created with the right shards and everything.
I can even verify using the CLUSTER NODES command:
root ➜ ~ $ redis-cli -h 10.9.9.4 -p 7004 -a Password
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.9.9.4:7004> CLUSTER NODES
5b77b776f0ed08b4f34b0fe3e48b609e4bd8400e 10.9.9.6:7003@17003 master - 0 1662318446553 3 connected 10923-16383
a42f44163b046273ca02b1fc99ed93cf6188f65e 10.9.9.5:7002@17002 master - 0 1662318446755 2 connected 5461-10922
d2b21a37b62283a6cfbd5fb436df505ddc31aea8 10.1.1.10:7001@17001 master - 0 1662318445549 1 connected 0-5460
2cd5783411ceea96b4006b596942cc49484884ab 10.9.9.5:7005@17005 slave d2b21a37b62283a6cfbd5fb436df505ddc31aea8 0 1662318445750 1 connected
61541ad0455539335f27d5a90a5a8e504b3dea5f 10.1.1.11:7004@17004 myself,slave 5b77b776f0ed08b4f34b0fe3e48b609e4bd8400e 0 1662318445000 3 connected
c00d264a625998e89becb9334a1f4ea9d2057a0d 10.9.9.6:7006@17006 slave a42f44163b046273ca02b1fc99ed93cf6188f65e 0 1662318445550 2 connected
10.9.9.4:7004>
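As a side note, redis-cli also has a built-in consistency check that verifies slot coverage and replica assignment, using the same seed node and password as above:
redis-cli --cluster check 10.9.9.4:7001 -a Password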
However, when trying to connect to any of these in the UI I get the following errors:
9/4/2022, 12:03:31 PM | ERROR | TimeoutInterceptor | Request Timeout. GET /api/instance/9e253e74-0091-44b8-bf8c-29ff0f4f0275/connect | {"stack":[{}]}
9/4/2022, 12:03:41 PM | ERROR | TimeoutInterceptor | Request Timeout. GET /api/instance/9e253e74-0091-44b8-bf8c-29ff0f4f0275/connect | {"stack":[{}]}
OR
9/4/2022, 12:16:17 PM | ERROR | KeysBusinessService | Failed to get keys with details info. Connection is closed.. | {"stack":[{}]}
9/4/2022, 12:16:18 PM | ERROR | ExceptionsHandler | Connection is closed. | {"stack":[{}]}
9/4/2022, 12:16:23 PM | ERROR | ExceptionsHandler | Connection is closed. | {"stack":[{}]}
This is the redis.conf that I use for 10.9.9.5:
port 7002
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisgraph.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/rejson.so
loadmodule /opt/redis-stack/lib/redisbloom.so
cluster-enabled yes
cluster-config-file cluster-node-2.conf
cluster-node-timeout 5000
dbfilename dump-2.rdb
maxmemory 1862mb
maxmemory-policy allkeys-lru
requirepass Password
masterauth Password
I've done a bunch of googling but I'm not able to determine why this is failing. Any help is appreciated!
RedisInsight version: 2.8.0
Running on: Windows 11
Cluster is running on remote machines that are part of my local network, i.e. 10.9.9.0/24
Please specify additional information:
what is your OS?
what is the version of RedisInsight? (2.8.0?)
where is your cluster running? (is it local? k8s? any SSH tunnels?)
Can you try and see if you are able to connect using this debug build: https://drive.google.com/file/d/1od2uClDKb0649ixkgyRwXfqj8QLr0GXw/view?usp=sharing
Also, please check and post the logs if it is not working.
I'm trying to introduce more redundancy and reliability to my server architecture.
Currently I have a variable number of web servers (minimum two) running Apache with PHP and Memcached for sessions, which means that if one of the servers goes down, the user will have to log in again.
After much research, I've decided to give Redis a go to store sessions, Redis Sentinel to manage Redis instance failure, and HAProxy to pass the sessions to the nominated Redis Master.
I'm a bit unclear about where all these things fit together, despite days of reading. Let's say I have a minimum of two front-end servers and two Redis servers; could it look like this:
Load balanced connection
|
-------------
| |
---------- ----------
| Apache | | Apache |
|Sentinel| |Sentinel|
| HAProxy| | HAProxy|
---------- ----------
| |
|----------
? ? - Managed by HAProxy,
---------- ---------- by talking to Sentinel
| | | |
| Redis | | Redis |
| | | |
---------- ----------
And then scale up by adding more of the Apache/Sentinel/HAProxy servers? If so, would this work on small (1 GB memory) instances, or do Sentinel/HAProxy take up a lot of resources?
Secondly, how does HAProxy reliably talk to Sentinel? There appears to be a lot of conflicting advice about how this works.
Many thanks for any help you can offer!
In the end, the simplest way to get this to work was to avoid HAProxy, and use Sentinel to tell the web server which Redis server was the Master.
To achieve this, I made a masterupdate.sh script containing the following:
#!/bin/bash
# Sentinel calls a client-reconfig-script with the arguments
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>,
# so $6 is the new master's IP.
echo $6 > /location/master.txt
And edited redis/sentinel.conf to contain the following line:
sentinel client-reconfig-script cache-master /location/masterupdate.sh
This meant that every time the master switched, Sentinel wrote the IP address to the master.txt text file, and I could read that text file from PHP to set up the sessions. Simple!
Is there a way to check which node in a VerneMQ cluster a client is connected to, short of using something like Wireshark?
sudo vmq-admin session show --client_id=ClientID --node
should give you what you need. The session command is cluster-aware.
Look up further options with --help
sudo vmq-admin session show --help
vmq-admin cluster show
+------------------+-------+
|       Node       |Running|
+------------------+-------+
| VerneMQ@1.1.1.1  | true  |
| VerneMQ@1.1.1.2  | true  |
+------------------+-------+
Team,
I am facing some difficulties running commands on a remote machine. I am unable to understand why ssh thinks that the command I pass is a host.
ssh -tt -i /root/.ssh/teamuser.pem teamuser@myserver 'cd ~/bin && ./ssh-out.sh'
|-----------------------------------------------------------------|
| This system is for the use of authorized users only. |
| Individuals using this computer system without authority, or in |
| excess of their authority, are subject to having all of their |
| activities on this system monitored and recorded by system |
| personnel. |
| |
| In the course of monitoring individuals improperly using this |
| system, or in the course of system maintenance, the activities |
| of authorized users may also be monitored. |
| |
| Anyone using this system expressly consents to such monitoring |
| and is advised that if such monitoring reveals possible |
| evidence of criminal activity, system personnel may provide the |
| evidence of such monitoring to law enforcement officials. |
|-----------------------------------------------------------------|
ssh: Could not resolve hostname cd: No address associated with hostname
Connection to myserver closed.
It works absolutely fine if I don't pass a command. It simply logs me in. Any ideas?
Man ssh says:
If command is specified, it is executed on the remote host instead of
a login shell.
The thing is that cd is a shell built-in (run type cd in your terminal). So ssh tries to run cd as a command, but cannot find it in PATH.
You should invoke ssh something like this:
ssh user@host -t 'bash -l -c "cd ~/bin && ./ssh-out.sh"'
I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a Redis cluster, IF you have static IPs. However, we don't have those when using Kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, Kubernetes assigns a new IP to the new pod. How can we inject this new IP into the HAProxy health checks and remove the old master IP?
I have the following setup.
+----+ +----+ +----+ +----+
| W1 | | W2 | | W3 | | W4 | Web application servers
+----+ +----+ +----+ +----+
\ | | /
\ | | /
\ | | /
+---------+
| HAProxy |
+---------+
/ \ \
+----+ +----+ +----+
| P1 | | P2 | | P3 | K8S pods = Redis + Sentinel
+----+ +----+ +----+
Which is very similar to the setup described on the haproxy blog.
According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses Sentinel to manage the failover. This reduces the problem to the "normal" Sentinel-based solution.
In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple Sentinel-triggered script to update the HAProxy config and issue a reload. With an HAProxy config that only talks to the master, this can easily be a simple search, replace, and reload script, as sketched below.
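A minimal sketch of such a reconfig script, assuming an haproxy.cfg with a single backend server line named redis-master (the name and paths are illustrative, not part of the original setup):
#!/bin/bash
# Sentinel invokes a client-reconfig-script with the arguments
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>.
NEW_IP=$6
NEW_PORT=$7
# Point the redis-master server line at the newly promoted master.
sed -i "s|^\( *server redis-master \)[0-9.]*:[0-9]*|\1${NEW_IP}:${NEW_PORT}|" /etc/haproxy/haproxy.cfg
# Soft reload: -sf lets the old process drain existing connections.
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)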
Oh, and don't use the HAProxy check in that blog post. It doesn't account for or detect split-brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the Sentinels and only talks to the node that at least two Sentinels report as the master.
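A rough sketch of what such a custom check could look like, assuming three Sentinels on the default port 26379 and a master named mymaster (all addresses and names here are illustrative, not from the original setup):
#!/bin/bash
# Ask each Sentinel who the current master is; succeed only if at
# least two of them agree that the candidate IP ($1) holds that role.
CANDIDATE=$1
AGREE=0
for S in 10.0.0.1 10.0.0.2 10.0.0.3; do
  MASTER=$(redis-cli -h "$S" -p 26379 SENTINEL get-master-addr-by-name mymaster | head -1)
  [ "$MASTER" = "$CANDIDATE" ] && AGREE=$((AGREE + 1))
done
[ "$AGREE" -ge 2 ]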