RabbitMQ HA & Failover

I've read both the clustering and HA chapters and have a fair understanding of RabbitMQ clustering. One thing I did not understand: with 2+ nodes in the cluster and a set of HA queues, how should client connections be made so that if one node fails, clients automatically and seamlessly connect to the remaining node(s)? Can this be achieved with a load balancer such as, say, Amazon ELB for deployments in AWS?

Using a load balancer like Amazon ELB or HAProxy is exactly how you should route traffic to the available nodes in the Rabbit cluster.
I'd recommend HAProxy. Here's a sample HAProxy config:
global
    log 127.0.0.1 local1
    maxconn 4096
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode tcp
    option tcplog
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen stats :1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /

listen aqmp_front :5672
    mode tcp
    balance roundrobin
    timeout client 3h
    timeout server 3h
    option clitcpka
    server aqmp-1 rabbitmq1.domain:5672 check inter 5s rise 2 fall 3
    server aqmp-2 rabbitmq2.domain:5672 backup check inter 5s rise 2 fall 3
Note the last two lines: you'll need to substitute rabbitmq1.domain and rabbitmq2.domain with the locations of your two nodes. Since the second server is set up with the backup option, HAProxy will send requests only to the first node, and if that node fails, requests will be routed to the second node.
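Before reloading HAProxy with a config like this, it's worth validating it first; the file path below is an assumption, so adjust it to your layout:

haproxy -c -f /etc/haproxy/haproxy.cfg

Once it's running, the stats listener on port 1936 gives you a live view of both nodes, so you can watch HAProxy flip traffic to aqmp-2 when aqmp-1 goes down.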

I would use the simple keepalived daemon on all Rabbit nodes. It just adds a virtual IP address shared between the nodes, which you can use for client access. Configuration is very simple; see Hollenback's page.
Sample config:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    virtual_ipaddress {
        192.168.1.1/24 brd 192.168.1.255 dev eth0
    }
}
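For actual failover you need the mirror-image config on the second node; the state BACKUP and lower priority are what make it claim the VIP only while the MASTER is down (the numbers here are illustrative, not from the original post):

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1    # must match the MASTER
    priority 50            # lower than the MASTER's 100
    virtual_ipaddress {
        192.168.1.1/24 brd 192.168.1.255 dev eth0
    }
}

Clients then connect to 192.168.1.1 regardless of which node currently holds the address.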

You have to configure mirrored queues between the RabbitMQ servers:
rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
With this policy, RabbitMQ mirrors every queue except those whose names begin with the amq. prefix. When server A fails, those queues still exist on server B. You also need HA in your code (if the connection to server A fails, connect to server B) or a single HA RabbitMQ endpoint using keepalived.
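To confirm the policy was applied, you can ask rabbitmqctl; these are standard commands, and the output should show the HA policy attached to each non-amq. queue:

rabbitmqctl list_policies
rabbitmqctl list_queues name policy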

Related

Haproxy + TCP + SSL Passthrough + send-proxy

I am using the following Haproxy configuration to pass SSH connections to the backend servers.
global
    log 127.0.0.1 local0
    daemon
    maxconn 2048

defaults
    log global
    timeout connect 500000ms
    timeout client 86400s
    timeout server 86400s

listen stats
    bind :1936
    mode http
    stats enable
    stats realm Haproxy\ Statistics
    stats uri /

frontend front-ssh-servers
    mode tcp
    option tcplog
    bind *:22
    default_backend back-ssh-servers
    timeout client 8h

backend back-ssh-servers
    mode tcp
    balance leastconn
    stick-table type ip size 1m expire 8h
    stick on src
    server server1 X.X.X.X:22 check send-proxy
    server server2 X.X.X.X:22 check send-proxy
    server server3 X.X.X.X:22 backup send-proxy
The idea of adding send-proxy was to capture the actual client IP on the backend SSH servers. However, with send-proxy or send-proxy-v2 the connections never reach the destination backend SSH servers; without the send-proxy option, they do.
The HAProxy version is 1.8. The HAProxy logs show the following:
2023-02-09T10:27:59-08:00 127.0.0.1 haproxy[3190902]: X.X.X.X:36730 [09/Feb/2023:10:27:59.175] front-ssh-servers back-ssh-servers/X.X.X.X 1/0/8 21 SD 2/1/0/0/0 0/0
The termination code is "SD". I have read that the proxy protocol also needs to be enabled on the backend hosts. I'd appreciate any help on how to achieve this for SSH connections; my backend hosts are running OpenSSH_7.4p1.

Haproxy Sockjs Websocket loadbalancing and RabbitMQ loadbalancing in same config

I am looking for an HAProxy (version 1.5.18) configuration which allows websocket load balancing as well as RabbitMQ load balancing. I have tried many options but none seem to work; below is my haproxy config file:
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 15s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
    timeout tunnel 3600s

frontend http_web *:80
    mode http
    default_backend rgw

backend rgw
    balance roundrobin
    server rgw1 173.36.22.49:8080 maxconn 10000 weight 10 cookie rgw1 check
    server rgw2 10.42.139.69:8080 maxconn 10000 weight 10 cookie rgw2 check

listen stats :9000
    mode http
    stats enable
    stats realm Haproxy\ Statistics
    stats uri /haproxy_stats # Stats URI
    stats auth websocketadmin:websocketadmin

listen ampq
    bind *:61613
    mode tcp
    option clitcpka
    server rabbit1 10.42.6.112:61613 check inter 1s rise 3 fall 1
    server rabbit2 10.42.6.113:61613 check inter 1s rise 3 fall 1
    server rabbit3 10.42.6.114:61613 check inter 1s rise 3 fall 1
    server rabbit4 10.42.6.115:61613 check inter 1s rise 3 fall 1
HAProxy doesn't give any error; it prints the messages below, but it doesn't work: I cannot connect to the websocket or to RabbitMQ. As soon as I remove the "listen ampq" block, everything starts working fine.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy http_web started.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy rgw started.
Sep 8 21:00:40 localhost haproxy[3184]: Proxy stats started.
The problem was port 61613, which was already taken by another process, so I had to change to a new port and add it to the firewall rules; it is working now.
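For anyone hitting the same symptom: you can usually identify the conflicting listener before picking a new port (standard Linux tools, run on the HAProxy host):

ss -ltnp | grep 61613        # newer distributions
netstat -tlnp | grep 61613   # older distributions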

HAProxy - group TCP and HTTP hosts dependent on each other

I have the following scenario:
Haproxy is running in front of my two groups of servers:
two http servers (active / backup)
two tcp servers (active / backup)
I now want to fail over from the active side to the backup side if ANY of the active services goes down (i.e., fail over HTTP and TCP at the same time).
Is there any way to do this in HAProxy? So far I was only able to fail over one of them, depending on the protocol, but not both. Can these be grouped?
I was wondering if this can be done via ACLs and things like the fe_conn directive.
I think HAProxy's nbsrv works here: if the nbsrv count (the number of healthy instances) falls below the desired amount in EITHER pool, switch both pools to their backup backends; otherwise just use the default pools. Here is an example verified on 1.5.18, but it should work fine on newer versions:
defaults all
    timeout connect 30s
    timeout client 30s
    timeout server 30s
    mode http

# http frontend
frontend http *:80
    # use the backup service if EITHER service is down
    acl use_backup nbsrv(http_service) lt 1
    acl use_backup nbsrv(tcp_service) lt 1
    use_backend http_service_backup if use_backup
    default_backend http_service

# tcp frontend
frontend tcp_10000 *:10000
    mode tcp
    # use the backup service if EITHER service is down
    acl use_backup nbsrv(http_service) lt 1
    acl use_backup nbsrv(tcp_service) lt 1
    use_backend tcp_service_backup if use_backup
    default_backend tcp_service

backend tcp_service
    mode tcp
    # main tcp instance here
    # can also include backup server here with backup directive if desired
    server tcp-service1 tcp-service1.local:10000 check

backend tcp_service_backup
    mode tcp
    # backup tcp instance here
    server tcp-service2 tcp-service2.local:10000 check

backend http_service
    # main http instance here
    # can also include backup server here with backup directive if desired
    server http-service1 http-service1.local:80 check

backend http_service_backup
    # backup http instance here
    server http-service2 http-service2.local:80 check
See https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#nbsrv for more nbsrv details.

haproxy failover active-passive

I want to set up HAProxy to switch to the passive s2 after s1 fails, but not to switch back to s1 when it gets healthy again. That is: once it switches to s2, even if s1 becomes available again, HAProxy should keep sending requests to s2, with s1 acting as the passive server until s2 fails.
My HAProxy configuration:
listen http_web 192.168.1.3:80
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server server1 192.168.1.1:80 weight 1 maxconn 512 check backup
    server server2 192.168.1.2:80 weight 1 maxconn 512 check backup
I set backup on both servers, but when s1 fails HAProxy sends requests to s2, and when s1 becomes available again it sends requests to s1 again.
Round-robin balancing mode means that both servers will get requests one by one.
If you want persistence, you should use the source balancing method or add cookies.
Otherwise, if you don't need load balancing and just want an active-passive solution, you can use the keepalived service ;)
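If you'd rather solve it inside HAProxy itself, one commonly suggested pattern is a single-slot stick-table keyed on the destination address; this is a sketch based on the asker's config, not something verified here. Once traffic has moved to s2, the sticky entry keeps it there even after s1 recovers:

listen http_web 192.168.1.3:80
    mode http
    option httpchk
    # one entry is enough: it records which server handled the last connection
    stick-table type ip size 1 nopurge
    stick on dst
    server server1 192.168.1.1:80 check
    server server2 192.168.1.2:80 check backup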

How to do sticky load-balancing with HAProxy with Session transfer to new servers

I am using the appsession config element for sticky sessions. I have 5 WebLogic instances; 3 of them are active and serving load, and when load increases I start the additional 2 instances. HAProxy marks them "healthy" but does not send any traffic to them because of stickiness.
How do I transfer existing sessions to the new WebLogic servers? I am using Terracotta for session clustering, so it does not matter which server serves a request. Below is my HAProxy config.
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    maxconn 1024
    daemon
    # debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option httpchk
    option httpclose
    retries 3
    option redispatch
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats uri /admin?stats
    stats refresh 5s

listen terracotta 0.0.0.0:10001
    # balance url_param JSESSIONID
    balance roundrobin
    option httpchk OPTIONS /Townsend
    server L1_1 10.211.55.1:7003 check
    server L1_2 10.211.55.2:7004 check
    server L1_3 10.211.55.3:7004 check
    appsession JSESSIONID len 52 timeout 3h
If it does not matter which server serves the request, then disable stickiness and remove the appsession line. You must understand that stickiness is the opposite of load balancing: if your issue is that you don't scale, don't stick first.
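Concretely, that means dropping the appsession line from the listen block, so that round-robin (together with option redispatch from the defaults) spreads requests across every healthy server, including freshly started ones; a sketch of the asker's config with stickiness removed:

listen terracotta 0.0.0.0:10001
    balance roundrobin
    option httpchk OPTIONS /Townsend
    server L1_1 10.211.55.1:7003 check
    server L1_2 10.211.55.2:7004 check
    server L1_3 10.211.55.3:7004 check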