Highly Available AMQP via Spring Integration and RabbitMQ
WHAT I DID:
I am using the RabbitMQ message broker, which implements the AMQP standard, in my Spring microservices. The main goal is to prevent message loss during an (unfortunate) RabbitMQ server shutdown, so I am planning to use highly available, AMQP-backed message channels.
I tried to set up two RabbitMQ servers, following the link below:
https://dzone.com/articles/highly-available-amqp-backed
The first RabbitMQ server's config file (rabbitmq.config) is as follows. It should be placed under ../rabbitmq_server-version/etc/rabbitmq/:
[
  {rabbit, [ {tcp_listeners, [5672]},
             {collect_statistics_interval, 10000},
             {heartbeat, 30},
             {cluster_partition_handling, pause_minority},
             {cluster_nodes, {['rabbit@master',
                               'rabbit2@master'],
                              disc}} ] },
  {rabbitmq_management, [ {http_log_dir, "/tmp/rabbit-mgmt"},
                          {listener, [{port, 15672}]} ] },
  {rabbitmq_management_agent, [ {force_fine_statistics, true} ] }
].
The second RabbitMQ server's rabbitmq.config file:
[
  {rabbit, [ {tcp_listeners, [5673]},
             {collect_statistics_interval, 10000},
             {heartbeat, 30},
             {cluster_partition_handling, pause_minority},
             {cluster_nodes, {['rabbit@master',
                               'rabbit2@master'],
                              disc}} ] },
  {rabbitmq_management, [ {http_log_dir, "/tmp/rabbit-mgmt"},
                          {listener, [{port, 15673}]} ] },
  {rabbitmq_management_agent, [ {force_fine_statistics, true} ] }
].
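Since both brokers run on the same host, each instance has to be pointed at its own config file; a minimal sketch, assuming the install paths used in the scripts below (RABBITMQ_CONFIG_FILE takes the path without the .config extension):
# First instance's shell:
export RABBITMQ_CONFIG_FILE=/DEV_TOOLS/rabbitmq_server-3.4.2/etc/rabbitmq/rabbitmq
# Second instance's shell:
export RABBITMQ_CONFIG_FILE=/DEV_TOOLS/rabbitmq_server-3.4.2_2/etc/rabbitmq/rabbitmq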
The first RabbitMQ server's sample bash script is as follows. Please also have a look at the RabbitMQ clustering documentation for the remaining configuration steps.
#!/bin/bash
echo "*** First RabbitMQ Server is setting up ***"
export RABBITMQ_HOSTNAME=rabbit@master
export RABBITMQ_NODE_PORT=5672
export RABBITMQ_NODENAME=rabbit@master
export RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15672}]"
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmq-server &
echo "*** First RabbitMQ Server is set up successfully. ***"
sleep 5
echo "*** First RabbitMQ Server's status : ***"
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmqctl status
The second RabbitMQ server's sample bash script is as follows:
#!/bin/bash
echo "*** Second RabbitMQ Server is setting up ***"
export RABBITMQ_HOSTNAME=rabbit2@master
export RABBITMQ_NODE_PORT=5673
export RABBITMQ_NODENAME=rabbit2@master
export RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}]"
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmq-server &
echo "*** Second RabbitMQ Server is set up successfully. ***"
sleep 5
echo "*** Second RabbitMQ Server's status : ***"
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl status
sleep 5
echo "*** Second RabbitMQ Server is being added to the cluster... ***"
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl -n rabbit2@master stop_app
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl -n rabbit2@master join_cluster rabbit@master
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl -n rabbit2@master start_app
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmqctl -n rabbit@master set_policy ha-all "^ha\." '{"ha-mode":"all"}'
echo "*** Second RabbitMQ Server is added to the cluster successfully... ***"
sleep 5
echo "*** Second RabbitMQ Server's cluster status : ***"
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl cluster_status
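Note that the ha-all policy above only matches queue names starting with "ha." (the ^ha\. pattern), so the AMQP-backed channels' queues must be named accordingly. A quick way to check that a queue is actually mirrored (a sketch; the column names are valid rabbitmqctl 3.4 queue info items):
# A mirrored queue reports its policy and a non-empty slave_pids list.
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmqctl -n rabbit@master \
    list_queues name policy pid slave_pids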
WHAT THE ISSUE IS:
I can see that the two RabbitMQ servers are running and stay synchronized.
When I stop only the RabbitMQ application on the first node, the second node keeps working:
sbin/rabbitmqctl stop_app
But when I shut down the first RabbitMQ server entirely, the second node goes down as well:
sbin/rabbitmqctl stop
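For reference, here are the two shutdown modes side by side (a minimal reproduction sketch, assuming the node names above):
# Case 1: stop only the RabbitMQ application; the Erlang VM stays up.
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmqctl -n rabbit@master stop_app
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl -n rabbit2@master status   # node 2 still answers
# Case 2: stop the whole node, Erlang VM included.
/DEV_TOOLS/rabbitmq_server-3.4.2/sbin/rabbitmqctl -n rabbit@master stop
/DEV_TOOLS/rabbitmq_server-3.4.2_2/sbin/rabbitmqctl -n rabbit2@master status   # node 2 goes down too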
However, the expected result is:
If the first messaging node and the first RabbitMQ server are shut down
accidentally, the second messaging node and the second RabbitMQ server should
continue to process Order messages, so that potential message loss and
service interruption are prevented by the highly available, AMQP-backed channel.
Kindly help me to fix this issue. Thanks.
Related
No processes are listening on 80, and yet my GCE instance is still receiving traffic. Are there ways of hiding a socket?
So: Google Compute instance, Debian 10, trying to set up Node/PM2/NGINX. When I visit my instance's external IP in my browser, it sends back the Apache2 Ubuntu Default Page. Strange: I didn't install Apache, and I didn't know it came with the Debian image I'm using with GCE. Also, the text on this default page says that it's an HTML doc stored at /var/www/html/index.html. But when I check, it isn't there. There isn't even a /www folder in /var.
When I check running services using sudo systemctl status, I get this back:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
So instead, I run sudo service --status-all. Here's the output:
 [ + ]  apparmor
 [ ? ]  binfmt-support
 [ - ]  cron
 [ - ]  dbus
 [ + ]  docker
 [ ? ]  hwclock.sh
 [ ? ]  kmod
 [ ? ]  lxc
 [ ? ]  lxc-net
 [ - ]  postgresql
 [ - ]  procps
 [ + ]  rsyslog
 [ + ]  ssh
 [ - ]  sudo
 [ - ]  supervisor
 [ - ]  udev
 [ - ]  x11-common
It occurs to me now that Apache might not be a "service", but there might still be an Apache process running and listening on 80. idk, Linux noob here. So, when I check who's listening on 80 (sudo netstat -tulnp | grep :80), there are no results. So... no Apache service running (unless there's something I don't understand about why systemctl is down), and no processes listening on port 80. And yet, when I copy the external IP from GCE and enter it into my browser, I'm served this default page.
I'm reasonably confident it's the correct IP: when I alter the firewall provided by GCP to prevent traffic entering on 80, I can't access the instance, and there's no default Apache page. On top of that, when I boot up nginx to listen on 80 and reverse proxy to my Node app on 8080, I still get sent to the default Apache page when I enter the IP into my browser. It seems that nginx is the only thing listening on 80 (checked via netstat: a process appears when I boot up nginx, and disappears when I stop the nginx service).
There's clearly something I don't understand at work here. I'm off to research more, but if any of you lords of SO have some insight, it would be greatly appreciated.
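Since docker is the one running service shown that could serve web traffic, one thing worth checking (a diagnostic sketch, not a confirmed diagnosis) is whether a container is publishing port 80 through iptables NAT, which would not appear as a listening process in netstat:
# List running containers and their published ports.
sudo docker ps --format '{{.Names}}: {{.Ports}}'
# Docker publishes ports via DNAT rules, so a container can answer on 80
# without any host process showing up among the listening sockets.
sudo iptables -t nat -L DOCKER -n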
Terraform and RabbitMQ - Enable Plugins
I am running a deployment using Terraform. This deployment requires a RabbitMQ server with Web STOMP enabled, and I want the plugin to be enabled automatically when Terraform starts the RabbitMQ server during apply. I have looked at the following:
- Use a config file for RabbitMQ: I don't see a way to link a config file with Terraform.
- Use a provider: the RabbitMQ providers listed on the Terraform registry don't provide an option to enable this plugin.
- Can I use a provisioner? (see the second "docker exec" in the code block below):
# block until the service is running
provisioner "local-exec" {
  command = <<EOT
counter=0
until [ "$(docker exec ${var.service_name} rabbitmq-diagnostics check_running)" ]; do
  sleep 10
  counter=$((counter+1))
  if [ $counter -eq 90 ]; then
    echo "Unable to connect to service after 15 minutes. Exiting."
    exit 1
  fi
done
"$(docker exec ${var.service_name} rabbitmq-plugins enable rabbitmq_web_stomp)"
EOT
}
Otherwise Web STOMP can be enabled manually, but the deployment needs to be automatic.
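An alternative that avoids the post-start exec entirely (a sketch, assuming the official rabbitmq image; $SERVICE_NAME is a stand-in for var.service_name): plugins listed in an enabled_plugins file are activated at boot, so Terraform only has to mount the file into the container it starts:
# Write the Erlang-term plugin list (note the trailing dot)...
echo '[rabbitmq_web_stomp].' > enabled_plugins
# ...and mount it where RabbitMQ reads it at startup.
docker run -d --name "$SERVICE_NAME" \
    -v "$PWD/enabled_plugins:/etc/rabbitmq/enabled_plugins" \
    rabbitmq:3-management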
rabbitmq erlang ets rpc:call badrpc
I have a RabbitMQ server running in Docker, and I want to inspect the ETS tables in RabbitMQ. I use rpc:call to do this, but it does not work. My rabbit server node name is rabbit@f72eb878386f. When I use erl -sname rabbit, it says:
Protocol 'inet_tcp': the name rabbit@f72eb878386f seems to be in use by another Erlang node
So I use erl -sname node1 instead, and then:
rpc:call('rabbit@f72eb878386f',ets,tab2list,[queue_stats]).
The error message shows:
{badrpc,nodedown}
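For rpc:call to succeed, the calling node must share the server's Erlang cookie and be able to resolve the server's host name. A sketch of one way to line those up, with my-rabbit as a hypothetical container name (the cookie path is the default in the official image):
# Run the shell node inside the same container so the short host name
# resolves, and reuse the server's cookie so authentication succeeds.
docker exec -it my-rabbit bash -c \
    'erl -sname node1 -setcookie "$(cat /var/lib/rabbitmq/.erlang.cookie)"'
# Then, inside the Erlang shell:
#   rpc:call('rabbit@f72eb878386f', ets, tab2list, [queue_stats]).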
How to change the default port (15672) of the RabbitMQ Management plugin?
I am running a RabbitMQ management console on a machine where ports above the 10000 range are blocked by a firewall. Can I change the port so that I can use one of the 9000-range ports? Please help!
RabbitMQ has a config file, rabbitmq.config.example or just rabbitmq.config, under the /etc/rabbitmq directory on Linux servers. Locate the rabbitmq_management tuple and change the port value (the default is 15672; 12345 below is just an example, change it to whatever you want). Be sure to uncomment or add the following content in the /etc/rabbitmq/rabbitmq.config file, as shown below:
{rabbitmq_management, [{listener, [{port, 12345}]}]}
Then restart the RabbitMQ server instance once:
$ sudo /etc/init.d/rabbitmq-server restart
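To confirm the new port took effect after the restart (a quick check, assuming the default guest credentials and the example port above):
# The management HTTP API should now answer on the new port.
curl -u guest:guest http://localhost:12345/api/overview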
Normally, RabbitMQ doesn't come with a config file, so you need to create it:
sudo nano /etc/rabbitmq/rabbitmq.config
And you can add this content:
%% -*- mode: erlang -*-
%% ----------------------------------------------------------------------------
%% RabbitMQ Sample Configuration File.
%%
%% Related doc guide: http://www.rabbitmq.com/configure.html. See
%% http://rabbitmq.com/documentation.html for documentation ToC.
%% ----------------------------------------------------------------------------
[
  {rabbit, [ ]},
  {kernel, [ ]},
  {rabbitmq_management, [ {listener, [{port, 3009}]} ]},
  {rabbitmq_shovel, [ {shovels, [ ]} ]},
  {rabbitmq_stomp, [ ]},
  {rabbitmq_mqtt, [ ]},
  {rabbitmq_amqp1_0, [ ]},
  {rabbitmq_auth_backend_ldap, [ ]},
  {lager, [ ]}
].
As you can see, I changed my rabbitmq_management port to 3009 to suit the firewall of my server. After that, you need to modify /etc/rabbitmq/rabbitmq-env.conf by adding this line:
export RABBITMQ_CONFIG_FILE="/etc/rabbitmq/rabbitmq"
The .config extension will be appended automatically. Finally, just restart the service:
sudo /etc/init.d/rabbitmq-server restart
# rpm -qa | grep rabbit
rabbitmq-server-3.6.10-1.el7.noarch
# rpm -ql rabbitmq-server-3.6.10-1.el7.noarch
Search for a file like /usr/sbin/rabbitmq-server:
# cat /usr/sbin/rabbitmq-server | grep RABBITMQ_ENV
RABBITMQ_ENV=/usr/lib/rabbitmq/bin/rabbitmq-env
Open the file:
# vi /usr/lib/rabbitmq/bin/rabbitmq-env
Change it according to your port:
#DEFAULT_NODE_PORT=5672
DEFAULT_NODE_PORT=2055
After the change, first kill the RabbitMQ process and then restart it.
The documentation explains it well: https://www.rabbitmq.com/management.html
What makes me respond here is that all the responses above, although correct, use the legacy syntax. The new and recommended way of configuring RabbitMQ moves away from the legacy Erlang style. Long story short:
management.tcp.port = 15672
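In that new style, the setting goes into the sysctl-format rabbitmq.conf file rather than the Erlang-term rabbitmq.config; a sketch, using port 9000 to match the asker's allowed range:
# New-style (3.7+) settings live in /etc/rabbitmq/rabbitmq.conf.
echo 'management.tcp.port = 9000' | sudo tee -a /etc/rabbitmq/rabbitmq.conf
sudo systemctl restart rabbitmq-server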
salt-minion tunnel initial connection
I have successfully set up a master and a minion using the Salt tutorial, between two hosted VPS (Debian 7). I am trying to set up a second minion on my laptop (Ubuntu 14.04), but following the same steps fails. I suspect that my ISP is blocking some ports used by Salt. I may be wrong, but that wouldn't be the first time my problems were related to that (I have a wireless connection included in my housing contract and live in some kind of young workers' residence). Is there a way to tell which ports my ISP is blocking? Can I tunnel my salt-minion connection through SSH?
Note: SSH runs fine, if that helps, and I have access to the remote servers (the other master and minion). Anonymised command output below:
$ salt-minion -l debug
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: nha-Ub
[DEBUG ] Configuration file path: /etc/salt/minion
[INFO ] Setting up the Salt Minion "my_machine_name"
[DEBUG ] Created pidfile: /var/run/salt-minion.pid
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Attempting to authenticate with the Salt Master at X.X.X.X
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[ERROR ] Attempt to authenticate with the salt master failed
I'd run a tcpdump on each side to see what packets are being sent and received. An example tcpdump command:
tcpdump -i $INTERFACE -s 15000 -w testing.pcap
where $INTERFACE is eth0 etc. (check with ifconfig). Then you can look at the capture in Wireshark.
Another thing to look at is the firewall; I'd put an allow rule in iptables for each WAN IP, just to make sure that's not causing any issues either.
By default, Salt needs ports 4505 and 4506 open (TCP).
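As for the tunnelling question: the minion only makes outbound connections to the master's ports 4505 and 4506, so both can be forwarded over SSH (a sketch; user@salt-master is a hypothetical login to the master host):
# Forward the Salt publish (4505) and request (4506) ports over SSH...
ssh -N -L 4505:localhost:4505 -L 4506:localhost:4506 user@salt-master &
# ...then point the minion at the tunnel's local end in /etc/salt/minion:
#   master: localhost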