rabbitmq-plugins failed to run as rabbitmq user

I'm using RabbitMQ 3.7.5.
I created a Docker image which runs RabbitMQ as the rabbitmq user.
But rabbitmq-plugins fails to run:
bash-4.2$ rabbitmq-plugins list
Usage:
rabbitmq-plugins [-n <node>] [-l] [-q] <command> [<command options>]
.......
<timeout> - operation timeout in seconds. Default is "infinity".
Only root or rabbitmq can run rabbitmq-plugins
bash-4.2$ id
uid=10000(rabbitmq) gid=10000(rabbitmq) groups=10000(rabbitmq),1(bin)
bash-4.2$ rabbitmqctl cluster_status
Cluster status of node rabbit@zt-crmq-0 ...
[{nodes,[{disc,['rabbit@zt-crmq-0','rabbit@zt-crmq-1','rabbit@zt-crmq-2']}]},
 {running_nodes,['rabbit@zt-crmq-2','rabbit@zt-crmq-1','rabbit@zt-crmq-0']},
 {cluster_name,<<"rabbit@zt-crmq-0.zt-crmq.default.svc.cluster.local">>},
 {partitions,[]},
 {alarms,[{'rabbit@zt-crmq-2',[]},
          {'rabbit@zt-crmq-1',[]},
          {'rabbit@zt-crmq-0',[]}]}]
For /usr/sbin/rabbitmq-plugins:
main() {
    ensure_we_are_in_a_readable_dir
    if current_user_is_rabbitmq && calling_rabbitmq_server
    then
        exec_rabbitmq_server "$@"
    elif current_user_is_rabbitmq && ! calling_rabbitmq_plugins
    then
        exec_script_as_rabbitmq "$@"
    elif current_user_is_root && calling_rabbitmq_plugins
    then
        exec_script_as_rabbitmq "$@"
    elif current_user_is_root
    then
        exec_script_as_root "$@"
    else
        run_script_help_and_fail
    fi
}
Why does it decide not to run rabbitmq-plugins as the rabbitmq user?
Any help will be appreciated.
B.R.,
Tao

The if-statements in main() indeed don't let us run rabbitmq-plugins as the rabbitmq user. Not sure if this is expected or just a bug, but you can
circumvent the wrapper script altogether with:
$ /usr/lib/rabbitmq/bin/rabbitmq-plugins list | grep hash
[E*] rabbitmq_consistent_hash_exchange 3.8.4
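If you want to confirm which wrapper your shell is actually picking up before bypassing it, something like this should help (paths may vary between packages):
command -v rabbitmq-plugins             # typically /usr/sbin/rabbitmq-plugins, the packaging wrapper
head -n 40 /usr/sbin/rabbitmq-plugins   # inspect the wrapper's main() checks
ls /usr/lib/rabbitmq/bin/               # the real scripts the wrapper delegates to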

Try adding the rabbitmq user to the docker group. If the docker group does not exist, create it first and then add rabbitmq to it, as sketched below.
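A rough sketch of those steps, assuming you have root on the host (the group and user names are the ones from this answer):
getent group docker || sudo groupadd docker   # create the group only if it does not exist yet
sudo usermod -aG docker rabbitmq              # add the rabbitmq user to the docker group
# log out and back in (or restart the container) so the new group membership takes effect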


Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "creation" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process: Skipping, instances already created. However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
what I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there, and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python executable by default, so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3.
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
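Note that an alias in ~/.bashrc only applies to interactive shells, so it may not help Ansible's non-interactive connections; a symlink (or the python-is-python3 package on Ubuntu) is a common alternative. This is my assumption, not part of the original answer:
# run inside the image instead of, or in addition to, the alias
ln -sf /usr/bin/python3 /usr/bin/python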
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-time execution, since the context will need to be switched every time Docker Desktop is started. Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
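Either way, you can verify which context is active afterwards with:
docker context show   # should print "default"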
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

Unable to receive messages in RabbitMQ and UI shows empty queue

I'm following the tutorial on https://www.rabbitmq.com/tutorials/tutorial-one-python.html
I've set up RabbitMQ using Docker and have defined the exchange, etc., in there.
The management UI shows the exchange created. And when the sender script is executed the first time, the queue is showing up in the UI too.
I run the consumer first & then the publisher. But while the message gets published (assuming it is, since the send script doesn't throw any errors), the consumer doesn't receive any messages. I can see the AMQP connections getting established and closed (in the case of the publisher) correctly. But the queue is empty.
The management UI also shows an empty queue. I tried publishing persistent & non-persistent messages using the UI itself, but even there, while the message gets published, I receive "Queue is empty" while doing Get Messages.
Please help me out!
docker-compose.yml:
...
my_rabbit:
  hostname: my_rabbit # persistence
  build:
    context: .
    dockerfile: Dockerfile_rabbit
  restart: unless-stopped
  container_name: my_rabbit
  volumes:
    - "./rabbitmq:/var/lib/rabbitmq"
    - "./rabbitmq_logs:/var/log/rabbitmq"
  command: ["./rabbit_init.sh"]
  ports:
    - 5670:5672
    - 20888:15672 # rabbitmq management plugin
  logging:
    driver: "json-file"
    options:
      max-size: "100M"
      max-file: "10"
...
Dockerfile:
FROM rabbitmq
RUN apt-get update && apt-get install -y wget python3
# Define environment variables.
ENV RABBITMQ_USER user
ENV RABBITMQ_PASSWORD password
ENV RABBITMQ_VHOST myvhost
ENV RABBITMQ_PID_FILE /var/lib/rabbitmq/mnesia/rabbitmq
ADD rabbit_init.sh /rabbit_init.sh
EXPOSE 15672
# Define default command
RUN chmod +x /rabbit_init.sh
CMD ["/rabbit_init.sh"]
rabbit_init.sh:
#!/bin/sh
# Create Rabbitmq user
( sleep 10 ; \
rabbitmqctl wait --timeout 60 $RABBITMQ_PID_FILE ; \
rabbitmqctl add_user $RABBITMQ_USER $RABBITMQ_PASSWORD 2>/dev/null ; \
rabbitmqctl set_user_tags $RABBITMQ_USER administrator ; \
rabbitmqctl add_vhost $RABBITMQ_VHOST ; \
rabbitmqctl set_permissions -p $RABBITMQ_VHOST $RABBITMQ_USER ".*" ".*" ".*" ; \
rabbitmq-plugins enable rabbitmq_management ; \
wget 'https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.7.15/bin/rabbitmqadmin' ; \
chmod +x rabbitmqadmin ; \
sed -i 's|#!/usr/bin/env python|#!/usr/bin/env python3|' rabbitmqadmin ; \
mv rabbitmqadmin /bin/ ; \
sleep 2; \
rabbitmqadmin declare queue --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST name=xxx durable=true arguments='{"x-overflow":"reject-publish", "x-max-length-bytes":5000000000}' ; \
rabbitmqadmin declare exchange --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST name=xxx type=direct durable=true ; \
rabbitmqadmin declare binding --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST source=xxx destination=xxx routing_key=xxx; \
) &
rabbitmq-server $@
Have you tried publishing the message without the consumer enabled? Then the message will just be stored in the queue and you can view it. If the consumer is on, it will consume the message straight away. If you are publishing and receiving no errors, the consumer is most likely the problem.
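You can also check the plumbing from the broker side with rabbitmqadmin, run inside the container where the RABBITMQ_* variables from the Dockerfile are set (a sketch, assuming the queue, exchange and routing key are all literally named xxx as in the init script above; option names can vary slightly between rabbitmqadmin versions):
# confirm the exchange is actually bound to the queue
rabbitmqadmin --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST list bindings
# publish a test message straight to the exchange with the routing key
rabbitmqadmin --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST publish exchange=xxx routing_key=xxx payload="test"
# fetch it back without any consumer running
rabbitmqadmin --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST get queue=xxx count=1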

How to remove non-running nodes from RabbitMQ cluster

I need to delete all nodes that are not running in a RabbitMQ cluster via the command line.
I have tried rabbitmqctl forget_cluster_node, but I'm not sure how to get the list of non-running nodes.
I see all the nodes and running_nodes in the output of rabbitmqctl cluster_status. Can someone help me parse it and let me know if there is any other solution to delete the nodes from a cluster easily?
Figured it out by myself
# Remove nodes that are not running from the cluster
nodes=($(egrep -o '[a-z0-9@-]+' <<< $(sudo rabbitmqctl cluster_status --formatter json | jq .nodes.disc)))
running_nodes=($(egrep -o '[a-z0-9@-]+' <<< $(sudo rabbitmqctl cluster_status --formatter json | jq .running_nodes)))
for node in ${nodes[@]}
do
    match_count=0
    for rnode in ${running_nodes[@]}
    do
        if [ "${node}" == "${rnode}" ]
        then
            match_count=1
            break
        fi
    done
    if [ $match_count == 1 ]
    then
        continue
    else
        sudo rabbitmqctl forget_cluster_node $node
    fi
done
In my case the proposed script didn't work, possibly because the cluster_status output differs in the RabbitMQ version I'm using (3.9.13). Anyhow, this is what I ended up using:
#!/bin/bash
offline_nodes=$(rabbitmqctl --quiet --formatter json cluster_status \
| jq -r '.disk_nodes-.running_nodes | .[]')
for node in ${offline_nodes[@]}; do
    rabbitmqctl forget_cluster_node "$node"
done
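If you want a dry run first, you can print the nodes that would be forgotten instead of removing them (my addition, not part of the answer):
rabbitmqctl --quiet --formatter json cluster_status \
  | jq -r '.disk_nodes-.running_nodes | .[]' \
  | xargs -r -n1 echo rabbitmqctl forget_cluster_node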

How do I start a RabbitMQ node?

I keep getting this error every time I try to do something with RabbitMQ:
attempted to contact: [fdbvhost@FORTE]
fdbvhost@FORTE:
* connected to epmd (port 4369) on FORTE
* epmd reports: node 'fdbvhost' not running at all
no other nodes on FORTE
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-54@FORTE'
- home dir: C:\Users\Jesus
- cookie hash: iuRlQy0F81aBpoY9aQqAzw==
This is the output I get when I run rabbitmqctl -n fdbvhost status or rabbitmqctl -n fdbvhost list_vhosts.
I've tried rabbitmqctl -n fdbvhost start which gives me the following output:
Error: could not recognise command
Usage:
rabbitmqctl [-n <node>] [-t <timeout>] [-q] <command> [<command options>]
...
So this doesn't start it. I cannot find anything about starting a node in the documentation. How do I actually start my node/vhost?
Try running the following command from RabbitMQ's installation sbin directory:
rabbitmq-server start -detached
This should start the broker node if it was stopped for some reason.
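Since the output in the question is from a Windows machine (note the C:\Users home directory), the equivalent commands there are the .bat scripts in the sbin folder, for example (a sketch, assuming a standard installation):
rem start the broker directly, detached from the console
rabbitmq-server.bat -detached
rem or, if RabbitMQ is installed as a Windows service
rabbitmq-service.bat start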
Check if you have RabbitMQ installed as a service in the /etc/init.d/ folder
sudo su # might be needed
cd /etc/init.d/
ls . | grep rabbit
The output should be rabbitmq-server
If that's the case, then, try restarting your service with:
sudo service rabbitmq-server restart
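On systems using systemd rather than SysV init scripts, the equivalent would be (my addition, assuming the standard rabbitmq-server unit name):
sudo systemctl status rabbitmq-server    # check whether the broker is running
sudo systemctl restart rabbitmq-server   # (re)start it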
For macOS users
To Start
brew services start rabbitmq
To Restart
brew services restart rabbitmq
To Stop
brew services stop rabbitmq
To Know the status of the server
brew services info rabbitmq

docker rabbitmq hostname issue

I am building an image using a Dockerfile, and I would like to add users to RabbitMQ right after installation. The problem is that during the build the hostname of the Docker container is different from when I run the resulting image. RabbitMQ loses that user, because with the changed hostname it uses another database.
I cannot change the /etc/hosts and /etc/hostname files from inside a container, and it looks like RabbitMQ is not picking up my changes to the RABBITMQ_NODENAME and HOSTNAME variables.
The only thing that I found working is running this before starting RabbitMQ broker:
echo "NODENAME=rabbit#localhost" >> /etc/rabbitmq/rabbitmq.conf.d/ewos.conf
But then I will have to run the Docker image with a changed hostname all the time.
docker run -h="localhost" image
Any ideas on what can be done? Maybe the solution is to add users to RabbitMQ not at build time but when the image runs?
Here is an example of how to configure it properly from the Dockerfile:
ENV HOSTNAME localhost
RUN /etc/init.d/rabbitmq-server start ; rabbitmqctl add_vhost /test; /etc/init.d/rabbitmq-server stop
This way it remembers your config.
Yes, I would suggest adding users when the container runs for the first time.
Instead of starting RabbitMQ directly, you can run a wrapper script that will take care of all the setup, and then start RabbitMQ. If the last step of the wrapper script is a process start, remember that you can use exec so that the new process replaces the script itself.
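A minimal sketch of such a wrapper, assuming the user, password and vhost names below are placeholders you would replace, and that RABBITMQ_PID_FILE points at the broker's pid file:
#!/bin/bash
# entrypoint.sh (sketch): do the one-time setup in the background, then exec the server
(
  # wait until the broker is up before configuring it
  rabbitmqctl wait --timeout 60 "$RABBITMQ_PID_FILE"
  rabbitmqctl add_user myuser mypassword 2>/dev/null || true
  rabbitmqctl set_user_tags myuser administrator
  rabbitmqctl add_vhost myvhost
  rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
) &
# exec replaces this script, so rabbitmq-server becomes PID 1 and receives signals
exec rabbitmq-server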
This is how I did it.
Dockerfile
FROM debian:jessie
MAINTAINER Francesco Casula <fra.casula@gmail.com>
VOLUME ["/var/www"]
WORKDIR /var/www
ENV HOSTNAME my-docker
ENV RABBITMQ_NODENAME rabbit@my-docker
COPY scripts /root/scripts
RUN /bin/bash /root/scripts/os-setup.bash && \
/bin/bash /root/scripts/install-rabbitmq.bash
CMD /etc/init.d/rabbitmq-server start && \
/bin/bash
os-setup.bash
#!/bin/bash
echo "127.0.0.1 localhost" > /etc/hosts
echo "127.0.1.1 my-docker" >> /etc/hosts
echo "my-docker" > /etc/hostname
install-rabbitmq.bash
#!/bin/bash
echo "NODENAME=rabbit#my-docker" > /etc/rabbitmq/rabbitmq-env.conf
echo 'deb http://www.rabbitmq.com/debian/ testing main' | tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | apt-key add -
apt-get update
cd ~
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.5/rabbitmq-server_3.6.5-1_all.deb
dpkg -i rabbitmq-server_3.6.5-1_all.deb
apt-get install -f -y
/etc/init.d/rabbitmq-server start
sleep 3
rabbitmq-plugins enable amqp_client mochiweb rabbitmq_management rabbitmq_management_agent \
rabbitmq_management_visualiser rabbitmq_web_dispatch webmachine
rabbitmqctl delete_user guest
rabbitmqctl add_user bunny password
rabbitmqctl set_user_tags bunny administrator
rabbitmqctl delete_vhost /
rabbitmqctl add_vhost symfony_prod
rabbitmqctl set_permissions -p symfony_prod bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_dev
rabbitmqctl set_permissions -p symfony_dev bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_test
rabbitmqctl set_permissions -p symfony_test bunny ".*" ".*" ".*"
/etc/init.d/rabbitmq-server restart
IS_RABBIT_INSTALLED=`rabbitmqctl status | grep RabbitMQ | grep "3\.6\.5" | wc -l`
if [ "$IS_RABBIT_INSTALLED" = "0" ]; then
exit 1
fi
IS_RABBIT_CONFIGURED=`rabbitmqctl list_users | grep bunny | grep "administrator" | wc -l`
if [ "$IS_RABBIT_CONFIGURED" = "0" ]; then
exit 1
fi
Don't forget to run the container by specifying the right host with the -h flag:
docker run -h my-docker -it --name=my-docker -v $(pwd)/htdocs:/var/www my-docker
The only thing that helped me was to change the default value of the MNESIA_BASE property in rabbitmq-env.conf to MNESIA_BASE=/data, and to add RUN mkdir /data in the Dockerfile before starting the server and adding users.
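Expressed as shell commands run during the image build (my sketch of that change; paths and ownership may need adjusting for your base image):
mkdir -p /data && chown -R rabbitmq:rabbitmq /data   # dedicated Mnesia directory owned by the broker user
echo 'MNESIA_BASE=/data' >> /etc/rabbitmq/rabbitmq-env.conf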