RabbitMQ in Docker - user creation not persisted

I've got a problem where the user user1 is not persisted in the container that I have created using the following Dockerfile. What is the reason for this? Is this a RabbitMQ-specific issue, i.e. do I have to explicitly specify that a user must be persisted?
FROM dockerfile/rabbitmq
# Define mount points.
VOLUME ["/data/log", "/data/mnesia"]
# Define working directory.
WORKDIR /data
RUN (rabbitmq-start &) && \
sleep 10 && \
rabbitmqctl add_user user1 password1 && \
rabbitmqctl set_user_tags user1 administrator && \
rabbitmqctl set_permissions -p / user1 ".*" ".*" ".*" && \
sleep 10 && \
rabbitmqctl stop && \
sleep 10
# Define default command.
CMD ["rabbitmq-start"]
# Expose ports.
EXPOSE 5672
EXPOSE 15672

I know it's an old question, but struggled for hours with this problem today and finally solved it for me:
The issue seems to be due to the default hostname changing at every new container with Docker, and RabbitMQ actually binds the configuration to the host name.
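To make the failure mode concrete, here is a small sketch (with made-up hostnames) of how the mnesia path changes between `docker build` and `docker run`:

```shell
# RabbitMQ stores its database under a directory derived from the node
# name, which defaults to rabbit@<hostname>. The hostnames below are
# made up for illustration.
build_host="f3a9c1d2e4b5"   # random hostname during docker build
run_host="7c8d9e0f1a2b"     # different random hostname at docker run
build_dir="/var/lib/rabbitmq/mnesia/rabbit@${build_host}"
run_dir="/var/lib/rabbitmq/mnesia/rabbit@${run_host}"
echo "$build_dir"
echo "$run_dir"
# The paths differ, so users created at build time live in a database
# directory the running node never reads.
```

Pinning NODENAME makes both paths identical regardless of the container's hostname.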
I set the NODENAME variable in /etc/rabbitmq/rabbitmq-env.conf before setting up the user:
# make the node name static
RUN echo 'NODENAME=rabbit@localhost' > /etc/rabbitmq/rabbitmq-env.conf
and now it works.
Hope it can help.
EDIT:
Here is a working Dockerfile (copying a rabbitmq-env.conf file to the container):
FROM ubuntu:latest
RUN groupadd -r rabbitmq && useradd -r -d /var/lib/rabbitmq -m -g rabbitmq rabbitmq
# add rabbitmq repo
RUN apt-get update && \
apt-get install wget --assume-yes && \
wget https://www.rabbitmq.com/rabbitmq-signing-key-public.asc && \
apt-key add rabbitmq-signing-key-public.asc && \
sed -i -e '1ideb http://www.rabbitmq.com/debian/ testing main\' /etc/apt/sources.list && \
apt-get update && \
apt-get install rabbitmq-server --assume-yes
# Enable plugins
RUN rabbitmq-plugins enable rabbitmq_management && \
rabbitmq-plugins enable rabbitmq_web_stomp && \
rabbitmq-plugins enable rabbitmq_mqtt
# expose ports
# Management
EXPOSE 15672
# Web-STOMP plugin
EXPOSE 15674
# MQTT:
EXPOSE 1883
# configure RabbitMQ
COPY ["rabbitmq-env.conf", "/etc/rabbitmq/rabbitmq-env.conf"]
RUN chmod 755 /etc/rabbitmq/rabbitmq-env.conf
# Create users for the apps
COPY ["rabbitmq-setup.sh", "/tmp/rabbitmq/rabbitmq-setup.sh"]
RUN mkdir /var/run/rabbitmq && \
chmod -R 755 /var/run/rabbitmq && \
chown -R rabbitmq:rabbitmq /var/run/rabbitmq && \
service rabbitmq-server start && \
sh /tmp/rabbitmq/rabbitmq-setup.sh && \
rm /tmp/rabbitmq/rabbitmq-setup.sh && \
service rabbitmq-server stop
# start rabbitmq
USER rabbitmq
CMD ["rabbitmq-server", "start"]
My rabbitmq-env.conf file:
NODENAME=rabbitmq@localhost
My rabbitmq-setup.sh:
rabbitmqctl add_vhost myvhost && \
rabbitmqctl add_user myuser mypasswd && \
rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*" && \
rabbitmqctl set_user_tags myuser administrator

I do something similar and it persists:
RUN service rabbitmq-server start ; \
rabbitmqctl add_vhost /sensu ; \
rabbitmqctl add_user sensu sensu ; \
rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*" ; \
service rabbitmq-server stop
Are you sure the creation process succeeds in the first place? The sleeps and subshells don't make it obvious.
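As a side note, a polling loop is more deterministic than fixed sleeps during the build. A sketch (here `true` stands in for a readiness check such as `rabbitmqctl status`, so the example runs anywhere):

```shell
# Poll until a readiness command succeeds, with a bounded number of tries.
wait_for() {
    cmd=$1
    tries=${2:-30}
    i=0
    until $cmd >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}

wait_for true && echo "broker ready"       # succeeds immediately
wait_for false 2 || echo "gave up waiting"
```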

Because many people are still having this problem (including me), what I did was:
At build time, after configuring all users, copy the RabbitMQ database dir at /var/lib/rabbitmq/mnesia/rabbit@$(hostname) to /root (everything in /root stays persisted).
At runtime, copy the database dir back from /root to /var/lib/rabbitmq/mnesia.
Only disadvantage: changes made to the database in RabbitMQ at runtime are reset on the next container start. I found no other way to do this with docker-compose, however.
Configure.sh (as RUN command in Dockerfile):
echo "NODENAME=rabbit@message-bus" > /etc/rabbitmq/rabbitmq-env.conf
echo "127.0.0.1 message-bus" >> /etc/hosts # prevents an error that the 'message-bus' node doesn't exist (this change to /etc/hosts does not persist)
rabbitmqctl add_user ... # etc.
rabbitmqctl stop
mkdir /root/rabbitmq_database
cp -R /var/lib/rabbitmq/mnesia/rabbit@message-bus/* /root/rabbitmq_database
Runtime.sh (as entrypoint in Dockerfile):
#copy database back from /root
mkdir -p /var/lib/rabbitmq/mnesia/rabbit@message-bus
cp -R /root/rabbitmq_database/* /var/lib/rabbitmq/mnesia/rabbit@message-bus
rabbitmq-server

For what it's worth, something similar is done in this Dockerfile, but I can't get it to persist either:
RUN /usr/sbin/rabbitmq-server -detached && \
sleep 5 && \
rabbitmqctl add_user bunnyuser my_pass1 && \
rabbitmqctl add_user bunny-admin my_pass2 && \
rabbitmqctl set_user_tags bunny-admin administrator && \
rabbitmqctl set_permissions -p / bunnyuser ".*" ".*" ".*"

Related

Unable to receive messages in RabbitMQ and UI shows empty queue

I'm following the tutorial on https://www.rabbitmq.com/tutorials/tutorial-one-python.html
I've set up RabbitMQ using docker. Have defined the exchange, etc, in there.
The management UI shows the exchange created. And when the sender script is executed the first time, the queue is showing up in the UI too.
I run the consumer first & then the publisher. But while the message gets published (assuming it is, since the send script doesn't throw any errors), the consumer doesn't receive any messages. I can see the AMQP connections getting established and closed (in the case of the publisher) correctly. But the queue is empty.
The management UI also shows an empty queue. I tried publishing persistent & non-persistent messages using the UI itself, but even there, while the message gets published, I receive "Queue is empty" while doing Get Messages.
Please help me out!
docker-compose.yml:
...
my_rabbit:
hostname: my_rabbit # persistence
build:
context: .
dockerfile: Dockerfile_rabbit
restart: unless-stopped
container_name: my_rabbit
volumes:
- "./rabbitmq:/var/lib/rabbitmq"
- "./rabbitmq_logs:/var/log/rabbitmq"
command: ["./rabbit_init.sh"]
ports:
- 5670:5672
- 20888:15672 # rabbitmq management plugin
logging:
driver: "json-file"
options:
max-size: "100M"
max-file: "10"
...
Dockerfile:
FROM rabbitmq
RUN apt-get update && apt-get install -y wget python3
# Define environment variables.
ENV RABBITMQ_USER user
ENV RABBITMQ_PASSWORD password
ENV RABBITMQ_VHOST myvhost
ENV RABBITMQ_PID_FILE /var/lib/rabbitmq/mnesia/rabbitmq
ADD rabbit_init.sh /rabbit_init.sh
EXPOSE 15672
# Define default command
RUN chmod +x /rabbit_init.sh
CMD ["/rabbit_init.sh"]
rabbit_init.sh:
#!/bin/sh
# Create Rabbitmq user
( sleep 10 ; \
rabbitmqctl wait --timeout 60 $RABBITMQ_PID_FILE ; \
rabbitmqctl add_user $RABBITMQ_USER $RABBITMQ_PASSWORD 2>/dev/null ; \
rabbitmqctl set_user_tags $RABBITMQ_USER administrator ; \
rabbitmqctl add_vhost $RABBITMQ_VHOST ; \
rabbitmqctl set_permissions -p $RABBITMQ_VHOST $RABBITMQ_USER ".*" ".*" ".*" ; \
rabbitmq-plugins enable rabbitmq_management ; \
wget 'https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.7.15/bin/rabbitmqadmin' ; \
chmod +x rabbitmqadmin ; \
sed -i 's|#!/usr/bin/env python|#!/usr/bin/env python3|' rabbitmqadmin ; \
mv rabbitmqadmin /bin/ ; \
sleep 2; \
rabbitmqadmin declare queue --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST name=xxx durable=true arguments='{"x-overflow":"reject-publish", "x-max-length-bytes":5000000000}' ; \
rabbitmqadmin declare exchange --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST name=xxx type=direct durable=true ; \
rabbitmqadmin declare binding --username=$RABBITMQ_USER --password=$RABBITMQ_PASSWORD --vhost=$RABBITMQ_VHOST source=xxx destination=xxx routing_key=xxx; \
) &
rabbitmq-server "$@"
Have you tried publishing the message without the consumer running? Then the message will just sit in the queue and you can view it. If the consumer is on, it will consume the message straight away. If you are publishing and receiving no errors, the consumer is most likely the problem.

How to run Apache as non-root user?

I'm building an image from the following Dockerfile with the command docker build --rm -f "Dockerfile" -t non_root_image_plz_work .:
Dockerfile
FROM node:14.7.0-buster-slim AS apache_for_selenium
# Create non-root group and user
RUN addgroup --system shared-folder \
&& adduser --system --home /var/cache/shared-folder --group shared-folder --uid 1001
# Make Port accessable
EXPOSE 80/tcp
# Set Node env.Name
ENV NODE_ENV=dev
RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends nano git openssl bash musl curl apache2 apache2-utils systemd && \
systemctl enable apache2 && npm config set registry http://localhost:5000/repository/repo && \
npm i -g pm2 serve && mkdir /usr/share/shared-folder
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
WORKDIR /usr/share/shared-folder
COPY . /usr/share/shared-folder/
RUN npm install && npm cache clean --force && npm cache verify && \
rm /var/www/html/index.html && \
ln -s /usr/share/shared-folder/mochawesome-report /var/www/html/mochawesome-report && \
chown www-data -R /var/www/html/mochawesome-report && chgrp www-data -R /var/www/html/mochawesome-report
VOLUME /usr/share/shared-folder/mochawesome-report
USER 1001
CMD [ "sh", "-c", "service apache2 start ; pm2-runtime process.yml --no-daemon" ]
When I try to run the image using docker run non_root_image_plz_work, I get the following error:
Error after running docker run command:
Starting Apache httpd web server: apache2 failed!
The apache2 configtest failed. ... (warning).
Output of config test was:
mkdir: cannot create directory '/var/run/apache2': Permission denied
chown: changing ownership of '/var/lock/apache2.3FGoa8Y71E': Operation not permitted
It seems to be a permissions issue, as if I'm not properly running the container as a non-root user. Any suggestions on how I can get the container to run properly as a non-root user?
Note: I used a dummy registry in the Dockerfile for I don't want to show the actual registry, thanks
In Docker, all folders are owned by root by default. Without knowing your directory structure, my guess is that your user 1001 (or a setup program running with 1001's permissions) tries to access directories that are (probably) owned by root.
Either you can try:
Change the permissions of the folders.
This can be used if you know which folders are accessed and want to prevent further permission issues.
chmod -R 777 /path/to/folder
Give your user proper permissions.
Here is a very quick walkthrough. Please comment if it didn't solve your problem and I'll try to update this with a more specific answer.
A small example (taken from here).
You can set up your non-root user foo with passwordless sudo:
RUN groupadd -g 1001 foo && \
    useradd -u 1001 -g foo -G sudo -m -s /bin/bash foo && \
    sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
    sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
    sed -i /etc/sudoers -re 's/^#includedir.*/## Removed the include directive ##/g' && \
    echo "foo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers; su - foo -c id
Hint: You will probably need to install sudo
apt-get install sudo
Now, try running the entrypoint (or your commad) with sudo.
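To see what those sed edits actually do, here they are applied to a throwaway copy of a sudoers-style line (so the sketch never touches the real /etc/sudoers):

```shell
# Apply the %sudo rewrite from the answer above to a temporary file.
f=$(mktemp)
printf '%%sudo\tALL=(ALL:ALL) ALL\n' > "$f"
sed -i -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' "$f"
line=$(cat "$f")
rm -f "$f"
echo "$line"   # the group rule now grants passwordless sudo
```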
EDIT:
I've updated the answer to match your Dockerfile. Have a look at it. The user nonroot is assigned uid 1001 and added to /etc/sudoers. Also, your command is now run with sudo, which should prevent the permission issues.
FROM node:14.7.0-buster-slim AS apache_for_selenium
# Create non-root group and user
RUN addgroup --system shared-folder \
&& adduser --system --home /var/cache/shared-folder --ingroup shared-folder --uid 1001 nonroot
# Make Port accessable
EXPOSE 80/tcp
# Set Node env.Name
ENV NODE_ENV=dev
RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends \
sudo nano git openssl bash musl curl apache2 apache2-utils systemd \
&& systemctl enable apache2
#\
# && #npm config set registry http://localhost:5000/repository/repo && \
#npm i -g pm2 serve && mkdir /usr/share/shared-folder
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
WORKDIR /usr/share/shared-folder
COPY . /usr/share/shared-folder/
RUN npm install && npm cache clean --force && npm cache verify && \
rm /var/www/html/index.html && \
ln -s /usr/share/shared-folder/mochawesome-report /var/www/html/mochawesome-report && \
chown www-data -R /var/www/html/mochawesome-report && chgrp www-data -R /var/www/html/mochawesome-report
VOLUME /usr/share/shared-folder/mochawesome-report
RUN \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## Removed the include directive ##/g' && \
echo "nonroot ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER nonroot
CMD [ "sudo", "sh", "-c", "service apache2 start ; pm2-runtime process.yml --no-daemon" ]
The problem here: Apache is special. Its init script can only be started by root, so you cannot start Apache this way as another user. That is why you got "permission denied". Looking at your Dockerfile, the user you created is a normal user.
Try making a script like the one below, named apache-start:
#!/bin/sh
set -e
# Apache gets grumpy about PID files pre-existing
rm -f /usr/local/apache2/logs/httpd.pid
exec httpd -DFOREGROUND "$@"
and your Dockerfile should look like this:
FROM node:14.7.0-buster-slim AS apache_for_selenium
# Create non-root group and user
RUN addgroup --system shared-folder \
&& adduser --system --home /var/cache/shared-folder --group shared-folder --uid 1001
# Make Port accessable
EXPOSE 80/tcp
# Set Node env.Name
ENV NODE_ENV=dev
RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends nano git openssl bash musl curl apache2 apache2-utils systemd && \
systemctl enable apache2 && npm config set registry http://localhost:5000/repository/repo && \
npm i -g pm2 serve && mkdir /usr/share/shared-folder
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
WORKDIR /usr/share/shared-folder
COPY . /usr/share/shared-folder/
RUN npm install && npm cache clean --force && npm cache verify && \
rm /var/www/html/index.html && \
ln -s /usr/share/shared-folder/mochawesome-report /var/www/html/mochawesome-report && \
chown www-data -R /var/www/html/mochawesome-report && chgrp www-data -R /var/www/html/mochawesome-report
VOLUME /usr/share/shared-folder/mochawesome-report
COPY apache-start /usr/local/bin/
CMD ["apache-start"]
USER 1001
Another option is to switch to podman, an alternative to Docker. With podman you can run containers (the same images you use in Docker) as normal users, which has a lot of benefits, especially from a security point of view.

Docker Container from php:5.6-apache as root

This would be related to Docker php:5.6-Apache Development Environment missing permissions on volume mount
I have tried pretty much everything to make the mounted volume readable by www-data. My current solution is to move the folders needed by the application to /var via scripts and give them the proper permissions to be writable by www-data, but that is becoming hard to maintain.
Given that it's a development environment, I don't mind it being a security hole, so I would like to run Apache as root, but I get:
Error: Apache has not been designed to serve pages while running as
root. There are known race conditions that will allow any local user
to read any file on the system. If you still desire to serve pages as
root then add -DBIG_SECURITY_HOLE to the CFLAGS line in your
src/Configuration file and rebuild the server. It is strongly
suggested that you instead modify the User directive in your
httpd.conf file to list a non-root user.
Is there any easy way I can accomplish this using the docker image php:5.6-apache?
This is my docker-compose.yml
version: '2'
services:
api:
container_name: api
privileged: true
build:
context: .
dockerfile: apigility/Dockerfile
ports:
- "2020:80"
volumes:
- /ft/code/api:/var/www:rw
And this is my Dockerfile:
FROM php:5.6-apache
USER root
RUN apt-get update \
&& apt-get install -y sudo openjdk-7-jdk \
&& echo "www-data ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN apt-get install -y git zlib1g-dev libmcrypt-dev nano vim --no-install-recommends \
&& apt-get clean \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-install mcrypt zip \
&& curl -sS https://getcomposer.org/installer \
| php -- --install-dir=/usr/local/bin --filename=composer \
&& a2enmod rewrite \
&& sed -i 's!/var/www/html!/var/www/public!g' /etc/apache2/apache2.conf \
&& echo "AllowEncodedSlashes On" >> /etc/apache2/apache2.conf \
&& cp /usr/src/php/php.ini-production /usr/local/etc/php/php.ini \
&& printf '[Date]\ndate.timezone=UTC' > /usr/local/etc/php/conf.d/timezone.ini
WORKDIR /var/www
Why not do exactly what it says in the question you referred to?
RUN usermod -u 1000 www-data
RUN groupmod -g 1000 www-data
This is not a hack. It's a proper solution to the problem you have in the development environment.
So, I managed to make the mounted data available to www-data by using part of the answer in the related post, but another step is required for it to work.
After you run docker-machine start default you need to ssh into it and run the following:
sudo mkdir --parents /code [where /code is the shared folder in virtualbox]
sudo mount -t vboxsf -o uid=999,gid=999 code /code [this makes sure the uid and gid are 999 for the next part to work]
Then in your Dockerfile add
RUN usermod -u 999 www-data \
&& groupmod -g 999 www-data
After it's mounted, /code will be owned by www-data, and the problem is solved!
Another and better solution.
Add this in your dockerfile
RUN cd ~ \
&& apt-get -y install dpkg-dev debhelper libaprutil1-dev libapr1-dev libpcre3-dev liblua5.1-0-dev autotools-dev \
&& apt-get source apache2.2-common \
&& cd apache2-2.4.10 \
&& export DEB_CFLAGS_SET="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -DBIG_SECURITY_HOLE" \
&& dpkg-buildpackage -b \
&& cd .. \
&& dpkg -i apache2-bin_2.4.10-10+deb8u7_amd64.deb \
&& dpkg -i apache2.2-common_2.4.10-10+deb8u7_amd64.deb
After that, you should be able to run Apache as root.
PS: apache2-2.4.10, apache2-bin_2.4.10-10+deb8u7_amd64.deb and apache2.2-common_2.4.10-10+deb8u7_amd64.deb may change according to your source.

Docker - Cannot start Redis Service

I'm installing Redis, setting up init.d, and placing redis.conf beside init.d.
Then I use CMD service init.d start to start Redis.
However, redis-server does not start, and there is no indication in the log file that the service failed to start.
Installing Redis and placing redis.conf in the /etc/init.d folder
Commands:
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r redis && useradd -r -g redis redis
RUN apt-get update > /dev/null \
&& apt-get install -y curl > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1
# grab gosu for easy step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" > /dev/null 2>&1 \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" > /dev/null 2>&1 \
&& gpg --verify /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& rm /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& chmod +x /usr/local/bin/gosu > /dev/null 2>&1
ENV REDIS_VERSION 3.0.1
ENV REDIS_DOWNLOAD_URL http://download.redis.io/releases/redis-3.0.1.tar.gz
ENV REDIS_DOWNLOAD_SHA1 fe1d06599042bfe6a0e738542f302ce9533dde88
# for redis-sentinel see: http://redis.io/topics/sentinel
RUN buildDeps='gcc libc6-dev make'; \
set -x \
&& apt-get update > /dev/null && apt-get install -y $buildDeps --no-install-recommends > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1 \
&& mkdir -p /usr/src/redis > /dev/null 2>&1 \
&& curl -sSL "$REDIS_DOWNLOAD_URL" -o redis.tar.gz > /dev/null 2>&1 \
&& echo "$REDIS_DOWNLOAD_SHA1 *redis.tar.gz" | sha1sum -c - > /dev/null 2>&1 \
&& tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 > /dev/null 2>&1 \
&& rm redis.tar.gz > /dev/null 2>&1 \
&& make -C /usr/src/redis > /dev/null 2>&1 \
&& make -C /usr/src/redis install > /dev/null 2>&1 \
&& cp /usr/src/redis/utils/redis_init_script /etc/init.d/redis_6379 \
&& rm -r /usr/src/redis > /dev/null 2>&1 \
&& apt-get purge -y --auto-remove $buildDeps > /dev/null 2>&1
RUN mkdir /data && chown redis:redis /data
VOLUME ["/data"]
WORKDIR /data
CMD service init.d start
Command:
RUN touch /var/redis/6379/redis-6379-log.txt
RUN chmod 777 /var/redis/6379/redis-6379-log.txt
ENV REDISPORT 6379
ADD $app$/redis-config.txt /etc/redis/$REDISPORT.conf
CMD service /etc/init.d/redis_6379 start
If I use shellinabox to access the container, and if I type in
/etc/init.d/redis_6379 start
Redis server will start, but it won't start in the dockerfile. Why is this?
It seems that you cannot rely on background processes here; instead you need something called supervisord.
To Install:
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
ADD $app$/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD /usr/bin/supervisord
Configuration File:
[supervisord]
nodaemon=true
[program:shellinabox]
command=/bin/bash -c "cd /tmp && exec /opt/shellinabox/shellinaboxd --no-beep --service ${service}"
[program:redis-server]
command=/bin/bash -c "redis-server /etc/redis/${REDISPORT}.conf"
What happens is that after the command is executed, it will start both programs, shellinabox and redis-server.
Thanks everyone for the help!
In general, you can't use an init script inside a Docker container. These scripts are typically designed to start a service "in the background", which means that even if the service starts, the script ultimately exits.
If this is the first process in your Docker container, Docker will see it exit, which will cause it to clean up the container. You will need to arrange for redis to run in the foreground in your container, or you will need to arrange to run some sort of process supervisor in your container.
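The effect can be simulated in plain shell: a script that backgrounds its service and returns behaves like an init script, leaving the service orphaned while the script (the container's PID 1) exits. Here `sleep` stands in for redis-server:

```shell
# Mimic an init-style script: start the "service" in the background and
# return immediately.
start_service() {
    sleep 5 &        # the "service" detaches into the background
    svc_pid=$!
}

start_service
# Control returned to us while the service is still alive; in a container
# this is the moment PID 1 would exit and the container would be torn down.
if kill -0 "$svc_pid" 2>/dev/null; then
    status="service alive, script already done"
fi
kill "$svc_pid" 2>/dev/null   # clean up the background sleep
echo "$status"
```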
Consider looking at the official redis container to see one way of setting things up. You can see the Dockerfiles in the GitHub repository.

docker rabbitmq hostname issue

I am building an image using a Dockerfile, and I would like to add users to RabbitMQ right after installation. The problem is that during the build, the hostname of the Docker container is different from the one used when I run the resulting image. RabbitMQ loses that user; because of the changed hostname, it uses another database.
I cannot change the /etc/hosts and /etc/hostname files from inside a container, and it looks like RabbitMQ is not picking up my changes to the RABBITMQ_NODENAME and HOSTNAME variables.
The only thing that I found working is running this before starting RabbitMQ broker:
echo "NODENAME=rabbit@localhost" >> /etc/rabbitmq/rabbitmq.conf.d/ewos.conf
But then I would have to run the docker image with a changed hostname all the time:
docker run -h="localhost" image
Any ideas on what can be done? Maybe the solution is to add users to RabbitMQ not on build but on image run?
Here is an example of how to configure it properly from the Dockerfile:
ENV HOSTNAME localhost
RUN /etc/init.d/rabbitmq-server start ; rabbitmqctl add_vhost /test; /etc/init.d/rabbitmq-server stop
This will remember your config.
Yes, I would suggest adding users when the container runs for the first time.
Instead of starting RabbitMQ directly, you can run a wrapper script that will take care of all the setup, and then start RabbitMQ. If the last step of the wrapper script is a process start, remember that you can use exec so that the new process replaces the script itself.
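A minimal wrapper-script sketch of that idea (the marker path is hypothetical, and the server line is commented out so the sketch is self-contained; real setup would call rabbitmqctl here):

```shell
#!/bin/sh
# First-run setup, then hand off to the real server with exec so it
# becomes PID 1 and receives signals directly.
MARKER=/tmp/.rabbitmq_setup_done    # hypothetical marker file

if [ ! -f "$MARKER" ]; then
    # one-time setup would go here, e.g.:
    #   rabbitmqctl add_user myuser mypasswd
    touch "$MARKER"
    echo "first-run setup performed"
else
    echo "setup already done"
fi

# exec rabbitmq-server    # would replace this script with the server
```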
This is how I did it.
Dockerfile
FROM debian:jessie
MAINTAINER Francesco Casula <fra.casula@gmail.com>
VOLUME ["/var/www"]
WORKDIR /var/www
ENV HOSTNAME my-docker
ENV RABBITMQ_NODENAME rabbit@my-docker
COPY scripts /root/scripts
RUN /bin/bash /root/scripts/os-setup.bash && \
/bin/bash /root/scripts/install-rabbitmq.bash
CMD /etc/init.d/rabbitmq-server start && \
/bin/bash
os-setup.bash
#!/bin/bash
echo "127.0.0.1 localhost" > /etc/hosts
echo "127.0.1.1 my-docker" >> /etc/hosts
echo "my-docker" > /etc/hostname
install-rabbitmq.bash
#!/bin/bash
echo "NODENAME=rabbit@my-docker" > /etc/rabbitmq/rabbitmq-env.conf
echo 'deb http://www.rabbitmq.com/debian/ testing main' | tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | apt-key add -
apt-get update
cd ~
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.5/rabbitmq-server_3.6.5-1_all.deb
dpkg -i rabbitmq-server_3.6.5-1_all.deb
apt-get install -f -y
/etc/init.d/rabbitmq-server start
sleep 3
rabbitmq-plugins enable amqp_client mochiweb rabbitmq_management rabbitmq_management_agent \
rabbitmq_management_visualiser rabbitmq_web_dispatch webmachine
rabbitmqctl delete_user guest
rabbitmqctl add_user bunny password
rabbitmqctl set_user_tags bunny administrator
rabbitmqctl delete_vhost /
rabbitmqctl add_vhost symfony_prod
rabbitmqctl set_permissions -p symfony_prod bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_dev
rabbitmqctl set_permissions -p symfony_dev bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_test
rabbitmqctl set_permissions -p symfony_test bunny ".*" ".*" ".*"
/etc/init.d/rabbitmq-server restart
IS_RABBIT_INSTALLED=`rabbitmqctl status | grep RabbitMQ | grep "3\.6\.5" | wc -l`
if [ "$IS_RABBIT_INSTALLED" = "0" ]; then
exit 1
fi
IS_RABBIT_CONFIGURED=`rabbitmqctl list_users | grep bunny | grep "administrator" | wc -l`
if [ "$IS_RABBIT_CONFIGURED" = "0" ]; then
exit 1
fi
Don't forget to run the container by specifying the right host with the -h flag:
docker run -h my-docker -it --name=my-docker -v $(pwd)/htdocs:/var/www my-docker
The only thing that helped me was to change the default value of the MNESIA_BASE property in rabbitmq-env.conf to MNESIA_BASE=/data, and to add the command RUN mkdir /data in the Dockerfile before starting the server and adding users.
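A quick sketch of why that works: the database directory is MNESIA_BASE plus the node name, so fixing both parts yields one predictable path (values below come from this answer and the earlier NODENAME trick):

```shell
# RabbitMQ's database directory is <MNESIA_BASE>/<node name>.
mnesia_base="/data"              # from MNESIA_BASE=/data
nodename="rabbit@localhost"      # from a pinned NODENAME
db_dir="${mnesia_base}/${nodename}"
echo "$db_dir"                   # same directory on every start
```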