How to correctly set up RabbitMQ on OpenShift - rabbitmq

I have created a new app on OpenShift using this image: https://hub.docker.com/r/luiscoms/openshift-rabbitmq/
It runs successfully and I can use it. I have added a persistent volume to it.
However, every time a pod is restarted, I lose all my data. This is because RabbitMQ uses the hostname to create the database directory.
For example:
node : rabbit@openshift-rabbitmq-11-9b6p7
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : BsUC9W6z5M26164xPxUTkA==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbit@openshift-rabbitmq-11-9b6p7
How can I set RabbitMQ to always use the same database dir?

You should be able to set the environment variable RABBITMQ_MNESIA_DIR to override the default location. This can be done via the OpenShift console by adding an entry to the environment in the deployment config, or via the oc tool, for example:
oc set env dc/my-rabbit RABBITMQ_MNESIA_DIR=/myDir
You would then need to mount the persistent volume inside the pod at that path. Since you have said it is already created, you just need to update the mount, for example:
oc volume dc/my-rabbit --add --overwrite --name=my-pv-name --mount-path=/myDir
You will need to make sure you have the correct read/write access on the provided mount path.
EDIT: Some additional workarounds based on issues in comments
The issues caused by the dynamic hostname could be solved in a number of ways:
1. (Preferred IMO) Move the deployment to a StatefulSet. A StatefulSet provides stability in the naming, and hence the network identity, of the Pod, and must be fronted by a headless service. This feature is out of beta as of Kubernetes 1.9 and has been tech preview in OpenShift since version 3.5. A minimal sketch follows after this list.
2. Set the hostname for the Pod if StatefulSets are not an option. This can be done with an environment variable, e.g. oc set env dc/example HOSTNAME=example, to make the hostname static, and by setting RABBITMQ_NODENAME to do likewise.
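For reference, here is a minimal sketch of the StatefulSet approach from option 1, using the current apps/v1 API and a single replica; all names, the image tag and the storage size are placeholders rather than values taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  clusterIP: None            # headless service fronting the StatefulSet
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq      # gives the pod a stable name like rabbitmq-0
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.7
        ports:
        - containerPort: 5672
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq   # covers the mnesia dir and the Erlang cookie
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Because the pod name is stable (rabbitmq-0), the node name and therefore the mnesia directory no longer change between restarts.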

I was able to get it to work by setting the HOSTNAME environment variable. OSE normally sets that value to the pod name, so it changes every time the pod restarts. By setting it explicitly, the pod's hostname doesn't change when the pod restarts.
Combined with a persistent volume, the queues, messages, users and, I assume, whatever other configuration there is are persisted through pod restarts.
This was done on an OSE 3.2 server. I just added an environment variable to the deployment config. You can do it through the UI or with the oc CLI:
oc set env dc/my-rabbit HOSTNAME=some-static-name
This will probably be an issue if you run multiple pods for the service, but in that case you would need to set up proper RabbitMQ clustering, which is a whole different beast.

The easiest and safest way to run RabbitMQ in production on Kubernetes, including OpenShift, is the RabbitMQ Cluster Operator.
See this video on how to deploy RabbitMQ on OpenShift.
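For context, a minimal sketch of the custom resource the operator consumes, assuming the Cluster Operator is already installed in the cluster; the name, replica count and storage size are placeholders:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-rabbit
spec:
  replicas: 3                # the operator handles clustering and stable pod identities
  persistence:
    storage: 10Gi            # backed by a PersistentVolumeClaim per replica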

Related

Users created in Redis are deleted if the Redis pod is deleted/restarted

I have deployed a single/standalone Redis container using the bitnami/redis Helm chart.
I created a new user in Redis by connecting with redis-cli and running the command "ACL SETUSER username on allkeys +@all +SADD >password", and the created user is shown by ACL LIST.
Now if I delete the Redis pod, a new pod comes up and it does not have the user I created above.
Why is this happening?
How do I create permanent users in Redis?
Assuming your deployment uses the standard Redis implementation, there are two options for ACL persistence:
in the main redis configuration file, or
in an external ACL configuration file
This is determined by whether you have the aclfile configuration option set. If you're using the first option, then CONFIG REWRITE should update the server configuration; if you're using the second, ACL SAVE is what you need. If you don't issue either of these, your ACLs will be lost when the server next restarts.
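To make that concrete, a short sketch of both paths, assuming you can reach the server with redis-cli; the password variable and the user details are placeholders:
# (a) No "aclfile" directive set: users live in the main config file.
redis-cli -a "$REDIS_PASSWORD" ACL SETUSER username on allkeys +@all '>password'
redis-cli -a "$REDIS_PASSWORD" CONFIG REWRITE    # persists the user back into redis.conf
# (b) "aclfile /path/to/users.acl" is set in redis.conf: users live in that file.
redis-cli -a "$REDIS_PASSWORD" ACL SAVE          # persists the user into the aclfile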

Docker for Win acme.json permissions

Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute for the owner; read and execute for group and others) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
I stumbled across this issue when trying to get Traefik running on Docker for Windows, and ended up getting it working by adding a few lines to a Dockerfile to create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked permissions on the acme.json file it worked!
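For illustration, a minimal sketch of that kind of Dockerfile, not necessarily identical to the one linked below; it simply bakes an acme.json with 600 permissions into the image so Traefik never sees the host volume's 755 default:
FROM traefik:1.3.1
# Pre-create acme.json with the permissions Traefik insists on
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json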
I set up a repo and have it auto-building on Docker Hub here for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
Once I got that built I switched the image out in my docker-compose file and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I ran into some issues, but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation and decided I want to add authentication via a Static Password File. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I do ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
Couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
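Roughly, those steps look like this; the master address is a placeholder and the exact copy method doesn't matter:
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config   # copy the cluster credentials
kubectl proxy                                                    # serves the API on 127.0.0.1:8001
# then open http://127.0.0.1:8001/ui in a browser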
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
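As a hedged sketch, if the option points at a file such as /etc/kubernetes/auth/basic_auth.csv (a hypothetical path), the relevant fragment of the static pod manifest would look roughly like this; it is shown in YAML for readability, while the kubeadm manifest from the question is JSON, but the structure is the same:
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth/basic_auth.csv
    # ...existing flags...
    volumeMounts:
    - name: auth
      mountPath: /etc/kubernetes/auth
      readOnly: true
  volumes:
  - name: auth
    hostPath:
      path: /etc/kubernetes/auth      # makes the host file visible inside the pod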

openshift origin - configure max pods

I recently started working a bit with OpenShift and it looks promising so far, but I keep running into issues, and mostly finding outdated documentation or looking in completely the wrong place.
For example, I currently have an OpenShift installation of ~150 cores spread across a couple of servers; some of these nodes have only 4 cores and others have 48.
I would like to modify all my nodes to have pods = 1.5 * cores or so.
Is this possible?
I tried to use:
oc edit node node0
and change pods from the default of 40 to, say, 6, but sadly oc never saves my values and always resets back to the default of 40.
Kind regards.
My OpenShift information:
oc v1.0.7-2-gd775557-dirty
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4
installation done using ansible, single master, external dns.
Max pods per node is set on the node; you can add this stanza to the node config YAML file to set it:
kubeletArguments:
  max-pods:
  - "100"
The string is important: this stanza passes arguments directly to the Kubelet invocation (so any argument you can pass to a Kubelet you can pass via this config).
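As a rough sketch of applying it (the config path and service name here assume an Ansible-based Origin install and may differ by version):
sudo vi /etc/origin/node/node-config.yaml     # add the kubeletArguments stanza shown above
sudo systemctl restart origin-node            # restart the node service to pick up max-pods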

Remotely create a vhost on a docker container running rabbitmq

I have a Vagrantfile that does two important things: first it pulls and runs dockerfile/rabbitmq, then it builds from a custom Dockerfile that runs an application which assumes a vhost on the RabbitMQ server, let's say "/foo".
The problem is the vhost is not there.
The container with RabbitMQ is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can hit the server. But somewhere in the middle of these operations I need to create the vhost, as my connection is refused, I assume because "/foo" is not there.
How can I get the vhost onto the rabbit server?
Thanks
Note: using the web admin is not an option, this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop RabbitMQ and delete the mnesia directory if it has been started already.)
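A minimal sketch of that setting in the classic Erlang-term config format; the vhost name matches the "/foo" from the question:
[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].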
There are a few ways to get the desired configuration:
Export/import the whole configuration with rabbitmqadmin, the management plugin CLI tool (see the sketch after this list).
or
Use HTTP API from management plugin
or
Use the rabbitmqctl CLI tool to manage access control.
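For the export/import option, a rough sketch with rabbitmqadmin; the target host and credentials are placeholders:
rabbitmqadmin export definitions.json                                       # dump vhosts, users, permissions, queues, bindings...
rabbitmqadmin -H target-host -u admin -p secret import definitions.json     # load them onto another broker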
BTW, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl:
curl -u username:pa$sw0rD -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.
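And for the rabbitmqctl option, a rough sketch run from the Vagrant/Docker host once the RabbitMQ container is up; the container name and the user being granted access are placeholders:
docker exec some-rabbit rabbitmqctl add_vhost /foo
docker exec some-rabbit rabbitmqctl set_permissions -p /foo guest ".*" ".*" ".*"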