Issue using CLI commands to manage the RabbitMQ message broker on OpenShift - rabbitmq

My RabbitMQ image is deployed on OpenShift and works perfectly fine.
I manage this image in the OpenShift web console.
However, when I want to use administration CLI tools like rabbitmqctl to manage the node ( https://www.rabbitmq.com/rabbitmqctl.8.html ), I get the following error:
'Only root or rabbitmq should run rabbitmqctl'
I tried:
Adding group write permission on the RabbitMQ server files for the root group, which is not permitted:
$ chgrp -R 0 /var/lib/rabbitmq
chgrp: changing group of '/var/lib/rabbitmq': Operation not permitted
Connecting as root inside the container, which I can't do either.
The rabbitmq-plugins command does work; I can enable the different plugins with that CLI tool.
Any idea?

Update - I found a solution here :
https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines
I need to modify my Dockerfile, as described in the link.
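The linked guideline boils down to making the directories the server writes to group-writable by the root group, because OpenShift runs containers under an arbitrary UID that always belongs to group 0. A minimal sketch of the relevant Dockerfile build step (the path is the one from the question; adjust for your image):

```dockerfile
# Give group 0 ownership and owner-equivalent permissions, so an
# arbitrary, non-root OpenShift UID (always a member of group 0)
# can still write to the RabbitMQ data directory.
RUN chgrp -R 0 /var/lib/rabbitmq && \
    chmod -R g=u /var/lib/rabbitmq
```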

Related

Error performing Erlang migration to another node

I have 2 nodes, rabbit2 and rabbit3, and everything works fine until I start the cluster.
I then ran the command:
scp -r rabbit2:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie .
and after it transferred successfully, the nodes failed.
Maybe the cookie file is not being used because of its file permissions, or another cookie file is being picked up with higher priority.
Do you know how to run RabbitMQ in Erlang console mode?
If you can, enter the console first and check the problem with a command - Erlang has a cookie check function.
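As a sketch of that check (assuming the default cookie location and the rabbitmq service user; paths may differ on your distribution), you can verify that the cookie's ownership and mode are what the Erlang VM expects, and print the cookie a running node is actually using:

```shell
# The cookie must be identical on every node, owned by the user that
# runs the node, and readable only by that user (mode 400).
ls -l /var/lib/rabbitmq/.erlang.cookie
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie

# From a running node, print the cookie actually in use:
sudo rabbitmqctl eval 'erlang:get_cookie().'
```

If the printed cookie differs between nodes, the file transfer did not take effect and the node is reading a cookie from another location.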

azdata erroring out due to config file error

All,
The admin set up a 3-node AKS cluster today. I got the kube config file updated by running the az command:
az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription
I was able to run all the kubectl commands fine, but when I tried setting up the SQL Server 2019 BDC by running azdata bdc create, it gave me the error Failed to complete kube config setup.
Since it was something to do with azdata and kubectl, I checked the azdata logs, and this is what I saw in azdata.log:
Loading default kube config from C:\Users\rgn\.kube\config
Invalid kube-config file. Expected all values in kube-config/contexts list to have 'name' key
Thinking the config file might have been corrupted, I tried running az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription again.
This time I got a whole lot of errors:
The client 'rgn@mycompany.com' with object id 'XXXXX-28c3-YYYY-ZZZZ-AQAQAQd'
does not have authorization to perform action 'Microsoft.ContainerService/managedClusters/listClusterUserCredential/action'
over scope '/subscriptions/Subscription-ID/resourceGroups/
ResourceGroup-Dev-RG/providers/Microsoft.ContainerService/managedClusters/AKSCluster' or the scope is invalid. If access was recently granted, please refresh your credentials.
I logged out of Azure, logged back in, and retried, but got the same errors as above. I was even able to stop the VM scale set before logging off for the day. Everything else works fine, but I'm unable to run the azdata script.
Can someone point me in the right direction?
Thanks,
rgn
Turns out that the config file was bad. I deleted the file and ran "az aks get-credentials" (after getting the necessary permissions to run it) and it worked. The old config was 19 KB but the new one is 10 KB.
I guess I might have messed it up while testing "az aks get-credentials".
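A quick way to reproduce that diagnosis (commands and names are the ones from the question) is to have kubectl parse the file before handing it to azdata - a contexts entry missing its 'name' key shows up immediately:

```shell
# List the context names azdata will see; a malformed contexts entry
# makes this command fail loudly instead of azdata failing later.
kubectl config get-contexts

# If the file is beyond repair, move it aside and fetch fresh credentials.
# (Back it up rather than deleting, in case it holds other clusters' contexts.)
mv ~/.kube/config ~/.kube/config.bak
az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription
```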

Zeek cluster fails with pcap_error: socket: Operation not permitted (pcap_activate)

I'm trying to set up a Zeek IDS cluster (v3.2.0-dev.271) on 3 Ubuntu 18.04 LTS hosts, to no avail - running the zeekctl deploy command fails with the following output:
fatal error: problem with interface ens3 (pcap_error: socket: Operation not permitted (pcap_activate))
I have followed the official documentation (which is pretty generic at best) and set up passwordless SSH authentication between the zeek nodes.
I also preemptively created the /usr/local/zeek path on all hosts and gave the zeek user full permissions on that directory. The documentation says The Zeek user must be able to either create this directory or, where it already exists, must have write permission inside this directory on all hosts.
The documentation also says that on the worker nodes this user must have access to the target network interface in promiscuous mode.
My zeek user is a sudoer AND a member of netdev group on all 3 nodes. Yet, the cluster deployment fails. Apparently, when zeekctl establishes the SSH connection to the workers it cannot get a hold of the network interfaces and set caps.
Eventually I was able to run the cluster successfully by following this article - however, it requires you to set up the entire cluster as root, which I would like to avoid if at all possible.
So my question is: is there anything blatantly obvious that I am missing? To the best of my knowledge this setup should work; otherwise, I don't know how to force zeekctl to put sudo in front of every SSH command it runs on the workers, or how else to satisfy this requirement.
Any guidance will be greatly appreciated, thanks!
I was experiencing the same error with my standalone setup and found this question by googling it. More googling brought me to a few blogs, including one where the comments mentioned the same error. The author suggested granting the binaries the required capabilities using setcap:
$ sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeek
$ sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeekctl
After running them both, my instance of zeek is now running successfully.
Source: https://www.ericooi.com/zeekurity-zen-part-i-how-to-install-zeek-on-centos-8/#comment-1586
So, just in case someone else stumbles upon the same issue - I figured out what was happening.
I had streamlined the cluster deployment with Ansible (using the become directive at task level) but did not elevate when running the handlers responsible for issuing the zeekctl deploy command.
Once I did, the Zeek cluster deployment succeeded.
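A minimal sketch of what that handler fix looks like (the task name is hypothetical; only the become: true line is the actual change described above):

```yaml
# handlers/main.yml - the handler must escalate too, not just the tasks
- name: deploy zeek cluster                             # hypothetical name
  ansible.builtin.command: /usr/local/zeek/bin/zeekctl deploy
  become: true                                          # the missing piece
```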

how to change cassandra docker config

I installed Cassandra from Docker Hub and it's running successfully.
root@localhost$ docker ps | grep cassandra
2925664e3391 cassandra:2.1.14 "/docker-entrypoin..." 5 months ago Up 23 minutes 0.0.0.0:7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp, 0.0.0.0:32779->7000/tcp, 0.0.0.0:32778->7001/tcp, 0.0.0.0:32777->7199/tcp, 0.0.0.0:32776->9042/tcp, 0.0.0.0:32775->9160/tcp
My application is connected to this Cassandra. I need to use password authentication to connect to Cassandra from my application.
To enable password authentication, I found the /etc/cassandra/cassandra.yaml file in the Docker image, and I have to follow the Authentication Config steps.
Is there a way to override these settings with the docker start or docker run command?
No - it is not included in the part of the entrypoint script that generates the cassandra.yaml file. You could submit a PR modifying the relevant piece of the generation script to allow specifying auth via environment variables.
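Until then, a common workaround (a sketch; the container ID is the one from the question and the host path is up to you) is to copy cassandra.yaml out of the container, set authenticator: PasswordAuthenticator in it, and bind-mount the edited file back over the original with docker run:

```shell
# Copy the config out of the running container, then edit it to set
# authenticator: PasswordAuthenticator (the default is AllowAllAuthenticator).
docker cp 2925664e3391:/etc/cassandra/cassandra.yaml ./cassandra.yaml

# Recreate the container with the edited file mounted over the original.
docker run -d --name cassandra-auth \
  -v "$PWD/cassandra.yaml:/etc/cassandra/cassandra.yaml" \
  cassandra:2.1.14
```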

Redis NOAUTH Authentication required. [tcp://127.0.0.1:6379] Laravel

I am working on a Laravel project that uses the dingo package to manage some APIs. I changed the CACHE_DRIVER=array variable in my .env file to CACHE_DRIVER=redis because dingo no longer supports array for CACHE_DRIVER. I therefore installed Redis on my system and included the package in my Laravel project by adding "predis/predis": "~1.0" to my composer.json and updating with the command composer update. Up to this point everything worked just fine. However, when creating the database tables and seeding them using php artisan migrate --seed, I get the error:
[Predis\Connection\ConnectionException]
SELECT failed: NOAUTH Authentication required. [tcp://127.0.0.1:6379]
Note: when I installed Redis, I set a password. I also authenticated using two commands: redis-cli to open the Redis CLI and then AUTH mypassword. Yet when I try to seed, it still throws the same error. What am I doing wrong?
Thanks for any help.
I would start by setting the Redis password in the REDIS_PASSWORD environment variable (e.g. in the .env file). See https://laravel.com/docs/5.3/redis#configuration for more details about Redis configuration in Laravel.
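A sketch of the relevant .env entries (the values are placeholders; Laravel's default redis connection in config/database.php reads these variables):

```shell
# .env - Laravel picks up REDIS_PASSWORD via config/database.php
CACHE_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=mypassword
REDIS_PORT=6379
```

After changing .env, run php artisan config:clear so a cached config does not keep the old values.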
Make sure your Redis server is running when you are seeding.
Run: redis-server