Can't create RabbitMQ policies on server with mapped drives

I've installed RabbitMQ on my local machine with no issues and created some queue and exchange policies.
Now I'm trying to set up RabbitMQ on a server on our network. The IT dept has drives mapped. Specifically, they have %HOMEDRIVE% set to U: and %HOMEPATH% set to "". I've read numerous places about that potentially causing issues with the Erlang cookie.
However, everything I've read says you simply need to copy the cookie over. I've copied the .erlang.cookie file to both C:\Users\[myprofile] and U:\, but neither seems to help. When I try the following (without changing %HOMEDRIVE%):
rabbitmqctl set_policy DLX ".*" "{""dead-letter-exchange"":""CM-Dead-Letter""}" --apply-to queues
I get [error] Failed to create cookie file 'u:/.erlang.cookie': enoent -- which is weird because the file is already there.
If I SET HOMEDRIVE=C: and run the same set_policy command, I get a different error.
What do I need to do to get this working on my server?
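For reference, one session-scoped experiment along these lines (a sketch only, not a confirmed fix: it assumes the cookie really sits in C:\Users\[myprofile] and relies on Erlang resolving its home directory from %HOMEDRIVE%%HOMEPATH%):
:: sketch: make this session's home match where .erlang.cookie was copied
SET HOMEDRIVE=C:
SET HOMEPATH=\Users\[myprofile]
rabbitmqctl set_policy DLX ".*" "{""dead-letter-exchange"":""CM-Dead-Letter""}" --apply-to queues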

Related

Error performing Erlang migration to another node

I have two nodes, rabbit2 and rabbit3, and everything was working fine until I started the cluster.
Then I ran the command
scp rabbit2:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
and after successfully transferring the cookie, the nodes failed.
Maybe the cookie file is not being used because of its file permissions, or another cookie file is taking priority.
Do you know how to run RabbitMQ in Erlang console mode?
If you can, enter the console first and check the problem by command, using the Erlang cookie check function.
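A couple of hedged checks, assuming the usual Linux package layout: rabbitmqctl eval can print the cookie the node is actually running with, and the broker generally expects the cookie file to be owned by the rabbitmq user with mode 400:
# which cookie is the node really using?
sudo rabbitmqctl eval 'erlang:get_cookie().'
# the copied file must keep the expected owner and permissions
ls -l /var/lib/rabbitmq/.erlang.cookie
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie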

gcloud compute -- Instance ssh-key metadata ignored?

Starting with a service account JSON key, I attempt to add a throwaway "foo" ssh key to the instance metadata at gcloud compute instances create time, and then connect to the instance using vanilla ssh and the throwaway key.
Script here.
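The linked script isn't reproduced here, but the key-injection step was presumably something like this sketch (instance and key file names are made up; the ssh-keys metadata value uses the USERNAME:PUBLIC_KEY format):
# hypothetical: create an instance whose metadata carries the throwaway key for user foo
gcloud compute instances create sshkey-test --metadata ssh-keys="foo:$(cat throwaway_key.pub)"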
Expected behavior
At boot, the account daemon would create a user account corresponding to the supplied ssh key.
Observed behavior
In the Cloud Console, the instance shows correctly applied ssh metadata.
ssh -i throwaway_private_key foo@${IP} fails.
Logs on the instance show:
Apr 6 16:58:34 sshkey-test-x0rmqgh7 sshd[497]: Invalid user foo from 209.6.197.126 port 39792
How do I correctly trigger the account daemon?
If not through the metadata, then what?
Thanks!
For anyone struggling with a similar issue, there is a HUGE gotcha with os-login that can lead to the problem behavior.
In a nutshell, enable-oslogin=TRUE can be (and is likely to be) set project-wide on GCE. If that's the case, then ssh-key metadata is ignored. I only discovered this by chance from reading other issues in the Google bug tracker.
As soon as I toggled os-login, my issue went away.
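For anyone who wants to check their own project, a hedged sketch (the metadata key is enable-oslogin; instance-level metadata overrides the project-wide setting):
# is OS Login forced project-wide?
gcloud compute project-info describe --format="value(commonInstanceMetadata.items)"
# turn it off for one instance so ssh-key metadata applies again
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata enable-oslogin=FALSE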

Zeek cluster fails with pcap_error: socket: Operation not permitted (pcap_activate)

I'm trying to set up a Zeek IDS cluster (v3.2.0-dev.271) on three Ubuntu 18.04 LTS hosts, to no avail: running the zeek deploy command fails with the following output:
fatal error: problem with interface ens3 (pcap_error: socket: Operation not permitted (pcap_activate))
I have followed the official documentation (which is pretty generic at best) and set up passwordless SSH authentication between the zeek nodes.
I also preemptively created the /usr/local/zeek path on all hosts and gave the zeek user full permissions on that directory. The documentation says: "The Zeek user must be able to either create this directory or, where it already exists, must have write permission inside this directory on all hosts."
The documentation also says that on the worker nodes this user must have access to the target network interface in promiscuous mode.
My zeek user is a sudoer AND a member of the netdev group on all three nodes. Yet the cluster deployment fails: apparently, when zeekctl establishes the SSH connection to the workers, it cannot get hold of the network interfaces and set capabilities.
Eventually I was able to successfully run the cluster by following this article - however it requires you to set up the entire cluster as root, which I would like to avoid if at all possible.
So my question is: is there anything blatantly obvious that I am missing? To the best of my knowledge this setup should work; otherwise, I don't know how to force zeekctl to prepend 'sudo' to every SSH command it runs on the workers, or how else to satisfy this requirement.
Any guidance will be greatly appreciated, thanks!
I was experiencing the same error with my standalone setup and found this question by googling it. More googling brought me to a few blogs, including one whose comments mentioned the same error. The author suggested granting the binaries the needed capabilities using setcap:
sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeek
sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/zeek/bin/zeekctl
After running them both, my instance of zeek is now running successfully.
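As a quick sanity check, getcap should now report the granted capabilities on both binaries:
getcap /usr/local/zeek/bin/zeek /usr/local/zeek/bin/zeekctl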
Source: https://www.ericooi.com/zeekurity-zen-part-i-how-to-install-zeek-on-centos-8/#comment-1586
So, just in case someone else stumbles upon the same issue - I figured out what was happening.
I had streamlined the cluster deployment with Ansible (using the 'become' directive at task level) but did not elevate when running the handlers responsible for issuing the zeekctl deploy command.
Once I did, the Zeek Cluster deployment succeeded.

Cannot ssh into Google-Engine, connecting in a loop

I am unable to connect through SSH to my GCE instance. I had been connecting without any problem; the only thing I changed was my username, via the top right corner of the browser, where I selected Change Linux Username.
When I try to ssh into my Google engine via the browser, I keep getting the following message in an endless loop:
When I try to ssh via Cloud Shell, I also get the following error message (serial console output):
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
[Q] Is there any way to fix this problem? Since I have no access to the engine now, I don't know what to do.
However, you can always get access back through the serial console, and from there you can troubleshoot the user/SSH issue internally.
1) $ gcloud compute instances add-metadata [INSTANCE_NAME] --metadata=serial-port-enable=1
You can then connect to the instance through the serial port.
NOTE: The root password must already have been set in order to use the serial port.
2)
$ gcloud compute connect-to-serial-port [INSTANCE_NAME]
If you never set the root password, you can set it by adding a startup script to your instance that sets the root password, using the command below:
NOTE: the instance must be rebooted in order to run the startup script.
3) $ gcloud compute instances add-metadata [INSTANCE_NAME] --metadata startup-script='echo "root:YourPasswdHere" | chpasswd'
Reboot the instance, run the command from step 2), and authenticate yourself as root with the password that you set in the startup script in step 3).
I had the same problem. It took me several days to figure out what was happening in my case.
To find out, I created a new instance from scratch and applied, one by one, all the modifications I had made to the instances I eventually couldn't connect to, exiting the SSH session and re-entering after each change to test it.
I tried it a couple of times, and in both cases the connection became impossible after uninstalling Python (I only needed version 3.7, so I was uninstalling all the others and installing the one I needed).
My command for uninstalling it was
sudo apt purge python2.7-minimal
and
sudo apt purge python3.5-minimal
I don't know if it was specifically because of deleting python, or because of using purge (in which case this problem might reproduce if using purge with another program).
I don't even know why would this affect ssh connection.
Could it be that Google Cloud somehow uses the instance's Python for the browser SSH?
In any case, if you are experiencing this problem try to avoid uninstalling anything from the base VM.
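If you want to test this theory without sacrificing another VM, apt can do a dry run of the purge first; the -s flag simulates the operation and lists every dependent package that would be removed along with it:
sudo apt-get -s purge python3.5-minimal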

rabbitMQ guest login failed

I have set up RabbitMQ and its management plugin on Windows.
I found an "EXAMPLE FILE" style rabbitmq.config file in both
...AppData\Roaming\RabbitMQ and C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.3.1\etc.
I added the line {loopback_users, []} to this rabbitmq.config file and restarted the Windows service, but I still can't log in from another computer with guest/guest.
Am I editing the wrong config file?
Here is some relevant discussion:
How to access RabbitMq publicly
http://www.rabbitmq.com/access-control.html
The problem is that the RabbitMQ Windows service can't read the configuration file, so your configuration is never loaded.
The path "..AppData\Roaming\RabbitMQ" is valid only if you execute rabbitmq-server.bat from the command prompt, not if you run it as a service.
To make it work with the Windows service, you have to configure the RABBITMQ_CONFIG_FILE environment variable in Windows.
Open Control Panel > System > Advanced > Environment Variables and then add:
RABBITMQ_CONFIG_FILE = path_your_configuration_file
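For example, from an elevated command prompt (a sketch with a made-up path; with this classic config format the value is conventionally given without the .config extension, which RabbitMQ appends itself):
setx RABBITMQ_CONFIG_FILE "C:\RabbitMQData\rabbitmq" /M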
Then you have to uninstall and re-install RabbitMQ, and it works.
Please read this discussion.
I tried it on Windows 7 with RabbitMQ 3.3.1, and it works correctly using guest/guest.
My configuration file is:
[{rabbit, [{loopback_users, []}]}].
A combination of the prior post and the comment from Jon Egerton was key to getting my Windows configuration working for the guest account remotely. Here are the steps I took:
1) Set an environment variable: RABBITMQ_BASE (I set mine to c:\RabbitMQData).
2) Create the directory and create the rabbitmq.config file as explained in the previous post.
3) Uninstall RabbitMQ (as mentioned already, don't skip this step; stopping and starting RabbitMQ won't do the trick).
4) Reinstall RabbitMQ and verify that the RabbitMQ Server service started.
5) Verify that the directory specified by RABBITMQ_BASE contains the db and log sub-directories.
6) Install the rabbitmq_management plugin from the command line.
7) Verify that you can now log on as the guest account using the host's IP address or host name.
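For completeness, a rough command-line version of the environment and plugin steps (hedged; the path is the one used above, setx /M needs an elevated prompt, and the uninstall/reinstall still happens in between):
setx RABBITMQ_BASE "c:\RabbitMQData" /M
rabbitmq-plugins enable rabbitmq_management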