I installed by using
brew update and brew install rabbitmq.
Then I tried to stop the server and start it again (I had previously tried installing by downloading the source).
I ran rabbitmqctl stop but I get
Stopping and halting node rabbit@localhost ...
Error: unable to connect to node rabbit@localhost: nodedown
DIAGNOSTICS
===========
nodes in question: [rabbit@localhost]
hosts, their running nodes and ports:
- localhost: [{rabbit,61707},{rabbitmqctl33002,62384}]
current node details:
- node name: 'rabbitmqctl33002@Dev-MacBook-Air'
- home dir: /Users/ohad
- cookie hash: aeUlHJghkW6Yr7EMVbRJTg==
which I reckon means there is no process, so I tried starting again:
rabbitmq-server
and I get
ERROR: node with name "rabbit" already running on "localhost"
DIAGNOSTICS
===========
nodes in question: [rabbit@localhost]
hosts, their running nodes and ports:
- localhost: [{rabbit,61707},{rabbitmqprelaunch33047,62398}]
current node details:
- node name: 'rabbitmqprelaunch33047@Dev-MacBook-Air'
- home dir: /Users/ohad
- cookie hash: aeUlHJghkW6Yr7EMVbRJTg==
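For reference, one way to see what is actually registered and running on the machine (a sketch; the process names and port numbers will differ per install) is:
epmd -names                 # lists the Erlang nodes registered with epmd, e.g. "name rabbit at port 61707"
ps aux | grep '[b]eam.smp'  # shows the Erlang VM process backing the rabbit node, if one exists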
I'm trying to create a two-node RabbitMQ cluster. I've set up the RabbitMQ server successfully on both nodes, but after replacing the slave's .erlang.cookie with the master's, the system is unable to start the rabbitmq app.
I'm executing the following commands:
nohup rabbitmq-server restart &
rabbitmqctl start_app
/sbin/service rabbitmq-server stop
nohup is generating the following logs in /var/lib/rabbitmq/nohup.log
ERROR: node with name "rabbit" already running on "ip-172-31-83-71"
ERROR: node with name "rabbit" already running on "ip-172-31-83-71"
ERROR: node with name "rabbit" already running on "ip-172-31-83-71"
The terminal also shows the following error:
Error: unable to perform an operation on node 'rabbit@ip-172-31-83-71'. Please see diagnostics information and suggestions below.
DIAGNOSTICS
===========
attempted to contact: ['rabbit@ip-172-31-83-71']
rabbit@ip-172-31-83-71:
* connected to epmd (port 4369) on ip-172-31-83-71
* epmd reports: node 'rabbit' not running at all
no other nodes on ip-172-31-83-71
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-26458-rabbit@ip-172-31-83-71'
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: aoPchC2KIy7esHVGVNLP4w==
I've also tried reverting the .erlang.cookie, and in that case it works fine. Can anyone please guide me on what I'm missing?
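For context, the usual order of operations when syncing the Erlang cookie is roughly the following sketch (paths assume a default package install; the source path is a placeholder):
sudo service rabbitmq-server stop                                        # stop the broker before touching the cookie
sudo cp /path/to/master/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie  # placeholder source path
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie                          # the cookie must only be readable by its owner
sudo service rabbitmq-server start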
This is similar to RabbitMQ has Nodedown Error, but this one is for Ubuntu 16.04, and the working solution, posted below, differs from the Windows one as well.
Something has gone wrong with my rabbitmq server. Trying to start the application gives an error:
$ sudo rabbitmqctl start_app
Starting node rabbit@daniel ...
Error: unable to connect to node rabbit@daniel: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@daniel]
rabbit@daniel:
* connected to epmd (port 4369) on daniel
* epmd reports: node 'rabbit' not running at all
no other nodes on daniel
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-6647@daniel'
- home dir: /var/lib/rabbitmq
- cookie hash: T1R4ztWXXH1w2IQe+fui9g==
Currently the only way I know of solving this is uninstalling/reinstalling rabbitmq. But, I'm hoping a more sensible solution is possible...
This is the important part of that message:
node 'rabbit' not running at all
You need to start RabbitMQ with systemctl start rabbitmq-server. You should also check the logs to see why it wasn't running in the first place.
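For example (the log locations below assume a default package install on a systemd-based distribution):
sudo systemctl start rabbitmq-server
sudo systemctl status rabbitmq-server                         # confirm the unit is active
sudo journalctl -u rabbitmq-server -e                         # recent logs for the unit
sudo tail -n 50 /var/log/rabbitmq/rabbit@$(hostname -s).log   # broker log, default location for package installs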
Try to run it with sudo rabbitmq-server
I keep getting this error every time I try to do something with RabbitMQ:
attempted to contact: [fdbvhost@FORTE]
fdbvhost@FORTE:
* connected to epmd (port 4369) on FORTE
* epmd reports: node 'fdbvhost' not running at all
no other nodes on FORTE
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-54@FORTE'
- home dir: C:\Users\Jesus
- cookie hash: iuRlQy0F81aBpoY9aQqAzw==
This is the output I get when I run rabbitmqctl -n fdbvhost status or rabbitmqctl -n fdbvhost list_vhosts.
I've tried rabbitmqctl -n fdbvhost start which gives me the following output:
Error: could not recognise command
Usage:
rabbitmqctl [-n <node>] [-t <timeout>] [-q] <command> [<command options>]
...
So this doesn't start it. I cannot find anything about starting a node in the documentation. How do I actually start my node/vhost?
Try running the following command from RabbitMQ's installation sbin directory:
rabbitmq-server -detached
This should start the broker node if it was stopped for some reason.
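After that you can verify the node actually came up, for example (run from the same sbin directory; the path varies by install method):
./rabbitmqctl status   # should now report the running rabbit node instead of "nodedown"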
Check if you have RabbitMQ installed as a service in the /etc/init.d/ folder
sudo su # might be needed
cd /etc/init.d/
ls . | grep rabbit
The output should be rabbitmq-server
If that's the case, then, try restarting your service with:
sudo service rabbitmq-server restart
For macOS users
To Start
brew services start rabbitmq
To Restart
brew services restart rabbitmq
To Stop
brew services stop rabbitmq
To Check the status of the server
brew services info rabbitmq
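If the node still does not come up, the broker log is usually the quickest place to look (the path below assumes a standard Homebrew layout):
brew services list                        # rabbitmq should show as "started"
ls "$(brew --prefix)"/var/log/rabbitmq/   # broker logs for a Homebrew install (typical location)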
I've installed the NFS stack v0.30 (the latest version) using the catalog.
The options are:
MOUNT_DIR /
MOUNT_OPTS proto=tcp,port=2049,nfsvers=4
NFS_SERVER xxx.xxx.xxx.xxx (digitalocean droplet public ip)
The container starts normally and seems to be working fine. So then I try to create a simple stack that uses NFS with this docker-compose:
version: '2'
services:
  web:
    image: nginx
    volumes:
      - bar:/var/bar
volumes:
  bar:
    driver: rancher-nfs
And I get this error:
(Expected state running but got error: Error response from daemon: create aaaa_bar_8fa9a: VolumeDriver.Create: Failed nsenter -t 11437 -n mount -o proto=tcp,port=2049,nfsvers=4 xx.xx.xx.xx:/ /tmp/5ht8d)
First of all, the latest version of NFS is 4.2 and the latest version of the catalog item is v0.4.0.
With the MOUNT_DIR variable you define where the export point is on your server; are you sure you want to mount the root directory?
Have you created an NFS export?
After an NFS export is created, test by upgrading your service with the new mount-point variable (for example /mnt/<environment_name>/), as shown in the sketch below.
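For illustration, a minimal export on the NFS server could look like the following sketch (the directory name and the open "*" host wildcard are placeholders; restrict access to your Rancher hosts' network in practice):
sudo mkdir -p /mnt/myenv                                                   # export directory (placeholder name)
echo '/mnt/myenv *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra                                                          # re-read /etc/exports
showmount -e localhost                                                     # confirm the export is visible
MOUNT_DIR would then point at /mnt/myenv instead of /.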
Running my test with InSpec, I am unable to verify that httpd is enabled and running.
InSpec test
describe package 'httpd' do
  it { should be_installed }
end

describe service 'httpd' do
  it { should be_enabled }
  it { should be_running }
end

describe port 80 do
  it { should be_listening }
end
The output for kitchen verify is:
System Package
✔ httpd should be installed
Service httpd
✖ should be enabled
expected that `Service httpd` is enabled
✖ should be running
expected that `Service httpd` is running
Port 80
✖ should be listening
expected `Port 80.listening?` to return true, got false
Test Summary: 1 successful, 3 failures, 0 skipped
Recipe for httpd installation:
if node['platform'] == 'centos'
# do centos installation
package 'httpd' do
action :install
end
execute "chkconfig httpd on" do
command "chkconfig httpd on"
end
execute 'apache start' do
command '/usr/sbin/httpd -DFOREGROUND &'
action :run
end
I do not know what I am doing wrong.
More info
CentOS version on docker instance
kitchen exec --command 'cat /etc/centos-release'
-----> Execute command on default-centos-72.
CentOS Linux release 7.2.1511 (Core)
Chef version installed in my host
Chef Development Kit Version: 1.0.3
chef-client version: 12.16.42
delivery version: master (83358fb62c0f711c70ad5a81030a6cae4017f103)
berks version: 5.2.0
kitchen version: 1.13.2
UPDATE 1: Kitchen yml with driver attributes
The platform has the configuration recommended by coderanger:
---
driver:
  name: docker
  use_sudo: false

provisioner:
  name: chef_zero

verifier: inspec

platforms:
  - name: centos-7.2
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd
      provision_command:
        - /bin/yum install -y initscripts net-tools wget

suites:
  - name: default
    run_list:
      - recipe[apache::default]
    verifier:
      inspec_tests:
        - test/integration
    attributes:
And this is the output when running kitchen test:
... some docker steps...
Step 16 : RUN echo ssh-rsa\ AAAAB3NzaC1yc2EAAAADAQABAAABAQDIp1HE9Zbtl3zAH2KKL1mVzb7BU1WxK7mi5xpIxNRBar7EZAAzxi1pVb1JwUXFSCVoAmUyfn/lBsKlgXnUD49pKrqkeLQQW7NoG3uCFiXBUTof8nFVuLYtw4CTiAudplyMvu5J7HQIP1Hve1caY27tFs/kpkQaXHCEuIkqgrM2rreMKK0n8im9b36L2SwWyM/GwqcIS1z9mMttid7ux0\+HOWWHqZ\+7gumOauh6tLRbtjrm3YYoaIAMyv945MIX8BFPXSQixThBVOlXGA9iTwUZWjU6WvZThxVFkKPR9KZtUTuTCT7Y8\+wFtQ/9XCHpPR00YDQvS0Vgdb/LhZUDoNqV\ kitchen_docker_key >> /home/kitchen/.ssh/authorized_keys
---> Using cache
---> c0e6b9e98d6a
Successfully built c0e6b9e98d6a
d486d7ebfe000a3138db06b1424c943a0a1ee7b2a00e8a396cb8c09f9527fb4b
0.0.0.0:32841
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
.....
You cannot, at least not out of the box. This is one area where kitchen-docker shows its edges. We try to pretend that a container is like a tiny VM but in reality it isn't, and one notable place where the pretending breaks down is init systems. With CentOS 7, it uses systemd. It is possible to get systemd to run inside the container (see https://github.com/poise/yolover-example/blob/master/.kitchen.yml#L17-L33) but not all features are supported and it can generally be a bit odd :-/ That example should be enough to make your tests work though. For completeness, CentOS 6 uses Upstart which just flat out won't run inside Docker so no love there either.
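As an illustration, a quick way to check whether an init system is really running as PID 1 inside the kitchen container (the container ID below is a placeholder; find yours with docker ps):
docker ps                                     # find the kitchen container ID
docker exec <container-id> cat /proc/1/comm   # prints "systemd" when the run_command trick works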