memcached dead but subsys locked - crash

service memcached restart yields:
stopping memcached: [failed]
starting memcached: [ ok ]
service memcached status yields:
memcached dead but subsys locked
ls inside /var/lock/subsys/ shows a file named memcached
ls inside /var/run/ shows no pid file named memcached
There is another folder named memcached in there, but that folder is empty.
rm /var/lock/subsys/memcached gets rid of the memcached lock file
service memcached restart yields:
stopping memcached: [failed]
starting memcached: [ ok ]
service memcached status yields:
memcached dead but subsys locked
what am I doing wrong?
EDIT: I'd like to add that I searched for this before posting; I'm either already doing the steps listed in the posts I found, or those posts are years old.

Is there another process binding to TCP/11211?
Perhaps you tried to start the memcached service as a non-privileged user and it failed with:
$ service memcached start
Starting memcached: [ OK ]
touch: cannot touch ‘/var/lock/subsys/memcached’: Permission denied
After that, service memcached status will falsely report that memcached is not running:
$ service memcached status
memcached dead but subsys locked
But it is, and it is bound to port 11211. To check for this you can use:
$ fuser -n tcp 11211
11211/tcp: 4439
Or:
$ pgrep -l memcached
4439 memcached
Memcached will fail to start because it cannot bind to 11211, as the running instance is already bound to it. Unfortunately, on some systems (I'm looking at you, CentOS) it may not leave any useful hint in /var/log/messages or /var/log/syslog. That is why many of the previous answers to this question that fiddle with the bind address only look like they solved the problem.
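If the logs are silent, one way to see the failure directly is to run memcached in the foreground with verbose output; it will exit immediately with a bind error if another instance holds the port (a sketch; the user and port below are the usual CentOS defaults and may differ on your system):
$ sudo memcached -u memcached -p 11211 -vv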
How do you fix it?
Since service memcached stop will not work, you have to kill it:
$ pkill memcached
Or this (where 4439 is the pid you found in the previous step):
$ kill 4439
Then you can do it right, using sudo:
$ sudo service memcached start
Starting memcached: [ OK ]
$ service memcached status
memcached (pid 6643) is running...

Solved this problem by typing the following commands in terminal:
1) su (becoming root).
2) killall -9 memcached (killing memcached).
3) /etc/init.d/memcached start (starting memcached by hand).
Alternatively: 3) service memcached start.

Check /etc/sysconfig/memcached
and make sure the OPTIONS="-l 127.0.0.1" line is correct.
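For reference, a stock /etc/sysconfig/memcached on CentOS/RHEL looks roughly like this (the values shown are the usual defaults; yours may differ):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"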

Remove -l from OPTION.
e.g., Instead of
OPTION="-l 2.2.2.2"
try using
OPTION="2.2.2.2"
This worked for me.

To resolve this problem, run the following script as root
rm /var/run/memcached/memcached.pid
rm /var/lock/subsys/memcached
service memcached start

Removing and reinstalling memcached is what worked for me:
[acool#acool super-confidential-dir]$ sudo yum remove memcached
...
[acool#acool super-confidential-dir]$ sudo yum install memcached
After the above commands and starting it I got:
[acool#acool super-confidential-dir]$ sudo service memcached status
memcached dead but pid file exists
At that point I killed it and removed the pid file:
[acool#acool super-confidential-dir]$ sudo killall -s 9 memcached
...
[acool#acool super-confidential-dir]$ sudo rm /var/run/memcached/memcached.pid
And finally started it and checked its status:
[acool#acool super-confidential-dir]$ sudo service memcached start
...
[acool#acool super-confidential-dir]$ sudo service memcached status
memcached (pid 13804) is running...
And then I was happy again.
Good luck.

In my case I wanted to use memcached through a socket with
OPTIONS="-t 8 -s /run/memcached/memcached.sock -a 0777 -U 0"
copied from another OS, and got the same problem.
Then I realized that I had simply forgotten that /run/ doesn't exist on my OS. That's it, just check your path.
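If the socket directory is simply missing, a sketch of the fix (the user and group names are assumed to be memcached; the tmpfiles.d entry is optional and only needed so the directory survives reboots):
sudo mkdir -p /run/memcached
sudo chown memcached:memcached /run/memcached
echo 'd /run/memcached 0755 memcached memcached -' | sudo tee /etc/tmpfiles.d/memcached.conf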

Related

redis-server in ubuntu14.04: Bind address already in use

I started the redis server on Ubuntu by typing this in the terminal: $ redis-server
This results in the following (http://paste.ubuntu.com/12688632/):
aruns ~ $ redis-server
27851:C 05 Oct 15:16:17.955 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
27851:M 05 Oct 15:16:17.957 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
27851:M 05 Oct 15:16:17.957 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
27851:M 05 Oct 15:16:17.958 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
27851:M 05 Oct 15:16:17.958 # Creating Server TCP listening socket *:6379: bind: Address already in use
How can I fix this problem? Is there any manual or automated process to fix this binding?
$ ps aux | grep redis
Find the process and the port it's running on. In my case:
MyUser 8821 0.0 0.0 2459704 596 ?? S 4:54PM 0:03.40 redis-server *:6379
And then kill the process manually
$ kill -9 8821
Re-run redis
$ redis-server
sudo service redis-server stop
I solved this problem on Mac by just typing redis-cli shutdown. After this, just
reopen the terminal, type redis-server, and it will work.
For me, after lots of problems, this solved my issue:
root@2c2379a99b47:/home# ps aux | grep redis
redis 3044 0.0 0.0 37000 8780 ? Ssl 14:59 0:00 /usr/bin/redis-server *:6379
After finding redis, kill it!
root@2c2379a99b47:/home# sudo kill -9 3044
root@2c2379a99b47:/home# sudo service redis-server restart
Stopping redis-server: redis-server.
Starting redis-server: redis-server.
root#2c2379a99b47:/home# sudo service redis-server status
redis-server is running
So, as it says, the process is already running; the best thing to do is to stop it, analyse it, and restart it. To do so, start with this check:
redis-cli ping #should return 'PONG'
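The rest of the sequence might look like this (a sketch; the service name may be redis or redis-server depending on your distro):
redis-cli shutdown                 # ask the running instance to stop cleanly
sudo service redis-server restart  # bring it back up under the init system
redis-cli ping                     # should return 'PONG' again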
And this solved my issue:
$ ps -ef |grep redis
root 6622 4836 0 11:07 pts/0 00:00:00 grep redis
redis 6632 1 0 Jun23 ? 04:21:50 /usr/bin/redis-server *:6379
Locate the redis process, and stop it!
$ kill -9 6632
$ service redis restart
Stopping redis-server: [ OK ]
Starting redis-server: [ OK ]
$ service redis status
Otherwise, if all this doesn't work, just try typing redis-cli
Hope it helps :)
This works for me:
$ killall redis-server
And combining everything in one line:
$ killall redis-server; redis-server
I read the documentation on http://www.redis.io and opened the redis.conf file to configure the redis-server; it's located at /etc/redis/redis.conf
$ sudo subl /etc/redis/redis.conf
Instead of the Sublime editor you can use an editor of your choice, e.g. nano, vi, emacs, vim, gedit.
In this file I uncommented the #bind 127.0.0.1 line. Hence, instead of 0.0.0.0:6379 it is now 127.0.0.1:6379
Restart the redis server
$ sudo service redis-server restart
It will state: The server is now ready to accept connections on port 6379
This will put your server up. For more detailed configuration and settings you can follow this guide on redis-server on Ubuntu.
I prefer to use the command with the -ef params:
ps -ef | grep redis
The -ef means:
-A  Display information about other users' processes, including those without controlling terminals.
-e  Identical to -A.
-f  Display the uid, pid, parent pid, recent CPU usage, process start time, controlling tty, elapsed CPU usage, and the associated command. If the -u option is also used, display the user name rather than the numeric uid. When -o or -O is used to add to the display following -f, the command field is not truncated as severely as it is in other formats.
Then kill the pid:
kill -9 $pid
You may try
$ make
then
$ sudo cp src/redis-cli /usr/local/bin/
in the terminal to install redis and its redis-cli command.
Finally, you can use the redis-cli shutdown command. Hope this answer helps you.
Killing the process that was started at boot worked for me. To prevent redis from starting at boot on Ubuntu:
sudo systemctl disable redis-server
In my case, I tried several times to kill the process manually and it didn't work. So I took the easy path: reinstallation, and it worked like a charm after that. If you're on Debian/Ubuntu:
sudo apt remove redis-server // No purge needed
sudo apt update
sudo apt install redis-server // Install once again
sudo systemctl status redis-server // Check status of the service
redis-server // initializes redis
Not the most technical path, but nothing else worked.
It may also happen if you installed Redis via snap and are trying to run it from somewhere else.
If this is the case, you can stop the service via sudo snap stop redis.
I'm not sure, but when I installed redis for the first time and faced this message, it turned out that redis-server takes configuration parameters or a path/to/redis.conf. When I passed nothing after "redis-server", it tried to start with the default configuration (bind 127.0.0.1, port 6379 ...), which collided with the instance already running on those same defaults. That's why I saw this error, but possibly you have another reason.
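A quick way to confirm that reading (just a sketch; the config path is the usual Debian/Ubuntu location and may differ on your system):
redis-cli -p 6379 ping              # PONG means an instance already owns the default port
redis-server /etc/redis/redis.conf  # start with an explicit config instead of the built-in defaults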
The problem shows that the default port that redis uses, 6379, is already in use by some other process.
So simply change the port of redis server
redis-server --port 7000 will start a Redis server using port number 7000.
and then
redis-cli -p 7000 - Now use this to make your client connect to that port.

/var/run/redis/redis.pid exists, process is already running or crashed

Redis went quiet on me.
user#mycomputer:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
I try to restart the service by doing this
sudo /etc/init.d/redis_6379 stop
/var/run/redis/redis.pid exists, process is already running or crashed
But no luck. Logs didn't show an error as well.
Got it fixed by backing up the redis.rdb file; mine is located at
/var/lib/redis
Check your config file "/etc/redis/redis.conf" for the rdb file's location and do this:
sudo mv /var/lib/redis/redis.rdb /var/lib/redis/redis_backup.rdb
Then recreate the redis.rdb file:
sudo touch redis.rdb
Run the redis-server with the conf and it should work
sudo redis-server /etc/redis/redis.conf
To get it fixed in a tidy way: recreating the redis.rdb file, as suggested in one of the answers here, will purge all the cache recorded so far, and redis will start up fresh with no cached data.
This warning message indicates a system crash or improper shutdown: "/var/run/redis/redis.pid exists, process is already running or crashed"
Just delete /var/run/redis/redis.pid file and restart the server again.
Note: you might have lost the latest cache changes that weren't flushed to disk because of the untidy shutdown. This data loss can be minimized with a frequent disk-flush configuration in the redis conf file (in my case /etc/redis/6379.conf):
save 900 1
save 300 10
save 60 10000
Or try AOF persistence; see the Redis persistence documentation for more details.
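For reference, a minimal sketch of enabling AOF in redis.conf (the directive names are standard; the fsync policy shown is a common choice, not necessarily yours):
appendonly yes
appendfsync everysec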
Depending on how you installed redis, the pid file can be found at /var/run/redis_6379.pid.
What happened is that redis crashed, but the pid file is still there. So you just have to delete it:
sudo rm -f /var/run/redis_6379.pid
Then start redis again:
sudo /etc/init.d/redis_6379 start
If you can't find it, I suggest installing redis "more properly": follow the redis quickstart guide, in the "Installing Redis more properly" section.
You can find it here:
https://redis.io/topics/quickstart
Run the redis-server with config.
sudo redis-server redis.conf

Redis Daemon not creating a PID file

The Redis startup script is supposed to create a pid file at startup, but I've confirmed all the settings I can find, and no pid file is ever created.
I installed redis by:
$ yum install redis
$ chkconfig redis on
$ service redis start
In my config file (/etc/redis.conf) I checked to make sure these were enabled:
daemonize yes
pidfile /var/run/redis/redis.pid
And in the startup script (/etc/init.d/redis) there is:
exec="/usr/sbin/$name"
pidfile="/var/run/redis/redis.pid"
REDIS_CONFIG="/etc/redis.conf"
[ -e /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
[ -f $REDIS_CONFIG ] || exit 6
[ -x $exec ] || exit 5
echo -n $"Starting $name: "
daemon --user ${REDIS_USER-redis} "$exec $REDIS_CONFIG"
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $name: "
killproc -p $pidfile $name
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
These are the settings that came by default with the install. Any idea why no pid file is created? I need to use it for Monit.
(The system is RHEL 6.4 btw)
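For context, the Monit stanza that would consume this pid file might look roughly like this (a sketch; the start/stop commands assume the SysV init script above):
check process redis with pidfile /var/run/redis/redis.pid
    start program = "/sbin/service redis start"
    stop program  = "/sbin/service redis stop"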
For those experiencing this on Debian Buster:
Editing
nano /etc/systemd/system/redis.service
and adding this line below the [Service] header of the redis unit:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
It is supposed to look like this:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStop=/bin/kill -s TERM $MAINPID
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/run/redis/redis-server.pid
then:
sudo systemctl daemon-reload
sudo systemctl restart redis.service
Check redis.service status:
sudo systemctl status redis.service
The pid file now should appear.
On my Ubuntu 18.04, I was getting the same error.
Error reported by redis (on /var/log/redis/redis-server.log):
# Creating Server TCP listening socket ::1:6379: bind: Cannot assign requested address
This is because I've disabled IPv6 on this host, and the redis-server package (version 5:4.0.9-1) for Ubuntu comes with:
bind 127.0.0.1 ::1
Editing /etc/redis/redis.conf and removing the ::1 address solves the problem. Example:
bind 127.0.0.1
Edit: As pointed out in the comments (thanks to @nicholas-vasilaki and @tommyalvarez), by default redis only allows connections from localhost. Commenting out the whole line, using:
# bind 127.0.0.1 ::1
works, but makes redis listen on the network (not only on localhost).
More details can be found in redis configuration file.
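If you need access from the network without opening redis to everything, a middle ground (sketch; 192.0.2.10 is a placeholder for your host's own LAN address) is to list specific addresses:
bind 127.0.0.1 192.0.2.10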
The problem was that the user redis did not have permission to create the pid file (or the directory it was in). Fix:
sudo mkdir /var/run/redis
sudo chown redis /var/run/redis
Then I killed and restarted redis and sure enough, there was redis.pid
On CentOS 7 I needed to edit the service file:
$ vi /usr/lib/systemd/system/redis.service
and add the following line:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
And then restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart redis.service
Reference:
CentOs 7: Systemd & PID File
I had a similar problem on Debian Buster: systemd complains about the missing PID file, even though the file exists and redis is running.
On my system the solution using "echo $MAINPID > /run/redis/redis.pid" works by accident, although/because the real PID file is set to /run/redis/redis-server.pid (spot the different filenames!), and on my system the content of /run/redis/redis.pid (the one from the echo) was empty.
In a discussion on systemd-devel@lists.freedesktop.org someone writes:
... systemd will add the MAINPID environment variable any time it
knows what the main PID is. It learns this by reading the PID file ...
So by the time ExecStartPost runs, the main PID may or may not be
known.
Having an empty MAINPID environment variable can even be harmful: if you notice the different PID filenames in the suggested solution and correct them, you may end up in a situation where the PID file written by redis gets overwritten by an empty file. This happened to me; the result was that systemctl start redis.service never finished.
I also noticed that another server with exactly the same OS and configuration, but different hardware, did not have this problem.
My conclusion is that it just hits some sort of race condition: systemd seems to look for the PID file just a little too early. On my system, whatever command I used as ExecStartPost added enough delay to make the error disappear.
Therefore a solution is to use "sleep 1" (sleep 0.1 works too, but 1 second may be on the safe side):
ExecStartPost=/bin/sleep 1
/etc/systemd/system/redis.service now looks like:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStartPost=/bin/sleep 1
ExecStop=/bin/kill -s TERM $MAINPID
PIDFile=/run/redis/redis-server.pid
...
An alternative solution is to use "supervised systemd":
/etc/redis/redis.conf:
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd
Override the redis-server.service file using:
systemctl edit redis-server.service
and enter the following:
[Service]
Type=notify
Reload the service and the error should be gone:
sudo systemctl restart redis.service
sudo systemctl status redis.service
Here from 2018.
Before starting: I am on Ubuntu 18.04. I wrote this in case anyone comes here searching for the same error.
In my case the error was the same but the problem was quite different; none of the solutions proposed here worked.
So I checked whether any logs existed and looked for anything useful. Found them at:
cat /var/log/redis/redis-server.log
Searching the logs, I found that the problem was another service listening on the same port.
2963:C 21 Sep 11:07:33.007 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2963:C 21 Sep 11:07:33.008 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=2963, just started
2963:C 21 Sep 11:07:33.008 # Configuration loaded
2974:M 21 Sep 11:07:33.009 # Creating Server TCP listening socket 127.0.0.1:6379: bind: Address already in use
I checked who was listening:
netstat -anp | grep 6379
Found it.
tcp6 0 0 :::6379 :::* LISTEN 3036/docker-proxy
It was a Docker image of redis installed by another tool:
root@yavuz:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6a94d401700 redis:3.2 "docker-entrypoint.s…" 20 hours ago Up 3 hours 0.0.0.0:6379->6379/tcp incubatorsuperset_redis_1
So I stopped the Docker image:
root@yavuz:~# docker stop incubatorsuperset_redis_1
And redis-server started without a problem:
root@yavuz:~# systemctl start redis-server
root@yavuz:~# systemctl status redis-server
● redis-server.service - Advanced key-value store
Active: active (running) since Fri 2018-09-21 11:10:34 +03; 1min 49s ago
Process: 3671 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
For CentOS:
In my case the Redis service is named redis.service. To fix it, edit:
systemctl edit redis.service
Add this:
[Service]
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/var/run/redis/redis.pid
In my case it created the file: /etc/systemd/system/redis.service.d/override.conf
Then restart the service:
systemctl daemon-reload
systemctl restart redis
And the pid file is:
cat /var/run/redis/redis.pid
=> 19755
sudo nano /etc/redis/redis.conf
Inside the file, find the supervised directive. This directive allows you to declare an init system to manage Redis as a service, providing you with more control over its operation. The supervised directive is set to no by default. Since you are running Ubuntu, which uses the systemd init system, change this to systemd.
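After that edit, the relevant line in /etc/redis/redis.conf should simply read:
supervised systemd
Then restart the service (the service name is assumed to be redis-server on Ubuntu):
sudo systemctl restart redis-server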
By default, Redis does not run as a daemon, and that is why it does not create a pid file. If you look at /etc/redis/redis.conf, it says so explicitly under General.
#By default Redis does not run as a daemon. Use 'yes' if you need it...
daemonize no
So all you need to do is to change it to daemonize yes
For people struggling to get it to work on Ubuntu 18.04: you need to edit /etc/redis/redis.conf and update the pidfile declaration to the following:
pidfile "/var/run/redis/redis-server.pid"
Ubuntu 18. /var/run/redis had the wrong permissions:
drwxr-sr-x 2 redis redis 60 Apr 27 12:22 redis
Changed to 755 (drwxrwxr-x) and the pid file now appears.
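A sketch of that fix, assuming the same path as above (755 gives drwxr-xr-x; 775 also works since redis owns the directory):
sudo chmod 755 /var/run/redis
sudo systemctl restart redis-server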

Error with rabbit-mq server

I am trying to set up OpenStack on Ubuntu 12.04 using devstack. Now, the error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why am I getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you provide the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see from cat /etc/hostname, the hostname is the same as in /etc/hosts:
Run sudo rabbitmq-server start to start the rabbitmq-server
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. Rabbit depends on being able to resolve a hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This way works for me.
First go to
sudo vim /etc/hosts
and set
127.0.0.1 <hostname>
then open firewall
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
In a clean environment this will not happen. It occurs when you have run devstack several times and one of the runs failed without being cleaned up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes; then it should be fine to run ./stack.sh.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh.
Just to be sure, take a look at your local network interfaces:
ip addr
If there's no lo interface, then you should enable it:
ifconfig lo up
Then restart the server again and see if it works now:
systemctl start rabbitmq-server
I had the same problem even though my /etc/hosts and DNS were OK. I suspect that the SysV init script was started too early, when the network was not ready yet. I rewrote the startup script for systemd on CentOS 7.8 and it seems to work well now.
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target

rabbitmq refusing to start

I have installed rabbitmq on Ubuntu and am trying to start it using rabbitmq-server start; however, I'm getting this error:
Activating RabbitMQ plugins ...
0 plugins activated:
node with name "rabbit" already running on "mybox"
diagnostics:
- nodes and their ports on mybox: [{rabbit,38618},
{rabbitmqprelaunch13346,41776}]
- current node: rabbitmqprelaunch13346@mybox
- current node home dir: /var/lib/rabbitmq
- current node cookie hash: 8QRKGluOJOcZ4AAkEdFwQg==
So I try to stop it or restart it using service rabbitmq-server restart, but I get the following error: Restarting rabbitmq-server: RabbitMQ is not running
The server's host name hostname -s is mybox.
How do I stop the currently running instance, or at least, how do I manage it? I have no access to it and yet I'm not able to run rabbitmq properly.
Thank you.
Rabbitmq is set to start automatically after it's installed.
I don't think it is configured to run with the service command.
To see the status of rabbitmq
sudo rabbitmqctl status
To stop the rabbitmq
sudo rabbitmqctl stop
(Try the status command again to see that it's stopped).
To start it again, the recommended method is
sudo invoke-rc.d rabbitmq-server start
These all work with the vanilla ubuntu install using apt-get
Still not working?
If you've tried unsuccessfully to start or restart rabbitmq, check to see how many processes are running.
ps -ef | grep rabbit
There should be 5 processes running as the user rabbitmq.
If you have more, particularly if they're running as other users (such as root, or your own user) you should stop these processes.
The cleanest way is probably to reboot your machine.
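If a reboot is not an option, a gentler sketch is to stop the stray Erlang processes directly and then start the service again (the pkill patterns are assumptions; double-check with ps first):
sudo pkill -u rabbitmq
sudo pkill -f beam.smp
sudo invoke-rc.d rabbitmq-server start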
rabbitmq-server refuses to start if the hostname -s value has changed.
The solution suggested here is only for test/development environments.
I had to delete the database to fix it locally.
i.e. empty the folder /var/lib/rabbitmq (Ubuntu) or /usr/local/var/lib/rabbitmq/ (Mac).
I had a similar problem, but these suggestions didn't work for me (a restart didn't either). When I run the rabbitmq-server command, I get a response like this:
$ rabbitmq-server
BOOT FAILED
===========
Error description:
{error,{cannot_log_to_file,"/var/log/rabbitmq/rabbit@haber01.log",
{error,eacces}}}
....
When I checked the permissions of the /var/log/rabbitmq/rabbit@haber01.log file, I saw that the group did not have write permission for that file. So I gave the group permission with this command:
/var/log/rabbitmq/$ chmod g+w *
Then the problem was gone!
Maybe this answer helps someone.
Seems like the Mnesia database was corrupted. Had to delete it to get sudo service rabbitmq-server start going again!
$ sudo rm -rf /var/lib/rabbitmq/mnesia/
Also make sure that any stray rabbit processes are killed before clearing out
$ ps auxww | grep rabbit | awk '{print $2}' | sudo xargs kill -9
And then
$ sudo service rabbitmq-server start
If you use celery, your queues could reach their max size and rabbit won't start because of that. You may not even be able to use rabbitmqctl, so if you can afford to clean the queues, just remove
/var/lib/rabbitmq/mnesia/rabbit@<host>/queues
on unix (look for the mnesia DB path on your system).
Be careful: this will remove everything you have in rabbit, so treat it as a last resort.
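A sketch of that last-resort cleanup (stop the broker first; $(hostname -s) substitutes your node name, and the mnesia path may differ on your system):
sudo service rabbitmq-server stop
sudo rm -rf /var/lib/rabbitmq/mnesia/rabbit@$(hostname -s)/queues
sudo service rabbitmq-server start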
Have a look at what is in the log of the node that you are trying to start. It will be in /var/log/rabbitmq/.
It was SELinux in my case; rabbit could not bind to its ports.
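If you suspect the same, a quick check might be (a sketch; this needs the audit tools installed, and a temporary setenforce 0 is for testing only):
getenforce
sudo ausearch -m avc -ts recent | grep -i rabbit
sudo setenforce 0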
My brew version of rabbitmq refused to start (after working fine for years without modification by me) too.
$ cat /usr/local/etc/rabbitmq/rabbitmq-env.conf
CONFIG_FILE=/usr/local/etc/rabbitmq/rabbitmq
NODE_IP_ADDRESS=127.0.0.1
NODENAME=rabbit@localhost
RABBITMQ_LOG_BASE=/usr/local/var/log/rabbitmq
I edited out rabbit@ from NODENAME, and brew services restart rabbitmq started working again.
If the standard stop and start are not working, list the rabbitmq processes that are running using
ps aux | grep rabbitmq
Kill the beam.smp process using
kill -9 {process id}
and start the rabbitmq-server again.