Can't stop foreman - redis

I have the following Procfile that I use with foreman to do development work for a Heroku site:
web: gunicorn project_name.wsgi -b 0.0.0.0:$PORT
worker: python manage.py rqworker default
redis: redis-server
Everything worked great until I added the redis line. While the app runs fine, I cannot kill foreman with Ctrl-C -- it just keeps running. The only way I can kill foreman is by killing the redis-server process.
How can I get foreman to respond to Ctrl-C (and stop)?

This usually happens because redis or memcached won't shut down, so I created a script that I run to kill the development environment. Currently it is:
#!/bin/bash
# Ask the running Redis server to shut down cleanly, then kill memcached.
redis-cli SHUTDOWN
killall memcached
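If you'd rather not kill things by hand: when redis-server picks up a config file with daemonize yes, it detaches from foreman's process tree, so Ctrl-C never reaches it. A hedged alternative is to force foreground mode in the Procfile (a sketch, assuming Redis 2.6+, which accepts config options on the command line):
# Procfile — keep redis attached to foreman so Ctrl-C propagates
web: gunicorn project_name.wsgi -b 0.0.0.0:$PORT
worker: python manage.py rqworker default
redis: redis-server --daemonize no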

Related

How to make the redis-server process run in the foreground?

When I run Redis with redis-server CONFIG_FILE, the process runs in the background. If I run it without the CONFIG_FILE parameter, it does not run in the background. How can I make it run in the foreground with a configuration file? This matters when the command is used in Docker: the container stops running if its main process goes to the background.
Try setting daemonize to 'no' in your CONFIG_FILE.
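That is, in the config file:
# keep redis-server attached to the foreground (required for Docker's main process)
daemonize no
With daemonize no, redis-server CONFIG_FILE stays in the foreground instead of forking, so Docker keeps tracking it.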

How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to run background tasks in the Django application, so I used Celery, with Redis as the Celery broker.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, the build hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?
The Docker "last command" (CMD) is already used by:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers; they are executed at build time of the image, not at runtime.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
# Start Redis in the background, then run uwsgi in the foreground
# so it stays the container's main process.
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
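For completeness, a sketch of wiring that script into the image; the file name and destination path are assumptions, not from the original answer:
# Dockerfile excerpt (illustrative)
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]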
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker Compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed under the DNS names "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
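One hedged way to verify that linking from inside the web container (assuming the myapp image has a shell and getent available):
$ docker-compose run web getent hosts redis   # should print the redis container's address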
There is a lot of good info in the Docker Compose docs.
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
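To make supervisord the container's top-level process, a minimal sketch; the conf destination assumes the Debian/Ubuntu supervisor package layout, which includes /etc/supervisor/conf.d/ by default:
# Dockerfile excerpt (illustrative)
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["supervisord", "-n"]
The -n flag keeps supervisord in the foreground, which is what Docker expects of its main process.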

want to run redis-server in the background nonstop

I downloaded the redis-2.6.16.tar.gz file and installed it successfully. After installing, I ran src/redis-server and it worked fine.
But I don't want to run src/redis-server manually every time; rather, I want redis-server to run continuously as a background process.
So far, after installing, I did the following:
1. Opened redis.conf in vim and changed it to:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
But I got the same result. What mistake did I make?
Once Redis runs in the background, I will also run Juggernaut as a background process with the following command:
nohup node server.js &
But I am not able to make Redis run in the background. Please suggest a solution.
Since Redis 2.6 it is possible to pass Redis configuration parameters using the command line directly. This is very useful for testing purposes.
redis-server --daemonize yes
Check if the process started or not:
ps aux | grep redis-server
I think the best way is to use Redis' config file:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
Set daemonize to yes in the config file. Say the file is ~/.redis/redis.conf, then just run
$ redis-server ~/.redis/redis.conf
And it just works.
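You can also confirm that the daemonized server is answering, since redis-cli is built alongside redis-server:
$ redis-cli ping
PONG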
Or you can simply run it as src/redis-server redis.conf &
For Windows:
Step 1: Install Redis as a service:
redis-server --service-install
Step 2: Start it in the background:
redis-server --service-start
To run redis-server in the background and ignore its output:
nohup redis-server &
To check the server:
ps aux | grep redis-server
To kill the server:
sudo service redis-server stop
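If "nonstop" also means surviving reboots, here is a hedged sketch of a systemd unit for distributions that use systemd (the binary and config paths are assumptions; when systemd supervises the process, leave daemonize set to no):
# /etc/systemd/system/redis.service (illustrative sketch)
[Unit]
Description=Redis
After=network.target
[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
Restart=always
[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now redis.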

Keep scrapyd running

I have scrapy and scrapyd installed on a Debian machine. I log in to this server over an SSH tunnel. I then start scrapyd by running:
scrapyd
Scrapyd starts up fine, and I then open up another SSH tunnel to the server and schedule my spider with:
curl localhost:6800/schedule.json -d project=myproject -d spider=myspider
The spider runs nicely and everything is fine.
The problem is that scrapyd stops running when I quit the session where I started it. This prevents me from using cron to schedule spiders with scrapyd, since scrapyd isn't running when the cronjob is launched.
My simple question is: How do I keep scrapyd running so that it doesn't shut down when I quit the ssh session.
Run it in a screen session:
$ screen
$ scrapyd
# hit ctrl-a, then d to detach from that screen
$ screen -r # to re-attach to your scrapyd process
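If screen isn't available, the same nohup approach used for Redis above works here too (the log path is an assumption):
$ nohup scrapyd > scrapyd.log 2>&1 &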
You might consider launching scrapyd with supervisor.
There is a good .conf script available here:
https://github.com/JallyHe/scrapyd/blob/master/supervisord.conf
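If you'd rather write your own, a minimal program section sketch (the command path and log file are assumptions):
[program:scrapyd]
command=/usr/local/bin/scrapyd
autostart=true
autorestart=true
stdout_logfile=/var/log/scrapyd.log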
How about:
$ sudo service scrapyd start

rabbitmq refusing to start

I have installed rabbitmq on Ubuntu and am trying to start it using rabbitmq-server start. However, I'm getting this error:
Activating RabbitMQ plugins ...
0 plugins activated:
node with name "rabbit" already running on "mybox"
diagnostics:
- nodes and their ports on mybox: [{rabbit,38618},
{rabbitmqprelaunch13346,41776}]
- current node: rabbitmqprelaunch13346@mybox
- current node home dir: /var/lib/rabbitmq
- current node cookie hash: 8QRKGluOJOcZ4AAkEdFwQg==
So I try to stop it or restart it using service rabbitmq-server restart, but I get the following error: Restarting rabbitmq-server: RabbitMQ is not running.
The server's host name (hostname -s) is mybox.
How do I stop the currently running instance, or at least, how do I manage it? I have no access to it, and yet I'm not able to run rabbitmq properly.
Thank you.
RabbitMQ is set to start automatically after it's installed.
I don't think it is configured to run with the service command.
To see the status of RabbitMQ:
sudo rabbitmqctl status
To stop RabbitMQ:
sudo rabbitmqctl stop
(Try the status command again to see that it's stopped.)
To start it again, the recommended method is:
sudo invoke-rc.d rabbitmq-server start
These all work with the vanilla Ubuntu install using apt-get.
Still not working?
If you've tried unsuccessfully to start or restart rabbitmq, check to see how many processes are running.
ps -ef | grep rabbit
There should be 5 processes running as the user rabbitmq.
If you have more, particularly if they're running as other users (such as root, or your own user) you should stop these processes.
The cleanest way is probably to reboot your machine.
rabbitmq-server refuses to start if the hostname -s value has changed.
The solution suggested here is only for test/development environments. I had to delete the database to fix it locally, i.e. empty the folder /var/lib/rabbitmq (Ubuntu) or /usr/local/var/lib/rabbitmq/ (Mac).
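A sketch of that reset on Ubuntu; it is destructive (all queues, exchanges, and users are wiped), so only do this on a dev box:
sudo service rabbitmq-server stop
sudo rm -rf /var/lib/rabbitmq/mnesia
sudo service rabbitmq-server start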
I had a similar problem, but these suggestions didn't work for me (restarting included). When I run the rabbitmq-server command, I get a response like this:
$ rabbitmq-server
BOOT FAILED
===========
Error description:
{error,{cannot_log_to_file,"/var/log/rabbitmq/rabbit@haber01.log",
{error,eacces}}}
....
When I checked the permissions of the /var/log/rabbitmq/rabbit@haber01.log file, I saw that the group did not have write permission on it. So I gave the group write permission with this command:
/var/log/rabbitmq/$ chmod g+w *
Then the problem was gone!
Maybe this answer helps someone.
Seems like the Mnesia database was corrupted. Had to delete it to get sudo service rabbitmq-server start going again!
$ sudo rm -rf /var/lib/rabbitmq/mnesia/
Also make sure that any stray rabbit processes are killed before clearing out:
$ ps auxww | grep rabbit | awk '{print $2}' | sudo xargs kill -9
And then:
$ sudo service rabbitmq-server start
If you use Celery, your queues could reach their maximum size, and RabbitMQ won't start because of that. You might not even be able to use rabbitmqctl, so if you can afford to clear the queues, just remove
/var/lib/rabbitmq/mnesia/rabbit@<host>/queues
on Unix (look for the Mnesia DB path on your system).
Be careful: this will remove everything you have in RabbitMQ, so treat it as a last resort.
Have a look at what is in the log of the node you are trying to start; it will be in /var/log/rabbitmq/.
In my case it was SELinux: RabbitMQ could not bind to its ports.
My Homebrew version of RabbitMQ also refused to start (after working fine for years without modification by me).
$ cat /usr/local/etc/rabbitmq/rabbitmq-env.conf
CONFIG_FILE=/usr/local/etc/rabbitmq/rabbitmq
NODE_IP_ADDRESS=127.0.0.1
NODENAME=rabbit@localhost
RABBITMQ_LOG_BASE=/usr/local/var/log/rabbitmq
I edited out rabbit@ in NODENAME, and brew services restart rabbitmq started working again.
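In other words, the file ends up like this, with only NODENAME changed:
CONFIG_FILE=/usr/local/etc/rabbitmq/rabbitmq
NODE_IP_ADDRESS=127.0.0.1
NODENAME=localhost
RABBITMQ_LOG_BASE=/usr/local/var/log/rabbitmq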
If the standard stop and start are not working, list the running rabbitmq processes using:
ps aux | grep rabbitmq
Kill the beam.smp process using:
kill -9 {process id}
and start the rabbitmq-server again.
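A compressed version of the same idea; note that pkill -f matches any beam.smp, so it will also kill other Erlang apps on the box:
$ sudo pkill -9 -f beam.smp && sudo service rabbitmq-server start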