Celery not automatically starting on Amazon Linux AMI 2013.03.1 - apache

I can't seem to get my celeryd script, located in /etc/init.d/celeryd, to start automatically every time my Amazon Linux AMI 2013.03.1 machine boots. I have to manually run /etc/init.d/celeryd start. When I do, it starts perfectly and works right away.
Any ideas? I tried
sudo chkconfig /etc/init.d/celeryd on

You need to write a simple startup script:
Create a file called celeryd.sh
vim /etc/init.d/celeryd.sh
Inside that file:
#!/bin/sh
### Starts celery on boot up ###
/etc/init.d/celeryd start
Make it executable:
chmod +x /etc/init.d/celeryd.sh
Done.
You can run init 6 to reboot and test whether it works.
More on: http://www.cyberciti.biz/tips/linux-how-to-run-a-command-when-boots-up.html
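As an aside, chkconfig (which the question tried) only manages scripts that declare a chkconfig header, which is likely why the command had no effect. A minimal sketch of what /etc/init.d/celeryd.sh would need at the top (the runlevels and start/stop priorities here are assumptions, not values from the original script):
#!/bin/sh
# chkconfig: 2345 99 01
# description: Starts celeryd at boot
/etc/init.d/celeryd start
Then register it by name rather than by path:
sudo chkconfig --add celeryd.sh
sudo chkconfig celeryd.sh on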

Related

How to attach Bucket to Google Compute Engine VM on Startup?

On startup, I would like to copy the contents of my bucket to a VM running Container-Optimized OS. When the server shuts down, I'd like to save the changes back to the bucket.
I've tried making a startup script
#!/bin/bash
toolbox
gsutil cp -r gs://my-bucket/
However, this causes the VM to fail on startup, even though the script works if I run it manually.
I think I found a reasonable solution. My script has changed to
#! /bin/bash
toolbox --bind=/home/username/bucket-folder:/my-bucket <<< "gsutil cp -r /my-bucket/* gs://my-bucket"
So what happens is: toolbox --bind binds a folder from the server into the toolbox container, and <<< passes the whole command to the container when it starts up. The copy targets the newly bound directory, so the files end up back on the server.
Now when I bind the directory in my Docker container, everything is there!
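For the boot-time direction the question asks about (bucket to VM), a sketch of the mirrored command under the same assumptions (the bind path and bucket name are the question's placeholders):
#! /bin/bash
toolbox --bind=/home/username/bucket-folder:/my-bucket <<< "gsutil cp -r gs://my-bucket/* /my-bucket/"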
I just tried:
#! /bin/bash
gsutil cp -r gs://my-bucket /
And it worked for me. What is the toolbox command that you were executing previously?
In any case, you can see what is failing in the Serial Port Output.
EDIT: In Container-Optimized OS this does not work, as that OS does not have the gsutil package preinstalled. Refer to @DanBaba's answer.

Docker container immediately exits when started after system reboot

I'm starting my custom docker container (OpenSuse, PHP, Apache, some add-ons) this way:
docker build --build-arg http_proxy=http://user:pwd@ip:port -t prefix/myapp myapp
docker create --name=myapp --hostname=myapp -p 80:80 -v ${PWD}/myapp:/srv/www/myapp prefix/myapp
docker start myapp
This works perfectly. I can stop and later start the container. However, if I reboot my host system (Windows 10), I'm not able to start the container again. When I try to, the container immediately exits.
How can this be? As stated above, I use the -p and -v flags to map ports and mount a directory.
This is the output of...
docker logs myapp
-> httpd (pid 1) already running
May or may not be your problem (the logs will be telling), but I ran into an issue with Docker on Windows where the container tries to start before the file system is ready, which causes an error with the volume mounts. I never found a great solution aside from running a task that verifies the volume mount and restarts the container if it failed, as sketched below.
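A sketch of such a watchdog (container name and mount path reused from the question; running this on a schedule after boot is an assumption):
#!/bin/sh
# Hypothetical watchdog: if the bind-mounted directory looks empty inside
# the container, the volume mount likely failed at boot, so restart it.
if [ -z "$(docker exec myapp ls -A /srv/www/myapp 2>/dev/null)" ]; then
    docker restart myapp
fi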

How can I automatically start Logstash, filebeat, metricbeat, redis, stunnel in RHEL 7

I am new to shell scripting.
I need to write a script that starts the following processes when the Linux server boots:
logstash, filebeat, metricbeat, redis, stunnel
Where should I place the script so that it runs while the Linux server is booting? Is it /etc/init.d?
Thanks
To run a script at boot, add a line to the /etc/rc.local file with the absolute path of your script after the sh command. Also, make sure that the script is executable. For example,
sh /path/to/your/script
Also, in RHEL 7, there is a command to start any service at boot.
Command:
systemctl enable <service-name>
This will start the service at boot.
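For the five processes listed in the question, a sketch (this assumes each package installed a systemd unit under these exact names, which can vary by version and distribution):
# Enable each service so systemd starts it at every boot
for svc in logstash filebeat metricbeat redis stunnel; do
    sudo systemctl enable "$svc"
done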

How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to create a thread inside the Django application, so I used Celery, with Redis as the Celery database.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, it hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and make it restart when the Docker container is restarted?
The Docker "last command" is already used with:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers; they are executed at build time of the image, not at runtime.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
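Putting it together, a minimal Dockerfile sketch (the base image and paths are assumptions based on the question, and python-pip is added because the stock Ubuntu 14.04 image does not ship pip):
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install redis-server python-pip
RUN pip install redis
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD /start.sh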
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker Compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed with the DNS names "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
There is a lot of good info in the Docker Compose docs.
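To bring everything up from that file, you would typically run:
# Build (if needed) and start all four containers in the background
docker-compose up -d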
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command=/usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
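For supervisord to be the container's single top-level process (as described above), CMD would point at it and keep it in the foreground; a sketch, assuming the conf path used by the Debian/Ubuntu supervisor package:
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]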

want to run redis-server in background nonstop

I downloaded the redis-2.6.16.tar.gz file and installed it successfully. After installing, I ran src/redis-server and it worked fine.
But I don't want to run src/redis-server manually every time; rather, I want redis-server running continuously as a background process.
So far, after installing, I did the following:
1. vim redis.conf, where I changed:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
But I got the same result. What mistake did I make?
Once Redis runs in the background, I will also run Juggernaut as a background process with the following command:
nohup node server.js
But I am not able to make Redis run in the background. Please provide a solution.
Since Redis 2.6 it is possible to pass Redis configuration parameters using the command line directly. This is very useful for testing purposes.
redis-server --daemonize yes
Check if the process started or not:
ps aux | grep redis-server
I think the best way is to use Redis' config file:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
Set daemonize to yes in the config file. Say the file is ~/.redis/redis.conf, then just run
$ redis-server ~/.redis/redis.conf
And it just works.
Or you can simply run it as src/redis-server redis.conf &
For Windows:
Step 1: Install Redis as a service
redis-server --service-install
Step 2: Run it in the background
redis-server --service-start
To run redis-server in the background and ignore its output:
nohup redis-server &
To check the server:
ps aux | grep redis-server
To kill the server:
sudo service redis-server stop
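If "nonstop" also means surviving reboots, and your hand-built Redis has no init integration, one option is a cron @reboot entry (the paths below are placeholders, not values from the question):
# Added via: crontab -e
@reboot /path/to/redis-2.6.16/src/redis-server /path/to/redis.conf
With daemonize yes in that config (as above), Redis will background itself at every boot.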