How to run a command when Docker container restarts - apache

I'm new to using Docker and docker-compose so apologies if I have some of the terminology wrong.
I've been provided with a Dockerfile and docker-compose.yml and have successfully built the images and got the container up and running (by running docker-compose up -d), but I would like to update things to make my process a bit easier, as occasionally I need to restart Apache on the container (WordPress) by accessing it using:
docker exec -it 89a145b5ea3e /bin/bash
Then typing:
service apache2 restart
My first problem is that there are two other services that I need to run for my project to work correctly and these don't automatically restart when I run the above service apache2 restart command.
The two commands I need to run are:
service memcached start
service cron start
I would like to know how to always run these commands whenever apache2 is restarted.
Secondly, I would like to configure my Dockerfile or docker-compose.yml (not sure where I'm supposed to be adding this) so that this behaviour is baked in to the container/image when it is built.
I've managed to install the services by adding them to my Dockerfile but can't figure out how to get these services to run when the container is restarted.
Below are the contents for relevant files:
Dockerfile:
FROM wordpress:5.1-php7.3-apache
RUN apt-get update -y \
 && apt-get install -y vim net-tools memcached cron
docker-compose.yml:
version: "3.3"
services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql:consistent
    ports:
      - "3303:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: vagrant
      MYSQL_DATABASE: wp_database
      MYSQL_USER: root
      MYSQL_PASSWORD: vagrant
  wordpress:
    container_name: my-site
    build: .
    depends_on:
      - db
    volumes:
      - ./my-site-wp:/var/www/html/:consistent
    ports:
      - "8001:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: vagrant
      WORDPRESS_DB_NAME: wp_database
volumes:
  db_data:
  my-site-wp:

...occasionally I need to restart Apache on the container (WordPress)...
Don't do that. It's a really, really bad habit. You're treating the container like a server where you go in and fix things that break. Think of it like it's a single application -- if it breaks, restart the whole dang thing.
docker-compose restart wordpress
Or restart the whole stack, even.
docker-compose restart
Treat your containers like cattle not pets:
Simply put, the “cattle not pets” mantra suggests that work shouldn’t grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle – self-sufficient, easily replaced and manageable by the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.

For each service in the compose file, you can add a command key in the YAML, which overrides the image's default command and runs every time the container starts. By contrast, RUN instructions in the Dockerfile only run while the image is being built. Ex:
db:
  image: mysql:5.7
  volumes:
    - ./db_data:/var/lib/mysql:consistent
  command: # bash command goes here
  ports:
    - "3303:3306"
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: vagrant
    MYSQL_DATABASE: wp_database
    MYSQL_USER: root
    MYSQL_PASSWORD: vagrant
However, this is not quite what you are after. Why would you mess with one container from another? Note that the depends_on flag only controls startup order; it does not restart downstream services. It seems your memcached instance isn't containerized on its own, so you are trying to fit it into application-level logic, which is the antithesis of Docker. This concern belongs at the infrastructure level, handled by the machine or an orchestrator (e.g. Kubernetes).
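If you do want all three processes inside the one WordPress container, a hedged sketch is to override the service's command in docker-compose.yml so the helpers start before Apache. This assumes apache2-foreground is the image's default foreground command (it is for the official wordpress:*-apache images); be aware the image's entrypoint may skip its WordPress setup when the command doesn't start with apache2, which is usually fine with an already-populated bind-mounted html directory:

```yaml
wordpress:
  build: .
  # start the helper services, then replace the shell with Apache as PID 1
  command: sh -c "service memcached start && service cron start && exec apache2-foreground"
```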

Related

docker volume, configuration files aren't generated

Same kind of issue as: what causes a docker volume to be populated?
I'm trying to share Apache's configuration files in /etc/apache2 with my host, but the files aren't generated automatically within the shared folder.
As minimal example:
Dockerfile
FROM debian:9
RUN apt update
#Install apache
RUN apt install -y apache2 apache2-dev
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker-compose.yml
version: '2.2'
services:
  apache:
    container_name: apache-server
    volumes:
      - ./log/:/var/log/apache2
      - ./config:/etc/apache2/ # removing this line lets the log files generate
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
With this configuration, neither ./config nor ./log is populated with the files/folders generated by the container, even though the logs should contain some errors (I get apache-server | The Apache error log may have more information.)
If I remove the ./config volume, the Apache log files are generated properly. Any clue why this can happen? How can I share the Apache config files?
I'm having the same issue with a Django settings file, so it seems to be related to config files generated by an application.
What I tried:
- using VOLUME in the Dockerfile
- running docker-compose as root, or chmod 777 on the folders
- creating files in those directories from inside the container, to see if they appear on the host (they do)
- on the host, creating the shared folders owned by my user (they are owned by root when automatically generated)
- trying with docker run, which has exactly the same issue
For specs:
- Docker version 19.03.5
- using a VPS with debian buster
- docker-compose version 1.25.3
Thanks for helping.

Docker stack: Apache service will not start

Apache service in docker stack never starts (or, to be more accurate, keeps restarting). Any idea what's going on?
The containers are the ones in: https://github.com/adrianharabula/lampstack.git
My docker-compose.yml is:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ../db_files:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: toor
      #MYSQL_DATABASE: testdb
      #MYSQL_USER: docky
      #MYSQL_PASSWORD: docky
    ports:
      - 3306:3306
  p71:
    # depends_on:
    #   - db
    image: php:7.1
    build:
      context: .
      dockerfile: Dockerfile71
    links:
      - db
    volumes:
      - ../www:/var/www/html
      - ./php.ini:/usr/local/etc/php/conf.d/php.ini
      - ./virtualhost-php71.conf:/etc/apache2/sites-available/001-virtualhost-php71.conf
      - ../logs/71_error.log:/var/www/71_error.log
      - ../logs/71_access.log:/var/www/71_access.log
    environment:
      DB_HOST: db:3306
      DB_PASSWORD: toor
    ports:
      - "81:80"
  pma:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
And I start it with:
docker stack deploy -c docker-compose.yml webstack
db and pma services start correctly, but p71 service keeps restarting
docker service inspect webstack_p71
indicates:
"UpdateStatus": {
    "State": "paused",
    "StartedAt": "2018-01-19T16:28:17.090936496Z",
    "CompletedAt": "1970-01-01T00:00:00Z",
    "Message": "update paused due to failure or early termination of task 45ek431ssghuq2tnfpduk1jzp"
}
As you can see in the docker-compose.yml, I already commented out the service dependency to avoid failures if dependencies are not met at first run.
$ docker service logs -f webstack_p71
$ docker service ps --no-trunc webstack_p71
What should I do to get that Apache/PHP (p71) service running?
All the containers work when run independently:
$ docker build -f Dockerfile71 -t php71 .
$ docker run -d -p 81:80 php71:latest
First of all, the depends_on option doesn't work in swarm mode with version 3 (refer to this issue).
In short...
depends_on is a no-op when used with docker stack deploy. Swarm mode
services are restarted when they fail, so there's no reason to delay
their startup. Even if they fail a few times, they will eventually
recover.
Of course, even though depends_on doesn't work, p71 should still run properly, since it would be restarted after a failure.
Thus, I think there is some error while running the p71 service; that would be why it keeps restarting. However, I am not sure what is happening inside the service with only the information you have provided.
You could check the trace by checking the logs:
$ docker service logs -f webstack_p71
and error message
$ docker service ps --no-trunc webstack_p71 # check ERROR column

How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to create a thread inside the Django application, so I used Celery with Redis as the Celery broker.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, it hangs :
RUN redis-server
If I try to tweak the previous line, it does not work either :
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?
The Docker "last command" is already used with:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers. They are executed while the image is being built, not at runtime.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD (make sure the script is executable, e.g. chmod +x start.sh):
CMD ./start.sh
In the start.sh script you write the following:
#!/bin/bash
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker-compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top level processes. redis and mysql would be exposed with the dns name "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
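For instance, the app's Celery settings would build the broker URL from the compose service name rather than localhost. This is a sketch; the variable names here are illustrative assumptions, not taken from the question:

```python
# Build a Celery broker URL from the compose service name "redis",
# which Docker's internal DNS resolves to the redis container.
REDIS_HOST = "redis"   # compose service name, not "localhost"
REDIS_PORT = 6379
BROKER_URL = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
print(BROKER_URL)  # → redis://redis:6379/0
```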
There is a lot of good info on the Docker-compose docs
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
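To make supervisord the container's top-level process, the Dockerfile would end with something like the following. The paths are assumptions based on a typical Debian/Ubuntu supervisor install, not taken from the question:

```
# copy the config above into supervisor's include directory (path is an assumption)
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
# -n keeps supervisord in the foreground so the container stays alive
CMD ["/usr/bin/supervisord", "-n"]
```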

Reflecting code changes in docker containers

I have a basic hello-world Node application written in Express. I have just dockerised this application by creating a basic Dockerfile in the application's root directory, created a Docker image, and then ran that image in a container:
# Dockerfile
FROM node:0.10-onbuild
RUN npm install
EXPOSE 3000
CMD ["node", "./bin/www"]
sudo docker build -t docker-express .
sudo docker run --name test-container -d -p 80:3000 docker-express
I can access the web application. My question is: when I make code changes to my application, e.g. change 'hello world' to 'hello bob', my changes are not reflected in the running container.
What is a good development workflow for getting changes into the container? Surely I shouldn't have to delete and rebuild the image after each change?
Thank you :)
Check out the section on Sharing Volumes. You should be able to share your host volume with the docker container and then any time you need a change you can just restart the server (or have something restart it for you!).
Your command would look something like: sudo docker run -v /src/webapp:/webapp --name test-container -d -p 80:3000 docker-express
Which mounts /src/webapp (on the host) to /webapp (in the container).
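If you are using docker-compose anyway, the same bind mount can be expressed in the compose file instead of a long docker run line (a sketch using the names from this answer):

```yaml
web:
  image: docker-express
  volumes:
    - /src/webapp:/webapp   # host path : container path
  ports:
    - "80:3000"
```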

redis-server auto restart

I have a question here.
I already installed redis-server on my CentOS VPS for my WordPress front-end cache, but I have a little problem:
sometimes my redis-server gets closed/disconnected suddenly, and I have to restart it manually using the command:
/etc/init.d/redis-server start
My question is:
how can I auto-start the redis-server if it goes down or crashes suddenly?
I installed Redis using this tutorial:
http://www.saltwebsites.com/2012/install-redis-245-service-centos-6
Many thanks in advance.
how can I auto-start the redis-server if it goes down or crashes
suddenly
Try to look at tools like upstart or monit which can be used to respawn redis if it dies unexpectedly.
If you use docker with docker-compose:
version: '3'
services:
  redis:
    container_name: "redis"
    image: redis:5.0.3-alpine
    command: ["redis-server", "--appendonly", "yes"]
    hostname: redis
    ports:
      - "6339:6379"
    volumes:
      - /opt/docker/redis:/data
    restart: always
look at the last line:
restart: always
I don't know about CentOS exactly, but on Linux distributions with systemd:
sudo systemctl enable redis_6379
The above worked like a charm. It creates a symlink under /etc/systemd/system/ so that redis_6379.service is started on boot.
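Note that enabling the service only covers start-on-boot; for crash recovery the unit file itself needs a restart policy. A minimal sketch of such a unit, where both the binary and config paths are assumptions based on the tutorial linked above:

```
# /etc/systemd/system/redis_6379.service (paths are assumptions)
[Unit]
Description=Redis data store
After=network.target

[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/6379.conf
# respawn the process if it dies unexpectedly
Restart=always

[Install]
WantedBy=multi-user.target
```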