Same kind of issue as: what causes a Docker volume to be populated?
I'm trying to share Apache's configuration files in /etc/apache2 with my host, but the files aren't generated automatically within the shared folder.
As minimal example:
Dockerfile
FROM debian:9
RUN apt update
# Install Apache
RUN apt install -y apache2 apache2-dev
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker-compose.yml
version: '2.2'
services:
  apache:
    container_name: apache-server
    volumes:
      - ./log/:/var/log/apache2
      - ./config:/etc/apache2/ # removing this mount lets the log files be generated
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
With this configuration, neither ./config nor ./log is populated with the files/folders generated by the container, even though the logs should contain some errors (I'm getting apache-server | The Apache error log may have more information.)
If I remove the ./config volume, the Apache log files are generated properly. Any clue why this happens? How can I share the Apache config files?
I'm having the same issue with a Django settings file, so it seems to be related to config files generated by an application.
What I tried:
- using VOLUME in the Dockerfile
- running docker-compose as root, and chmod 777 on the folders
- creating files in those directories from within the container, to see if they are created on the host (they are)
- on the host, creating the shared folders owned by my user (they are owned by root when generated automatically)
- trying with docker run: exactly the same issue
For specs:
- Docker version 19.03.5
- using a VPS with debian buster
- docker-compose version 1.25.3
Thanks for helping.
Related
I'm new to using Docker and docker-compose so apologies if I have some of the terminology wrong.
I've been provided with a Dockerfile and docker-compose.yml, and have successfully got the images built and the container up and running (by running docker-compose up -d), but I would like to update things to make my process a bit easier, as occasionally I need to restart Apache on the container (WordPress) by accessing it using:
docker exec -it 89a145b5ea3e /bin/bash
Then typing:
service apache2 restart
My first problem is that there are two other services that I need to run for my project to work correctly and these don't automatically restart when I run the above service apache2 restart command.
The two commands I need to run are:
service memcached start
service cron start
I would like to know how to always run these commands when apache2 is restarted.
Secondly, I would like to configure my Dockerfile or docker-compose.yml (not sure where I'm supposed to be adding this) so that this behaviour is baked in to the container/image when it is built.
I've managed to install the services by adding them to my Dockerfile but can't figure out how to get these services to run when the container is restarted.
Below are the contents for relevant files:
Dockerfile:
FROM wordpress:5.1-php7.3-apache
RUN yes | apt-get update -y \
&& apt-get install -y vim \
&& apt-get install -y net-tools \
&& apt-get install -y memcached \
&& apt-get install -y cron
docker-compose.yml
version: "3.3"
services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql:consistent
    ports:
      - "3303:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: vagrant
      MYSQL_DATABASE: wp_database
      MYSQL_USER: root
      MYSQL_PASSWORD: vagrant
  wordpress:
    container_name: my-site
    build: .
    depends_on:
      - db
    volumes:
      - ./my-site-wp:/var/www/html/:consistent
    ports:
      - "8001:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: vagrant
      WORDPRESS_DB_NAME: wp_database
volumes:
  db_data:
  my-site-wp:
...occasionally I need to restart Apache on the container (WordPress)...
Don't do that. It's a really, really bad habit. You're treating the container like a server where you go in and fix things that break. Think of it like it's a single application -- if it breaks, restart the whole dang thing.
docker-compose restart wordpress
Or restart the whole stack, even.
docker-compose restart
Treat your containers like cattle not pets:
Simply put, the “cattle not pets” mantra suggests that work shouldn’t grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle – self-sufficient, easily replaced and manageable by the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.
For each container in the compose file, you can set a command key in the YAML, which replaces the image's default command and runs every time the container starts. Commands in the Dockerfile (RUN instructions), on the other hand, only run when the image is being built. Ex:
db:
  image: mysql:5.7
  volumes:
    - ./db_data:/var/lib/mysql:consistent
  command: # bash command goes here
  ports:
    - "3303:3306"
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: vagrant
    MYSQL_DATABASE: wp_database
    MYSQL_USER: root
    MYSQL_PASSWORD: vagrant
However, this is not what you are after. Why would you mess with one container from another? Note that the depends_on flag only controls start-up order; it does not restart downstream services. It seems your memcached instance isn't containerized as its own service, so you are trying to fit it into application-level logic, which is the antithesis of Docker. This concern belongs at the infrastructure level, on the machine or in an orchestrator (e.g. Kubernetes).
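If you do nevertheless want memcached and cron to come up with Apache whenever the WordPress container (re)starts, the usual workaround is a small wrapper script that starts the side services and then execs the main process. A sketch, assuming the stock wordpress image's docker-entrypoint.sh / apache2-foreground pair (start.sh is a hypothetical name):

```dockerfile
FROM wordpress:5.1-php7.3-apache
RUN apt-get update -y \
 && apt-get install -y memcached cron

# Hypothetical wrapper: start the side services, then exec the image's
# stock entrypoint so Apache stays in the foreground as PID 1.
RUN printf '%s\n' \
      '#!/bin/bash' \
      'service memcached start' \
      'service cron start' \
      'exec docker-entrypoint.sh apache2-foreground' \
      > /usr/local/bin/start.sh \
 && chmod +x /usr/local/bin/start.sh
CMD ["start.sh"]
```

With this in place, docker-compose restart wordpress brings all three services back up together.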
When I'm running sudo docker-compose up inside my dir, I get this error. I'm trying to make a container that hosts a PHP website, where you can run whoami on it.
Thanks
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
| no listening sockets available, shutting down
| AH00015: Unable to open logs
Dockerfile:
FROM ubuntu:16.04
RUN apt update
RUN apt install -y apache2 php libapache2-mod-php
RUN useradd -d /home/cp/ -m -s /bin/nologin cp
WORKDIR /home/cp
COPY source .
USER cp
ENTRYPOINT service apache2 start && /bin/bash
docker-compose.yml
version: '2'
services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    stdin_open: true
    tty: true
    volumes:
      - ./source:/var/www/html
      - ./logs:/var/log/apache2
There's a long-standing general rule in Unix-like operating systems that only the root user can open "low" ports 0-1023. Since you're trying to run Apache on the default HTTP port 80, but you're running it as a non-root user, you're getting the "permission denied" error you see.
The absolute easiest answer here is to use a prebuilt image that has PHP and Apache preinstalled. The Docker Hub php image includes a variant of this. You can use a simpler Dockerfile:
FROM php:7.4-apache
# Has Apache, mod-php preinstalled and a correct CMD already,
# so the only thing you need to do is
COPY source /var/www/html
# If you want to run as a non-root user, you can specify
RUN useradd -r -U cp
ENV APACHE_RUN_USER cp
ENV APACHE_RUN_GROUP cp
With the matching docker-compose.yml
version: '3' # version 2 vs 3 doesn't really matter
services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    volumes:
      - ./logs:/var/log/apache2
If you want to build things up from scratch, the next easiest option would be the Apache User directive: have your container start as root (so it can bind to port 80) but then instruct Apache to switch to the unprivileged user once it's started up. The standard php:...-apache image has an option to do this on its own which I've shown above.
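A sketch of that from-scratch variant, assuming the Debian/Ubuntu apache2 package, which reads its run user from /etc/apache2/envvars:

```dockerfile
FROM ubuntu:16.04
RUN apt update && apt install -y apache2 php libapache2-mod-php
RUN useradd -r -s /bin/nologin cp
COPY source /var/www/html
# No USER directive: the container starts as root so Apache can bind
# port 80, and the worker processes then drop to the unprivileged account.
RUN sed -i -e 's/^export APACHE_RUN_USER=.*/export APACHE_RUN_USER=cp/' \
           -e 's/^export APACHE_RUN_GROUP=.*/export APACHE_RUN_GROUP=cp/' \
    /etc/apache2/envvars
CMD ["apache2ctl", "-D", "FOREGROUND"]
```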
I have reports generated in a Gradle container for my Selenium tests, and I am trying to copy the files from the Docker container to the local host. As a workaround, I have used docker cp to copy the files from the container to my local machine, and it works. How can I achieve this with docker-compose volumes?
Below is my docker-compose.yml
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub_compose
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome-debug
    container_name: selenium-chrome
    depends_on:
      - selenium-hub
    ports:
      - "5900"
    environment:
      - http_proxy=http://x.x.x.x:83
      - https_proxy=http://x.x.x.x:83
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  gradle:
    image: gradle:jdk8
    container_name: selenium-gradle
    build:
      context: .
      dockerfile: dockerfile
I run the command docker-compose up; it runs the Selenium tests and generates the report in the container.
Can anyone help on this?
The normal way to pass data from a container to the host is to use Docker volumes.
In short, you specify a host directory and map it to a directory inside the container, and that directory should then be used to save your test reports.
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub_compose
    ports:
      - "4444:4444"
    volumes:
      - ./path/to/report/folder:/host/reports
See the Docker documentation:
https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
Similar question:
How do I mount a host directory as a volume in docker compose
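Note that in this compose file the reports are produced by the gradle service, so the bind mount belongs on that service rather than on the hub. A sketch, assuming the tests write to build/reports under the image's default /home/gradle working directory (adjust the container path to wherever your build actually writes them):

```yaml
gradle:
  image: gradle:jdk8
  container_name: selenium-gradle
  build:
    context: .
    dockerfile: dockerfile
  volumes:
    - ./reports:/home/gradle/build/reports
```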
Power off the machine in VirtualBox, then change the Advanced settings in VirtualBox:
Go to Shared Folders in VirtualBox
Give the path C:\DockerResults and a logical name for the folder
Restart the machine in the Docker terminal with the below command:
docker-machine restart default
After the machine has started, open VirtualBox
Create a directory in the virtual machine: sudo mkdir /Results
Mount the directory to the local Windows machine by executing the below command in the VM:
sudo mount -t vboxsf DockerResults /Results
Add volumes as below in the docker-compose file:
volumes:
  - /DockerResults:/home/Reports/
When building a Docker Apache image, the build fails at this step:
Step n/m : COPY httpd-foreground /usr/local/bin/
ERROR: Service 'apache' failed to build: COPY failed: stat
/var/lib/docker/tmp/docker-builder511740141/httpd-foreground: no such
file or directory
This is my docker-compose.yml file:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: mysql_octopus_dev
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  apache:
    build: .
    container_name: apache_octopus_dev
    volumes:
      - .:/var/www/html/
    ports:
      - "8000:80"
    depends_on:
      - mysql
This is my Dockerfile:
FROM debian:jessie-backports
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
#RUN groupadd -r www-data && useradd -r --create-home -g www-data www-data
...
COPY httpd-foreground /usr/local/bin/
EXPOSE 80
CMD ["httpd-foreground"]
any help please?
Paths in a Dockerfile are always relative to the context directory. The context directory is the positional argument passed to docker build (often .).
I should place the httpd-foreground file in the same folder as the Dockerfile.
From : https://github.com/docker/for-linux/issues/90
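Concretely, with build: . in the compose file the context is the directory containing docker-compose.yml, so the layout has to look something like this:

```
project/
├── docker-compose.yml   # contains "build: ."
├── Dockerfile
└── httpd-foreground     # must sit inside the context for COPY to find it
```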
We have to install Apache, copy configuration files, and then start and configure it to run.
Here is the playbook written thus far:
---
- hosts: example
tasks:
- name:Install Apache
command: yum install --quiet -y httpd httpd-devel
- name:Copy configuration files
command:>
cp httpd.conf /etc/httpd/conf/httpd.conf
- command:>
cp httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf
- name:Start Apache and configure it to run
command: service httpd start
- command: chkconfig httpd on
However, when I run the command ansible-playbook playbook.yml, I receive this error:
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/etc/ansible/playbook.yml': line 3, column 1, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- hosts: example
tasks:
^ here
I have tried messing with whitespace and re-arranging things, but I still receive this error. I am sure it's something small I am missing here, but it is bugging me to no end! Any help would be appreciated! Thanks so much.
Regarding your error message, you need to pay attention to indentation. This is how it should look:
---
- hosts: example
  tasks:
    - name: Install Apache
      command: yum install --quiet -y httpd httpd-devel
    - name: Copy configuration files
      command: cp httpd.conf /etc/httpd/conf/httpd.conf
    - command: cp httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf
    - name: Start Apache and configure it to run
      command: service httpd start
    - command: chkconfig httpd on
But this is rather an example of how not to use Ansible.
I assume you just started to use Ansible and want to verify things, but instead of running everything with command module, you should rather take advantage of native modules, like yum or service. Have a look at the examples in the linked documentation pages.
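As a sketch, the same playbook with native modules (yum installs the packages, copy pushes the config files from the control machine, service starts and enables httpd) might look like:

```yaml
---
- hosts: example
  tasks:
    - name: Install Apache
      yum:
        name: [httpd, httpd-devel]
        state: present
    - name: Copy configuration file
      copy:
        src: httpd.conf
        dest: /etc/httpd/conf/httpd.conf
    - name: Copy vhosts configuration file
      copy:
        src: httpd-vshosts.conf
        dest: /etc/httpd/conf/httpd-vshosts.conf
    - name: Start Apache and enable it on boot
      service:
        name: httpd
        state: started
        enabled: yes
```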
Also notice that in your example some tasks have names and some don't. For example, these are two different tasks (the first one with a name, the second one without):
- name: Copy configuration files
command: cp httpd.conf /etc/httpd/conf/httpd.conf
- command: cp httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf
More appropriate naming should be:
- name: Copy one configuration file
command: cp httpd.conf /etc/httpd/conf/httpd.conf
- name: Copy another configuration file
command: cp httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf
Another problem: this command will fail if there is no httpd-vshosts.conf in the current directory on the target machine:
- command: cp httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf
You must provide the full path.
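For example (the source path here is hypothetical), either give command an absolute path on the target, or better, let the copy module ship the file from the control machine:

```yaml
# Either: absolute path on the target machine
- command: cp /opt/configs/httpd-vshosts.conf /etc/httpd/conf/httpd-vshosts.conf

# Or: copy from the control machine (src is resolved relative to the
# playbook's files/ directory)
- name: Copy vhosts configuration
  copy:
    src: httpd-vshosts.conf
    dest: /etc/httpd/conf/httpd-vshosts.conf
```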