"no such file or directory" when running Docker image - apache

I'm new to Docker and trying to create an image with ownCloud 7 on CentOS 6.
I've created a Dockerfile and built an image.
Everything goes fine except that when I run the image:
docker run -i -t -d -p 80:80 vfoury/owncloud7:v3
I get the error:
Cannot start container a7efd9be6a225c19089a0f5a5c92f53c4dd1887e8cf26277d3289936e0133c69:
exec: "/etc/init.d/mysqld start && /etc/init.d/httpd start":
stat /etc/init.d/mysqld start && /etc/init.d/httpd start: no such file or directory
If I run the image with /bin/bash
docker run -i -t -p 80:80 vfoury/owncloud7:v3 /bin/bash
then I can run the command
/etc/init.d/mysqld start && /etc/init.d/httpd start
and it works.
Here is my Dockerfile content:
# use the centos6 base image
FROM centos:centos6
MAINTAINER Vincent Foury
RUN yum -y update
# Install SSH server
RUN yum install -y openssh-server
RUN mkdir -p /var/run/sshd
# add epel repository
RUN yum install epel-release -y
# install owncloud 7
RUN yum install owncloud -y
# Expose ports 22 and 80 to make them accessible from the host
EXPOSE 22 80
# Modify owncloud conf to allow any client to access
COPY owncloud.conf /etc/httpd/conf.d/owncloud.conf
# start httpd and mysql
CMD ["/etc/init.d/mysqld start && /etc/init.d/httpd start"]
Any help would be greatly appreciated
Vincent F.

After many tests, here is the Dockerfile that works to install owncloud (without MySQL):
# use the centos6 base image
FROM centos:centos6
RUN yum -y update
# add epel repository
RUN yum install epel-release -y
# install owncloud 7
RUN yum install owncloud -y
EXPOSE 80
# Modify owncloud conf to allow any client to access
COPY owncloud.conf /etc/httpd/conf.d/owncloud.conf
# start httpd
CMD ["/usr/sbin/apachectl","-D","FOREGROUND"]
then
docker build -t <myname>/owncloud .
then
docker run -i -t -p 80:80 -d <myname>/owncloud
then you should be able to open
http://localhost/owncloud
in your browser

I think this is because you're trying to use && within the Dockerfile CMD instruction.
If you intend to run multiple services within a Docker container, you may want to look at Supervisor. It enables you to run multiple daemons within the container. See the Docker documentation at https://docs.docker.com/articles/using_supervisord/.
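A rough sketch of how that could look on CentOS 6 (this is an assumption rather than the original answer's code: the supervisor package comes from EPEL, and the supervisord.conf it copies would need [program:mysqld] and [program:httpd] entries, which are not shown here):
RUN yum install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]
The -n flag keeps supervisord in the foreground so the container has a long-running process.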
Alternatively, you could ADD a simple bash script that starts the two daemons and then set CMD to run that script.
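A minimal sketch of that approach, assuming a wrapper script named start.sh (the name is arbitrary). Running Apache in the foreground at the end, as the working Dockerfile above also does, is what keeps the container alive:
#!/bin/bash
# start.sh - start MySQL via its init script, then run Apache in the foreground
/etc/init.d/mysqld start
exec /usr/sbin/apachectl -D FOREGROUND
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]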

The issue is that your CMD argument contains shell operations, but you're using the exec-form of CMD instead of the shell-form. The exec-form passes the arguments to one of the exec functions, which will not interpret the shell operations. The shell-form passes the arguments to sh -c.
Replace
CMD ["/etc/init.d/mysqld start && /etc/init.d/httpd start"]
with
CMD /etc/init.d/mysqld start && /etc/init.d/httpd start
or
CMD ["sh", "-c", "/etc/init.d/mysqld start && /etc/init.d/httpd start"]
See https://docs.docker.com/reference/builder/#cmd.

Related

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I have found a similar case here: I am using Molecule to test my Ansible roles, but for some reason it is skipping the "creation" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process (Skipping, instances already created), yet nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
it is failing on step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker.
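Put together, the setup commands mentioned above are (ns.myrole is just a placeholder namespace and role name):
pip install --upgrade molecule-docker
molecule init role --driver-name docker ns.myrole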
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS; I switched to Ubuntu in there and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python executable (only python3), so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from the CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-off use, since the context will need to be switched every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

Unable to start docker - httpd (pid 1) already running

I have a Docker container with PHP hosted on a shared server in our office environment. Previously it was working fine without any issue, and all users were able to access the site via port forwarding to 8080. Here are my Dockerfile details:
# Choose Repo from Docker Hub
FROM centos:latest
# Provide details of maintainer
MAINTAINER ritu
#Install necessary software
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum -y install yum-utils
RUN yum-config-manager --enable remi-php56
RUN yum -y install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo php-devel php-pear make gcc systemtap-sdt-devel httpd unzip postfix
RUN export PHP_DTRACE=yes
RUN curl -sS https://getcomposer.org/installer | php
RUN mv -f composer.phar /usr/local/bin/composer
RUN chmod +x /usr/local/bin/composer
RUN composer require phpmailer/phpmailer
COPY phpinfo.php /var/www/html/
COPY php.ini /var/www/
COPY httpd.conf /var/www/
RUN cp -f /var/www/httpd.conf /etc/httpd/conf/
COPY *.rpm /var/www/
#Install & Configure OCI for PHP
COPY oci8-2.0.12.tgz /
RUN tar -xvf oci8-2.0.12.tgz
RUN yum -y localinstall /var/www/*.rpm --nogpgcheck
COPY client.sh /etc/profile.d/
RUN chmod +x /etc/profile.d/client.sh
RUN cp -f /var/www/php.ini /etc/
COPY php_oci8_int.h oci8-2.0.12/
COPY Log_Check.zip /
RUN unzip Log_Check.zip
RUN cp -a -R /Log_Check/* /var/www/html/
WORKDIR /oci8-2.0.12
RUN phpize
RUN ./configure --with-oci8=/usr/lib/oracle/12.2/client64
RUN cp -f /usr/include/oracle/12.2/client64/*.h /oci8-2.0.12/include/
RUN make
RUN make install
RUN ls /var/www/html/
RUN rm -rf /var/run/apache2/apache2.pid
#Expose necessary ports
EXPOSE 80
EXPOSE 1521
EXPOSE 25
#Provide Entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
Suddenly one of my friends added another container using the same port 8080 on the same server. After that my container stopped, with the error below:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1) already running
After several hours of googling and trying lots of commands, I decided it was easiest to remove the containers and the images from the server entirely. So I removed all containers with docker rm, followed by image deletion with docker rmi. I then recreated the Docker image on my local system (it works there) and transferred it to the server. When I tried to run it again, I faced the same issue.
I am unable to find the cause and a solution. I need some help.
First, remove ENTRYPOINT from your Dockerfile and just use:
CMD [ "/usr/sbin/httpd", "-X" ]
The AH00558 warning comes from your configuration; it is complaining that you have not set a ServerName (e.g. www.test.com). You can ignore it for now and Apache will still work. If you want to read more, see this.
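If you would rather silence the AH00558 warning than ignore it, one option (a sketch, assuming the stock httpd layout on CentOS) is to set ServerName globally from the Dockerfile:
# suppress AH00558 by setting a global ServerName (path assumes the default CentOS httpd config)
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf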

Apache Tomcat 8 not starting within a docker container

I am experimenting with Docker and am very new to it. I have been stuck at one point for a long time and am not finding a way through, hence this question...
Problem Statement:
I am trying to create an image from a Dockerfile containing an Apache Tomcat and lynx installation. Once done, I try to access Tomcat on port 8080 of the container, which is in turn forwarded to port 8082 of the host. But when running the image I never get Tomcat started in the container.
The Docker file
FROM ubuntu:16.10
#Install Lynx
Run apt-get update
Run apt-get install -y lynx
#Install Curl
Run apt-get install -y curl
#Install tools: jdk
Run apt-get update
Run apt-get install -y openjdk-8-jdk wget
#Install apache tomcat
Run groupadd tomcat
Run useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
Run cd /tmp
Run curl -O http://apache.mirrors.ionfish.org/tomcat/tomcat-8/v8.5.12/bin/apache-tomcat-8.5.12.tar.gz
Run mkdir /opt/tomcat
Run tar xzvf apache-tomcat-8*tar.gz -C /opt/tomcat --strip-components=1
Run cd /opt/tomcat
Run chgrp -R tomcat /opt/tomcat
Run chmod -R g+r /opt/tomcat/conf
Run chmod g+x /opt/tomcat/conf
Run chown -R tomcat /opt/tomcat/webapps /opt/tomcat/work /opt/tomcat/temp /opt/tomcat/logs
Run cd /opt/tomcat/bin
Expose 8080
CMD /opt/tomcat/bin/catalina.sh run && tail -f /opt/tomcat/logs/catalina.out
When the image was built, I tried running the container using the two methods below.
docker run -d -p 8082:8080 imageid tail -f /dev/null
While using the above, the container is running but Tomcat is not started inside it and hence is not accessible from localhost:8082. Also, I do not see anything if I run docker logs longcontainerid.
docker run -d -p 8082:8080 imageid /path/to/catalina.sh start tail -f /dev/null
With this one I do see Tomcat starting when I run docker logs longcontainerid.
However, the container starts and stops immediately and is not running, as I can see from docker ps, and hence is again not accessible from localhost:8082.
Can anyone please tell me where I am going wrong?
P.S. I searched a lot on the internet but could not get this right. Maybe there is some concept that I am not getting clearly.
Looking at the docker run command documentation, the doc states that any command passed to the run will override the original CMD in your Dockerfile:
As the operator (the person running a container from the image), you can override that CMD instruction just by specifying a new COMMAND
1/ Then when you run:
docker run -d -p 8082:8080 imageid tail -f /dev/null
The container is run with the COMMAND tail -f /dev/null, so the original command starting Tomcat is overridden.
To resolve your problem, try to run:
docker run -d -p 8082:8080 imageid
and
docker logs -f containerId
to see whether Tomcat started correctly.
2/ You should not use the start argument with catalina.sh. Have a look at the official Tomcat Dockerfile; the team uses:
CMD ["catalina.sh", "run"]
to start Tomcat (when you use start, the startup script exits immediately, so Docker ends the container: Tomcat is launched in the background, but there is no foreground process left to keep the container running).
3/ Finally, why don't you use the official tomcat image to build your container? You could just use the:
FROM tomcat:latest
directive at the beginning of your Dockerfile, and add your required elements (new files, webapp WARs, settings) to the Docker image.
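A minimal sketch of that approach (myapp.war is a hypothetical application archive, not something from the question):
FROM tomcat:latest
# copy a hypothetical WAR into Tomcat's webapps directory; the base image already runs catalina.sh run
COPY myapp.war /usr/local/tomcat/webapps/
EXPOSE 8080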

I would like to set up rfc5766-turn-server on Ubuntu 14.04. Can anyone give me the full set of steps listed all together? I am doing it on AWS EC2.

I have tried to install and set up rfc5766-turn-server on AWS EC2 but am unable to do it, as I do not see a proper workflow or command line for it. Can someone help me with this? I need to set it up on Ubuntu 14.04.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
Commands for installing the TURN server:
sudo apt-get update
sudo apt-get install make gcc libssl-dev libevent-dev wget -y # for installing modules required by turn server
mkdir ~/turn && cd ~/turn # creating temp directory
wget turnserver.open-sys.org/downloads/v3.2.5.9/turnserver-3.2.5.9.tar.gz # downloading the TURN source code
tar -zxvf *.gz # extract
cd turn*
make
sudo make install # installing the rfc5766
cd ../.. && rm -rf turn # cleaning up
Command for starting the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP
Assumptions:
your public IP and internal IP = EXT_IP and INT_IP
desired port for listening: 3478
single credential username:password = user:root
realm: someRealm
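For example, with a hypothetical public IP of 203.0.113.10 and internal IP of 172.31.5.20, the start command becomes:
turnserver -a -o -v -n -u user:root -p 3478 -L 172.31.5.20 -r someRealm -X 203.0.113.10/172.31.5.20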
In your WebRTC app, you can use the TURN server like:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}

Docker CentOS image does not auto start httpd

I'm trying to run a simple Docker image with Apache and a PHP program. It works fine if I run
docker run -t -i -p 80:80 my/httpd /bin/bash
then manually start Apache
service httpd start
however I cant get httpd to start automatically when running
docker run -d -p 80:80 my/httpd
Apache will start up, then the container exits. I have tried a bunch of different CMDs in my Dockerfile:
CMD /etc/init.d/httpd start
CMD ["service" "httpd" "start"]
CMD ["/bin/bash", "/etc/init.d/httpd start"]
ENTRYPOINT /etc/init.d/httpd CMD start
CMD ./start.sh
start.sh is
#!/bin/bash
/etc/init.d/httpd start
However, every time, the container exits after Apache starts.
Am I missing something really obvious?
You need to run Apache (httpd) directly; you should not use the init.d script.
Two options:
you have to run Apache in the foreground: /usr/sbin/apache2 -DFOREGROUND ... (or /usr/sbin/httpd on CentOS)
you have to start all services (including Apache configured to auto-run) by executing /sbin/init as the entrypoint.
Add this line at the bottom of your Dockerfile to run Apache in the foreground on CentOS:
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Simple Dockerfile to run httpd on CentOS:
FROM centos:latest
RUN yum update -y
RUN yum install httpd -y
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
Commands for building the image and running a container:
Build
docker build . -t chttpd:latest
Run a container using the new image:
docker container run -d -p 8000:80 chttpd:latest
At the end of the Dockerfile, insert the command below to start httpd:
# Start httpd
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
This works for me
ENTRYPOINT /usr/sbin/httpd -D start && /bin/bash