singularity returns a permission denied - singularity-container

I would like to build a Singularity container for an application shipped as an AppImage. To do so, I wrote the following def file:
Bootstrap: docker
From: debian:bullseye-slim
%post
apt-get update -y
apt-get install -y wget unzip fuse libglu1 libglib2.0-dev libharfbuzz-dev libsm6 dbus
cd /opt
wget https://www.ill.eu/fileadmin/user_upload/ILL/3_Users/Instruments/Instruments_list/00_-_DIFFRACTION/D3/Mag2Pol/Mag2Pol_v5.0.2.AppImage
chmod u+x Mag2Pol_v5.0.2.AppImage
%runscript
exec /opt/Mag2Pol_v5.0.2.AppImage
I build the container with the singularity build -f test.sif test.def command. The build runs fine, but when I run the sif file with ./test.sif I get a /.singularity.d/runscript: 3: exec: /opt/Mag2Pol_v5.0.2.AppImage: Permission denied error. Looking inside the container with singularity shell shows that the /opt/Mag2Pol_v5.0.2.AppImage executable belongs to root. I guess that is the source of the problem, but I do not know how to solve it. Would you have any idea?
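A hedged note on a likely cause (this is an assumption based on the error, not something stated in the question): chmod u+x only makes the AppImage executable for its owner, which is root at build time, so an unprivileged user running the container cannot execute it. Granting execute permission to everyone in %post should avoid the Permission denied error, for example:
%post
apt-get update -y
apt-get install -y wget unzip fuse libglu1 libglib2.0-dev libharfbuzz-dev libsm6 dbus
cd /opt
wget https://www.ill.eu/fileadmin/user_upload/ILL/3_Users/Instruments/Instruments_list/00_-_DIFFRACTION/D3/Mag2Pol/Mag2Pol_v5.0.2.AppImage
# a+x instead of u+x so that non-root users can execute the AppImage at runtime
chmod a+x Mag2Pol_v5.0.2.AppImage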

Related

Singularity container from conda environment

I want to build a container from my conda environment following this post. However, I get the following error: '/bin/sh: 1: cannot create ~/.bashrc: Directory nonexistent'. I am using a Vagrant VM to build my image and would be grateful for any help.
Editing the .bashrc, aside from failing, will not help: the shell loaded by Singularity is explicitly started with --norc. You want to use the $SINGULARITY_ENVIRONMENT variable in %post so that the values are available at runtime.
Something along these lines:
%post
# You may need to install some pre-reqs your host system has installed outside of conda, e.g.
# apt update && apt install -y build-essential make zlib
ENV_NAME=$(head -1 environment.yml | cut -d' ' -f2)
echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
. /opt/conda/etc/profile.d/conda.sh
conda env create -f environment.yml -p /opt/conda/envs/$ENV_NAME
I listed a few libraries that are probably installed on your current machine but may be missing from the slim Docker image. You can install them via apt or conda, depending on your preference. If a library does turn out to be missing, it will be specific to your environment.yml and host OS, so you will have to iterate until the build succeeds.
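For context, a minimal sketch of a complete definition file around that %post block, assuming the continuumio/miniconda3 Docker image as a base (the base image, the /environment.yml location, and the /opt/conda path are assumptions, not from the original answer):
Bootstrap: docker
From: continuumio/miniconda3
%files
environment.yml /environment.yml
%post
cd /
# derive the env name from the "name: <env>" line at the top of environment.yml
ENV_NAME=$(head -1 environment.yml | cut -d' ' -f2)
# make the environment auto-activate for singularity run/exec/shell
echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
. /opt/conda/etc/profile.d/conda.sh
conda env create -f environment.yml -p /opt/conda/envs/$ENV_NAME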

Unable to start docker - httpd (pid 1) already running

I have hosted a Docker container with PHP on a shared server in our office environment. Previously it was working fine without any issue, and all users were able to access the site via port forwarding to 8080. Here are my Dockerfile details:
# Choose Repo from Docker Hub
FROM centos:latest
# Provide details of maintainer
MAINTAINER ritu
#Install necessary software
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum -y install yum-utils
RUN yum-config-manager --enable remi-php56
RUN yum -y install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo php-devel php-pear make gcc systemtap-sdt-devel httpd unzip postfix
RUN export PHP_DTRACE=yes
RUN curl -sS https://getcomposer.org/installer | php
RUN mv -f composer.phar /usr/local/bin/composer
RUN chmod +x /usr/local/bin/composer
RUN composer require phpmailer/phpmailer
COPY phpinfo.php /var/www/html/
COPY php.ini /var/www/
COPY httpd.conf /var/www/
RUN cp -f /var/www/httpd.conf /etc/httpd/conf/
COPY *.rpm /var/www/
#Install & Configure OCI for PHP
COPY oci8-2.0.12.tgz /
RUN tar -xvf oci8-2.0.12.tgz
RUN yum -y localinstall /var/www/*.rpm --nogpgcheck
COPY client.sh /etc/profile.d/
RUN chmod +x /etc/profile.d/client.sh
RUN cp -f /var/www/php.ini /etc/
COPY php_oci8_int.h oci8-2.0.12/
COPY Log_Check.zip /
RUN unzip Log_Check.zip
RUN cp -a -R /Log_Check/* /var/www/html/
WORKDIR /oci8-2.0.12
RUN phpize
RUN ./configure --with-oci8=/usr/lib/oracle/12.2/client64
RUN cp -f /usr/include/oracle/12.2/client64/*.h /oci8-2.0.12/include/
RUN make
RUN make install
RUN ls /var/www/html/
RUN rm -rf /var/run/apache2/apache2.pid
#Expose necessary ports
EXPOSE 80
EXPOSE 1521
EXPOSE 25
#Provide Entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
Suddenly one of my friends added another container using the same port 8080 on the same server. After that my container stopped with the error below:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.3. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1) already running
After several hours of googling and trying lots of commands, I found that it is easy to remove the entire container as well as the images from the server. Hence I removed all containers with docker rm, followed by image deletion with docker rmi. I then recreated the Docker image on my local system (it works there) and transferred it to the server. I tried to run the container again, but faced the same issue.
I am unable to find the cause or a solution and need some help.
First, remove the ENTRYPOINT from your Dockerfile and just use:
CMD [ "/usr/sbin/httpd", "-X" ]
The AH00558 warning comes from your configuration: Apache is complaining that no ServerName (e.g. www.test.com) is set. You can ignore it for now and Apache will still work. If you want to read more, see this.
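As an aside (an addition, not part of the original answer), the AH00558 message can also be suppressed by setting a ServerName inside the image, for example with a line like this in the Dockerfile:
# give httpd an explicit ServerName so it stops guessing the FQDN
RUN echo "ServerName localhost" >> /etc/httpd/conf/httpd.conf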

docker file to run automation test in JS files

I am trying to create a Dockerfile to run Selenium tests for a JavaScript-based project. Below is my Dockerfile so far:
#base image
FROM selenium/standalone-chrome
#access to the project within docker container - Bundle app source
COPY ./seleniumTest/project /app
# Install Node.js
RUN sudo apt-get update
RUN sudo apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_8.x | sudo bash -
#binding
EXPOSE 8080
#Define runtime
ENTRYPOINT /app/login.test.js
When running it with $ docker run -p 4000:80 lamgadekamal/dockertest
it returns: Unable to find image 'lamkam/dockertest:latest' locally docker: Error response from daemon: manifest for lamkam/dockertest:latest not found. I cannot figure out why I am getting this.
I suspect that you need to build your image first, since the image cannot be found.
Run this command from the same directory where your Dockerfile is located. This will build the image.
docker build -t lamgadekamal/dockertest .
You can then verify that the image exists by running docker images
EDIT: After looking at this again, it appears that you are trying to run the wrong image. You are trying to run lamgadekamal/dockertest, but did you build the image with the tag lamkam/dockertest? It seems like you have a typo. I would suggest running docker images to see exactly what is there, but in all likelihood you need to run lamkam/dockertest.
docker run -p 4000:80 lamkam/dockertest
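If you would rather keep the longer name, another option (an addition, not from the original answer) is to re-tag the image that was actually built and then run it under the new name:
docker tag lamkam/dockertest lamgadekamal/dockertest
docker run -p 4000:80 lamgadekamal/dockertest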

Apache Tomcat 8 not starting within a docker container

I am experimenting with Docker and am very new to it. I have been stuck at one point for a long time without finding a way through, which is why I am asking this question here...
Problem Statement:
I am trying to create an image from a Dockerfile containing an Apache Tomcat and lynx installation. Once that is done, I try to access Tomcat on port 8080 of the container, which is in turn forwarded to port 8082 of the host. But when running the image, Tomcat never starts inside the container.
The Docker file
FROM ubuntu:16.10
#Install Lynx
Run apt-get update
Run apt-get install -y lynx
#Install Curl
Run apt-get install -y curl
#Install tools: jdk
Run apt-get update
Run apt-get install -y openjdk-8-jdk wget
#Install apache tomcat
Run groupadd tomcat
Run useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
Run cd /tmp
Run curl -O http://apache.mirrors.ionfish.org/tomcat/tomcat-8/v8.5.12/bin/apache-tomcat-8.5.12.tar.gz
Run mkdir /opt/tomcat
Run tar xzvf apache-tomcat-8*tar.gz -C /opt/tomcat --strip-components=1
Run cd /opt/tomcat
Run chgrp -R tomcat /opt/tomcat
Run chmod -R g+r /opt/tomcat/conf
Run chmod g+x /opt/tomcat/conf
Run chown -R tomcat /opt/tomcat/webapps /opt/tomcat/work /opt/tomcat/temp opt/tomcat/logs
Run cd /opt/tomcat/bin
Expose 8080
CMD /opt/tomcat/bin/catalina.sh run && tail -f /opt/tomcat/logs/catalina.out
Once the image was built, I tried running the container using the two methods below.
docker run -d -p 8082:8080 imageid tail -f /dev/null
When using the above, the container is running but Tomcat is not started inside it and is therefore not accessible from localhost:8082. Also, I do not see anything when I run docker logs longcontainerid.
docker run -d -p 8082:8080 imageid /path/to/catalina.sh start tail -f /dev/null
With this command I do see Tomcat being started when I run docker logs longcontainerid.
However, the container starts and stops immediately; it is not running (as I can see from docker ps) and hence is again not accessible from localhost:8082.
Can anyone please tell me where I am going wrong?
P.S. I searched a lot on the internet but could not get this right. Maybe there is some concept that I am not getting clearly.
Looking at the docker run command documentation, the docs state that any command passed to docker run will override the original CMD in your Dockerfile:
As the operator (the person running a container from the image), you can override that CMD instruction just by specifying a new COMMAND
1/ Then when you run:
docker run -d -p 8082:8080 imageid tail -f /dev/null
the container runs with the COMMAND tail -f /dev/null, so the original command starting Tomcat is overridden.
To resolve your problem, try to run:
docker run -d -p 8082:8080 imageid
and
docker logs -f containerId
to see whether Tomcat started correctly.
2/ You should not use the start argument with catalina.sh. Have a look at this official Tomcat Dockerfile; the team uses:
CMD ["catalina.sh", "run"]
to start Tomcat (when you use start, Docker ends the container as soon as the shell script exits: Tomcat is started, but there is no foreground process left to keep the container running).
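Applied to the Dockerfile in the question, the last line would then look something like this (a sketch using the paths from the question):
# run Tomcat in the foreground so the container keeps a running process
CMD ["/opt/tomcat/bin/catalina.sh", "run"]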
3/ Finally, why don't you use the official Tomcat image to build your container? You could just put the:
FROM tomcat:latest
directive at the beginning of your Dockerfile and add your required elements (new files, webapp WAR files, settings) to the image.
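A minimal sketch of that approach, assuming the application is packaged as a WAR named myapp.war (the file name is an assumption):
FROM tomcat:latest
# deploy the application; the base image's CMD ["catalina.sh", "run"] already starts Tomcat in the foreground
COPY myapp.war /usr/local/tomcat/webapps/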

Installation of sql relay fails

I just created a Docker container and tried to install SQL Relay inside it.
I've checked the prerequisites here and followed the installation documents here.
However, at the end of make install of sqlrelay, I saw an error like this:
update-rc.d: /etc/init.d/sqlrelay: file does not exist
update-rc.d: /etc/init.d/sqlrcachemanager: file does not exist
make[1]: *** [install] Error 1
make[1]: Leaving directory `/sqlrelay-0.66.0/init'
make: *** [install-init] Error 2
What might be wrong with my installation?
Here's the Dockerfile I used to start my installation:
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install libxml2-dev libpcre3 libpcre3-dev libmysqld-dev -y
RUN apt-get install mysql-server libmysqlclient-dev -y
# sql relay prerequisites
RUN apt-get install g++ make perl php5-dev python-dev ruby-dev \
tcl-dev openjdk-7-jdk erlang-dev nodejs-dev node-gyp mono-devel \
libmariadbclient-dev libpq-dev firebird-dev libfbclient2 libsqlite3-dev \
unixodbc-dev freetds-dev mdbtools-dev -y
COPY rudiments-0.56.0.tar.gz /
COPY sqlrelay-0.66.0.tar.gz /
EXPOSE 80
Here are the outputs of ./configure, make, and make install inside the sqlrelay-0.66.0 folder:
configure_log
make_log
make_install_log
If you need more information of my installation process, just let me know. I can provide it.
I think you should use ADD instead of COPY in your lines such as
COPY rudiments-0.56.0.tar.gz /
Your COPY lines just copy the .tar.gz files but do not unpack them, as ADD would:
If the <src> parameter of ADD is an archive in a recognised compression format, it will be unpacked
This is extracted from
What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
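Concretely, that would mean replacing the two COPY lines for the tarballs with something like this (ADD unpacks the archives into /):
ADD rudiments-0.56.0.tar.gz /
ADD sqlrelay-0.66.0.tar.gz /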
I have recently hit the same issue. The problem I found was that the init Makefile was incorrectly detecting systemd on Ubuntu Trusty and putting the scripts there. Later on, the install would try to find the scripts in init.d and fail.
The solution is to edit the Makefile: sqlrelay-X.X.X/init/Makefile
Replace:
install:
if ( test -d "/lib/systemd/system" ); \
With:
install:
if ( test -d "/lib/systemd/system_x" ); \
Make a similar change to the uninstall target later in the Makefile, and it will then install correctly on Ubuntu.
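If you want to script that edit instead of making it by hand (for example from a Dockerfile), a hedged sketch using sed, assuming the source tree sits at sqlrelay-0.66.0/:
# point the systemd check at a non-existent directory so the Makefile falls back to init.d
sed -i 's|test -d "/lib/systemd/system"|test -d "/lib/systemd/system_x"|' sqlrelay-0.66.0/init/Makefile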