Singularity container - Binding a folder in a separate partition - singularity-container

I have a private group project folder (let's call it data_dir) on a high performance cluster where I don't have root privileges. The folder is in a separate partition.
I have a singularity container from which I need to access data_dir. The official documentation says the -B flag binds paths, but I can't access the folder inside the container when using -B. This is what I have tried so far:
XXXXXX  login1[~/work/subcam] master ⦿ ➜ readlink data
/gpfs/projects/oceanvideo/data
XXXXXX  login1[~/work/subcam] master ⦿ ➜ singularity run -B $(readlink data):$(pwd)/data container.sif
WARNING: skipping mount of /local_scratch: no such file or directory
[TensorFlow container ASCII-art banner]
You are running this container as user with ID 21530 and group 21500,
which should map to the ID and group for your user on the Docker host. Great!
tf-docker ~/work/subcam > cd data
bash: cd: data: No such file or directory
tf-docker ~/work/subcam > cd /gpfs/
tf-docker /gpfs > ls
work
tf-docker /gpfs > cd projects
bash: cd: projects: No such file or directory
How can I access data_dir with the container?

-B is the correct way to mount directories in the container. A few options:
If /gpfs/projects/oceanvideo/data is itself a symlink, it will not resolve inside the container, which would produce exactly this behavior. readlink only resolves a single level of indirection; find the final, non-symlinked path (e.g. with readlink -f) and use that with -B.
If that is not the case, run singularity -vv run ... to see if there is more information on why the directory is not being mounted.
Make sure that the cluster's Singularity configuration allows user bind mounts.
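The single-level behavior of readlink can be seen in isolation (the paths below are created just for the demo); readlink -f follows the whole symlink chain:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/real"
ln -s "$tmp/real"  "$tmp/link1"   # link1 -> real
ln -s "$tmp/link1" "$tmp/link2"   # link2 -> link1 -> real

readlink "$tmp/link2"      # resolves one level only: prints .../link1, which is still a symlink
readlink -f "$tmp/link2"   # resolves the full chain: prints the canonical path of .../real
```

With a multi-level link, the bind from the question would become singularity run -B "$(readlink -f data)":"$(pwd)/data" container.sif.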

Related

How to access \\wsl$\othercontainer\some\file from within a WSL container?

From Windows, I can access the file systems of all the WSL containers from under \\wsl$.
And from inside a WSL container, I can access the windows C:\ drive as /mnt/c.
But how can I access another container's drive from inside a WSL container?
I'm trying to access \\wsl$\othercontainer\some\file from inside a WSL container.
wslpath can normally convert Windows file paths to paths accessible from WSL:
WSL2#~» wslpath 'C:\Windows\System32\drivers\etc\hosts'
/mnt/c/Windows/System32/drivers/etc/hosts
But it doesn't work for:
WSL2#~» wslpath '\\wsl$\othercontainer\some\file'
wslpath: \\wsl$\othercontainer\some\file
WSL2#~» echo $?
1
And of course:
WSL2#~» ls -l '\\wsl$\othercontainer\some\file'
ls: cannot access '\\wsl$\othercontainer\some\file': No such file or directory
This answer provided the solution:
sudo mkdir /mnt/othercontainer
sudo mount -t drvfs '\\wsl$\othercontainer' /mnt/othercontainer
ls -l /mnt/othercontainer/some/file
NOTE: It looks like symbolic links aren't supported. When one is encountered, we get an error like:
$ ls -l /mnt/othercontainer/bin
ls: cannot read symbolic link '/mnt/othercontainer/bin': Function not implemented
lrwxrwxrwx 1 root root 7 Apr 23 2020 /mnt/othercontainer/bin

How to install apache module in docker container at the correct location

I have the following docker file:
FROM wodby/apache:2.4
MAINTAINER NAME EMAIL
ENV http_proxy 'http://xxx.xxx.xxx.de:80'
ENV https_proxy 'http://xxx.xxx.xxx.xxx:80'
ENV APP_ROOT="/var/www/html" \
APACHE_DIR="/usr/local/apache2"
WORKDIR /usr/local/apache2
USER root
RUN ls
RUN set -x \
&& apk add apache-mod-auth-kerb
CMD ["tail", "-f", "/dev/null"]
My intention is to add the apache-mod-auth-kerb module to my container.
The base image is Alpine, but wodby/apache inherits from wodby/httpd, which is Debian.
Somehow the module is installed under /usr/lib/apache2 but the apache in wodby/apache seems to load its modules from /usr/local/apache2/modules.
I don't think the solution is to move the module via cp or a symlink?
Here are the links to the base dockerfiles:
https://github.com/wodby/httpd
https://github.com/wodby/apache
How can I make sure that the module and config end up in the correct location? I think the problem might be the difference between the Linux distributions used.
Any hints?
The docker-library/httpd image (maintained by Docker) supports both Alpine- and Debian-based variants.
Since wodby/httpd is forked from docker-library/httpd, you can see Debian-related Dockerfiles in the repository, but only Alpine-based images are supported, as per the README.md file.
The wodby/apache images are Alpine-based as well.
For modules, you can create a conf file as shown below
mod_auth_kerb.conf
LoadModule auth_kerb_module /usr/lib/apache2/mod_auth_kerb.so
Dockerfile
FROM wodby/apache:2.4
MAINTAINER NAME EMAIL
ENV http_proxy 'http://xxx.xxx.xxx.de:80'
ENV https_proxy 'http://xxx.xxx.xxx.xxx:80'
ENV APP_ROOT="/var/www/html" \
APACHE_DIR="/usr/local/apache2"
WORKDIR /usr/local/apache2
USER root
RUN ls
RUN set -x \
&& apk add apache-mod-auth-kerb
COPY mod_auth_kerb.conf /usr/local/apache2/conf/conf.d/mod_auth_kerb.conf
You can verify that the module is loaded:
bash-4.4# httpd -M | grep auth_kerb_module
auth_kerb_module (shared)

File conflict error during rpm install

I have two rpms xinstrument.rpm and xlog.rpm.
First the xinstrument.rpm should be installed followed by the xlog.rpm.
Both these rpms create and copy data into /opt/xinstrument-control/ directory.
The problem is that when we install the second rpm, xlog.rpm, we get conflict errors:
Preparing... ################################# [100%]
file /opt/xinstrument-control from install of xlog_x86_64 conflicts with file from package xinstrument.x86_64
...
xinstrument.rpm, when installed, sets the following permissions on the xinstrument-control directory:
# ls -l /opt/
total 0
drwxr-xr-x 1 root users 38 May 24 14:34 xinstrument-control
while xlog.rpm, when installed, sets the following permissions on the xinstrument-control directory:
# ls -l /opt/
total 16
drwxr-xr-x 6 root sys 4096 May 16 05:43 xinstrument-control
Looking at the permissions and ownership of the directory, is there anything here that would lead to the conflict?
What else might be the cause of the conflict, and how can it be resolved?
Only one package can own that directory, and your two packages record different ownership for it (group users vs. sys), which rpm treats as a file conflict. Don't have both packages include the directory in their %files stanza.
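A sketch of the fix in the two spec files (package names taken from the question; the file names beneath the directory are hypothetical placeholders): let one package own the directory via %dir, and have the other list only its own files under it:

```spec
# xinstrument.spec -- owns the directory
%files
%dir %attr(0755, root, users) /opt/xinstrument-control
/opt/xinstrument-control/instrument.conf

# xlog.spec -- does not list the directory itself
%files
/opt/xinstrument-control/xlog.conf
```

Alternatively, both packages may list the directory, but rpm tolerates a shared directory only when the recorded mode, owner, and group match exactly; the conflict in the question stems from the two packages disagreeing on the group (users vs. sys).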

'gradlew.bat' is not recognized as an internal or external command

So I am trying to start a React Native Android project on Windows 10, following the Getting Started guide for React Native.
I am stuck at the last step: react-native run-android
PS: I'm using Genymotion for my Android emulator (Marshmallow 6.0).
Got it,
Add the file location to environment variable PATH
C:\Users\user\AppData\Local\Android\Sdk\tools\templates\gradle\wrapper
Another version of @Kai Jie's findings, which works for me.
Compiling previous answers, I did the following to get the Android SDK and Gradle working (you need a working Gradle to compile your Android project anyway):
Prerequisite: Gradle is installed in a folder like the one I found on my computer. Please check it:
C:\Program Files\Android\Android Studio\gradle\gradle-X.X\
Set a new system variable (Control Panel → System and Security → System → Advanced system settings → Environment Variables → System variables). Do not forget to substitute your Gradle version:
GRADLE_HOME C:\Program Files\Android\Android Studio\gradle\gradle-X.X\
Add the following path to the system Path variable (same dialog):
%GRADLE_HOME%\bin
You might want to REBOOT your computer to make sure the system sees the new variables.
Check that Gradle works properly from the terminal:
$ gradle -v
Once the environment variables are set, I found that you need to actually run gradle once. Then you can use the gradlew command.
If you are learning from Spring.io. Their site says to just run
./gradlew build && java -jar build/libs/gs-spring-boot-0.1.0.jar
That will fail unless you first run
gradle
IntelliJ had me in the correct directory, so I just dropped the
./
and ran
gradlew build && java -jar build/libs/gs-spring-boot-0.1.0.jar
The app started right up.
[Spring Boot ASCII-art banner]
:: Spring Boot :: (v2.1.6.RELEASE)
If your gradlew.bat file is being generated but deletes itself after a few seconds, the problem is with your Android Studio configuration.
Steps to follow:
1. Open Android Studio and click Configure.
2. Go to Settings.
3. Click Build, Execution, Deployment → Build Tools → Gradle.
4. Check whether "Gradle user home" points to your .gradle folder. If it doesn't, set the correct path; in my case it is C:/Users/Your_PC_Name/.gradle.
5. Then create a new React Native project.
You probably have to do something like:
./gradlew assembleDebug
Add the code below to the bottom of your build.gradle file:
task createWrapper(type: Wrapper) {
gradleVersion = '4.9'
}
Then run gradle createWrapper in a terminal.
This will generate the gradlew.bat file and the corresponding gradlew files.
In my case, I had an & sign in the name of the folder that held the Flutter project. Removing it solved the problem for me.

httpd in docker cannot start

I'm trying to install HTTPD in docker, I wrote a dockerfile like this:
FROM centos
VOLUME /var/log/httpd
VOLUME /etc/httpd
VOLUME /var/www/html
# Update Yum Repostory
RUN yum clean all && \
yum makecache fast && \
yum -y update && \
yum -y install httpd
RUN yum clean all
EXPOSE 80
CMD /usr/sbin/httpd -D BACKGROUND && tail -f /var/log/httpd/access_log
It works if I run the image without host volumes, but fails if I use these parameters:
--volume /data/httpd/var/www/html:/var/www/html --volume /data/httpd/var/log:/var/log --volume /data/httpd/etc:/etc/httpd
the error message is:
httpd: Could not open configuration file /etc/httpd/conf/httpd.conf: No such file or directory
I checked the mount point which is empty:
# ll /data/httpd/etc/
total 0
But if I don't use "volume", Docker by default copies the files over to a temp folder:
# ll /var/lib/docker/volumes/04f083887e503c6138a65b300a1b40602d227bb2bbb58c69b700f6ac753d1c34/_data
total 4
drwxr-xr-x. 2 root root 35 Nov 3 03:16 conf
drwxr-xr-x. 2 root root 78 Nov 3 03:16 conf.d
drwxr-xr-x. 2 root root 4096 Nov 3 03:16 conf.modules.d
lrwxrwxrwx. 1 root root 19 Nov 3 03:16 logs -> ../../var/log/httpd
lrwxrwxrwx. 1 root root 29 Nov 3 03:16 modules -> ../../usr/lib64/httpd/modules
lrwxrwxrwx. 1 root root 10 Nov 3 03:16 run -> /run/httpd
So I'm confused: why does Docker refuse to copy them to the named location, and how can I fix this problem?
This is a documented behavior indeed:
Volumes are initialized when a container is created. If the container’s
base image contains data at the specified mount point, that existing data
is copied into the new volume upon volume initialization. (Note that this
does not apply when mounting a host directory.)
i.e. when you mount the /etc/httpd volume with --volume /data/httpd/etc:/etc/httpd, no data will be copied.
You can also see https://github.com/docker/docker/pull/9092 for a more detailed discussion on why it works this way (in case you are interested).
A usual workaround for this is to copy your initial data to the volume folder (from within the container) in your ENTRYPOINT or CMD script, in case it is empty.
Note that your initial dataset must be kept outside the volume folder (e.g. as a .tar file in /opt) for this to work, as the volume folder will be shadowed by the host folder mounted over it.
Given below is a sample Dockerfile and Script, which demonstrate the behavior:
Sample Dockerfile
FROM debian:stable
RUN mkdir -p /opt/test/; touch /opt/test/initial-data-file
VOLUME /opt/test
Sample script (try various volume mappings)
#Build image
>docker build -t volumetest .
Sending build context to Docker daemon 2.56 kB
Step 0 : FROM debian:stable
---> f7504c16316c
Step 1 : RUN mkdir -p /opt/test/; touch /opt/test/initial-data-file
---> Using cache
---> 1ea0475e1a18
Step 2 : VOLUME /opt/test
---> Using cache
---> d8d32d849b82
Successfully built d8d32d849b82
#Implicit Volume mapping (as defined in Dockerfile)
>docker run --rm=true volumetest ls -l /opt/test
total 0
-rw-r--r-- 1 root root 0 Nov 4 18:26 initial-data-file
#Explicit Volume mapping
> docker run --rm=true --volume /opt/test volumetest ls -l /opt/test/
total 0
-rw-r--r-- 1 root root 0 Nov 4 18:26 initial-data-file
#Explicitly Mounted Volume
>mkdir test
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
total 0
And here is a simple entrypoint script, illustrating a possible workaround:
#!/bin/bash
VOLUME=/opt/test
DATA=/opt/data-volume.tar.gz
if [[ -n $(find "$VOLUME" -maxdepth 0 -empty) ]]
then
    echo "Preseeding VOLUME $VOLUME with data from $DATA..."
    tar -C "$VOLUME" -xvf "$DATA"
fi
exec "$@"
add the following to the Dockerfile
COPY data-volume.tar.gz entrypoint /opt/
ENTRYPOINT ["/opt/entrypoint"]
First run:
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
Preseeding VOLUME /opt/test with data from /opt/data-volume.tar.gz...
preseeded-data
total 0
-rw-r--r-- 1 1001 users 0 Nov 4 18:43 preseeded-data
Subsequent runs:
>docker run --rm=true --volume "$(pwd)/test/:/opt/test" volumetest ls -l /opt/test
ls -l /opt/test
total 0
-rw-r--r-- 1 1001 users 0 Nov 4 18:43 preseeded-data
Note that the volume folder will only be populated with data if it was completely empty before.
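The find "$VOLUME" -maxdepth 0 -empty test used in the entrypoint script prints the directory itself only when it is empty, which makes it a convenient emptiness check (the temporary directory below is created just for the demo):

```shell
tmp=$(mktemp -d)

# Empty directory: find prints the directory path, so the output is non-empty.
find "$tmp" -maxdepth 0 -empty    # prints the value of $tmp

touch "$tmp/somefile"

# Non-empty directory: find prints nothing at all.
find "$tmp" -maxdepth 0 -empty    # prints nothing
```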