Unable to build writable singularity container - singularity-container

I am trying to build a writable singularity container with the command sudo singularity build --writable my_container.img docker://image_name, but I get the error Error for command "build": unknown flag: --writable.
I decided to follow a guide from Singularity (here) to see if I could find my problem. I am using the command sudo singularity build --writable lolcow.img shub://GodloveD/lolcow, but I am getting the same error Error for command "build": unknown flag: --writable.
I am on singularity version 3.6.4.
Does anyone happen to know what might be going on?

--writable is an option for running, not for building. To build:
sudo singularity build lolcow.img shub://GodloveD/lolcow
For running:
singularity run --fakeroot --writable lolcow.img
You also need --fakeroot to be able to write to root-owned locations.
However, the writes are not persistent, as the "Converting SIF file to temporary sandbox..." message reminds you: you can write during your session, but once you exit, the changes are gone:
$ singularity shell -f --writable lolcow.img
INFO: Converting SIF file to temporary sandbox...
WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
Singularity> echo test > /etc/banana
Singularity> cat /etc/banana
test
Singularity> exit
INFO: Cleaning up image...
$ singularity shell -f --writable lolcow.img
INFO: Converting SIF file to temporary sandbox...
WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
Singularity> cat /etc/banana
cat: /etc/banana: No such file or directory
Singularity>
For persistent writes for testing/development purposes you can use the --sandbox option, though you'll need to run it as root too:
$ sudo singularity build --sandbox lolcow.img shub://GodloveD/lolcow
$ sudo singularity shell --writable lolcow.img
WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
Singularity> echo test > /etc/banana
Singularity> cat /etc/banana
test
Singularity> exit
$ sudo singularity shell --writable lolcow.img
WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
Singularity> cat /etc/banana
test
Singularity> exit
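Once the sandbox contains what you want, it can be converted back into a read-only SIF image with another build step (this is an extra step beyond the session above; production.sif is a hypothetical output name, and lolcow.img is the sandbox directory built earlier):
$ sudo singularity build production.sif lolcow.img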

singularity returns a permission denied

I would like to build a singularity container for an application shipped via AppImage. To do so, I wrote the following def file:
Bootstrap: docker
From: debian:bullseye-slim
%post
apt-get update -y
apt-get install -y wget unzip fuse libglu1 libglib2.0-dev libharfbuzz-dev libsm6 dbus
cd /opt
wget https://www.ill.eu/fileadmin/user_upload/ILL/3_Users/Instruments/Instruments_list/00_-_DIFFRACTION/D3/Mag2Pol/Mag2Pol_v5.0.2.AppImage
chmod u+x Mag2Pol_v5.0.2.AppImage
%runscript
exec /opt/Mag2Pol_v5.0.2.AppImage
I build the container with the singularity build -f test.sif test.def command. The build runs OK, but when I run the sif file with ./test.sif I get a /.singularity.d/runscript: 3: exec: /opt/Mag2Pol_v5.0.2.AppImage: Permission denied error. Looking inside the container with singularity shell shows that the /opt/Mag2Pol_v5.0.2.AppImage executable belongs to root. I guess that this is the source of the problem, but I do not know how to solve it. Would you have any idea?
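A guess, not confirmed in the question: chmod u+x only grants execute permission to the file's owner, which during %post is root, while the container is normally run as the calling user. Making the AppImage readable and executable for everyone in %post may therefore be enough; a minimal sketch of the changed line:
chmod 755 Mag2Pol_v5.0.2.AppImage   # a+rx rather than u+x, so non-root users can execute it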

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "creation" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process ("Skipping, instances already created."). However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker.
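For reference, those two steps as shell commands (ns.myrole is just the placeholder role name from above):
$ pip install --upgrade molecule-docker
$ molecule init role --driver-name docker ns.myrole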
If you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python binary (only python3), so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
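With the Docker driver installed and molecule.yml pointing at this Dockerfile, the usual Molecule loop applies (these are standard Molecule subcommands; the default scenario is assumed):
$ molecule create     # builds the image from the Dockerfile and starts the instance
$ molecule converge   # runs the role against the instance
$ molecule verify     # runs the verifier
$ molecule destroy    # removes the instance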
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by the Docker context switch that Docker Desktop performs at startup. Molecule does create a container, but it ends up in a context that is no longer active.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from the CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This is a one-off fix: the context will need to be switched back every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
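To make that last variant stick for new shells as well, the variable can be exported from your shell profile (a sketch, assuming bash and ~/.bashrc):
echo 'export DOCKER_CONTEXT=default' >> ~/.bashrc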
References
[1] https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
[2] https://github.com/ansible-community/molecule-docker#faq

%files section of Singularity recipe non-intuitively copies files to wrong bind location

I am working on CentOS 8 and am using Singularity 3.6.2. I have a Singularity recipe file :
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
/gpfs0/home1/group/user/path/to/some.rpm /tmp
%post
ls /tmp
echo "Hello from inside the container"
When I run:
$ sudo singularity build test.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
INFO: Copying /gpfs0/home1/group/user/path/to/some.rpm to /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp
qtsingleapp-RStudi-c679-6387e228-lockfile
rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
rootfs-b10ad12c-229a-11eb-85a3-34800d2d90f0
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...
According to the Singularity documentation:
In the default configuration, the system default bind points are $HOME, /sys:/sys, /proc:/proc, /tmp:/tmp, ...
Question :
Why is the %files section putting my rpm in /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp and not in /tmp? That seems to contradict the documentation, and it is also different from the behavior observed with Singularity v2.5.1.
Also, how would I access said file? The long 'hash-like' part of the path seems to change from build to build.
I don't have an answer reconciling the documentation with where the %files section is actually putting the files; however, I do have an answer for how to access the copied files: you need to use ${SINGULARITY_CONTAINER} in the %post section.
E.g.
$ cat Singularity
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
# Will need to use environmental variables to copy the code to
/gpfs0/home/group/user/path/to/some.rpm /tmp
%post
ls ${SINGULARITY_CONTAINER}/tmp
echo "Hello from inside the container"
Building then yields:
$ sudo singularity build tmp.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0
INFO: Copying /gpfs0/home/group/user/path/to/some.rpm to /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
some.rpm
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...

How to make nvm run from within a script influence the environment of the calling shell?

When I run nvm from within a shell script, it doesn't seem to impact the environment of the calling shell:
$ node -v
v4.1.1
$ env | grep -i node
MANPATH=/home/ubuntu/.nvm/versions/node/v4.1.1/share/man:/usr/local/rvm/rubies/ruby-2.2.1/share/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/local/rvm/man
NVM_PATH=/home/ubuntu/.nvm/versions/node/v4.1.1/lib/node
PATH=/home/ubuntu/.nvm/versions/node/v4.1.1/bin:/usr/local/rvm/gems/ruby-2.2.1/bin:/usr/local/rvm/gems/ruby-2.2.1@global/bin:/usr/local/rvm/rubies/ruby-2.2.1/bin:/mnt/shared/bin:/home/ubuntu/workspace/node_modules/.bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/mnt/shared/sbin:/opt/gitl:/opt/go/bin:/mnt/shared/c9/app.nw/bin:/usr/local/rvm/bin
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
NODE_PATH=/mnt/shared/lib/node_modules
NVM_BIN=/home/ubuntu/.nvm/versions/node/v4.1.1/bin
$
$ cat test
#!/bin/bash
. ~/.nvm/nvm.sh
nvm use 0.10.40
nvm alias default 0.10.40
echo NVM_PATH=$NVM_PATH
echo MANPATH=$MANPATH
echo PATH=$PATH
echo NVM_BIN=$NVM_BIN
$ ./test
Now using node v0.10.40 (npm v1.4.28)
default -> 0.10.40 (-> v0.10.40)
NVM_PATH=/home/ubuntu/.nvm/v0.10.40/lib/node
MANPATH=/home/ubuntu/.nvm/v0.10.40/share/man:/usr/local/rvm/rubies/ruby-2.2.1/share/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/local/rvm/man
PATH=/home/ubuntu/.nvm/v0.10.40/bin:/usr/local/rvm/gems/ruby-2.2.1/bin:/usr/local/rvm/gems/ruby-2.2.1@global/bin:/usr/local/rvm/rubies/ruby-2.2.1/bin:/mnt/shared/bin:/home/ubuntu/workspace/node_modules/.bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/mnt/shared/sbin:/opt/gitl:/opt/go/bin:/mnt/shared/c9/app.nw/bin:/usr/local/rvm/bin
NVM_BIN=/home/ubuntu/.nvm/v0.10.40/bin
$
$ node -v
v4.1.1
$ env | grep -i node
MANPATH=/home/ubuntu/.nvm/versions/node/v4.1.1/share/man:/usr/local/rvm/rubies/ruby-2.2.1/share/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/local/rvm/man
NVM_PATH=/home/ubuntu/.nvm/versions/node/v4.1.1/lib/node
PATH=/home/ubuntu/.nvm/versions/node/v4.1.1/bin:/usr/local/rvm/gems/ruby-2.2.1/bin:/usr/local/rvm/gems/ruby-2.2.1@global/bin:/usr/local/rvm/rubies/ruby-2.2.1/bin:/mnt/shared/bin:/home/ubuntu/workspace/node_modules/.bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/mnt/shared/sbin:/opt/gitl:/opt/go/bin:/mnt/shared/c9/app.nw/bin:/usr/local/rvm/bin
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
NODE_PATH=/mnt/shared/lib/node_modules
NVM_BIN=/home/ubuntu/.nvm/versions/node/v4.1.1/bin
$
What do I need to do inside the "test" script so that "node -v" gives me v0.10.40 after I run "./test"?
Note that if I open a new terminal and type "node -v", I get v0.10.40, but for some reason, in the shell where I executed the "test" script, I seem to be stuck with v4.1.1.
Bash scripts run in their own process context that inherits its environment from the parent process. It's not possible to change the environment of the parent. See Can a shell script set environment variables of the calling shell?
But just as your script sources nvm with . ~/.nvm/nvm.sh, you could source your script, which will execute it in the context of the parent shell:
$ node -v
v4.1.1
$ ./test
Now using node v0.10.40 (npm v2.14.8)
default -> 0.10.40 (-> v0.10.40)
NVM_PATH=/Users/william/.nvm/v0.10.40/lib/node
MANPATH=/Users/william/.nvm/v0.10.40/share/man:/Users/william/.rvm/rubies/ruby-2.1.2/share/man:/usr/local/share/man:/usr/share/man:/opt/X11/share/man:/usr/local/MacGPG2/share/man:/Users/william/.rvm/share/man:/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/share/man:/Applications/Xcode.app/Contents/Developer/usr/share/man:/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/share/man
PATH=/Users/william/.nvm/v0.10.40/bin:/Users/william/.rvm/gems/ruby-2.1.2/bin:/Users/william/.rvm/gems/ruby-2.1.2@global/bin:/Users/william/.rvm/rubies/ruby-2.1.2/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/MacGPG2/bin:~/local/bin:~/bin:/Users/william/.rvm/bin:/Users/william/.rvm/bin:./node_modules/.bin:/usr/local/heroku/bin
NVM_BIN=/Users/william/.nvm/v0.10.40/bin
$ node -v
v4.1.1
$ source ./test
Now using node v0.10.40 (npm v2.14.8)
default -> 0.10.40 (-> v0.10.40)
NVM_PATH=/Users/william/.nvm/v0.10.40/lib/node
MANPATH=/Users/william/.nvm/v0.10.40/share/man:/Users/william/.rvm/rubies/ruby-2.1.2/share/man:/usr/local/share/man:/usr/share/man:/opt/X11/share/man:/usr/local/MacGPG2/share/man:/Users/william/.rvm/share/man:/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/share/man:/Applications/Xcode.app/Contents/Developer/usr/share/man:/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/share/man
PATH=/Users/william/.nvm/v0.10.40/bin:/Users/william/.rvm/gems/ruby-2.1.2/bin:/Users/william/.rvm/gems/ruby-2.1.2@global/bin:/Users/william/.rvm/rubies/ruby-2.1.2/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/MacGPG2/bin:~/local/bin:~/bin:/Users/william/.rvm/bin:/Users/william/.rvm/bin:./node_modules/.bin:/usr/local/heroku/bin
NVM_BIN=/Users/william/.nvm/v0.10.40/bin
$ node -v
v0.10.40

bundle not found via ssh

If I ssh into my VPS as the deployment user and run bundle -v I get Bundler version 1.1.5 as expected.
If I run ssh deployment@123.123.123.123 bundle -v, then I see bash: bundle: command not found.
Why isn't bundle found when running commands via ssh?
More Info
$ cat ~/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
if [ -d "${RBENV_ROOT}" ]; then
  export PATH="${RBENV_ROOT}/bin:${PATH}"
  eval "$(rbenv init -)"
fi
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When you run:
ssh deployment@123.123.123.123
You get a login shell on the remote host, which means that your shell (for bash) will run .bash_profile or .profile or the equivalent AS WELL AS your per-shell initialization file.
When you run:
ssh deployment@123.123.123.123 some_command
This does not start a login shell, so it only runs the per-shell initialization file (e.g., .bashrc).
The problem you've described typically means that something you need (typically an environment variable setting, here presumably RBENV_ROOT) is only being set in your .profile, which is not read when you run a command this way.
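One quick way to confirm this, and a common workaround (assuming the remote login shell is bash), is to force a login shell for the remote command so that .profile is read:
$ ssh deployment@123.123.123.123 'bash -lc "bundle -v"'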