Files under /home in singularity container are not accessible - singularity-container

Could someone please let me know how one can access files in /home within a singularity container?
I created a docker image. In this image, some packages are built and installed under /home. Some of those are also added to PYTHONPATH within the docker image. If I run the image, then a docker container is created. Within this container I can access all files under /home and use the Python modules that I added. This is a fully working docker image.
I wanted to use the packages and Python modules on an HPC system. So, I converted the docker image to a singularity image. Then, I used the singularity shell <image_name.sif> command to access a shell in the container. After that I see the prompt below.
Singularity> cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Singularity>
The host OS on the HPC system is Red Hat Linux. Since cat /etc/*-release shows Ubuntu, it seems like the /etc directory is the one inside the container. This looks reasonable. However, when I type ls /home, I see the contents of /home on the host OS. How could I find the files in /home within the container?
If I type any command to run the packages installed in /home within the container, the singularity shell prints command not found. Also, if I run the Python interpreter, I cannot import any of the modules installed within the container. Although the Python version matches the one in the container, the modules are not found. The PYTHONPATH includes paths like /home/<a_directory_name>, but the Python interpreter cannot locate the modules there. Even though the docker image is fully functional, the corresponding singularity image is completely useless.
How could I use the packages and Python modules installed in /home in the singularity container?

By default Singularity automatically mounts $HOME into the container, which will shadow anything that was installed there during image creation.
To skip this, use the --no-home flag when running your singularity command. Additional options, such as mounting home to a different location, are described in the online and CLI documentation.
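For example, a minimal sketch (the image name is a placeholder; the --home remapping flag is the one described in the Singularity documentation):
# Shell into the container without mounting the host $HOME,
# so the /home contents baked into the image remain visible
singularity shell --no-home image_name.sif
# Alternatively, remount the host home elsewhere inside the container
singularity shell --home /home/$USER:/host_home image_name.sif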

Related

Setting up GPU support in Airflow containers with Docker-compose - (GPU support with Tensorflow)

I am having some difficulties starting Airflow using docker-compose with the appropriate GPU libraries to run my machine learning tasks.
The airflow-scheduler throws this error:
airflow-scheduler_1 | 2022-03-21 12:33:36.919960: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
Basically, there are no CUDA libraries installed in /usr/local within the airflow container, hence the error. I have installed the nvidia-container-runtime and set the daemon default runtime in the daemon.json file:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
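For reference, the default-runtime registration in /etc/docker/daemon.json typically looks something like this (a sketch only; back up any existing daemon.json before overwriting it):
# Register nvidia-container-runtime and make it the default runtime
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
EOF
sudo systemctl restart docker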
I have also managed to use runtime: nvidia in the docker-compose.yaml file. This way I can run nvidia-smi within the airflow container. However, the CUDA libraries are still missing.
Is there a way to install these libraries automatically (ideally building FROM tensorflow/tensorflow:latest-gpu, since that image already sets up the CUDA libraries within the container)?
On the other hand, if I am not using docker-compose I can start a container with docker:
docker run -it --gpus all tensorflow/tensorflow:latest-gpu
This container has all the libraries that I need. However, I would like to use docker-compose, as it makes running multiple containers and setting up the networking much easier. So I would like to avoid this approach.
Also, I can use docker within Airflow by mounting the docker socket into the airflow container, so that I can launch a new container from Airflow. This way I get all the CUDA libraries installed as well; however, it feels very counter-intuitive, and I am having difficulties understanding why I can't set all of this up within the airflow container in the first place.
import docker

client = docker.from_env()
# run the container
response = client.containers.run(
    # The container you wish to call
    'tensorflow/tensorflow:latest-gpu',
    # The command to run inside the container
    'find / -name "libcudart.so.11.0"',
    # Passing the GPU access
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])
    ]
)
I would appreciate it if you could point me in the right direction.

How to attach Bucket to Google Compute Engine VM on Startup?

I would like to, on startup, copy the contents of my bucket to the VM running the Container-Optimized OS. When the server shuts down, I'd like to save the changes back to the bucket.
I've tried making a startup script
#!/bin/bash
toolbox
gsutil cp -r gs://my-bucket/
However, this causes the VM to fail on startup despite this script working if I run it manually.
I think I found a reasonable solution. My script has changed to
#! /bin/bash
toolbox --bind=/home/username/bucket-folder:/my-bucket <<< "gsutil cp -r /my-bucket/* gs://my-bucket"
So what happens is that we need to call toolbox --bind to bind a folder from the server into the toolbox container. Then we use <<< to pass the whole command to the container when it starts up, so we copy through the newly bound directory and it goes back to the server.
Now when I bound the directory in my docker container, everything is there!
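For the other direction (copying the bucket contents down to the VM on startup, as described in the question), a symmetric sketch with the same assumed bind path would be:
#! /bin/bash
# Sketch: pull the bucket contents into the bound host folder at boot
toolbox --bind=/home/username/bucket-folder:/my-bucket <<< "gsutil cp -r gs://my-bucket/* /my-bucket/"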
I just tried:
#! /bin/bash
gsutil cp -r gs://my-bucket /
And it worked for me. What is the toolbox command that you were executing previously?
Anyway you can see what is failing in the Serial Port Output.
EDIT: In the Container Optimized OS this does not work, as this OS does not have the gsutil package preinstalled. Refer to @DanBaba's answer.

Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

Is it possible to generate a .hex file with MicroPython and my own python program code at a Linux command line, rather than in one of the editors?
Looking at the tag in your question, it looks like you want to use MicroPython on the BBC micro:bit, correct?
If that's the case then you can use this Python command line tool: https://github.com/ntoll/uflash/
Instructions on how to install it and use it can be found in the README at that link.
This works with Python 2 and 3, and your Linux distribution is very likely to have at least one Python version available out-of-the-box.
If you have pip installed you can easily install it with: pip install uflash
But you can also download the source code, using git or downloading a zip file from GitHub (https://github.com/ntoll/uflash/archive/master.zip), and run it without installing anything. In this case you can execute the uFlash script with Python:
python uflash.py path_to_your_code.py
And the current version of uFlash includes the latest version of MicroPython for the micro:bit.
You can write the micropython code for the microbit in any text editor, such as vscode or vim. Save it as a .py file.
To create the .hex file, use the py2hex tool that is installed along with uflash when you install uflash using the command:
pip install uflash
To create a .hex file for a microbit micropython file called hello.py:
py2hex hello.py
This creates a file called hello.hex. This can be dragged and dropped onto your connected microbit through the file explorer. I use Nautilus and the microbit appears as 'MICROBIT'.
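If you prefer the command line over drag and drop, copying the file to the mount point also works (the /media path is an assumption based on how the device shows up in Nautilus):
cp hello.hex /media/$USER/MICROBIT/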
You can automate the creation and loading of the .hex file to the microbit using uflash, e.g.
uflash hello.py
This will create the .hex file and then load it onto an attached microbit. The .hex file will not be left on your file system though. The microbit has a habit of no longer being attached to the file system after loading a .hex file and needs to be re-attached in between builds.
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:bit Foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
sudo chmod -R +666 .
# Build one example.
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
What uflash does is ship its own precompiled firmware.hex, which is the part that requires the toolchain, and then just use that to build the combined hex in Python.
The cool thing is that now that we have the toolchain, we can also create examples directly in C/C++/assembly: How to compile C/C++ code into a .hex file for the BBC micro:bit? which can likely run much faster.
Previous failed attempts at setting it up myself
The Yotta package manager used by the BBC micro:bit bit-rotted almost immediately after it was discontinued, making pip install yotta approaches like https://flames-of-code.netlify.app/blog/microbit-cpp-1/ very difficult.
The GCC gcc-arm-embedded toolchain PPA ppa:team-gcc-arm-embedded/ppa has also been discontinued: https://askubuntu.com/questions/1243252/how-to-install-arm-none-eabi-gdb-on-ubuntu-20-04-lts-focal-fossa and now you would have to download it from an arm.com website.
Atencio's Docker setup explains how to do it though: https://github.com/carlosperate/docker-microbit-toolchain/blob/master/Dockerfile , the key is likely his magically crafted requirements.txt, likely kept back from the days when things really worked, to avoid the endless dependency issues of yotta. He's on Ubuntu 20.04.

Automatically create docker container and launch python script

I am working on creating an automated unit testing system which will utilise docker to test individual student assignments, written in Python, against a single unit test file.
I have created a website where students can upload their assignments, but I'm a little bit unsure as to how to get the automation with Docker working.
The workflow looks something like this:
A student uploads an assignment for marking
This is copied to a linux host which contains docker
The file sits here while it waits to be tested
So, say I had twenty students uploading their .py files, named as their unique student numbers, could I:
Create a Docker container which runs Ubuntu and Python
Copy the student file and unit test into this container
Run the unit test
Output the results as a text file
Copy this text file back to my webserver to display the results
Could somebody point me in the right direction to get started with this automation? I'm really just after some help with the Docker side of things, not with copying the files from my webserver to the Docker host.
Thanks.
Yes, it is possible to use Docker for that.
The Dockerfile would look like this:
FROM ubuntu
MAINTAINER xxx <user@example.org>
# update ubuntu repository
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update
# install ubuntu packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python python-pip
# install python requirements
RUN pip install ...
# define a mount point
VOLUME /student.py
# define command for this image
CMD ["python","/student.py"]
Now, you have to build this image with docker build -t student_test . (note the trailing dot, which is the build context).
To start the script and grab the output you can use:
docker run --volume /path/to/s12345.py:/student.py student_test > student_results_12345.txt
The --volume parameter is needed to mount a student script onto the defined mount point. Also, you could start multiple containers at once.
All paths are relative to the current working directory.
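For example, a minimal sketch for running a whole batch (assuming the uploaded scripts sit in an uploads/ directory and are named after the student numbers):
# Run every uploaded assignment in its own container and capture the output
for f in uploads/*.py; do
    id="$(basename "$f" .py)"
    docker run --rm --volume "$(pwd)/$f":/student.py student_test > "student_results_${id}.txt" 2>&1
done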
Check out the following project:
https://github.com/CenturyLinkLabs/buildpack-runner
It uses Heroku buildpacks to create a docker image. Crazy, but a neat idea if you get it working.

How to use setfacl within a Docker container?

It seems that within the container the filesystem is mounted without the 'acl' option, therefore 'setfacl' won't work. It won't let me remount it either, and I can't even run 'df -h'.
I need setfacl because I make root own all the files from my websites, and I give the webserver user write permissions to only a few directories like cache, logs, etc.
What can I do?
The good news is that Docker supports ACLs.
In early releases Docker used a filesystem named AUFS which didn't support them.
You could tell Docker to use Device Mapper (LVM) for its storage, by starting your Docker daemon with the appropriate option:
docker -d --storage-driver=devicemapper --daemon=true
Source: https://groups.google.com/forum/#!topic/docker-user/165AARba2Bk
and then you were able to use setfacl in your containers.
Any reasonably recent release of Docker now uses the overlay2 storage driver, which supports ACLs out of the box.
To check which storage driver you are using:
docker info | grep Storage
df -h doesn't work for a different and unrelated reason: it relies on /etc/mtab, which is not present in your case. In your container, create a link from procfs; that will solve the problem:
ln -s /proc/mounts /etc/mtab
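Once the storage driver supports ACLs, the grants described in the question can be applied as usual, for example (the paths and the www-data user are assumptions based on the setup described above):
# Give the webserver user write access to the writable directories only
setfacl -R -m u:www-data:rwX /var/www/mysite/cache /var/www/mysite/logs
# Also make it the default ACL for newly created files in those directories
setfacl -R -d -m u:www-data:rwX /var/www/mysite/cache /var/www/mysite/logs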