Singularity: extract, edit, and rebuild an image

I have a singularity container that has been made for me (to run tensorflow on comet GPU nodes) but I need to modify the keras install for my purposes.
I understand that .simg files are not editable (and that the writable .img format is deprecated), so the process of converting to an .img file, editing, and then converting back to .simg is discouraged:
sudo singularity build --writable development.img production.simg
## make changes
sudo singularity build production2.img development.simg
It seems to me the best way might be to extract the contents (say into a sandbox), edit them, and then rebuild the sandbox into an .simg image.
I know how to do the second conversion (singularity build new-sif sandbox), but how can I do the first?
I have tried the following, but the command never finishes:
sudo singularity build tf_gpu tensorflow-gpu.simg
WARNING: Authentication token file not found : Only pulls of public images will succeed
Build target already exists. Do you want to overwrite? [N/y] y
2018/10/12 08:39:54 bufio.Scanner: token too long
INFO: Starting build...

You can easily convert between a sandbox and a production build using the following:
sudo singularity build lolcow.sif docker://godlovedc/lolcow # pulls and builds an example container
sudo singularity build --sandbox lolcow_sandbox/ lolcow.sif # converts from container to a writable sandbox
sudo singularity build lolcow2 lolcow_sandbox/ # converts from sandbox to container
So, you can edit the sandbox and then rebuild accordingly.
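Applied to the image from the original question, the round trip could look something like this (a sketch only; the sandbox name and the pip invocation are assumptions standing in for whatever keras change you need):
sudo singularity build --sandbox tf_gpu_sandbox/ tensorflow-gpu.simg # extract the .simg into a writable sandbox directory
sudo singularity exec --writable tf_gpu_sandbox/ pip install --upgrade keras # example edit only; adjust to your purposes
sudo singularity build tensorflow-gpu-edited.simg tf_gpu_sandbox/ # rebuild the edited sandbox into a production image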

Related

How to turn a Singularity sandbox container into a SIF file? (while preserving the sandbox)

I have built a Singularity sandbox container using this command:
sudo singularity build --sandbox ubuntu/ library://ubuntu
Now, I would like to copy/export this container as a sif file. But I cannot find how to do this in the documentation.
Any idea?
OK, so by reading the doc more carefully: it is in fact possible, and converting the sandbox into a SIF file preserves the changes made in it (while leaving the sandbox directory untouched), see here:
sudo singularity build ubuntu.sif ubuntu/
INFO: Starting build...
INFO: Creating SIF file...
INFO: Build complete: ubuntu.sif
See https://docs.sylabs.io/guides/3.5/user-guide/build_a_container.html#converting-containers-from-one-format-to-another
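If you want to convince yourself that the sandbox edits survive the rebuild, a quick sanity check along these lines works (the marker file is purely hypothetical):
sudo singularity exec --writable ubuntu/ touch /opt/marker # make a visible change in the sandbox
sudo singularity build ubuntu.sif ubuntu/ # rebuild the SIF from the sandbox
singularity exec ubuntu.sif ls /opt/marker # the change shows up in the new image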

Problems getting Singularity Compose to work

I wrote a small test project for Singularity Compose, consisting of a small server application, with the following YAML file:
version: "1.0"
instances:
server:
build:
context: ./server
recipe: server.recipe
ports:
- 9999:9999
When I call singularity-compose build, it successfully builds server.sif. Calling singularity-compose up also seemingly works without error, and calling singularity-compose ps results in something that looks just fine:
+ singularity-compose ps
INSTANCES NAME PID IMAGE
1 server 4176911 server.sif
However, the server application does not work; calling my test client results in it reporting that there is no answer from the server.
But if I run server.sif directly without Compose, everything works just fine.
Also, I triple-checked: my test application listens on port 9999 and should thus be reachable from the outside.
What did I do wrong?
Edit:
I also checked whether there actually is any process listening on port 9999 by calling sudo lsof -i -P -n | grep LISTEN; there is not. Only when I manually start server.sif without Compose does it show the process listening.
Edit:
I went into the Singularity Compose shell and tried to start the Server application directly in there, just as a test, and it resulted in Permission denied. Not sure if that means anything.
Edit:
I now gave the application execute permissions within the shell and ran it there; this works. I am now trying to add execute permissions in the recipe. If that works, it would be kind of strange, as the executable was built right there and thus should already have execute permissions.
Edit:
I added chmod +x in my recipe both after building Server and before executing it. Doesn't work either.
Also checked whether any bridges exist using brctl show, this is not the case.
Edit: My recipe, adjusted by the input of tsnowlan in his answer below:
Bootstrap: docker
From: ubuntu:20.04

%files
    connection.cpp
    connection.h
    main.cpp
    server.cpp
    server.h
    server.pro

%post
    # get some basics
    apt update
    apt-get install -y wget
    apt-get install -y software-properties-common
    # get C++ compiler
    apt-get install -y g++
    apt-get install -y build-essential
    apt-get install -y build-essential cmake
    # get Qt
    apt-get install -y qt5-default
    # compile
    qmake
    make
    ls

%runscript
    /Server

%startscript
    /Server
Again, note that the application works just fine both when compiled and started normally and when started within a Singularity image (but without Singularity Compose).
The ls at the end of the %post block is used to verify that the Server application was built successfully.
Please share the server.recipe, as it is difficult to identify what should be/is happening without it.
Without having that, my guess is that you have a %runscript in your definition file, but no %startscript. When the image is executed directly or via singularity run image.sif, the contents of %runscript determine what happens. To emulate the docker-compose style, the singularity images are started as persistent instances. In this case, the %startscript block determines what runs. If it is empty, it will just start up and sit there doing nothing. This would explain why when run by hand it works but not when using compose.
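One way to debug this is to reproduce by hand what Compose does and then look inside the running instance; a rough sketch, with the instance name server taken from the ps output above:
singularity instance start server.sif server # roughly what singularity-compose up does per instance
singularity exec instance://server ps aux # check whether the %startscript process is actually running
singularity instance stop server # clean up afterwards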

GitLab CI/CD with Terraform + Python

In my GitLab CI/CD pipeline, I have Terraform code that requires Python to be installed in order to use an external module.
When running terraform plan via GitLab pipelines, I get the following error:
module.notify_slack.module.lambda.data.aws_caller_identity.current[0]: Refreshing state...
Error: can't find external program "python3"
on .terraform/modules/notify_slack.lambda/terraform-aws-lambda-1.6.0/package.tf line 3, in data "external" "archive_prepare":
3: data "external" "archive_prepare" {
ERROR: Job failed: exit code 1
What image do I need to use that contains Terraform and Python? Will I need to create my own docker image?
I know this is a bit of an old post, but I'll share my solution in case anyone else stumbles upon this problem too.
Choose an existing python image and install terraform manually - this seems to me to be the easiest solution, if pragmatism is important to you.
This is the relevant section of my .gitlab-ci.yml file:
default:
  image: python:latest
  before_script:
    - python -V # Display version for debugging purposes only
    - apt-get update -y
    - apt-get install unzip wget -y
    - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - mv terraform /usr/local/bin/
    - terraform --version # Display version for debugging purposes only
The TERRAFORM_VERSION environment variable was set up in the GitLab CI/CD settings; otherwise, just replace it with the specific version of Terraform you want.
I was pleasantly surprised at the speed at which this installation takes place, as it clearly isn't an optimal way to do it - the best-performing runner would use your own custom image with all of your required dependencies pre-installed. I'll leave you to decide whether it's worth it for your own purposes; nonetheless, this solution doesn't appear to be prohibitively slow.
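If you stick with the manual install, it may also be worth verifying the download against HashiCorp's published checksums; a hedged addition to the before_script above (the SHA256SUMS file follows the same release URL pattern as the zip):
    - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_SHA256SUMS
    - grep linux_amd64.zip terraform_${TERRAFORM_VERSION}_SHA256SUMS | sha256sum -c -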

How to install PHP extensions on Cloudbees?

I need to install mbstring (and a few other extensions) for PHP on CloudBees. Is this possible?
Note that I'm using an updated PHP version as described here:
https://developer.cloudbees.com/bin/view/DEV/PHP+Builds
I don't think scripts have sudo access, so I can't simply use the package manager. I don't think these extensions exist as PEAR packages either. So I'm stumped.
Here is the response from CloudBees support. It seems to work fine; just make sure you don't have any spaces in your Jenkins build path!
Our provided PHP versions don't have the mbstring module activated. You will need to build your own PHP version to get it. To be sure your custom PHP build works on a CloudBees slave, you can build it with a Jenkins job on your instance (with the various --with-XXX or --without-XXX options).
We are ourselves doing something like this with a script like
# Download
regex='.*(RC|alpha|beta).*'
if [[ $version =~ $regex ]]; then
    wget http://downloads.php.net/dsp/php-${version}.tar.bz2
else
    wget http://us3.php.net/distributions/php-${version}.tar.bz2
fi
# Unpack
tar xjf php-${version}.tar.bz2
# Build
cd php-${version}
./configure --prefix=/home/jenkins/tools/php/${php_name} \
    --with-curl --with-openssl
make && make install
As a side note, you should also take care to specify a good installation prefix with --prefix. I would choose something like /home/jenkins/tools/php/5.4/.
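Since the original question is about mbstring specifically, the configure line would also need the matching flag (a standard PHP configure option; other extensions are enabled similarly):
./configure --prefix=/home/jenkins/tools/php/${php_name} \
    --with-curl --with-openssl --enable-mbstring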
To store the compiled PHP engine you could generate a tar.gz/bz2 file of the target installation directory. Then, store it in your WebDAV directory, which is accessible under /private/{account}/ during a build when "Mount CloudBees DEV@cloud Private WebDav Repository" is checked.
You should add a first step to jobs requiring PHP that extracts this archive. As the Jenkins workspace is usually cached on DEV@cloud, you can extract the archive only if it's not already there; that will speed up your build.
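A sketch of such a first build step, assuming the archive was uploaded as php-5.4.tar.bz2 (the account name and paths are illustrative only):
PHP_HOME=/home/jenkins/tools/php/5.4
if [ ! -x "$PHP_HOME/bin/php" ]; then # extract only when the cached workspace doesn't already have it
    mkdir -p /home/jenkins/tools/php
    tar xjf /private/myaccount/php-5.4.tar.bz2 -C /home/jenkins/tools/php
fi
export PATH="$PHP_HOME/bin:$PATH"
php -v # confirm the custom build is on the PATH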

Automatically create docker container and launch python script

I am working on creating an automated unit testing system which will utilise docker to test individual student assignments, written in Python, against a single unit test file.
I have created a website where students can upload their assignments, but I'm a little bit unsure as to how to get the automation with Docker working.
The workflow looks something like this:
A student uploads an assignment for marking
This is copied to a Linux host running Docker
The file sits here while it waits to be tested
So, say I had twenty students uploading their .py files, named with their unique student numbers, could I:
Create a Docker container which runs Ubuntu and Python
Copy the student file and unit test into this container
Run the unit test
Output the results as a text file
Copy this text file back to my webserver to display the results
Could somebody point me in the right direction to get started with this automation? I'm really just after some help on the Docker side of things, not on copying the files from my webserver to the Docker host.
Thanks.
Yes, it is possible to use Docker for that.
The Dockerfile would look like this:
FROM ubuntu
MAINTAINER xxx <user@example.org>
# update ubuntu repository
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update
# install ubuntu packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python python-pip
# install python requirements
RUN pip install ...
# define a mount point
VOLUME /student.py
# define command for this image
CMD ["python","/student.py"]
Now, you have to build this image with docker build -t student_test . (the trailing . is the build context, relative to your current working directory).
To start the script and grab the output you can use:
docker run --volume /path/to/s12345.py:/student.py student_test > student_results_12345.txt
The --volume parameter is needed to mount a student script onto the defined mount point; note that the host path given to --volume must be absolute. Also, you could start multiple containers at once.
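To process a whole batch of uploads, a simple driver loop on the Docker host could look like this (the upload directory and naming scheme are assumptions based on the workflow described above):
# run the test image once per uploaded student file, capturing output per student
for f in /uploads/*.py; do
    id=$(basename "$f" .py) # e.g. s12345, the unique student number
    docker run --rm --volume "$f":/student.py student_test > "student_results_${id}.txt" 2>&1
done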
Check out the following project:
https://github.com/CenturyLinkLabs/buildpack-runner
It uses Heroku buildpacks to create a Docker image. Crazy, but a neat idea if you get it working.