Singularity container: can we make a container out of an environment from within this environment? - singularity-container

I have an ambitious request.
I am connected to a remote server through SSH, and I have a conda environment installed on it.
Is it possible, using Singularity, to create a container that copies this remote server and the conda environment that goes with it?
Thanks

If I understand correctly, you want to copy a folder (in this case, a conda env) into your singularity container.
You can leverage the %setup section if you have ssh keys set up with this remote host. Reference for %setup: https://apptainer.org/docs/user/main/definition_files.html#sections
Something like the following:
%setup
scp -r remote_user@remote_host:/full/path/to/remote/conda/env ${APPTAINER_ROOTFS}/full/path/to/container/conda/env
Alternatively, you could create a tmpdir on your local system, scp the files into the tmpdir within %setup, and then use the %files section to copy them into the container.
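As a sketch, a minimal definition file might look like this (remote_user, remote_host, and the env path /opt/conda/envs/myenv are placeholders; it assumes passwordless SSH from the build machine to the remote host):

```
Bootstrap: docker
From: ubuntu:22.04

%setup
    # Runs on the host during the build; ${APPTAINER_ROOTFS} is the
    # container's filesystem root. All paths here are placeholders.
    mkdir -p ${APPTAINER_ROOTFS}/opt/conda/envs
    scp -r remote_user@remote_host:/opt/conda/envs/myenv \
        ${APPTAINER_ROOTFS}/opt/conda/envs/myenv
```

One caveat: conda environments hard-code their absolute prefix in shebangs and activation scripts, so keep the destination path inside the container identical to the path on the server.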

Related

Singularity sandbox file management

Fed up with struggling with library installation and dependency problems, I'm starting to work with Singularity.
However, I'm not sure I understand precisely how it handles file management in sandbox mode (programs, not data).
For example, I designed a very simple definition file that is just a "naked" Debian:
Bootstrap: library
From: debian
%post
apt-get update
I create a sandbox with this to add stuff:
sudo singularity build --sandbox Test/ naked_Debian.def
And I try to install a program. What I don't understand is this: I managed to install it, then removed the sandbox directory, but there still seem to be files that were created during the sandbox's life (in /dev, /run, /root, etc.). For example, the program that I cloned from git is now in /root on my local machine, independently of any container.
From what I understood, everything was in the container and should disappear when I remove the sandbox directory. Otherwise, won't I leave a mess behind with all my tests? And then I can't port the container from one system to another.
If I create a new sandbox directory, the program is already there.
Cheers,
Mathieu
By default, singularity mounts $HOME to the container and uses that path as the working directory for singularity shell / exec. Since you're running the sandbox with sudo, /root is being mounted in and that's where any repos you cloned would end up if you didn't cd to a different directory. /tmp is also automatically mounted in, though that is unlikely to cause an issue since it's just temp files.
You have a few options to avoid files ending up in places you don't expect.
- Disable automount of home: singularity shell --no-home ...
  - The default working directory is now / instead of $HOME, and files are created directly in the sandbox (as opposed to a mounted-in directory).
  - If you want to get files out of the sandbox, you'll need to copy them to /tmp inside the container, then copy them from /tmp to the desired location on the host OS.
- Set a different location to use as home: singularity shell --home $PWD ...
  - This mounts in and uses the current directory as $HOME instead of the user's $HOME on the host OS.
  - Simpler for moving files between the host OS and the container, but still creates files on the host OS.
- Don't mount system directories at all: singularity shell --contain --workdir /some/dir ...
  - Directories for /tmp and /var/tmp are created inside /some/dir instead of using /tmp and /var/tmp on the host. $HOME has the same path as on the host and is used as the working directory, but it is empty and separate from the host OS.
  - Complete separation from the host OS, while still allowing some access between container and OS.
Additional details on these options can be found in the documentation.

Having host filesystem visible to singularity container

I use singularity images that do not require any binding of the desired host path, i.e.
singularity exec image.simg IMAGE_COMMAND -i $PWD/input_file -o ./output_dir
simply works like any other command on the "input_file" in my host system, also using relative paths as in "-o".
I'm not comfortable enough with Singularity and its jargon to understand how this works.
Is this configured in singularity.conf?
What is this feature called? (Is it "MOUNT HOSTFS"?)
By default, both your home and current directories are mounted/bound into the image for you. You can modify this in singularity.conf. Details on the settings are available in the admin documentation.
The MOUNT HOSTFS in the config is a toggle to automatically mount all host filesystems into the image. MOUNT HOME is the corresponding setting for auto-mounting the user's HOME directory.
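For context, the relevant lines in singularity.conf look roughly like this (the values shown are typical defaults; check the file shipped with your installation):

```
# singularity.conf (excerpt)
mount home = yes
mount hostfs = no
mount tmp = yes
```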
You can see which files/directories are currently being mounted by using the --verbose option with your singularity command.

How to export a container in Singularity

I would like to move an already built container from one machine to another. What is the proper way to migrate the container from one environment to another?
I can find here the image.export command, but this is for an older version of the software. I am using version 3.5.2.
The container I wish to export is a --sandbox container. Is something like that possible?
Singularity allows you to easily convert between a sandbox and a production build.
For example:
singularity build lolcow.sif docker://godlovedc/lolcow # pulls and builds a container
singularity build --sandbox lolcow_sandbox/ lolcow.sif # converts from container to a writable sandbox
singularity build lolcow2 lolcow_sandbox/ # converts from sandbox to container
Once you have a production SIF or SIMG, you can easily transfer the file and convert as necessary.
singularity build generates a file that you can copy between computers just like any other file. The only thing it needs is the singularity binary installed on the new host.
The difference when using --sandbox is that you get a modifiable directory instead of a single file. It can still be run elsewhere, but you may want to tar it up first so you're only moving a single file. Then you can untar it and run it as normal on the new host.
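The tar-and-move workflow can be sketched like this (the sandbox name comes from the build above; the mkdir is only a placeholder so the sketch runs anywhere, and the host/path in the scp line are hypothetical):

```shell
# On the source machine: pack the sandbox directory into one archive.
mkdir -p lolcow_sandbox        # placeholder for the sandbox built earlier
tar -czf lolcow_sandbox.tar.gz lolcow_sandbox/

# Transfer the archive, e.g. with scp (host and path are hypothetical):
# scp lolcow_sandbox.tar.gz user@new_host:/some/dest/

# On the new host: unpack and use the sandbox as before.
tar -xzf lolcow_sandbox.tar.gz
# singularity shell --writable lolcow_sandbox/
```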

Exposing application's codebase to the vagrant instance

I'm trying to run an application using Vagrant. I have a directory where the codebase of the app is placed, along with the .vagrant dir that is created there after initialization. It looks like this:
[app_codebase_root]/.vagrant/machines/default/virtualbox
There is a very short manual about what to do (https://github.com/numenta/nupic/wiki/Running-Nupic-in-a-Virtual-Machine), and I stopped at point 9, where it is said:
9) Expose [app] codebase to the vagrant instance... If you have the
codebase checkout out, you can copy or move it into the current
directory...
So it's not clear to me what to copy and where. Does it mean some place within Vagrant (if yes, which exactly?) or some other place? Or should I just run vagrant ssh now?
From the Vagrant documentation:
By default, Vagrant will share your project directory (the directory with the Vagrantfile) to /vagrant.
So you should find your codebase root under /vagrant on your guest.
This is always going to be a little confusing, so you need to separate the concepts of the host system and the VM.
Let's say the shared directory (the one with the Vagrantfile) is [something]/vagrant on your host system. Copy your app directory to [something]/vagrant/nupic (or run git clone in that directory) while still in Windows. Check using Windows Explorer that you see all the source files.
In a console window, cd to [something]/vagrant and run vagrant ssh.
You are now in the VM, so everything is now the VM's filesystem. Your code is now in /vagrant/nupic. Edit .bashrc as per the instructions to point to this directory, and run the build commands.
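For reference, the /vagrant share corresponds to Vagrant's default synced-folder setting; a minimal Vagrantfile making it explicit might look like this (the box name is just an example):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # example box
  # The project directory (the one with the Vagrantfile) is shared
  # at /vagrant inside the guest by default:
  config.vm.synced_folder ".", "/vagrant"
end
```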

Permissions issue with vagrant virtualbox

I have a Debian VirtualBox VM set up with Vagrant. It holds the codebase for the project I'm working on, and I've set the folder containing this codebase to be synced with the host machine (Mac OS 10.8.4). I just learned that in order to change the permissions on any subfolders of my synced folder I must do so from the host machine. My problem is that the application actually creates folders (and subfolders) and then expects to be able to write to them. Since the VM doesn't have the ability to chmod its own folders, these folders are not created with write access by default. How can this be achieved?
Note: I've already tried using umask from both the host and the VM. It works on the host, but since those changes are per terminal they don't propagate to the VM; using it on the VM doesn't work because the folders are managed by the host.
umask should be the way to go.
To make it persistent, you just need to add umask 027 (or whatever mask you want) to ~/.bash_profile (for interactive login shells) or ~/.bashrc (for interactive non-login shells) for the user who will run the application, or make it system-wide by placing it in /etc/profile.
NOTE: Ubuntu uses ~/.profile and does NOT have ~/.bash_profile.
Alternatively, setting umask right before running the application should also work.
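A quick sketch of what umask 027 does to newly created files and directories (runnable in any POSIX shell; the names are just examples):

```shell
# umask 027 clears the group-write bit and all "other" bits on new entries.
umask 027
mkdir -p demo_dir           # directories: 777 & ~027 = 750 (rwxr-x---)
touch demo_dir/demo_file    # files:       666 & ~027 = 640 (rw-r-----)
ls -ld demo_dir demo_dir/demo_file
```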