Singularity sandbox file management

Fed up with struggling with library install and dependency problems, I'm starting to work with Singularity.
However, I'm not sure I understand precisely how it handles file management in sandbox mode (programs, not data).
For example, I designed a very simple definition file that is just a "naked" Debian:
Bootstrap: library
From: debian
%post
apt-get update
I create a sandbox with this to add stuff:
sudo singularity build --sandbox Test/ naked_Debian.def
Then I tried to install a program. What I don't understand is that I managed to do it and removed the sandbox directory, but I think there are still files left over that were created during the sandbox's life (in /dev, /run, /root, etc.). For example, the program that I cloned from git is now in /root on my local machine, independently of any container.
From what I understood, everything was in the container and should disappear if I remove the sandbox directory. Otherwise I'm going to leave a lot of mess behind with all my tests, and I won't be able to port the container from one system to another.
If I create any new sandbox directory, the program is already there.
Cheers,
Mathieu

By default, singularity mounts $HOME to the container and uses that path as the working directory for singularity shell / exec. Since you're running the sandbox with sudo, /root is being mounted in and that's where any repos you cloned would end up if you didn't cd to a different directory. /tmp is also automatically mounted in, though that is unlikely to cause an issue since it's just temp files.
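You can see this directly with a quick check against the sandbox from the question (assuming the default mount-home setting and a Singularity version that supports --no-home):
sudo singularity exec Test/ ls /root            # shows the host's /root, because root's $HOME is bind-mounted in
sudo singularity exec --no-home Test/ ls /root  # shows the sandbox's own /root instead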
You have a few options to avoid files ending up in places you don't expect.
Disable automount of home: singularity shell --no-home ...
The default working directory is now / instead of $HOME, and files are created directly in the sandbox (as opposed to a mounted-in directory).
If you want to get files out of the sandbox, you'll need to copy them to /tmp inside the container, and then on the host OS copy them from /tmp to the desired location.
Set a different location to use as home: singularity shell --home $PWD ...
This mounts in and uses the current directory as $HOME instead of the user's $HOME on the host OS
Simpler to move files between host OS and container, but still creates files in the host OS
Don't mount system directories at all: singularity shell --contain --workdir /some/dir ...
Directories for /tmp and /var/tmp are created inside /some/dir instead of using /tmp and /var/tmp on the host. $HOME has the same path as the host and is used as the working directory, but it is empty and separate from the host OS
Complete separation from host OS, while still allowing some access between container and OS
Additional details on these options can be found in the documentation.
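As a concrete illustration of the first option, here is a minimal sketch of the whole round trip. The sandbox name comes from the question, but the repo URL and destination paths are placeholders, and it assumes /tmp is still auto-mounted (i.e. no --contain):
sudo singularity build --sandbox Test/ naked_Debian.def
sudo singularity shell --writable --no-home Test/
# inside the container:
#   apt-get install -y git
#   git clone https://example.com/myrepo.git /opt/myrepo     # lands inside the sandbox
#   cp -r /opt/myrepo /tmp/myrepo                             # /tmp is the host's /tmp
#   exit
# back on the host, pick the files up from the shared /tmp:
cp -r /tmp/myrepo /desired/location
# removing the sandbox now removes everything that existed only inside it
sudo rm -rf Test/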

Related

Change singularity home directory to a folder within the container

Background
I have a singularity container that was created from a docker image. The docker image has files that are meant to be in the user's home directory (e.g. in $HOME/.files). Because I don't know what the username will be, I put the files in /opt in the container and want to set the user's home to /opt.
I would like to be able to run the container with /opt as the home directory, OR somehow be able to run the container so that the home directory contains the files that already exist within the container
What I have tried:
Use the --home flag: this maps a folder on the host as the home directory, rather than a folder in the container.
Try overriding the $HOME environment variable with --env HOME=/opt: I get the error "Overriding HOME environment variable with SINGULARITYENV_HOME is not permitted".
Other questions
This question is related, but it is about mapping the container's home folder to a folder on the host machine.
You can use the $HOME shell environment variable when you bind the two directories.
singularity exec -B $HOME:/opt example_container.sif touch /opt/file
Here is the documentation on singularity's bind feature
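To make the direction of the bind explicit: -B takes host_path:container_path, so in the command above it is the host's $HOME that appears at /opt inside the container (shadowing whatever the image itself has at /opt). A small sketch, with the container name and file name as placeholders:
# write a file at /opt inside the container...
singularity exec -B $HOME:/opt example_container.sif sh -c 'echo hello > /opt/hello.txt'
# ...and it shows up in $HOME on the host
cat $HOME/hello.txt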

Having host filesystem visible to singularity container

I use singularity images that do not require any binding of the desired host path, i.e.
singularity exec image.simg IMAGE_COMMAND -i $PWD/input_file -o ./output_dir
simply works like any other command, operating on "input_file" in my host filesystem, and relative paths such as the one passed to "-o" also work.
I'm not comfortable enough with Singularity and its jargon to understand how this is achieved.
Is a configuration done in singularity.conf?
How is this feature called? (is it "MOUNT HOSTFS"?)
By default, both your home and current directories are mounted/bound into the image for you. You can modify this in singularity.conf. Details on the settings are available in the admin documentation.
The MOUNT HOSTFS in the config is a toggle to automatically mount all host filesystems into the image. MOUNT HOME is the corresponding setting for auto-mounting the user's HOME directory.
You can see which files/directories are currently being mounted by using the --verbose option with your singularity command.
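For reference, these are the relevant lines in singularity.conf; the exact path and defaults depend on your installation, so treat this as a sketch:
# e.g. /usr/local/etc/singularity/singularity.conf (location varies by install)
mount home = yes      # auto-bind the user's $HOME into the image
mount tmp = yes       # auto-bind /tmp and /var/tmp
mount hostfs = no     # set to yes to bind all host filesystems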

Exposing application's codebase to the vagrant instance

I'm trying to run an application using Vagrant. I have a directory where the app's codebase is placed, along with the .vagrant dir that is created there after initialization. It looks like this:
[app_codebase_root]/.vagrant/machines/default/virtualbox
There is a very short guide about what to do (https://github.com/numenta/nupic/wiki/Running-Nupic-in-a-Virtual-Machine), and I stopped at point 9, which says:
9) Expose [app] codebase to the vagrant instance... If you have the
codebase checkout out, you can copy or move it into the current
directory...
So it's not clear to me what to copy and where. Does it mean some place within the Vagrant machine (if so, where exactly?) or some other place? Or should I just run vagrant ssh now?
From the Vagrant documentation:
By default, Vagrant will share your project directory (the directory with the Vagrantfile) to /vagrant.
So you should find your codebase root under /vagrant on your guest.
This is always going to be a little confusing, so you need to separate the concepts of the host system and the VM.
Let's say the shared directory (the one with the Vagrantfile) is [something]/vagrant on your host system. Copy your app directory to [something]/vagrant/nupic (or run git clone in that directory) while still in Windows. Check using Windows Explorer that you see all the source files.
In a console window, cd to [something]/vagrant and run vagrant ssh.
You are now in the VM, so everything is now the VM's filesystem. Your code is now in /vagrant/nupic. Edit .bashrc as per the instructions to point to this directory, and run the build commands.
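Putting those steps together, a minimal sketch of the sequence ([something]/vagrant stands for whatever host directory holds your Vagrantfile):
# on the host
cd [something]/vagrant
git clone https://github.com/numenta/nupic.git nupic   # or copy your existing checkout here
vagrant up                                             # if the VM isn't already running
vagrant ssh

# now inside the VM: the shared folder shows up as /vagrant
ls /vagrant/nupic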

Permissions issue with vagrant virtualbox

I have a Debian VirtualBox VM set up with Vagrant. In it I have the codebase for the project I'm working on, and I've set the folder holding this codebase to be synced with the host machine (which is Mac OS 10.8.4). I just learned that in order to change the permissions on any subfolder of my synced folder I must do it from the host machine. My problem is that the application actually creates folders (and subfolders) and then expects to be able to write to them. Since the VM doesn't have the ability to chmod its own folders, these folders are not created with write access by default. How can this be achieved?
Note: I've already tried using umask from both the host and the VM. It works on the host, but since those changes are per terminal they don't propagate to the VM; using it on the VM doesn't work because the folders are managed by the host.
umask should be the way to go.
To make it persistent, you just need to add umask 027 (or whatever mask you want) to ~/.bash_profile for interactive login shells or ~/.bashrc for interactive non-login shells for the user who will be running the application, or make it system wide by placing it in /etc/profile.
NOTE: Ubuntu uses ~/.profile and does NOT have ~/.bash_profile.
Alternatively, setting umask just before running the application would probably help.
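A minimal sketch of making that persistent inside the VM, for the user that runs the application (027 is just the example mask from above):
# inside the VM
echo 'umask 027' >> ~/.bashrc        # interactive non-login shells
echo 'umask 027' >> ~/.profile       # login shells (Debian/Ubuntu)
# or system-wide:
echo 'umask 027' | sudo tee -a /etc/profile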

CException Error while deploying yii application on OpenShift?

Friends, I tried to deploy my Yii production application from the Cloud9 IDE to OpenShift, and while doing so I got this error message:
CException
Application runtime path "/var/lib/openshift/51dd48794382ecfd530001e8/app-root/runtime/repo/php/protected/runtime" is not valid. Please make sure it is a directory writable by the Web server process.
Even after I changed the folder permissions to 775 (chmod -R 775 directory) in the Cloud9 IDE and deployed again, I still get the same error.
It's an old question, but I just bumped into the same issue very recently.
When you extracted the "yii" package, several folders were empty; "framework/protected/runtime" was one of them.
To deploy to OpenShift you need to commit the yii package to git and then push the commit to OpenShift. But git won't commit empty folders, so they are not created in your deployment. You need to create some file inside those folders and add those files to your git repo before committing/pushing. The usual procedure is to add a ".gitkeep" file to those folders (it's just an empty dummy file, so git will see those folders).
That would fix this particular error.
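A minimal sketch of that procedure, run from the repository root. The folder path matches the one in the error message; the remote name is an assumption, so use whatever remote points at your OpenShift app:
mkdir -p php/protected/runtime
touch php/protected/runtime/.gitkeep
git add php/protected/runtime/.gitkeep
git commit -m "keep empty runtime directory"
git push origin master    # 'origin' is assumed here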
It may be due to the ownership of the folder.
Check the web server's user and group, whether that directory is writable by them, and what changes for the web server when you move to a different platform.
Hope my suggestion is useful.
For Yii applications, the assets and protected/runtime folders are special. First, both folders must exist and be writable by the web server (httpd) process. Second, these two folders contain temporary files and should be ignored by git. If these temporary files got committed, deployment on plain servers (not OpenShift servers) would cause git merge conflicts. So I put these two folders in .gitignore:
php/assets/
php/protected/runtime/
In my deployment, I add a shell script that is called by OpenShift; it creates both folders under $OPENSHIFT_DATA_DIR and creates symbolic links to them in the application's folders. This is the content of the shell script (.openshift/action_hooks/deploy), which I adapted from here:
#!/bin/bash
# Create persistent runtime/assets folders under $OPENSHIFT_DATA_DIR and
# symlink them into the deployed repo on every deployment.
if [ ! -d "$OPENSHIFT_DATA_DIR/runtime" ]; then
    mkdir "$OPENSHIFT_DATA_DIR/runtime"
fi
# remove the symlink if it already exists; fixes a problem with gears > 1 and nodes > 1
rm -f "$OPENSHIFT_REPO_DIR/php/protected/runtime"
ln -sf "$OPENSHIFT_DATA_DIR/runtime" "$OPENSHIFT_REPO_DIR/php/protected/runtime"
if [ ! -d "$OPENSHIFT_DATA_DIR/assets" ]; then
    mkdir "$OPENSHIFT_DATA_DIR/assets"
fi
rm -f "$OPENSHIFT_REPO_DIR/php/assets"
ln -sf "$OPENSHIFT_DATA_DIR/assets" "$OPENSHIFT_REPO_DIR/php/assets"
The shell script ensures that the temporary folders are created on each gear after an OpenShift deployment. By default a new directory's permissions are u+rwx, and it is writable by the httpd process because the gear runs httpd as the gear user (not apache or anything else).