Singularity Recipe: How to access executable within container? - singularity-container

I am a beginner with Singularity.
What I want to achieve in the long run: I have a programming project with a long list of dependencies, and I want to be able to give the program to other people in my company without bugs caused by missing dependencies or wrong dependency versions.
The idea was now to use Singularity in order to easily provide a working environment.
In order to test this, I wrote a Hello World application which I now want to run in a container. I have a folder HelloWorld/ which contains the source code for a C++ Qt project. Then I wrote the following recipe file:
project.recipe
Bootstrap: docker
From: ubuntu:18.04
%setup
cp -R <some_folder>/HelloWorld ${SINGULARITY_ROOTFS}/HelloWorld
%post
apt update
apt-get install -y qt5-default
apt install -y g++
apt-get install -y build-essential
cd HelloWorld
qmake
make
echo "after build:"
ls
%runscript
echo "before execution:"
ls HelloWorld/
./HelloWorld/HelloWorld
where the echoes and directory listings are for my current debugging process.
I can successfully build an image file using sudo singularity build --writable project.img project.recipe. (My debugging output shows me that the executable was built successfully.)
The problem is now that if I try to run it using ./project.img, or singularity run project.img, it won't find the executable.
Using my debugging output, I found out that the lines in %runscript use the folders outside of the container.
Tutorials like https://sylabs.io/guides/3.1/user-guide/build_a_container.html made it seem to me as if my recipe was the way to go, but apparently it isn't?
My questions:
Is there some way for me to access my executable? Am I calling it wrong?
Is the way I do it the way it is supposed to be done? Or would one normally do something like getting the executable outside of the container and then use the container to call that outside file? Or is there a different best practice?
If the executable is to be copied outside of the container after compilation, how do I do that? How do I access outside folders when I'm within %post?
Is this the best work process for what I want to achieve? Later on, my idea is that the big project is copied into the container in the same way, dependencies are either installed or copied, then the project is compiled, and finally the source is deleted. I also considered using a repository, but I can't have the project in an open repository, and I don't want to store any passwords.

Firstly, use %files, don't use %setup. %setup is run as root and can directly modify the host server. You can very easily and accidentally break things without realizing it. You can get the same effect this way:
%files
some_folder/HelloWorld /HelloWorld
You are calling it wrong. In your %setup (and hopefully now in your %files) step, you are copying the data to /HelloWorld. In your %runscript you are calling ./HelloWorld/HelloWorld, which is the equivalent of $PWD/HelloWorld/HelloWorld. Since Singularity automatically mounts in $PWD (as well as $HOME and some other directories), you are not calling what you're trying to call.
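A corrected %runscript along these lines should work (a minimal sketch; exec replaces the shell and "$@" forwards any arguments you pass to the container):
%runscript
# use the absolute path inside the container, not one relative to $PWD
exec /HelloWorld/HelloWorld "$@"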
You don't copy the executable outside of the container; you just need to make sure what you're executing is where you think it is.
There is no access to the host filesystem in %post, you should have everything you need copied in via %files first.
That's a reasonable workflow. Having a local private repo for the code is probably a good idea for tracking your changes, but that's your call.
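If you do go the compile-then-delete-the-source route you describe, a minimal sketch of the %post section could look like this (assuming /usr/local/bin as the install target; the package list is taken from your recipe):
%post
apt-get update
apt-get install -y qt5-default build-essential
cd /HelloWorld
qmake
make
# keep only the built binary; the source tree is removed afterwards
cp HelloWorld /usr/local/bin/HelloWorld
cd /
rm -rf /HelloWorld
The %runscript would then exec /usr/local/bin/HelloWorld instead.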

Related

Singularity and interior dynamic libraries

I am currently working on getting a bigger (C++) project inside a Singularity container. So far, everything works well, until I try to execute the container image, at which point it won't find a dynamic library file that I previously built inside the container:
./MyProject.img
/<some path>/MyExecutable: error while loading shared libraries: libmongocxx.so._noabi: cannot open shared object file: No such file or directory
My first thought was that maybe the process of building this dependency inside the container somehow did not succeed, so I added ls /usr/local/lib/ at the end of the %post section of my recipe to check on that, but everything there is fine:
+ ls /usr/local/lib/
[...]
libmongocxx.so
libmongocxx.so.3.6.0
libmongocxx.so._noabi
[...]
So my next thought was that maybe the basic library folder is for some reason not a part of the environment variables of my container, so I extended the %post section with
export PATH=$PATH:/usr/local/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
still to no avail.
Is there some property of Singularity containers I am missing here? Do I need to somehow extract the dynamic library file to outside of the container? Or did I make some stupid mistake I just can't see?
(I tagged the question only with singularity-container for now as I don't think this is anything specific to C++ here, but if somebody thinks otherwise feel free to add. My container uses Bootstrap: docker From: ubuntu:18.04, in case that is relevant.)
Edit: I also explicitly gave the dynamic libraries execution rights, just in case, and printed their permissions:
lrwxrwxrwx 1 root root 20 Sep 10 10:51 libmongocxx.so._noabi -> libmongocxx.so.3.6.0
Didn't work either.
My first guess is that your local environment is overwriting the variables in the image. You can use singularity run --cleanenv MyProject.img to prevent your current environment from persisting into the container. If there are variables you do want to pass in there, you can export SINGULARITYENV_SOMEVAR=foo to have SOMEVAR=foo set in the container environment.
If that doesn't do it, modify the %runscript to include an env | sort so you can see exactly what's set when it's attempting to run your code.
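For reference, a minimal sketch of both suggestions (SOMEVAR and foo are placeholders):
# run with the host environment stripped
singularity run --cleanenv MyProject.img
# pass one variable through explicitly: SOMEVAR=foo will be set inside
export SINGULARITYENV_SOMEVAR=foo
singularity run --cleanenv MyProject.img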

Getting an installed module to recognize changes to config files

I have a package that uses config.json for some settings it uses. I keep the package locally rather than installing it from CPAN. My problem is when I make changes to config.json, the package doesn't recognize the changes since the config file's cached elsewhere, forcing me to run zef install --force-install or delete precomp. How can I ensure that the package always recognizes updates to the config file?
When you install packages using zef, it keeps them in the filesystem, but their names are converted to SHA1 hashes, something like
/home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
zef keeps track of them, however, and you can locate them using zef locate, for instance:
zef locate lib/Zef/CLI.pm6
You can run that from a program, for instance this way:
sub MAIN( Str $file ) {
    my $location = qqx/zef locate $file/;
    my $sha1 = ($location ~~ /\s+ \=\> \s+ (.+)/);
    say "$file → $sha1[0]";
}
which will return pretty much the same, except it will give you the first location of the file you give it at the command line:
lib/Zef/CLI.pm6 → /home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
You probably need to install your config.json file in a resources directory (which is the preferred location) and then use something like that.
That said, actually installing a module you're still testing is probably not the best strategy. While you're testing, it's better to keep it in the directory you're working in and use perl6 -I<that directory>, or else use lib <that directory> in the code. You can delete that when you release, or keep it, since it only adds another directory to the search path and will not harm the released module.
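For example, assuming a conventional layout where the module code lives under lib/ and a script under bin/ (both paths are illustrative):
# run against the working copy instead of the installed module
perl6 -Ilib bin/my-script.p6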

What is Chef doing when I use the `s3cmd` recipe?

I am using Chef and this s3cmd cookbook.
As this tutorial says, I use knife to download and tar it. I actually made s3cmd work following the tutorial instructions, but I have problems understanding where exactly the installation of s3cmd happens.
Can anyone explain to me what Chef is doing when using the s3cmd recipe?
When you run chef-solo locally (or chef-client in a server setup), you are telling Chef to compile a set of resources defined in your cookbooks' recipes, which are then applied to the node you are running the command on.
A resource defines the tasks and settings for something you want to setup.
The run_list set for the node defines what cookbook recipes will be compiled.
s3cmd
In your case, your run_list will probably have been recipe[s3cmd].
This instructs Chef to look in the s3cmd cookbook and as you didn't give a specific recipe, it loads s3cmd/recipes/default.rb.
If you gave a specific recipe, like recipe[s3cmd::other], then Chef would load the file s3cmd/recipes/other.rb.
Chef will compile all the resources defined in the recipe(s) into a list and then run through the list, applying changes as required to your system.
What s3cmd::default does
First it installs some packages (via your distribution's package manager):
python, python-setuptools, python-distutils-extra, python-dateutil, s3cmd
Note: This is entirely different from what the README says about how s3cmd is installed! Always check!
Then it figures out where the config should go:
if node['s3cmd']['config_dir']
  home_folder = node['s3cmd']['config_dir']
else
  home_folder = node['etc']['passwd'][node['s3cmd']['user']]['dir']
end
Creates the .s3cfg config file from a template in the cookbook.
template "#{home_folder}/.s3cfg" do...
What else
Cookbooks can also define their own resources and providers to create more reusable cookbooks. The resource names and attributes will be defined in cookbook/resources/blah.rb. The code for each resource action will be in cookbook/providers/blah.rb
Code can be packaged in cookbook/libraries/blah.rb and included in other Ruby files.
Run chef-solo with the --log-level DEBUG option and step through the output. Try to identify the run-list compilation phase, and then where everything is being applied.
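A minimal sketch of such a run (solo.rb and node.json are the usual chef-solo configuration and attributes files; node.json would hold the run_list with recipe[s3cmd]):
chef-solo -c solo.rb -j node.json -l debug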

Class-Dump Installation

So this may sound like a really stupid question and I HAVE looked at the how-to from the parent website, but no matter what I do, I cannot get this program to even start to install...
I tried entering:
cd /opt/local/bin/portslocation/dports/class-dump
which returned a "this file/directory doesn't exist" error, so I tried to get to it folder by folder. When I got all the way to:
cd /opt/local/bin/
I cannot go any further. When I check the contents of the bin directory, the only files I can find are (and apparently I cannot access these either):
"daemondo port portf portindex portmirror"
I have tried doing this on two computers so far to no avail. MacPorts is installed on both like the website said, and I am having trouble finding any support for it. Please and thank you!
Unless you're trying to develop it, why screw around? Use Homebrew. As of today, and modulo a sudo here or there, you can install Homebrew with
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
If this doesn't work for you, check the command line at the top of the page on homebrew.†
After this, install class-dump with
brew install class-dump
Done.
† Ping me in the comments with "command line update needed", and I'll try to keep this in sync. :)
I had issues with this for hours, but it is actually quite simple.
I downloaded class-dump 3.3 from CodeTheCode, unzipped the file, and copied the class-dump file to a directory. In my case the directory was /opt/theos/bin.
In Terminal, I then cd to that directory using cd /opt/theos/bin
To run the class-dump command line utility, it is as simple as this:
./class-dump
You then need to give it its arguments; in my case I was using it to dump the iOS headers from the frameworks, so I used:
./class-dump -H /Developer/Library/iPhoneSimulator4.3.sdk/Frameworks/UIKit.framework/UIKit -o ~/Desktop/UIKit
I am not sure what you are using it for, but in the example above I am telling class-dump to dump header files from the given framework and output them to ~/Desktop/UIKit.
The theory is carried throughout.
Have you tried getting the binary from http://www.codethecode.com/projects/class-dump/?

Why does building from binary files not require root access?

When I am on my dept's server, I cannot use commands such as "apt-get install nethack". I have to build nethack from binary files to get it working, at least so I have been told. I cannot understand the reason. Why do I need to build things from binaries? Why is the use of commands such as "apt-get" forbidden? Why do I not need root access to build from binaries?
apt-get is a system-level command that installs packages for all users.
If you download and compile, you are only creating local "copies" of the binaries, not system-wide ones. If you tried to complete the install process with make install, this would most likely fail because you do not have sufficient privileges to install the program for all users (the same reason you can't run apt-get install).
When you compile a program from source, you can give it the '--prefix=~/' option. This causes it to install relative to your own home directory (so binaries typically end up in '~/bin', man pages in '~/man', etc.). This poses no problems because you already have permission to write there.
Apt-get, on the other hand, installs packages in the global filesystem ('/bin/', '/usr/bin/', etc.), which can impact other users and so, quite rightly, requires administrative access.
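As an illustration of the prefix approach (all paths here are examples only):
# build and install into your home directory; no root needed
./configure --prefix="$HOME/local"
make
make install
# make the newly installed binaries findable
export PATH="$HOME/local/bin:$PATH"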
If you want to install some program you can use the command
apt-get source app-name
This will work even if you are not root, since it only fetches the source code for app-name and puts it in the current directory. This is easier than having to track down the source yourself, and there is a better chance of getting it to work, since you download the version that should work on your system.
Alternatively, you should bug your sysadmin to install the programs you need, since that is their job (and if you need them, chances are that the rest of your team does too).
Because apt-get will install a program system wide.
The locations to which apt-get writes installed files (/bin, /usr/bin, ...) are restricted to root access. I imagine that when you build from source you're not executing the install step of the build. You're going to need to set a prefix for the installation so that the packages end up somewhere you can write. This thread talks a bit about setting prefixes for apt-get, and you'll probably want to set your prefix to something like
~/software/
and then add the resulting bin directories to your PATH.