%files section of Singularity recipe non-intuitively copies files to wrong bind location

I am working on CentOS 8 and am using Singularity 3.6.2. I have a Singularity recipe file:
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
/gpfs0/home1/group/user/path/to/some.rpm /tmp
%post
ls /tmp
echo "Hello from inside the container"
When I run:
$ sudo singularity build test.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
INFO: Copying /gpfs0/home1/group/user/path/to/some.rpm to /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp
qtsingleapp-RStudi-c679-6387e228-lockfile
rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
rootfs-b10ad12c-229a-11eb-85a3-34800d2d90f0
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...
According to the Singularity documentation:
In the default configuration, the system default bind points are $HOME, /sys:/sys, /proc:/proc, /tmp:/tmp,
Question:
Why is the %files section putting my rpm in /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp and not in /tmp? That seems to contradict the documentation, and it is also different from the behavior I observed with Singularity v2.5.1.
Also, how would I access said file? The long 'hash-like' part of the path seems to change with each build.

I don't have an answer reconciling the documentation with where the %files section is actually putting the files; however, I do have an answer for how to access the copied files. You need to use ${SINGULARITY_CONTAINER} in the %post section.
E.g.
$ cat Singularity
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
# Will need to use environment variables in %post to access the copied file
/gpfs0/home/group/user/path/to/some.rpm /tmp
%post
ls ${SINGULARITY_CONTAINER}/tmp
echo "Hello from inside the container"
Building this yields:
$ sudo singularity build tmp.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0
INFO: Copying /gpfs0/home/group/user/path/to/some.rpm to /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
some.rpm
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...
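An alternative that avoids the shadowing altogether (a sketch, assuming /opt has not been added as an extra bind point in your singularity.conf): copy the file to a destination that is not one of the default bind points quoted above, e.g. /opt, so it is visible at the same path both during %post and in the finished image.
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
# /opt is not shadowed by a host bind during build, unlike /tmp
/gpfs0/home/group/user/path/to/some.rpm /opt
%post
ls /opt
echo "Hello from inside the container"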

Related

How to turn a singularity sandbox container into a sif file? (while preserving the sandbox)

I have built a Singularity sandbox container using this command:
sudo singularity build --sandbox ubuntu/ library://ubuntu
Now I would like to copy/export this container as a SIF file, but I cannot find how to do this in the documentation.
Any ideas?
OK, so by reading the doc more carefully: converting the sandbox into a SIF file is simply another build, with the sandbox directory as the build target (the sandbox itself is left untouched):
sudo singularity build ubuntu.sif ubuntu/
INFO: Starting build...
INFO: Creating SIF file...
INFO: Build complete: ubuntu.sif
See https://docs.sylabs.io/guides/3.5/user-guide/build_a_container.html#converting-containers-from-one-format-to-another
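To illustrate (a sketch; the touch step is just a stand-in for whatever changes were made in the sandbox):
sudo singularity build --sandbox ubuntu/ library://ubuntu
sudo singularity exec --writable ubuntu/ touch /added-file   # example change inside the sandbox
sudo singularity build ubuntu.sif ubuntu/                    # snapshot the sandbox into a SIF
singularity exec ubuntu.sif ls /added-file                   # the change is baked into the SIF
ls -d ubuntu/                                                # the sandbox directory is still there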

Append the package.json version number to my build artifact in aws-codebuild

I really don't know if this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)
and set it in a variable (it doesn't work).
I'm ending up with a build folder literally called "my-project-$(version.sh)".
The CodeBuild environment uses Ubuntu and Node.js.
Update (solved):
My version.sh file:
#!/usr/bin/env bash
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
Then I just found out two things:
Give execute permission to your version.sh file (so CodeBuild can run it):
git update-index --add --chmod=+x version.sh
Declare a variable in any phase of the buildspec; I did it in the build phase (just to make sure the repository has already been copied into the environment):
TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the versioned artifact name:
artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG
As a result, my build artifact's name is: myproject-1.0.0
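As an aside, since the build image already has Node.js, a simpler version.sh that avoids parsing JSON with a regex could look like this (a sketch, not what I used above):
#!/usr/bin/env bash
# hypothetical alternative to the sed one-liner: let Node read package.json itself
node -p "require('./package.json').version"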
In my case the script did not want to fetch the data from package.json. On my local machine it worked great, but on AWS it didn't. I also had to use chmod in a different way, because I got a message that I don't have the right permissions. My buildspec:
version: 0.2
env:
  variables:
    latestTag: ""
phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"
artifacts:
  files:
    - '**/*'
  discard-paths: yes
And the result in the console: [screenshot of the CodeBuild console output]
I should also note that when I put just, for example, echo 222 into the version.sh file, I got the right answer in the CodeBuild console.
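If the extraction itself ever behaves differently between machines, one thing to rule out is the sed dialect (presumably why the buildspec above prints sed --version): \s and -r are GNU extensions, so a more portable variant of the same command would be, as a sketch:
# hypothetical, more portable variant: POSIX character class instead of \s, -E instead of -r
sed -nE 's/^[[:space:]]*"version": "([0-9]+\.[0-9]+.*)",$/\1/p' package.json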

Read host system's environment variables build time in Singularity

When I'm building a Singularity container I'd like to read environment variables from the host system in the %post section. I've been looking online for a way to achieve this, but to no avail. I'm starting to question if this is even possible at the moment, but I can't find any mentions of it being possible/impossible.
Example:
Singularity definition file: recipe
BootStrap: docker
From: continuumio/anaconda3
%runscript
%post
echo $TEST_ENV_VARIABLE
On the host system / OS
export TEST_ENV_VARIABLE='foo'
sudo singularity build test.sif recipe
prints only a blank line when echoing TEST_ENV_VARIABLE.
If there is no way of reading host system's environment variables in the %post section, are there any other ways of passing arguments into the recipe that could be used build-time?
That is not currently possible, though there is an open issue for that functionality. I'm not personally a fan of dynamic build options as it makes it harder to guarantee reproducibility.
If you do want something more dynamic, you could use a template to create different definition files. A very simplistic example:
$ cat gen_def.py
#!/usr/bin/env python3
import sys
my_def = """BootStrap: docker
From: continuumio/anaconda3
%post
echo This is {0}
echo This is {1}"""
print(my_def.format(*sys.argv[1:]))
$ ./gen_def.py one two > Singularity.custom
$ sudo singularity build test.sif Singularity.custom
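If you would rather not involve Python, the same kind of templating can be done with a plain shell heredoc; a minimal sketch of an equivalent generator:
#!/usr/bin/env bash
# hypothetical shell equivalent of gen_def.py: substitute two values into a definition file
ONE="$1"
TWO="$2"
cat > Singularity.custom <<EOF
BootStrap: docker
From: continuumio/anaconda3
%post
echo This is ${ONE}
echo This is ${TWO}
EOF
sudo singularity build test.sif Singularity.custom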

Jython 2.7b4 will not webstart: ImportError: No module named site

I injected my application into jython-standalone-2.7-b4.jar. Here is what I see in the webstart console of the client machine when I webstart my app:
Java Web Start 11.31.2.13
Using JRE version 1.8.0_31-b13 Java HotSpot(TM) 64-Bit Server VM
User home directory = C:\Users\me
...
#### Java Web Start Error:
#### null
When I click on 'Details', I find the following exception stack trace:
ImportError: No module named site
at org.python.core.ImportError(Py.java:328)
at org.python.core.imp.import_first(imp.java:842)
at org.python.core.imp.load(imp.java:695)
at org.python.util.PythonInterpreter.<init>(PythonInterpreter.java:118)
at org.python.util.PythonInterpreter.<init>(PythonInterpreter.java:94)
at org.python.util.InteractiveInterpreter.<init>(InteractiveInterpreter.java:39)
at org.python.util.InteractiveInterpreter.<init>(InteractiveInterpreter.java:28)
at org.python.util.InteractiveConsole.<init>(InteractiveConsole.java:67)
at org.python.util.InteractiveConsole.<init>(InteractiveConsole.java:53)
at org.python.util.InteractiveConsole.<init>(InteractiveConsole.java:33)
...
Jython 2.7b1 worked for me. I tried Jython 2.7b3, but that fails as well.
Due to class loader differences in the Web Start environment, Jython can't find the files in the Lib directory inside the standalone jar file. To fix this, you need to rearrange the contents of the standalone jar a bit. The following script will convert a standalone jar into something workable in a Web Start environment:
#!/bin/sh
# Converts a Jython standalone jar into something usable with Java Web Start
if [ -z "$1" ]; then
echo "Please give the path to the standalone jar as the first argument."
exit 1
fi
CURRDIR=$(pwd)
JAR_PATH="$CURRDIR/$1"
CONVERTED_JAR_PATH="$CURRDIR/jython-webstart.jar"
TEMPDIR=$(mktemp -d)
cd "$TEMPDIR"
jar xf "$JAR_PATH"
rm -rf Lib/test # including Jython's own unit tests is pointless
java -jar "$JAR_PATH" -m compileall Lib
find Lib -name "*.py" -delete
mv Lib/* .
rmdir Lib
jar cf "$CONVERTED_JAR_PATH" *
rm -rf "$TEMPDIR"
Next you'll probably run into Jython bug 2283. To work around it, set the python.home system property to point to any existing directory.
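If I remember the JNLP syntax correctly, that property is set with a <property name="python.home" value="/some/existing/dir"/> element under <resources> in the JNLP file (the application has to be signed and trusted; a sandboxed app can only set properties prefixed with jnlp. or javaws.). For reference only, the equivalent on a plain command line, using the original standalone jar, would be something like:
# illustration only: python.home just needs to point at any existing directory
java -Dpython.home=/tmp -jar jython-standalone-2.7-b4.jar -c "print 'hello'"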

rpmbuild: Installed (but unpackaged) file(s) found - Multiple options tried

This is getting rather maddening - I'm trying to build an RPM out of some BASH scripts which work as Nagios plugins. I keep getting:
error: Installed (but unpackaged) file(s) found:
/usr/lib64/nagios/plugins/netappassigncheck
/usr/lib64/nagios/plugins/netappassignprep
In the %files directive of my spec file I have tried most of the combos that have been suggested here and on various other internet forums:
/usr/lib/nagios/plugins/*
/usr/lib/nagios/plugins/netappassigncheck
/usr/lib/nagios/plugins/netappassignprep
%dir /usr/lib/nagios/plugins/
And currently I am on
%dir %{_libdir}/nagios/plugins/
This is why my most recent error output shows lib64; earlier errors, when I quoted the full path, showed /usr/lib/...
These are also the only two files that should make up the package.
Here is my .spec file
Name: netappautoassign
Summary: A set of Nagios Plugins for automatically assigning disks to a Netapp
Version: 1.0
Release: 1
License: %{license}
Group: Applications/System
Source: %{source}
URL: Reserved
Vendor: %{vendor}
Packager: %{packager}
BuildArch: noarch
Requires: bash, grep, util-linux, coreutils, expect, openssh-clients, bc, sed
Provides: netappassignprep, netappassigncheck
%description
Since Netapp's autoassign function may lead to disks being assigned to the
wrong head these NAGIOS plugins will ensure disks are added to the correct
head when replaced.
%prep
%setup -q
%build
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_libdir}/nagios/plugins
cp netappassigncheck %{buildroot}%{_libdir}/nagios/plugins/
cp netappassignprep %{buildroot}%{_libdir}/nagios/plugins/
%files
%defattr(755,root,root,755)
%dir %{_libdir}/nagios/plugins/
%clean
rm -rf %{buildroot}
%post
And here's my ~/.rpmmacros
%_topdir %(echo $HOME)/rpmbuild
%_tmppath %{_topdir}/tmp
%buildroot %{_tmppath}/%{name}-%{version}
%license RESERVED
%source %{name}-%{version}.tar.gz
%vendor REDACTED
%packager REDACTED
EDIT - SOLVED
I'm not sure if this is a bug or desired behaviour, but it would appear that during the build section the %{buildroot} variable was not being read in from .rpmmacros. Having moved this variable into the main spec file, the RPM now builds.
I'm not sure if this is a bug or desired behaviour, but it would appear that during the file verification section, it was reading in all the current active plugins under the root file system and not the %{buildroot}.
I suspected that the %{buildroot} variable was not being read in from .rpmmacros at this stage, although it was for all other stages.
I moved the declaration of %{buildroot} into my main .spec file and the build is now working!
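For anyone hitting the same "Installed (but unpackaged) file(s)" error with a similar spec: independent of the buildroot issue, that check is usually satisfied by listing the files themselves in %files rather than only the directory. A sketch of what that part of this spec could look like (paths as used in the %install section above):
%files
%defattr(755,root,root,755)
%dir %{_libdir}/nagios/plugins/
%{_libdir}/nagios/plugins/netappassigncheck
%{_libdir}/nagios/plugins/netappassignprep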