How to verify if in Singularity|Apptainer container?

According to the shell doc:
The change in prompt indicates that you have entered the container (though you should not rely on that to determine whether you are in container or not).
So other than the change in prompt, how should one determine whether they are in a container or not?

There are a few environment variables you can check for:
SINGULARITY_BIND - may still be empty if no binds/mounts are set
SINGULARITY_COMMAND - e.g., exec, shell, etc.
SINGULARITY_CONTAINER - path to the image on the host OS
SINGULARITY_ENVIRONMENT - usually /.singularity.d/env/91-environment.sh or something similar
SINGULARITY_NAME - filename of the singularity image
Alternatively, check for the existence of /.singularity.d/Singularity. Inside a Singularity container, that file is a copy of the Singularity definition used when creating the image. In general, it is very unlikely for /.singularity.d to exist on a normal host OS unless someone did something really weird.
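A minimal shell sketch combining both checks (note: newer Apptainer releases may also expose APPTAINER_* equivalents of these variables; treat that as an assumption to verify for your version):
# takes the "inside" branch in a Singularity/Apptainer container, the "host" branch otherwise
if [ -n "$SINGULARITY_CONTAINER" ] || [ -d /.singularity.d ]; then
    echo "inside a container"
else
    echo "on the host"
fi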

One way to do this is by passing the --cleanenv argument to the singularity shell command and checking whether the PATH variable is the same as your host user's PATH:
# add an arbitrary location to your PATH variable and check that it is present on the host
export PATH=$PATH:/path/to/foo/bar
echo $PATH
# now open a shell in your container with --cleanenv to ignore the host's environment variables - such as the PATH we just exported
singularity shell --cleanenv yourimage.sif
# check that /path/to/foo/bar is not in PATH inside your container
echo $PATH

Related

JMeter Distributed Testing java.io.FileNotFoundException: rmi_keystore.jks (No such file or directory)

Setting up a distributed test with JMeter, I ran into this problem.
First of all, I'm aware that setting the JMeter property server.rmi.ssl.disable=true is a workaround.
Still, I'd like to see if it is possible to use this rmi_keystore.jks. The JMeter documentation
https://jmeter.apache.org/usermanual/remote-test.html is clear enough about setting up the environment, but doesn't mention at all how to specify the path to the rmi_keystore.jks, or whether this has to be the rmi_keystore.jks on the worker or the one on the controller.
I noticed that if you run a test with your machine acting as both worker and controller (as in https://www.youtube.com/watch?v=Ok8Cqc0wipk), setting the absolute path to the rmi_keystore.jks works.
E.g. server.rmi.ssl.truststore.file=C:\path\to\the\rmi_keystore.jks and server.rmi.ssl.keystore.file=C:\path\to\the\rmi_keystore.jks.
But this doesn't work when the controller has a different path to the rmi_keystore.jks than the worker.
My question is: how can I set the properties server.rmi.ssl.truststore.file and server.rmi.ssl.keystore.file to resolve the FileNotFoundException, given that the default values don't work?
Thank you, everyone.
You need to:
Generate the rmi_keystore.jks file on the master machine
Copy it to all the slaves
The default location (where JMeter looks for the file) is rmi_keystore.jks, that is, if you drop it into the "bin" folder of your JMeter installation on the master and the slaves, JMeter will find it and start using it.
The server.rmi.ssl.keystore.file property should be used if you want to customize the file name and/or location, so if it differs per machine you can either set a slave-specific location via the user.properties file or pass it via the -J command-line argument.
If the location is common to all the slaves and you want to override it in a single shot, provide it via the -G command-line argument.
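A minimal sketch of both options (the path /opt/jmeter/rmi_keystore.jks, the test plan plan.jmx, and the host names slave1, slave2 are illustrative assumptions):
# on a slave whose keystore lives in a non-default location:
jmeter-server -Jserver.rmi.ssl.keystore.file=/opt/jmeter/rmi_keystore.jks -Jserver.rmi.ssl.truststore.file=/opt/jmeter/rmi_keystore.jks
# from the controller, to push one common location to all slaves in one shot:
jmeter -n -t plan.jmx -R slave1,slave2 -Gserver.rmi.ssl.keystore.file=/opt/jmeter/rmi_keystore.jks -Gserver.rmi.ssl.truststore.file=/opt/jmeter/rmi_keystore.jks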
More information:
Configuring JMeter
Full list of command-line options
Apache JMeter Properties Customization Guide
You can use create-rmi-keystore.bat (or create-rmi-keystore.sh on Linux/macOS) to generate the rmi_keystore.jks. You will find it under the bin folder.
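The script is essentially a wrapper around keytool; a sketch of the equivalent manual command, per the JMeter remote testing documentation (adjust validity and password to your needs, but the alias must be rmi):
keytool -genkey -keyalg RSA -alias rmi -keystore rmi_keystore.jks -storepass changeit -validity 7 -keysize 2048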

How to dynamically set an ENV variable using a Dockerfile

I have a Dockerfile that has access to a variable that indicates the environment it is being targeted to. Our CI/CD pipeline makes this environment variable available to the Dockerfile, and I can test for a particular environment using a RUN instruction with a shell test on $Environment.
When I detect a "test" environment, I need to create another environment variable on-the-fly. However, code like this doesn't seem to work:
RUN if [ "$Environment" = "test" ]; then \
    ENV NewEnvironmentVariable="test" ; \
    fi
I get "ENV: not found" when it runs. So obviously, you can't use ENV this way within a RUN instruction.
I CAN, however, use shell commands to export the variable, but the export happens in a different context (the shell of that single RUN step), so the Dockerfile doesn't have access to it afterwards. I would have thought that exporting it would make the new environment variable available to the Dockerfile when it returns from the "if" block.
In short, I simply need to evaluate an existing environment variable and, if it contains the value I'm looking for, create a new ENV variable just as if I had written ENV MyNewVar=1.
Is this possible?
Thanks
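A common workaround, sketched here as an assumption rather than a confirmed answer: since ENV cannot be emitted conditionally, persist the derived value in a file during one RUN step and read it back in later steps. All names below (Dockerfile.demo, /etc/new_env_var) are illustrative:
cat > Dockerfile.demo <<'EOF'
FROM alpine
ARG Environment=dev
# persist the conditional value in a file at build time...
RUN if [ "$Environment" = "test" ]; then echo "test" > /etc/new_env_var; else echo "none" > /etc/new_env_var; fi
# ...and read it back in any later RUN step
RUN echo "NewEnvironmentVariable=$(cat /etc/new_env_var)"
EOF
docker build --build-arg Environment=test -f Dockerfile.demo .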

write outputs from a script run into singularity

I can't get the output of a script run through singularity.
I have a python script, at the end of which the output is saved with:
...
with open('saveOut.pkl', 'wb') as myFile:
    pickle.dump(myTable, myFile)
I want to run this script with Singularity on a distant machine. Since I am learning Singularity, I made a 'sandbox' Debian image (not yet compiled into a single 'img' file) in the directory /tmp/debian; into this image I copied the python script test.py at /usr/src, and I run it with the command:
sudo singularity exec /tmp/debian python3.5 /usr/src/test.py
The problem:
It works well as long as I only have displayed results. With the pickle example described above, I don't get any saveOut.pkl file anywhere: the file is just not written, and I don't see any error message. I tried writing an explicit path in the python script, for instance /usr/src/saveOut.pkl, but the result is the same.
How could I write a result ?
What was your expected result, i.e. in which directory did you expect
to find the output file?
I expect a file saveOut.pkl anywhere, in the container or not; I don't care about the location. Currently I don't get it at all: neither in the container's current directory, nor in the container's /usr/src/, nor on the host, nor anywhere else.
Did you look for it on the host or in the container?
both, I don't see it anywhere
What's happening here is that your python script writes the pickle file to its current working directory (/usr/src/ in the container). Then, since the output from your script is not persistent (the sandbox is not writable during execution by default), it is discarded at the end of the run.
I believe you could change your script:
with open('/opt/saveOut.pkl', 'wb') as myFile:
    pickle.dump(myTable, myFile)
and then bind the local directory and get the output you're looking for:
sudo singularity exec -B ./:/opt /tmp/debian python3.5 /usr/src/test.py
This worked for me, anyway.
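A possible alternative, sketched on the assumption that /tmp/debian really is a sandbox directory (this is not from the answer above): the --writable flag makes writes persist into the sandbox tree on the host:
sudo singularity exec --writable /tmp/debian python3.5 /usr/src/test.py
# saveOut.pkl should then survive inside the sandbox, e.g. under /tmp/debian/usr/src/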

How to define a variable in a Dockerfile?

In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at
build-time to the builder with the docker build command using the
--build-arg <varname>=<value> flag. If a user specifies a build
argument that was not defined in the Dockerfile, the build outputs an
error.
It can be useful with COPY at build time (e.g. copying tag-specific content, like specific folders)
For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
To my knowledge, only ENV allows that, as mentioned in "Environment replacement"
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
They have to be environment variables in order to be redeclared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in the Dockerfile, but in the container created for each Dockerfile line, hence the use of environment variables.
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV.
Using ARG alone means your variable is visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I use it typically for setting the http_proxy variable, which docker build needs for accessing the internet at build time.
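For instance (the proxy URL is an assumed example; docker treats http_proxy as one of its predefined build args, so no explicit ARG declaration should be needed):
docker build --build-arg http_proxy=http://proxy.example.com:3128 .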
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the "stage that it is used" and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument’s use on the command-line or elsewhere.
For example, consider this Dockerfile:
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3.
The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line.
Prior to its definition by an ARG instruction, any use of a variable results in an empty string.
An ARG instruction goes out of scope at the end of the build stage where it was defined.
To use an arg in multiple stages, each stage must include the ARG instruction.
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this with the official Ruby Dockerfile.
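A sketch of that pattern, wrapped in a throwaway build so it is runnable as-is (the file name Dockerfile.shellvar and the version string are illustrative):
cat > Dockerfile.shellvar <<'EOF'
FROM alpine
# a shell variable lives only within this single RUN instruction,
# so chain every command that needs it with &&
RUN version="1.2.3" && \
    echo "building for $version" && \
    echo "$version" > /opt/version.txt
EOF
docker build -f Dockerfile.shellvar .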
You can use ARG variable=defaultValue, and at build time you can update this value using --build-arg variable=value. To use these variables in the Dockerfile you can reference them as $variable in RUN commands.
Note: these variables are available to shell commands such as RUN echo $variable, and they don't persist in the image.
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing it because we host private npm packages in AWS CodeArtifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV because I don't want to expose this token to anything else.

How to set PATH variable in crontab via whenever gem

Is it possible to set the PATH or SHELL variable in a crontab via the whenever schedule.rb file?
# here I want to set the PATH and SHELL variable somehow
every 3.hours do
  # some cronjob
end
I want this output in my crontab after my capistrano deploy:
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# some cronjobs
OK, it seems I found the solution, here: https://gist.github.com/jjb/950975
I will update this answer once I have tested it.
I have to put this into my schedule.rb
# If your ruby binary isn't in a standard place (for example if it's in /usr/local/bin,
# because you installed it yourself from source, or from a third-party package like REE),
# this tells whenever (or really, the rails runner) where to find it.
env :PATH, '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin'
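SHELL can presumably be set the same way (env :SHELL, '/bin/bash' is my assumption; the gist only shows PATH). After updating schedule.rb, regenerate the crontab and inspect it:
bundle exec whenever --update-crontab
crontab -l    # the generated crontab should now begin with the PATH line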
You are already doing it when running zenity, where you set DISPLAY, LANG, etc.
If you want to set the shell, set it in the first line of /home/username/script/script1.sh using #!/bin/bash.
If you want to set the path, one way to do it is to set it before running the command:
5 9-20 * * * PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11 /home/username/script/script1.sh > /dev/null
An alternative/better way is to create a simple wrapper script like so:
#!/bin/bash
export PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# Absolute path to this script
SCRIPT=`readlink -f "$0"`
# Absolute directory this script is in
SCRIPTPATH=`dirname "$SCRIPT"`
# make sure we are in the same directory as script1.sh - this is useful in case the script
# assumes it is running from the same directory it's in and makes relative directory/file references
cd "$SCRIPTPATH"
# run the final script, passing through all parameters that were passed to the wrapper script
/home/username/script/script1.sh "$@"
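Then point the cron entry at the wrapper instead of script1.sh (the wrapper path below is hypothetical):
5 9-20 * * * /home/username/script/wrapper.sh > /dev/null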