How to set PATH variable in crontab via whenever gem - ruby-on-rails-3

Is it possible to set the PATH or SHELL variable in a crontab via the whenever schedule.rb file?
# here I want to set the PATH and SHELL variable somehow
every 3.hours do
# some cronjob
end
I want this output in my crontab after my capistrano deploy:
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# some cronjobs

OK, it seems I have found the solution. I found it here: https://gist.github.com/jjb/950975
I will update this answer once I have tested it.
I have to put this into my schedule.rb:
# If your ruby binary isn't in a standard place (for example if it's in /usr/local/bin,
# because you installed it yourself from source, or from a third-party package like REE),
# this tells whenever (or really, the rails runner) where to find it.
env :PATH, '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin'
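The same env call should also produce the SHELL line from the desired crontab output above. A sketch of the whenever DSL (the bash path and the exact PATH value are taken from the question, not tested here):
env :SHELL, '/bin/bash'
env :PATH, '/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11'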

You are already doing this where you run zenity, when you set DISPLAY, LANG, etc.
If you want to set the shell, set it in the first line of /home/username/script/script1.sh using #!/bin/bash.
If you want to set the path, one way to do it is to set it before running the command:
5 9-20 * * * PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11 /home/username/script/script1.sh > /dev/null
An alternative (and better) way is to create a simple wrapper script like so:
#!/bin/bash
export PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# Absolute path to this script
SCRIPT=$(readlink -f "$0")
# Absolute path of the directory this script is in
SCRIPTPATH=$(dirname "$SCRIPT")
# Make sure we are in the same directory as script1.sh - this is useful in case the script
# assumes it is running from the directory it lives in and makes relative directory/file references
cd "$SCRIPTPATH"
# Run the final script, passing through all parameters that were passed to the wrapper script
/home/username/script/script1.sh "$@"
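The cron entry then only needs to call the wrapper; the wrapper's path and name here are illustrative:
5 9-20 * * * /home/username/script/wrapper.sh > /dev/null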

Related

How to verify if in Singularity|Apptainer container?

According to the shell doc:
The change in prompt indicates that you have entered the container (though you should not rely on that to determine whether you are in container or not).
So other than the change in prompt, how should one determine whether they are in a container or not?
There are a few environment variables you can check for:
SINGULARITY_BIND - may still be empty if no binds/mounts are set
SINGULARITY_COMMAND - e.g., exec, shell, etc.
SINGULARITY_CONTAINER - path to the image on the host OS
SINGULARITY_ENVIRONMENT - usually /.singularity.d/env/91-environment.sh or something similar
SINGULARITY_NAME - filename of the singularity image
Alternatively, check for the existence of /.singularity.d/Singularity. Inside a singularity container, that file is a copy of the Singularity definition used when creating the image. In general, it is really unlikely for /.singularity.d to exist on a normal host OS unless someone did something really unusual.
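A minimal shell check combining the two hints above (the variable and the path are the ones listed; the messages are only for illustration):
if [ -n "$SINGULARITY_CONTAINER" ] || [ -d /.singularity.d ]; then
  echo "running inside a Singularity/Apptainer container"
else
  echo "running on the host"
fi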
One way to do this is by passing the --cleanenv argument in the singularity shell command and checking if the PATH variable is the same as your host user's PATH:
#add an arbitrary file location to your PATH variable and check that it is present for the host
export PATH=$PATH:/path/to/foo/bar
echo $PATH
#now open a shell in your container with --cleanenv to ignore the environment variables of the host - such as the PATH we just exported
singularity shell --cleanenv yourimage.sif
#check that /path/to/foo/bar is not in PATH in your container
echo $PATH

LMOD TCL execute bash script while loading module

I am having a small problem where you may be able to help me out. On our new cluster we use Lmod as the environment module system.
While creating a module TCL script for OpenFOAM, a system-dependent bashrc file needs to be sourced.
This is the TCL script I am using on another module system, where it works fine. I am not able to execute the "source" command line in Lmod - what am I missing here?
#%Module1.0#####################################################################
##
## modules software/openfoam_v1812
##
## /opt/software/openfoam/openfoamv1812/OpenFOAM-v1812
proc ModulesHelp { } {
global version modroot
puts stderr "software/OpenFOAM-v1812 - sets the Environment for OpenFOAM-v1812 (openfoam.com)"
}
module-whatis "Sets the environment for using OpenFOAM-v1812"
# for Tcl script use only
set VERSION v1812
set OpenFOAM_PATH /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}
set FOAM_INST_DIR /opt/software/openfoam/openfoam${VERSION}
puts stdout "source /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}/etc/bashrc;"
I am not an expert, but I have recently come across a similar problem, in my case for activating Anaconda Python in a module. In my case, the solution was to use the 'execute' command in Lmod:
https://lmod.readthedocs.io/en/latest/050_lua_modulefiles.html
which has the documentation:
execute {cmd="<any command>", modeA={"load"}}
Run any command with a certain mode. For example, execute {cmd="ulimit -s unlimited", modeA={"load"}} will run the command ulimit -s unlimited as the last thing that loading the module will do.
Hope this helps
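Applied to the OpenFOAM case above, the equivalent Lua modulefile (e.g. openfoam/v1812.lua) could look roughly like this; only a sketch, with the install path copied from the question:
-- sketch of an Lmod Lua modulefile; path taken from the question above
local version = "v1812"
local foam_dir = "/opt/software/openfoam/openfoam" .. version .. "/OpenFOAM-" .. version
execute{cmd="source " .. foam_dir .. "/etc/bashrc", modeA={"load"}}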

write outputs from a script run into singularity

I can't get the output of a script run through singularity.
I have a python script, at the end of which the output is saved with:
...
with open('saveOut.pkl','wb') as myFile:
pickle.dump(myTable,myFile)
I want to run this script with singularity on a remote machine. Since I am learning singularity, I made a 'sandbox' debian image (not compiled into a single 'img' file yet) in the directory /tmp/debian; into this image I copied the python script test.py under /usr/src, and I run it with the command:
sudo singularity exec /tmp/debian python3.5 /usr/src/test.py
The problem:
It works well as long as I only display results. With the pickle example described above, I don't get a saveOut.pkl file anywhere: the file is simply not written, but I don't see any error message. I tried writing an explicit path in the python script, for instance /usr/src/saveOut.pkl, but the result is the same.
How could I write a result ?
What was your expected result i.e. in which directory did you expect
to find the output file?
I expect a saveOut.pkl file anywhere, in the container or not; I don't care about the location. Currently I don't get it at all: not in the container's current directory, not in the container's /usr/src/, not on the host - nowhere.
Did you look for it on the host or in the container?
both, I don't see it anywhere
What's happening here is that your python script is writing the pickle file to its current location (/usr/src/ in the container). Then, since the output from your script is not persistent (due to the sandbox not being writable on execution), it gets deleted at the end of the run.
I believe you could change your script:
with open('/opt/saveOut.pkl','wb') as myFile:
pickle.dump(myTable,myFile)
and then bind the local directory and get the output you're looking for:
sudo singularity exec -B ./:/opt /tmp/debian python3.5 /usr/src/test.py
This worked for me, anyway.
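For reference, with the bind used above the pickle file should then show up in the directory you launched the command from on the host, e.g.:
ls -l ./saveOut.pkl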

How to define a variable in a Dockerfile?

In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at
build-time to the builder with the docker build command using the
--build-arg <varname>=<value> flag. If a user specifies a build
argument that was not defined in the Dockerfile, the build outputs an
error.
This can be useful with COPY at build time (e.g. copying tag-specific content such as specific folders).
For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
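If needed, the default can then be overridden at build time; the image name and value here are just examples:
docker build --build-arg myvalue=42 -t myimage .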
To my knowledge, only ENV allows that, as mentioned in "Environment replacement"
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
They have to be environment variables in order to be redeclared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in a Dockerfile, but in a container created for a Dockerfile line, hence the use of environment variables.
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV.
Using ARG alone means your variable is visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I typically use it to set the http_proxy variable, which docker build needs to access the internet at build time.
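For example (the proxy URL and image name are purely illustrative):
docker build --build-arg http_proxy=http://proxy.example.com:8080 -t myimage .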
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the "stage that it is used" and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument’s use on the command-line or elsewhere.
For example, consider this Dockerfile:
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3.
The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line.
Prior to its definition by an ARG instruction, any use of a variable results in an empty string.
An ARG instruction goes out of scope at the end of the build stage where it was defined.
To use an arg in multiple stages, each stage must include the ARG instruction.
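A minimal multi-stage sketch of that scoping rule (the variable name and echo commands are made up for illustration):
FROM busybox AS builder
ARG APP_VERSION=1.0
RUN echo "builder stage sees: $APP_VERSION"

FROM busybox
# APP_VERSION is out of scope here unless redeclared; without a default it stays empty
# unless --build-arg APP_VERSION=... is passed on the command line
ARG APP_VERSION
RUN echo "final stage sees: $APP_VERSION"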
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this with the official Ruby Dockerfile.
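For instance, something along these lines (a sketch; not taken verbatim from the Ruby Dockerfile):
RUN myvar="some value" \
	&& echo "first use: $myvar" \
	&& echo "second use: $myvar"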
You can use ARG variable=defaultValue, and when running the build command you can even update this value using --build-arg variable=value. To use these variables in the Dockerfile, you can refer to them as $variable in RUN commands.
Note: these variables are available to Linux commands such as RUN echo $variable, and they do not persist in the image.
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing it this way because we host private npm packages in AWS CodeArtifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV because I don't want to expose this token to anything else.

Do I simply delete the bashrc 'return' command?

I've been advised to remove the return command from my bashrc file in order to allow Ruby Version Manager to function properly. Do I simply delete the return command, or do I replace it with some other command? I am hesitant to mess with my system-wide shell configuration without some proper direction, but I would really like to get RVM working, as it is a time saver.
My bashrc is located in the /etc directory and looks like this:
# System-wide .bashrc file for interactive bash(1) shells.
if [ -z "$PS1" ]; then
return
fi
PS1='\h:\W \u\$ '
# Make bash check its window size after a process completes
shopt -s checkwinsize
if [[ -s /Users/justinz/.rvm/scripts/rvm ]] ; then source /Users/justinz/.rvm/scripts/rvm ; fi
The last line is an insert, as described in the RVM installation instructions.
I wouldn't. That return is probably there for a good reason: it ensures that nothing after it executes if the PS1 variable is empty (i.e., the shell is not interactive).
I would just move the inserted line up above the if statement.
In addition, if that's actually in the system-wide bashrc file, you should be using something like:
${HOME}/.rvm/scripts/rvm
rather than:
/Users/justinz/.rvm/scripts/rvm
I'm sure Bob and Alice don't want to run your startup script.
If it's actually your bashrc file (in /Users/justinz), you can ignore that last snippet above.
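In other words, the top of the file would look something like this (a sketch of the suggested reordering, using ${HOME} as above):
# System-wide .bashrc file for interactive bash(1) shells.
if [[ -s "${HOME}/.rvm/scripts/rvm" ]] ; then source "${HOME}/.rvm/scripts/rvm" ; fi
if [ -z "$PS1" ]; then
return
fi
# ... rest of the file unchanged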
The last line uses a file in a specific user's home directory, and as such should not be in the system-wide bashrc, since only root and that user will have access to that file. Best to place it in that user's ~/.bashrc instead.