I'm having a small problem where you might be able to help me out. On our new cluster we use Lmod as the environment module system.
When creating a module TCL script for OpenFOAM, a system-dependent bashrc file needs to be sourced.
This is the TCL script I am using on another module system, where it works fine. In Lmod I am not able to execute the "source" command line - what am I missing here?
#%Module1.0#####################################################################
##
## modules software/openfoam_v1812
##
## /opt/software/openfoam/openfoamv1812/OpenFOAM-v1812
proc ModulesHelp { } {
    global version modroot
    puts stderr "software/OpenFOAM-v1812 - sets the Environment for OpenFOAM-v1812 (openfoam.com)"
}
module-whatis "Sets the environment for using OpenFOAM-v1812"
# for Tcl script use only
set VERSION v1812
set OpenFOAM_PATH /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}
set FOAM_INST_DIR /opt/software/openfoam/openfoam${VERSION}
puts stdout "source /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}/etc/bashrc;"
I am not an expert, but I recently came across a similar problem, in my case for activating Anaconda Python in a modulefile. The solution for me was to use the 'execute' command in Lmod:
https://lmod.readthedocs.io/en/latest/050_lua_modulefiles.html
which has the documentation:
execute {cmd="<any command>", modeA={"load"}}
Run any command with a certain mode. For example, execute {cmd="ulimit -s unlimited", modeA={"load"}} will run the command ulimit -s unlimited as the last thing that loading the module will do.
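Translated to your case, a Lua modulefile (say, openfoam/v1812.lua) could look roughly like the sketch below; the paths are taken from your Tcl script, and execute is the function documented above. One caveat: variables set by the sourced bashrc are not tracked by Lmod, so they will not be cleaned up on module unload.

-- openfoam/v1812.lua - a minimal sketch, paths copied from the Tcl modulefile
help([[software/OpenFOAM-v1812 - sets the Environment for OpenFOAM-v1812 (openfoam.com)]])
whatis("Sets the environment for using OpenFOAM-v1812")

local version  = "v1812"
local inst_dir = "/opt/software/openfoam/openfoam" .. version
local bashrc   = inst_dir .. "/OpenFOAM-" .. version .. "/etc/bashrc"

setenv("FOAM_INST_DIR", inst_dir)

-- run "source .../etc/bashrc" in the user's shell as the last step of loading
execute{cmd="source " .. bashrc, modeA={"load"}}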
Hope this helps
I'm using Singularity to run Python in an environment deprived of Python. I'm also running a MySQL instance as explained by Iowa State University (running an instance of MySQL, and closing it when done).
For clarity: I'm using a bash script to start MySQL, then do what I have to do (a Python script), and close MySQL, and that works fine. But Python's only way to stop when an error occurs is sys.exit([value]), and this not only stops the Python script but also the bash script that ran it. That makes it impossible for me to handle errors and close the MySQL instance if the Python script exits.
My question is: is there a way for me to execute 'singularity instance stop mysql' while inside the Python sandbox? Something to tell Singularity, "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I tried executing it with subprocess like any other command, but it returned an error message because the instance doesn't exist inside the Python sandbox. I don't even have Singularity in the sandbox.
For any clarifications, just ask me; I'm trying to be clear, but I'm pretty sure it's not very clear.
Thanks a lot!
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (Docker or Singularity) but run in the host OS's namespace.
If the bash script is exiting on the Python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but it can cause problems like this at times. To bypass it for the Python step, you can modify your script:
# start mysql, do some stuff
set +e   # disable abort on non-zero return
python my_script.py
set -e   # re-enable abort on non-zero
# shut down mysql, do other stuff
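If you also want the overall job to fail after the cleanup, one option (a sketch - the image and instance names are made up) is to remember Python's exit code and re-raise it once MySQL is stopped:

#!/bin/bash -e
# start the instance (image and instance names are illustrative)
singularity instance start mysql.sif mysql
set +e                      # let the python step fail without killing the script
python my_script.py
status=$?                   # remember python's exit code
set -e
singularity instance stop mysql   # cleanup always runs
exit "$status"                    # propagate the failure (0 on success)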
I'm using WSL and ConEmu build 180506. I'm trying to set up a task in ConEmu to use the current directory of the active tab when opening a new console, but I cannot get it to work.
What I did was set up the task {Bash: bash} using the instructions on this page,
setting the task command as:
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -C~ -cur_console:pm:/mnt
Then, following the instructions on this page, I added this to my .bashrc:
# tell ConEmu the shell's current directory (OSC 9;9) and mark the prompt (9;12)
if [[ -n "${ConEmuPID}" ]]; then
    PS1="$PS1\[\e]9;9;\"\w\"\007\e]9;12\007\]"
fi
and finally set up a shortcut using the macro:
Shell("new_console", "{bash}", "", "%CD%")
But it always opens the new console in the default directory (/home/[username]).
I don't understand what I'm doing wrong.
I also noticed that a lot of environment variables listed here are not set. Basically, only $ConEmuPID and $ConEmuBuild seem to be set.
Any help would be appreciated.
The GuiMacro Shell was intended to run certain commands, not tasks.
You might try running the macro Task("{bash}","%CD%") instead.
Or set your {bash} task parameters to -dir %CD% and just set a hotkey for your task.
Of course, both methods require working CD acquisition from the shell. That seems to be OK in your case - %d shows the proper folder.
I found the answer:
Shell("new_console:I", "bash.exe", "", "%CD%")
The readme is actually pretty good: https://github.com/cmderdev/cmder/blob/master/README.md
Please note that this is an AIX-related question.
I have a Jenkins server running on Red Hat which runs a node via SSH on an AIX server.
The commands are run non-interactively over SSH as a user on the AIX machine whose standard shell is ksh.
The problem is that this build needs a number of environment variables, and I can't seem to get them set.
I have tried the following:
Jenkins allows me to set some environment variables for the session, so I tried:
ENV="$HOME/.profile"
I also tried creating a .kshrc file containing:
. .profile
But neither approach seems to make ksh run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute the commands in a ksh shell.
By default, Jenkins runs them as sh <commands>.
Add a shebang as the first line of your shell command:
#!/bin/ksh
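With that in place, a build step might look like this sketch (the build command is a placeholder):

#!/bin/ksh
# the shebang above makes Jenkins run this step under ksh instead of sh
. "$HOME/.profile"    # load the environment the build needs
env | sort            # sanity check: confirm the expected variables are set
make build            # placeholder for the real build command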
Most shells don't source their .profile files for non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over ssh
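For example, the full invocation might look like this (host and user are placeholders):

ssh builduser@aixhost '. ~/.profile; yourcommand1; yourcommand2'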
UPDATE after reading the comment about Jenkins controlling the ssh command:
In case your ssh command is performed by Jenkins, you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script, and adjust the JavaPath (also on the node's configure page) to point to that wrapper - see the sketch below.
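Such a wrapper could look roughly like this (the wrapper path and the real java location are illustrative):

#!/bin/ksh
# hypothetical wrapper, e.g. /usr/local/bin/java-with-profile
. "$HOME/.profile"            # load the user's environment first
exec /usr/java/bin/java "$@"  # then hand off to the real java binary (adjust the path)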
The only way I know of to set environment variables that apply to non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.
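For example, a line in /etc/environment (the variable name is made up):

# /etc/environment takes plain NAME=value lines; this one is illustrative
BUILD_HOME=/opt/build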
Take a look at Get puppet build to fail when the contained SQL script fails execution
I was attempting to run a Vagrant build which installs Oracle XE in an Ubuntu VirtualBox VM and then runs an SQL script to initialize the Oracle schema. The Vagrant build is here: https://github.com/ajorpheus/vagrant-ubuntu-oracle-xe. The setup.sql is run as part of the oracle module's init.pp (right at the bottom, or search for 'oracle-script').
When running the SQL script as a part of the vagrant build, I see the following error:
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: Error 6 initializing SQL*Plus
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0667: Message file sp1<lang>.msb not found
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
There were two things that were instrumental in finding a workaround for the problem:
As suggested in this answer, setting the logoutput attribute to true for the exec block in question immediately showed me the error, whereas before the exec was just failing silently.
It seemed strange that I was able to run the command (sqlplus system/manager@xe < /tmp/setup.sql) after manually logging in as the 'vagrant' user. That suggested something was missing from the environment. Therefore, I copied all ORACLE env vars into the exec, as seen on Line 211 here.
That worked; however, setting up the env vars manually seems a bit brittle. Is there a better way to set up the Oracle environment for the vagrant user? Or is there a way to get Puppet to set up the environment for the vagrant user, similar to an interactive shell?
If some profile has been set up to give the user a working interactive shell, you should be able to pass your action through such a shell:
command => 'bash -i -c "<actual command>"'
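In Puppet, that could look roughly like this sketch (the resource name and sqlplus command are copied from the question; the rest is illustrative):

exec { 'oracle-script':
  command   => 'bash -i -c "sqlplus system/manager@xe < /tmp/setup.sql"',
  user      => 'vagrant',
  logoutput => on_failure,
}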
As an aside about logoutput, since you mentioned it: the documentation advises that on_failure is a sane default, as it will only bloat your output when there are actual errors to analyze. It is the actual default in the latest versions of Puppet.
I want to pass a system-wide variable to Apache so I can pass it on to executed scripts using PassEnv. Basically, a script executed by Apache runs a shell script, and that shell script won't run without the variable being set.
But the Ubuntu devs did this in the startup script:
ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"
resulting in the variables from /etc/environment being discarded. Can I fix this without modifying the startup script?
Turns out you can pass along vars in /etc/apache2/envvars. Still sucks, though.
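For example (the variable name is made up), an export added to /etc/apache2/envvars survives the env -i reset, because apache2ctl sources that file afterwards, and the variable can then be forwarded to scripts with PassEnv:

# in /etc/apache2/envvars
export MY_APP_SETTING="some-value"

# in the Apache config (server or vhost context)
PassEnv MY_APP_SETTING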
Nope. The value stays empty.