Execute a shell command outside of a sandbox while in a sandbox - singularity-container

I'm using Singularity to run Python in an environment that has no Python installed. I'm also running a MySQL instance as explained by Iowa State University (starting an instance of MySQL, and closing it when done).
For clarity: I'm using a bash script to start MySQL, then do what I have to do (a Python script), and then close MySQL, and that works fine. But the only way for Python to stop when an error occurs is sys.exit([value]), and this not only stops the Python script but also the bash script that ran it. This makes it impossible for me to handle the error and close the MySQL instance if the Python script exits.
My question is: is there a way for me to execute a 'singularity instance stop mysql' while inside the Python sandbox? Something to tell Singularity "hey, this command must be run on the host!"?
I keep searching but can't find anything.
I only tried to execute it with subprocess like any other command, but it returned an error message because I don't have this instance inside the python sandbox. I don't even have singularity in this sandbox.
For any clarifications, just ask me, I'm trying to be clear but I'm pretty sure it's not very clear.
Thanks a lot !

Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e # disable abort on non-zero return
python my_script.py
set -e # re-enable abort on non-zero
# shut down mysql, do other stuff
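A more robust pattern is to register the instance shutdown in a trap, so MySQL is stopped however the Python step ends. A minimal sketch, assuming the instance was started with singularity instance start and is named mysql (the image name mysql.simg is a placeholder, and the exact instance sub-command spelling varies between Singularity versions):
#!/bin/bash
set -e

# stop the MySQL instance whenever this script exits, whether the
# Python step succeeded or failed
cleanup() {
    singularity instance stop mysql
}
trap cleanup EXIT

singularity instance start mysql.simg mysql
singularity exec instance://mysql python my_script.py
# further processing; cleanup still runs on any exit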

Related

SSH command step not working for one command - in Jmeter

I have a unique problem using the JMeter SSH Command sampler.
I use this step to run Spark jobs.
The problem is that one of the commands is not working: it connects but gets no response, and just waits for hours with nothing displayed on screen.
I know how to work with the tool, and this behavior happens for this one script alone.
All other scripts worked; I duplicated one that worked, for example:
sudo /run_stg.sh (this command worked)
sudo /run_off2-stg.sh (this command did not work)
If I run the job manually via Jenkins, it works.
If I go to the command line and use plik ssh, it works.
The problem is just JMeter, which keeps waiting and waiting, and I cannot understand why.
The job takes about 3 minutes, and I have waited for a response in JMeter for 4 hours with nothing; JMeter just waits.
In the console log I set the level to trace and got nothing; I have absolutely no idea how to start handling this issue in JMeter.
Can anyone please advise how to make JMeter write out what happened?
Or at least how to tell whether it connected, or anything?
Because of this behavior, the whole test cannot be performed.
Most probably you are as usual misconfiguring the SSH Command sampler.
The idea is not to run the script per se; you need to delegate the script execution to a Unix shell, for example Bash. This way you will be able to combine several commands together, see the output, amend the debugging level, etc.
So I would recommend setting your command to something like /bin/bash -c -x /your/script.sh
Another guess: given you use sudo, it might be the case that the sudo command is simply waiting for a password (which JMeter never provides). If so, try amending your script permissions using the chmod command and allowing your user to execute it without root privileges, as sketched below.
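For instance (a sketch reusing the script path from the question; whether dropping sudo is acceptable depends on what the script actually needs):
# make the script executable for your user so sudo is no longer required
chmod +x /run_off2-stg.sh
# then point the SSH Command sampler at it without sudo, e.g.
/bin/bash -x -c '/run_off2-stg.sh'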
And finally, given you're able to run your command using "plik ssh" (whatever that is), you can run it using the OS Process Sampler.
More information: How to Run External Commands and Programs Locally and Remotely from JMeter

Is there a better way than this to run an SQL script through puppet?

Take a look at Get puppet build to fail when the contained SQL script fails execution
I was attempting to run a vagrant build which installs Oracle XE in an Ubuntu VirtualBox VM and then runs an SQL script to initialize the Oracle schema. The vagrant build is here: https://github.com/ajorpheus/vagrant-ubuntu-oracle-xe. The setup.sql is run as a part of the oracle module's init.pp (right at the bottom, or search for 'oracle-script').
When running the SQL script as a part of the vagrant build, I see the following error:
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: Error 6 initializing SQL*Plus
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0667: Message file sp1<lang>.msb not found
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
There were two things that were instrumental in me finding a workaround for the problem:
As suggested in this answer, setting the logoutput attribute to true for the exec block in question immediately showed me the error, whereas before the exec was just failing silently.
It seemed strange that I was able to run the command (sqlplus system/manager@xe < /tmp/setup.sql) after manually logging in as the 'vagrant' user. That suggested that there was something missing in the environment. Therefore, I copied all the ORACLE env vars into the exec, as seen on Line 211 here.
That worked; however, setting up the env vars manually seems a bit brittle. Is there a better way to set up the ORACLE environment for the vagrant user? Or is there a way to get Puppet to set up the environment for the vagrant user similar to an interactive shell?
If some profile has been set up to give the user a working interactive shell, you should be able to pass your action through such a shell
command => 'bash -i -c "<actual command>"'
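Concretely, with the command quoted in the question, the suggestion amounts to something like this (the connect string and script path are the ones from the question):
# an interactive bash reads the user's shell startup files first, so the
# ORACLE environment variables defined there are set before sqlplus runs
bash -i -c "sqlplus system/manager@xe < /tmp/setup.sql"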
As an aside about logoutput, since you mentioned it: the documentation advises that "on_failure" is a sane default, as it will only bloat your output when there are actual errors to analyze. It is the actual default in the latest versions of Puppet.

Merge multiple stdout/stderr into one stdout

I have a development stack with multiple processes running: web server, auto-testing, compilation in the background, etc. All of these are basic command-line commands such as node app.js or lein midje :autotest.
Is it possible with one script to run all these processes in "background" and merge their outputs into one stdout (that is: to show it on the screen in terminal)?
One of the problems with the easy bash solution I found (using &) is that on Ctrl+C the background processes are obviously kept alive, which is not desirable.
I have tried adding trap 'kill $(jobs -pr)' SIGINT SIGTERM EXIT but this doesn't seem to work reliably on OS X - surprisingly the node processes get killed, but the java ones are still living after the script exits (via Ctrl+C).
I can use any scripting language. I would prefer pure bash or JS, but Python or Ruby are OK too.
I would also like the ANSI escape colouring to be preserved in the merged output.
You might use the multitail utility. It not only allows you to tail log files, but also the output of arbitrary CLI programs (lein run, lein midje :autotest, ...).
Example:
$ multitail --mergeall -cT ANSI -l "lein midje :autotest" -cT ANSI -l "lein ring server-headless"
Ctrl-C then kills all processes which are being tailed.
If you are an OS X user, you can install multitail using brew install multitail (assuming you already have Homebrew installed; if not, see http://mxcl.github.io/homebrew/).
In order to get more info about multitail configuration you might read man multitail. There are also usage examples at http://www.vanheusden.com/multitail/index.php
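For the processes from the question, the invocation would look along these lines (same options as above, with node app.js and lein midje :autotest as the tailed commands):
$ multitail --mergeall -cT ANSI -l "node app.js" -cT ANSI -l "lein midje :autotest"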

Can't write in tcl command line unless sending echo, after using PLINK

I'm having a confusing problem running a variety of programs from a Tcl script. Here's the story: I have a script (in Tcl) which executes plink to establish a remote connection to a Linux computer. I basically use eval to call plink, passing as parameters some SSH commands and info, plus a bash file to be executed on the Linux computer.
So far, that works fine, or at least it does what I intend it to do. The issue is that after calling this procedure, my prompt stops working normally. I can type, but nothing appears on screen unless the command I send is "echo" (without the quotes). If so, I get the "ECHO is on" message and the prompt goes back to working normally.
Does anyone have any idea why this could be happening? I thought about just patching it and adding the "echo" command inside my script, but it says it's an invalid command in that case...
Well, thanks for the help!

Apache2 PassEnv on Ubuntu

I want to pass a system-wide variable to Apache so I can pass it to executed scripts using PassEnv. Basically, a script executed by Apache executes a shell script, and that shell script won't run without the variable being set.
But Ubuntu devs did this in the startup script:
ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"
This results in variables from /etc/environment being discarded. Can I fix this without modifying the startup script?
Turns out you can pass along vars in /etc/apache2/envvars. Still sucks though.
Nope. The value stays empty.
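To illustrate the /etc/apache2/envvars route mentioned above (a sketch; MY_VAR is a placeholder for the variable in question):
# /etc/apache2/envvars -- sourced by apache2ctl and the Debian/Ubuntu init script
export MY_VAR="some value"
Then reference it with PassEnv MY_VAR in the Apache configuration and restart Apache.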