Detect whether a script is running in the context of Buildbot

Can a script that may be run by Buildbot's ShellCommand detect whether it is indeed running inside the context of Buildbot (e.g. as opposed to having been started manually in a shell)?
For instance, does Buildbot set any environment variables (such as GitLab's CI_JOB_STAGE) that the script could inspect?
I am looking for a portable solution, so setting a proprietary environment variable (e.g. at the level of some Buildbot builders) or inspecting the script's parent process is not quite what I am looking for.

At least, it can detect whether it is in an interactive session (when launched manually) vs. launched non-interactively (from a script or, in this instance, a Buildbot builder).
In a bash session, $- expands to the option flags currently set for the shell; an interactive shell includes i among them.
Change your job to include a call to your script with:
scriptshell=$- ./myscript.sh
In your script:
if [[ ${scriptshell} != *i* ]]; then
  echo "non-interactive (script)"
else
  echo "interactive (manual launch)"
fi
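
Alternatively, the script can check on its own whether it is attached to a terminal, which needs no cooperation from the caller. A minimal sketch using the POSIX test -t; note that this distinguishes terminal from non-terminal sessions generally, not Buildbot specifically (cron jobs and pipes look the same):
#!/bin/sh
# [ -t 1 ] succeeds if file descriptor 1 (stdout) is attached to a terminal
if [ -t 1 ]; then
  echo "interactive (manual launch)"
else
  echo "non-interactive (e.g. Buildbot, cron, or a pipe)"
fi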

SSH connection command to embedded OS QNX Neutrino via paramiko

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get:
sh: sesu: not found
When I run a simple ls command it gives me the desired output; only the sesu command is not working.
This is what my code looks like:
import paramiko
host = host          # placeholder values as posted
username = username
password = password
port = port
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command('sesu test')
stdin.write('Password\n')   # newline needed so the prompt actually reads the input
stdin.flush()
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced than in your regular interactive SSH session (in particular, .bash_profile is not sourced for non-interactive sessions), and/or different branches in the scripts are taken based on the absence or presence of the TERM environment variable.
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can run which sesu in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
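For example, in Bash the usual pitfall is that ~/.bashrc returns early for non-interactive shells, so any PATH additions must come before that guard to take effect for exec_command. A sketch, assuming the remote shell is Bash and that it reads ~/.bashrc for SSH "exec" sessions (as Debian-patched builds do); the sesu path is an assumption:
# ~/.bashrc
export PATH="$PATH:/path/to/sesu"   # before the interactive guard below
case $- in
  *i*) ;;       # interactive shell: keep going
  *) return;;   # non-interactive shell: stop here
esac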
Try running the command explicitly via a login shell (use the --login switch with common *nix shells):
bash --login -c "sesu test"
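In terms of the Paramiko call from the question, that would be (a sketch):
stdin, stdout, stderr = ssh.exec_command('bash --login -c "sesu test"')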
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
PATH="$PATH;/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using the pseudo terminal to automate command execution can bring you nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

What is the difference between calling a command via "wsl [command]" and opening a wsl shell and calling "[command]"?

I am using Ubuntu via WSL 2.0 on Windows 10 and would like to run TeX Live from the Windows command line. To do so I prepended the TeX Live folder to the path in /etc/environment (I also tried a number of other locations, e.g. $HOME/.bashrc):
C:\Users\scott\Documents>wsl echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
C:\Users\scott\Documents>wsl
scott@SCOTT-PC:/mnt/c/Users/scott/Documents$ echo $PATH
/usr/local/texlive/2020/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
Why is there a difference between these two paths? Is it possible to change the first PATH variable?
To be honest, when I first looked at this question, I thought it would be an easy answer. Oh how wrong I was. There are a lot of nuances to how this works.
Let's start with the fairly "easy" part, though. The main difference between the first method and the second:
wsl by itself launches into a login (and interactive) shell
the shell launched with wsl echo $PATH is neither a login shell nor an interactive shell
So the first will source both login scripts (e.g. ~/.profile) and interactive startup scripts (e.g. ~/.bashrc). The second form does not get to source either of these.
You can see this a different way (and get to the solution) with the following commands:
wsl -e bash -c 'echo $PATH'
wsl -e bash -li -c 'echo $PATH'
The -li forces bash to run as a login and interactive shell, thus sourcing all of the applicable startup scripts. And, as @bovquier points out in the comments, single quotes are needed here to prevent PowerShell from interpolating the $PATH before it gets to Bash. That, or escape it.
You should be able to run TeX Live the same way, just replacing the "echo $PATH" with the startup command you need for TeX Live.
A second option would be to create a script that both adds the path and runs the command, and just launch that script through wsl /path/to/script.sh
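Such a wrapper could look like the following sketch (script name, location, and TeX Live version are assumptions to adapt):
#!/bin/bash
# hypothetical /home/scott/bin/with-texlive.sh inside the WSL filesystem
export PATH="/usr/local/texlive/2020/bin/x86_64-linux:$PATH"
exec "$@"    # run the requested command with the extended PATH
It could then be called from Windows as, for instance:
C:\Users\scott\Documents>wsl /home/scott/bin/with-texlive.sh pdflatex thesis.tex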
That said, I honestly don't think that your current login/interactive PATH is coming from /etc/environment. In my testing, at least, /etc/environment has no use in WSL, and that's to be expected. /etc/environment is only sourced by PAM modules, and since WSL performs no login check, there's no reason for PAM to be invoked by either the wsl or the wsl echo $PATH command.
I'd expect that you still have the PATH setting in ~/.bashrc (or somewhere similar), and that's where the shell is picking it up from at the moment.
While this isn't necessarily critical to understanding the answer, you might also wonder, if /etc/environment isn't used for setting the default (non-login, non-interactive) path in WSL, what is? The answer seems to be that it is hard-coded into the init that starts up WSL. That init is also what appends the Windows path (assuming you don't have that feature disabled in /etc/wsl.conf).

Execute a shell command outside of a sandbox while in a sandbox

I'm using Singularity to run Python in an environment deprived of Python. I'm also running a MySQL instance as explained by Iowa State University (running an instance of mysql, and closing it when done).
For clarity, I'm using a bash script to open MySQL, then do what I have to do (a Python script), and close MySQL, and it works fine. But the only way Python can stop when an error occurs is sys.exit([value]), and this not only stops the Python script but also the bash script that ran it. This makes it impossible for me to manage the errors and close the instance of MySQL if the Python script exits.
My question is: is there a way for me to execute singularity instance stop mysql while being in the Python sandbox? Something to tell Singularity "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I only tried to execute it with subprocess like any other command, but it returned an error message because I don't have this instance inside the python sandbox. I don't even have singularity in this sandbox.
For any clarifications, just ask me, I'm trying to be clear but I'm pretty sure it's not very clear.
Thanks a lot !
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (Docker or Singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e # disable abort on non-zero return
python my_script.py
set -e # re-enable abort on non-zero
# shut down mysql, do other stuff
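Alternatively, you can capture Python's exit status explicitly and always stop the instance before propagating the result. A sketch of the wrapper script, with the image and script names as placeholders (the instance commands follow the Iowa State guide):
#!/bin/bash
singularity instance start mysql.simg mysql       # open the MySQL instance
singularity exec python.simg python my_script.py  # run the Python step
status=$?                                         # remember Python's exit code
singularity instance stop mysql                   # always clean up
exit $status                                      # propagate the Python result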

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a Jenkins server running on Red Hat which is running a node via SSH on an AIX server.
The commands are run non-interactively using SSH to a user on the AIX machine whose standard shell is ksh.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried:
Jenkins allows me to set some environment variables for the session. So I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make ksh run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute the commands in ksh.
By default, Jenkins runs them as sh <commands>.
Add a shebang as the first line of your shell command:
#!/bin/ksh
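A complete Jenkins "Execute shell" step might then look like this (a sketch; the build commands are placeholders):
#!/bin/ksh
. $HOME/.profile   # pull in the user's environment setup
echo "JAVA_HOME=$JAVA_HOME"
make all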
Most shells don't source their .profile files in non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over ssh
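From the controlling side, the full invocation would look something like this (a sketch; the host and commands are placeholders):
ssh builduser@aix-host '. ~/.profile; yourcommand1; yourcommand2'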
UPDATE after reading the comment about Jenkins controlling the ssh command
In the case your ssh command is performed by Jenkins you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script and adjust the JavaPath (also on the node's configure page) to point to that wrapper.
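Such a wrapper might look like the following sketch; all paths are assumptions to adapt:
#!/bin/ksh
# hypothetical /usr/local/bin/java-with-profile
. $HOME/.profile                  # load the user's environment first
exec /usr/java8_64/bin/java "$@"  # then hand over to the real java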
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.
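Note that /etc/environment on AIX is not a shell script: it is read as plain NAME=value lines, for example (BUILD_HOME is a hypothetical variable):
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin
BUILD_HOME=/opt/build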

Default c-shell, change to bash but allow for scp

Hi, so I am trying to modify my .cshrc file to make bash my default shell. It is a school account, so I cannot change the main settings but can change my profile. The problem is that when I use the command:
bash
in my .cshrc, it works just fine when I am logging in. But anytime I try to scp files, it does not work, because scp also triggers the .cshrc and gets confused when the session switches to bash.
Does anyone know how to get around this? Possibly launch bash in quiet mode...
In general, you shouldn't do anything that invokes an interactive application or produces visible output in your .cshrc. The problem is that .cshrc is sourced for non-interactive shells. And since your default shell is csh, you're going to have csh invoked non-interactively in a lot of cases -- as you've seen with scp.
Instead, I'd just invoke bash -- or, better, bash -l -- manually from the csh prompt. You can set up an alias like, say, alias b 'bash -l'.
If you're going to invoke a new shell automatically on login (which is still not a good idea), put it in your .login, not your .cshrc.
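If you do keep an automatic switch to bash in a csh startup file, guard it so that it only fires for interactive sessions; csh sets $prompt only when the shell is interactive. A sketch:
# in ~/.login
if ($?prompt) then
  exec bash -l
endif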
This is assuming chsh doesn't work, but it should -- try it.