How to pass an argument to another Jenkins build step - SSH

In a Jenkins job I need to execute a shell command on the Jenkins machine. I use an Execute shell step, and from it I get a value which has to be used in another step, Execute shell script on remote host using SSH. I can't find a way to pass that value...
Here is my Jenkins job configuration:
I appreciate any help.

Content of the file which contains the script:
echo "VERSION=`/data/...`"> env.properties
Path of the properties file:
env.properties
In shell script using ssh:
echo"... $VERSION"
Something like this works within the build, so it may work here too.
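As a hedged sketch of that approach (the script path /data/get_version.sh and the use of an "Inject environment variables" step from the EnvInject plugin are assumptions, not from the original post): the first Execute shell step writes the value to a properties file, the inject step loads that file into the build environment, and the later SSH step can then reference the variable.

# Execute shell (runs on the Jenkins machine); hypothetical script path
echo "VERSION=$(/data/get_version.sh)" > env.properties

# Inject environment variables: Properties File Path = env.properties

# Execute shell script on remote host using SSH
echo "Deploying version $VERSION"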

Related

Trying to execute a script through tclsh + sftp

For context, I am trying to run a TCL script that is hosted on a server.
To run this script I log in to a Cisco router and use the command:
tclsh tftp://[server_ip]/location/script.tcl
It works over TFTP, but when I try with SFTP it does not work and says "No such file or directory", even though I can copy the file and that works.
The command I use to copy the file is:
copy sftp: flash:[server_ip]/location/script.tcl
So the router supports SFTP.
To try to run it I do exactly as with TFTP:
tclsh sftp://[server_ip]/location/script.tcl
to no avail.
So the copy works, but executing the script does not.
Any idea on how to fix this?
EDIT: I also tried SCP, to no avail.
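One workaround sketch, based on the observation above that the copy itself works (the flash: file name below is a placeholder, not from the original post): copy the script to flash over SFTP first, then run the local copy with tclsh.

copy sftp://[server_ip]/location/script.tcl flash:script.tcl
tclsh flash:script.tcl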

Executing scripts even if there are failures in the yml file

Hi all, I am trying to set up a REST API pipeline in AWS CodeBuild. I have a custom Newman Docker image. I have a build command that will fail, but I want to execute the rest of the commands as well; however, the shell stops executing the other commands when the Newman command fails. How do I execute the other commands in the yml file?
One simple way would be:
You can create a custom shell script (mycommand.sh) that wraps the command which can cause the error in a try/catch-style construct, so that it does not result in an error.
Then, in your CodeBuild yml file, under the commands section, just execute ./mycommand.sh (see the sketch after the source link below).
Source:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-cmd.html
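A minimal sketch of that idea (the collection name is a placeholder, not from the original post): mycommand.sh swallows the Newman exit code so the build keeps going, and the buildspec commands section simply calls the script.

#!/bin/sh
# mycommand.sh - run Newman, but do not let a failure abort the build
newman run my_collection.json || echo "Newman reported failures, continuing with the remaining commands"

# buildspec.yml, commands section:
#   - chmod +x ./mycommand.sh
#   - ./mycommand.sh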

How to run a shell script on another server from a Kettle job on one server?

Using Pentaho, I want to execute a shell script that resides on another server from a Kettle job (.kjb) file using the "Execute a shell script" step. I was able to run a script locally by giving the script file name in that step, but I cannot figure out how to run a remote script.
Here is what my simple kjb looks like:
Any ideas?
That is pretty straightforward using the Shell step in a Pentaho job:
1) On the General tab, check "Insert script".
2) On the Script tab, you can now add this inline script:
ssh user@remotemachine 'ls -l'
This will execute ls -l on the remote server via SSH.
Make sure the SSH user is allowed to log in with SSH keys rather than being prompted for a password.
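Applying the same idea to the original question (the user, host, and script path below are placeholders): the inline script can invoke the remote shell script instead of ls -l.

ssh deploy@remotemachine 'sh /opt/scripts/myscript.sh'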

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns back

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I observed that when I run the Jenkins job and call the restart script (which receives the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive, but when running the same script from Jenkins, even though I don't see any errors and everything appears to work, the new process dies as soon as the above-mentioned SSH step completes and control returns to the next BUILD step in the Jenkins job, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables...). When a particular build of the job is complete, I can see that the Environment Variables for that build do show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server (I'm not worried about that, as the restart_tomcat.sh script creates its own LOG file, which I "cat" after the restart_tomcat.sh script completes; the cat of the log file is done in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script).
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
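A hedged sketch combining those two suggestions (redirecting stdout/stderr is an assumption added here to fully detach the process from the SSH session; it is not part of the original answer):

BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}" > /dev/null 2>&1 &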
My solution (it worked after trying everything else) in Ubuntu 14.04 (Trusty Tahr) (Amazon AWS - Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the above line to the start of my script and the issue was resolved.

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a Jenkins server running on Red Hat which runs a node via SSH on an AIX server.
The commands are run non-interactively over SSH as a user on the AIX machine whose standard shell is ksh.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried the following. Jenkins allows me to set some environment variables for the session, so I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make KSH run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute the commands in the ksh shell.
By default, Jenkins runs them as sh <commands>.
Add a shebang as the first line of your shell command:
#!/bin/ksh
Most shells don't source their .profile files on non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over SSH.
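For example (the host and commands are placeholders):

ssh builduser@aixhost '. ~/.profile; yourcommand1; yourcommand2'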
UPDATE after reading the comment about Jenkins controlling the ssh command
In the case your ssh command is performed by Jenkins you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script, and adjust the JavaPath (also on the node's configure page) to point to that wrapper; see the sketch below.
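A minimal sketch of such a wrapper (the path to the real JVM is a placeholder; adjust it to your AIX installation), with the node's JavaPath pointed at this script:

#!/bin/ksh
# java wrapper for the Jenkins agent: load the user's environment, then start the real JVM
. "$HOME/.profile"
exec /usr/java7_64/bin/java "$@"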
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.