I am getting error "ssh exit status 1" - ssh

In the Jenkins post-build action I configured "Execute shell script on remote host using ssh",
SSH site 10.32.25.66, command:
cd $HOME/appsadm/bin; ./ims-carte-stop
and then I modified it to:
cd /HOME/appsadm/bin; ./ims-carte-stop.*
I tried both these commands and the build is successful, but afterwards I see in the Jenkins console output that it is not executing my script; I am getting an ssh exit status 1 error.
In WinSCP, my script (ims-carte-stop) is in this location: home/appsadm/bin.
Please tell me if I am doing anything wrong.
My intention is to stop my server from Jenkins automatically whenever the build succeeds.

This may be a typo in your question, but:
You said your ims-carte-stop script is in:
/home/appsadm/bin
whereas your script is doing:
cd $HOME/appsadm/bin
or
cd /HOME/appsadm/bin
Looking at the paths, I am going to assume you are using a UNIX-flavoured OS (Linux, BSD, OSX).
UNIX paths are case sensitive. Your script should be calling:
cd /home/appsadm/bin
Note that the word "home" is all lowercase, not capitals. Also, the $ prefix makes it a variable, which I don't think you want here.
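Putting that together, the remote command for the post-build step would look something like this (a sketch, assuming the script is executable; the && makes sure the script only runs if the cd succeeded):
cd /home/appsadm/bin && ./ims-carte-stop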

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which has to be executed remotely and checks if the mentioned file exists; if yes, it creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is, this job always fails whether the file exists or not with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So:
How can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case of genuine failures?
The job only sees the exit status of the ssh instruction, and ssh exits with the status of the remote command: when the file does not exist, [ -f ... ] && cp ... returns 1, so the job fails. You can force any instruction to always succeed by appending || true to it.
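A sketch of the adjusted stage, using the same variables as above (note that || true masks every failure of the remote command, not just the missing-file case):
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)' || true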
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' is the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that the SSH session Airflow opens is not the same as the one I get when I log in myself, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message); the issue is with the starccm+ installation path.
Check the installation path of starccm+.
Check if the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, then install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving its full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
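You can also put the PATH fix into the script itself, since run.sh starts in a fresh non-interactive shell each time. A minimal sketch, assuming starccm+ lives under /opt/starccm/bin (a hypothetical path; substitute the real installation directory):
#!/bin/sh
# extend PATH for this non-interactive shell; /opt/starccm/bin is a placeholder
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim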

Why is $PATH different when executing commands via SSH and libssh?

I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have noticed that $PATH is also different when running ssh from the command line versus via libssh:
for instance, when I echo $PATH,
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like the modifications made to $PATH inside .bashrc and .profile are not being picked up when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
This comes down to the difference between login and non-login shells.
TL;DR: A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads /etc/bash.bashrc and then ~/.bashrc.
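One workaround sketch, assuming you control the command string you pass through libssh: request a login shell explicitly, so the profile files are sourced before your command runs:
bash -lc 'echo $PATH && cargo build'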

Can I fail a build based on the outcome of a SSH Task?

I was wondering if I could use Bamboo's SSH task to run a script (this kicks off a small Java message injector),
then grep the logs for errors. If any ERROR is present, I would like to fail the build.
Is this a Bash question, or is it really about Bamboo? Here is the answer to the Bash part:
If you run
[[ ! $(grep ERROR /a/directory/log/*) ]]
the script will exit with an error if it finds the word "ERROR" anywhere in the files, because the test succeeds only when grep produces no output.
Bamboo should detect the task execution as failed.
(Note that if Bash is not the default shell on your target system you may need a #!/bin/bash at the top of the script file.)
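A slightly more explicit variant of the same idea, as a sketch (grep -q sets the exit status without printing the matches; the log path is the same example path as above):
#!/bin/bash
# fail the task if any log file contains the word ERROR
if grep -q ERROR /a/directory/log/*; then
  echo "ERROR found in logs" >&2
  exit 1
fi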

How to run a script file remotely using SSH

I want to run a script remotely, but the system doesn't recognize the path; it complains "no such file or directory". Am I using it right?
ssh kev@server1 `./test/foo.sh`
You can do:
ssh user@host 'bash -s' < /path/script.sh
Backticks will run the command in the local shell and put the results on the command line. What you're saying is 'execute ./test/foo.sh locally and then pass its output as if I'd typed it on the command line here'.
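A quick local illustration of that substitution (hostname is just an arbitrary command):
echo `hostname`
Here hostname runs first and echo receives its output; the same thing happened to ./test/foo.sh before ssh ever ran.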
Try the following command, and make sure that that's the path from your home directory on the remote computer to your script.
ssh kev@server1 './test/foo.sh'
Also, the script has to be on the remote computer. What this does is essentially log you into the remote computer with the listed command as your shell. You can't run a local script on a remote computer like this (unless there's some fun trick I don't know).
If you want to execute a local script remotely without saving that script remotely you can do it like this:
cat local_script.sh | ssh user@remotehost 'bash -'
It works like a charm for me.
I even do that from Windows to Linux, provided you have MSYS installed on your Windows computer.
I don't know if it's possible to run it just like that.
I usually first copy it with scp and then log in to run it.
scp foo.sh user@host:~
ssh user@host
./foo.sh
I was able to invoke a shell script using this command:
ssh ${serverhost} "./sh/checkScript.ksh"
Of course, checkScript.ksh must exist in the $HOME/sh directory.
Make the script executable by the user "kev" and then try running it through the command:
ssh kev@server1 ./test/foo.sh
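If you need to set the executable bit first, you can do it over SSH as well (assuming the script sits under ~/test as in the question):
ssh kev@server1 'chmod +x ./test/foo.sh'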