I have a CI stage with the following command, which has to be executed remotely: it checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is that this job always fails, whether the file exists or not, with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So, how can I make sure that this job succeeds as long as the command logic executes successfully, and only fails when there are some genuine failures?
The job only sees the exit status returned by the ssh instruction, so it cannot tell whether the logic inside your remote command did what you wanted or not. You can force an instruction to always succeed by appending || true to it.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
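Putting the two together, the CI script could look roughly like this (a sketch only; SSH_USER and BACKUP_DIR are placeholder variable names standing in for the ones in your pipeline):

script: |
  # '|| true' inside the remote command keeps the exit status at 0 even when
  # the file is missing; 'tee' saves the remote output for later inspection.
  ssh "${SSH_USER}@${HOST}" "[ -f '${BACKUP_DIR}/test_1.txt' ] && cp -v '${BACKUP_DIR}/test_1.txt' '${BACKUP_DIR}/test_1_${CI_COMMIT_TIMESTAMP}.txt' || true" 2>&1 | tee ssh-session.log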
I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have also noticed that $PATH is different when running ssh from the command line versus via libssh:
for instance, when I echo $PATH,
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like the modifications made to $PATH inside .bashrc and .profile are not being picked up when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
Please take a look at that question.
TL;DR A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads from /etc/bash.bashrc and then ~/.bashrc.
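A common workaround, then, is to ask for a login shell explicitly so that /etc/profile and ~/.profile (and with them the ~/.cargo/bin entry) are sourced before the command runs. Shown here with plain ssh; the same command string can be passed to the exec call of the ssh2 crate:

# Non-login, non-interactive shell: the profile files are skipped,
# so ~/.cargo/bin never makes it into PATH
ssh user@host 'echo $PATH'

# Wrap the command in a login shell so the profile files run first
ssh user@host 'bash -lc "cargo build"'

# Or simply call the binary by its full path
ssh user@host '~/.cargo/bin/cargo build'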
Updated: added the missing docker attach.
I am trying to run a Docker container with -dti, but I cannot access it with a terminal set to dumb. Is there a way to change this? (It is currently set to xterm, even though my ssh client is dumb.)
Example:
Create the container:
docker run -dti --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
cd /my-folder
npm install -g gulp
The output of the last command always contains ASCII escape characters that move the cursor.
I have tried "export TERM=dumb" inside the running container, but it does not work.
Is there a way to "run" this using the dumb terminal?
I am running this from a script on another computer, via (dumb) ssh.
The -t flag is what sets TERM (see https://docs.docker.com/engine/reference/run/#env-environment-variables); however, removing it affects the command prompt (the prompt is not shown).
Possible solution 1: remove the -t and keep the -i. To see when a command has completed, echo out a known token (ENDENDEND), i.e.:
docker run -di --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs;echo ENDENDEND
cd /my-folder;echo ENDENDEND
npm install -g gulp;echo ENDENDEND
Not pretty, but it works (there are no ASCII escapes in the results).
Possible solution 2: use the journal. Docker can log out to the Linux journal, and this can be gathered as commands are executed in the container. (I have yet to fully test this one out; however, the log seems to give a nicer record of what happened.)
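For reference, the journald route is wired up via Docker's logging driver; roughly (again, not fully tested here, container name test as in the example above):

docker run -di --log-driver=journald --name test -v /my-folder alpine /bin/ash
# run commands via attach/exec as before, then read the container's output
# from the host's journal:
journalctl CONTAINER_NAME=test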
Update:
Yep, -t is the problem.
However, if you want to see the entire process when running a command, maybe this way is better:
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -it test /bin/ash
Finally, you need to kill the container after all the jobs have finished.
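For example, using the container name from above:

docker rm -f test    # stop and remove the container in one go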
docker run -d means "Run container in background and print container ID", not "start the container as a daemon".
I was hitting this issue running Docker on OS X; I had to do two things to stop the terminal/ASCII/ANSI escape sequences:
remove the "t" option on the docker run command (from docker run -it ... to docker run -i ...)
ensure bash or sh is used when running the command from a script file, not the default zsh
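Putting those two changes together, a minimal sketch of such a script (image and command here are just the ones from the example above):

#!/bin/bash
# bash shebang: avoid macOS's default zsh when this file is executed;
# no "-t": without a pseudo-TTY the output carries no escape sequences
docker run -i --rm alpine /bin/ash -c 'apk --update add nodejs'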
Also:
the escape sequences were not always visible on the terminal
even so, they still usually caused content corruption, even with sed brought to bear
they were always shown in my editor
I was wondering if I could use Bamboo's SSH task to run a script (this kicks off a small Java message injector), then grep the logs for ERRORs. If any ERROR is present, I would like to fail the build.
Something like this:
Is this a Bash question or is it really about Bamboo? Here is the Bash problem answer:
If you run
[[ ! $(grep ERROR /a/directory/log/*) ]]
the script will exit with an error if it finds the word "ERROR" anywhere in the files.
Bamboo should detect the task execution as failed.
(Note that if Bash is not the default shell on your target system, you may need a #!/bin/bash at the top of the script file.)
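An equivalent, slightly more explicit way to write the whole script (the log path is the one from the example above and is obviously site-specific):

#!/bin/bash
# Fail the Bamboo task if the injector wrote any ERROR lines to the logs
if grep -q ERROR /a/directory/log/*; then
    echo "ERROR found in logs - failing the build"
    exit 1
fi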
I am learning shell scripting. I have created a shell script whose function is to log in to the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I run the .sql file manually, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type right at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do is run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you did was run the client and, after the client terminated, issue that command in Bash.
You should read about Bash pipelines - they are the way to run programs and feed text into them. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
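Equivalently, if you have more than one meta-command or statement to send, a here-document keeps the input readable (same placeholder variables as above):

$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'EOSQL'
\i auto_qa_db_sync.sql
EOSQL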
Hope that helps.
In a Jenkins post-build action, I configured "Execute shell script on remote host using ssh".
SSH site: 10.32.25.66, command:
cd $HOME/appsadm/bin; ./ims-carte-stop
and then I modified it again to:
cd /HOME/appsadm/bin; ./ims-carte-stop.*
I tried both of these commands and the build is successful, but afterwards I see in the Jenkins console output that it is not executing my script: I am getting an ssh exit status 1 error.
In WinSCP, my script (ims-carte-stop) is in this location: home/appsadm/bin.
Please tell me if I am doing anything wrong.
My intention is to stop my server from Jenkins automatically whenever the build succeeds.
This may be a typo in your question, but:
You said your ims-carte-stop script is in:
/home/appsadm/bin
whereas your script is doing:
cd $HOME/appsadm/bin
or
cd /HOME/appsadm/bin
Looking at the paths, I am going to assume you are using a UNIX-flavoured OS (Linux, BSD, OSX).
UNIX paths are case sensitive. Your script should be calling:
cd /home/appsadm/bin
Note that the word "home" is in all lowercase letters, not capitals. Also, using $ makes it a variable, which I don't think you want.
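So the post-build command would become something like this (assuming the script really is at /home/appsadm/bin/ims-carte-stop):

cd /home/appsadm/bin && ./ims-carte-stop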