In what order are the commands in a job file run via ssh? - ssh

About running script.sh via ssh:
#!/bin/bash
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/
exit
Does that mean it runs /usr/local/cpanel/scripts/cpbackup and, only after that finishes, runs clamscan -i -r --remove /home/? Or are the two commands run at the same time?

Commands in a script are run one at a time, in order, unless one of the commands "daemonizes" itself (puts itself into the background).
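For comparison, here is a sketch based on your own script showing the difference between sequential execution (what your script does) and concurrent execution:
#!/bin/bash
# Sequential: clamscan starts only after cpbackup has exited
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/

# Concurrent (not what your script does): & backgrounds each command,
# so both run at the same time; wait blocks until they have both finished
/usr/local/cpanel/scripts/cpbackup &
clamscan -i -r --remove /home/ &
wait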

Related

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which has to be executed remotely. It checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is that this job always fails, whether the file exists or not, with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually just works fine. So:
How can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case of genuine failures?
The job only sees the exit status that ssh returns, and ssh exits with the status of the remote command (or 255 if the connection itself fails). When the file does not exist, the [ -f ... ] && cp ... compound exits with status 1, so the job is marked as failed. You can force an instruction to always succeed by appending || true to it.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
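For this particular job, a minimal sketch of the adjusted script (keeping the variable names from the question) could look like the following. Note that || true swallows every failure on that line, including genuine ssh errors, whereas rewriting the check as an if statement only treats a missing file as acceptable:
script: |
  # Option 1: never fail this line, whatever happens (also hides real ssh/cp errors)
  ssh ${USER}@${HOST} '[ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt' || true
  # Option 2: a missing file is fine, but cp or connection failures still fail the job
  ssh ${USER}@${HOST} 'if [ -f "${PATH}/test_1.txt" ]; then cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt; fi'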

Why do I have to spawn a new shell when doing remote sudo ssh commands to get proper file permissions?

I'm using passwordless, key-based login with sudo to execute remote commands. I have figured out that I have to spawn a new shell to execute commands that write to root-owned areas of the remote file system, but I would like a clear explanation of exactly why this is the case.
This fails:
sudo -u joe ssh example.com "sudo echo test > /root/echo_test"
with:
bash: /root/echo_test: Permission denied
This works fine:
sudo -u joe ssh example.com "sudo bash -c 'echo test > /root/echo_test'"
It's the same reason that a local sudo echo test > /root/echo_test will fail (if you are not root): the redirection is done by the shell (not by sudo or echo), and that shell is running as the normal user. sudo only runs the echo command as root.
With sudo -u joe ssh example.com "sudo echo test > /root/echo_test", the remote shell is running as a normal user (probably joe) and does not have permission to write to the file. Using an extra bash invocation works because sudo then runs bash as root (rather than echo), and that root bash can open the file and perform the redirect.
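To make the comparison concrete, here is a local sketch of the same idea; the tee variant at the end is a common alternative rather than something from the original answer:
# Fails: your (non-root) shell opens /root/echo_test for the redirect before sudo runs
sudo echo test > /root/echo_test

# Works: sudo runs bash as root, and that root bash opens the file and performs the redirect
sudo bash -c 'echo test > /root/echo_test'

# Common alternative: let a root tee process do the writing
echo test | sudo tee /root/echo_test > /dev/null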

Switching users in remote ssh command execution

I'm wondering why executing su in an ssh command does not appear to have the desired effect of switching users before executing the subsequent commands, as illustrated below:
The following command:
bob@server1:~$ sudo ssh -n root@server2 "su bob; env"
Produces the following output:
...
USER=root
PWD=/root
HOME=/root
LOGNAME=root
...
I expected the output to reflect what user bob would have observed; however, it is the environment of the root user. I have found that the following command achieves the desired effect:
bob@server1:~$ sudo ssh -n root@server2 "su bob -c \"env\""
This command produces the following output:
...
USER=bob
PWD=/root
HOME=/users/bob
LOGNAME=bob
...
I would like to understand why the first way (executing "su bob; env") does not work.
Consider first what the su command does: it starts a new shell as the target user. Ignoring ssh for a moment, just become root on your local system and try running something like this:
su someuser; env
What happens? You will get a shell as someuser, and when you exit that shell, the env command executes in root's environment. If you wanted to run the env command as someuser, you would need:
su someuser -c env
This instructs su to run the env command as someuser.
When you run:
sudo ssh -n root@server2 "su bob; env"
The shell spawned by su exits immediately, because you've disabled stdin (with -n), and the env command then executes in root's environment, just as in the local example above.
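If you need to run several commands as bob, a sketch along the lines of the second form (assuming a POSIX shell on server2) would be:
# Everything inside -c runs in a shell started as bob
sudo ssh -n root@server2 "su bob -c 'cd && env && id'"

# Adding -l (a login shell) also picks up bob's own environment, so PWD and PATH change too
sudo ssh -n root@server2 "su -l bob -c 'env'"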

Capistrano: cap staging git:check - how to configure ssh path in git-ssh.sh?

I want to figure out how to set the path to ssh in the git-ssh.sh file that Capistrano copies to the server when executing the deploy command.
Currently, the second line of git-ssh.sh looks like this:
exec /usr/bin/ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no "$@"
I cannot execute this command directly on the server. The following error occurs:
[5b4fcea9] /tmp/app.de/git-ssh.sh: line 2: /usr/bin/ssh: No such file or directory
After editing the ssh path to /usr/local/bin/ssh it works, but Capistrano re-uploads this file every time cap staging deploy is called.
See my logs on pastie for more details, especially the git:check part:
http://pastie.org/9523811
Is it possible to set this path in my deploy.rb?
Yeah, I got it. :))
Rake::Task["deploy:check"].clear_actions
namespace :deploy do
  task check: :'git:wrapper' do
    on release_roles :all do
      execute :mkdir, "-p", "#{fetch(:tmp_dir)}/#{fetch(:application)}/"
      upload! StringIO.new("#!/bin/sh -e\nexec /usr/local/bin/ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no \"$@\"\n"), "#{fetch(:tmp_dir)}/#{fetch(:application)}/git-ssh.sh"
      execute :chmod, "+x", "#{fetch(:tmp_dir)}/#{fetch(:application)}/git-ssh.sh"
    end
  end
end
This line seems to be hardcoded in the Capistrano source code (link here).
Instead of changing the Capistrano source, why don't you create a /usr/bin/ssh symlink on the server? Like this:
ln -s /usr/local/bin/ssh /usr/bin/ssh
That will create a /usr/bin/ssh symlink that, when executed, will run /usr/local/bin/ssh.
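As a quick sanity check (assuming readlink is available on the server), you can confirm where the new path resolves:
# The symlink should resolve to the real binary
readlink -f /usr/bin/ssh    # expected output: /usr/local/bin/ssh
/usr/bin/ssh -V             # invoking it should now print the OpenSSH version banner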
If you add the following line to your Capfile, you can override the git tasks:
require 'capistrano/git'

Running ssh command and keeping connection

Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$ ssh user@server.com 'ls'
The ls command is executed on the remote computer but ssh quits and I cannot continue in my remote session.
Is there a way of keeping the connection open? The reason I am asking is that I want to set up my ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek@host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
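Building on the same idea, a sketch for the "run something first, then stay connected" case (setup.sh is a hypothetical placeholder, and the remote login shell is assumed to be bash):
# -t forces a pseudo-tty; exec bash replaces the command shell with an interactive one
ssh -t user@server.com 'ls; exec bash'

# Same pattern, sourcing a (hypothetical) setup script before dropping into the shell
ssh -t user@server.com 'source ~/setup.sh; exec bash'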
You can try using process substitution for the init file of bash. In the example below, I define a function myfunc:
myfunc () {
echo "Running myfunc"
}
which I transform into a properly escaped one-liner, echoed inside the <(...) construct for process substitution, passed as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced, but myfunc is defined and usable in the interactive session.
It might prove a little difficult for more complex bash functions, but it works.
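If you also want the remote .bashrc to be read, one possible variation (an assumption on my part, not part of the original answer) is to source it explicitly inside the injected init file:
# Source the remote ~/.bashrc first, then define the helper function on top of it
$ ssh -t localhost 'bash --init-file <( echo "source ~/.bashrc; myfunc() { echo \"Running myfunc\" ; }" )'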