How to ask a remote server over SSH to run a background job?

I'm trying to start a long-running process on a remote server, over SSH:
$ echo Hello | ssh user@host "cat > /tmp/foo.txt; sleep 100 &"
Here, sleep 100 is a simulation of my long-running process. I want this command to exit instantly, but it waits for 100 seconds. It's important to mention that I need the job to receive input from me (Hello in the example above).
Server:
$ sshd -?
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f 31 Mar 2020

Saying "I want this command to exit instantly" is incompatible with "long-running". Perhaps you mean that you want the long-running command to run in the background.
If output is not immediately needed locally (i.e. it can be retrieved by another ssh later), then nohup is simple:
echo hello |
ssh user@host '
cat >/tmp/foo.txt;
nohup </dev/null >cmd.out 2>cmd.err cmd &
'
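Since the job's output lands in files on the server, a later ssh can check on it. A small sketch using the placeholder names from the example above:
# is the job still running? (cmd is the placeholder command above)
ssh user@host 'pgrep -af cmd'
# fetch whatever output it has produced so far
ssh user@host 'cat cmd.out cmd.err'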
If output must be received locally as the command runs, you can background ssh itself using -f:
echo hello |
ssh -f user@host '
cat >/tmp/foo.txt;
cmd
' >cmd.out 2>cmd.err
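With -f, ssh backgrounds itself locally after authentication but keeps running until cmd exits, so cmd.out and cmd.err fill up on the local side and can be followed as they grow. One caveat from ssh(1): -f implies -n, which redirects ssh's stdin from /dev/null, so verify that the piped input still reaches the remote cat in your setup before relying on this form.
# follow the local output files while the remote command runs
tail -f cmd.out cmd.err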

Related

Using tail to monitor an active logging file

I'm running multiple 'shred' commands on multiple hard drives in a workstation. The 'shred' commands are all run in the background in order to run them concurrently. The output of each 'shred' is redirected to a text file, and I also have the output directed to the terminal. I'm using tail to monitor the log file for errors, and halt the script if any are encountered. If there are no errors, the script should simply continue on to conclusion.
When I test it by forcing a drive failure (disconnecting a drive), it detects the I/O errors and the script halts as expected. The problem I'm having is that when there are NO errors, I cannot get 'tail' to terminate once the 'shred' commands have completed, and the script just hangs at that point. Since I put the 'tail' command in the 'while' loop below, I would have thought that 'tail' would continue to run as long as the 'shred' processes were running, but would then halt after the 'shred' processes stopped, thus ending the 'while' loop. But that hasn't been the case. The script still hangs even after the 'shred' processes have ended.
If I go to another terminal window while the script is "hanging," and kill the 'tail' process, the script continues as normal. Any ideas how to get the 'tail' process to end when the 'shred' processes are gone?
My code:
shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &
pids=$(pgrep shred)
while kill -0 $pids 2> /dev/null; do
    tail -qn0 -f logfile | read LINE
    echo "$LINE" | grep -q "error"
    if [ $? = 0 ]; then
        killall shred > /dev/null 2>&1
        echo "Error encountered. Halting."
        exit
    fi
done
wait $pids
There is other code after the 'wait' that does other stuff, but this is where the script hangs.
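One way to make the follow terminate, assuming GNU coreutils: tail's --pid option makes -f exit once a watched process dies. A sketch of the loop rebuilt around it (repeated --pid options require a recent coreutils; older versions honor only the last one):
shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &
# one --pid=N option per shred process; tail -f exits once they are all gone
pidopts=$(pgrep -x shred | sed 's/^/--pid=/')
# grep -q exits at the first match, which stops tail via SIGPIPE;
# with no match, tail exits on its own once the shreds finish
if tail -qn0 -f $pidopts logfile | grep -q "error"; then
    killall shred > /dev/null 2>&1
    echo "Error encountered. Halting."
    exit 1
fi
wait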
Not directly related to the question, but you can use Daggy - Data Aggregation Utility.
With it, all subprocesses are terminated together with the main daggy process.

Piping the output of ssh sudo

I sometimes have a need to run commands as root on a remote server, and parse the output of the command on my local server. The remote server does not allow root login by ssh, but has sudo configured in a way that requires a password. A simplified example of what I need to do is
ssh remote sudo echo bar | tr bar foo
(Obviously in this simplified example, there's no good reason to need to run echo on a different machine to tr: this is just a toy example to explain what I'm trying to do.)
If I run the command above, I get an error that sudo has no way to prompt for a password:
richard@local:~$ ssh remote sudo echo bar | tr bar foo
sudo: no tty present and no askpass program specified
One way I can try to fix this is by adding the -t option to ssh. If I do that, sudo does wait for and accept a password, but the output of ssh's pseudo-terminal goes to stdout, meaning the sudo prompt message is piped to tr and not displayed to the user. If the user doesn't know sudo is waiting for a password, they will think the script has hung, and passing the prompt message to the pipe probably breaks further processing:
richard@local:~$ ssh -t remote sudo echo bar | tr bar foo
[sudo] posswood foo oichood:
foo
(This admittedly silly example shows the prompt has been processed by the tr command the output is piped to.)
The other way I can see to try to fix this is by adding the -S option to sudo. If I do that, sudo prompts on stderr for the password, so the prompt is not passed down the pipeline. That's good, but sudo also accepts the password on standard input meaning it's echoed to the terminal where anyone looking over the user's shoulder can read it:
richard@local:~$ ssh remote sudo -S echo bar | tr bar foo
[sudo] password for richard: p8ssw0rd
foo
I've found inelegant ways of working around the problems with these two options, but my workarounds hit a problem if the user gets their password wrong the first time. That in itself is a problem. Examples of this are:
richard@local:~$ echo "[sudo] password for $USER:"; \
ssh -t remote sudo echo bar | tail +2 | tr bar foo
richard@local:~$ (read -s password; echo $password; echo >&2) \
| ssh remote sudo -S echo bar | tr bar foo
I'm sure there must be a good solution to this, as it doesn't seem an uncommon thing to want to do. Any ideas?
The best solution I've come up with is to use sudo -S and disable local echo so the password isn't shown as you type it:
$ { stty -echo; ssh remote sudo -S echo hello; stty echo; echo 1>&2; }
[sudo] password for user:
hello
This leaves sudo in charge of the password prompting, so it works properly if the user types the password wrong.
I don't think any solution using ssh -t can ever work properly, since it combines stderr and stdout.
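To avoid retyping the stty dance, it can be wrapped in a small function. A minimal sketch (the ssudo name and structure are just illustrative):
ssudo() {   # usage: ssudo host command... | further processing
    local host=$1; shift
    stty -echo                 # stop the password echoing locally
    ssh "$host" sudo -S "$@"
    local rc=$?
    stty echo                  # restore terminal echo
    echo 1>&2                  # newline after the invisibly-typed password
    return $rc
}
For example, ssudo remote echo bar | tr bar foo then behaves like the command in the question, with the prompt on stderr and the password hidden.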

Use SSH commands in putty and/or psftp script for sftp server [duplicate]

I am looking to script something in batch which will need to run remote ssh commands on Linux. I would want the output returned so I can either display it on the screen or log it.
I tried putty.exe -ssh user@host -pw password -m command_run but it doesn't return anything on my screen.
Anyone done this before?
The -m switch of PuTTY takes a path to a script file as an argument, not a command.
Reference: https://the.earth.li/~sgtatham/putty/latest/htmldoc/Chapter3.html#using-cmdline-m
So you have to save your command (command_run) to a plain text file (e.g. c:\path\command.txt) and pass that to PuTTY:
putty.exe -ssh user@host -pw password -m c:\path\command.txt
Though note that you should use Plink (a command-line connection tool from the PuTTY suite). It's a console application, so you can redirect its output to a file (which you cannot do with PuTTY).
The command-line syntax is identical, with output redirection added:
plink.exe -ssh user@host -pw password -m c:\path\command.txt > output.txt
See Using the command-line connection tool Plink.
And with Plink, you can actually provide the command directly on its command-line:
plink.exe -ssh user@host -pw password command > output.txt
Similar questions:
Automating running command on Linux from Windows using PuTTY
Executing command in Plink from a batch file
You can also use Bash on Ubuntu on Windows directly. E.g.,
bash -c "ssh -t user@computer 'cd /; sudo my-command'"
Per Martin Prikryl's comment below:
The -t enables terminal emulation. Whether you need terminal emulation for sudo depends on configuration (by default you do not need it, though many distributions override the default). Conversely, many other commands do need terminal emulation.
As an alternative option, you could install OpenSSH (http://www.mls-software.com/opensshd.html) and then simply run ssh user@host command_run (note that OpenSSH takes the command directly and prompts for the password; it does not support PuTTY's -pw or -m switches).
Edit: Following a response from user2687375: when installing, select client only. Once this is done, you should be able to initiate SSH from the command line.
Then you can create an ssh batch script such as
ECHO OFF
CLS
:MENU
ECHO.
ECHO ........................
ECHO SSH servers
ECHO ........................
ECHO.
ECHO 1 - Web Server 1
ECHO 2 - Web Server 2
ECHO E - EXIT
ECHO.
SET /P M=Type 1, 2, or E then press ENTER:
IF "%M%"=="1" GOTO WEB1
IF "%M%"=="2" GOTO WEB2
IF "%M%"=="E" GOTO :EOF
REM ------------------------------
REM SSH Server details
REM ------------------------------
:WEB1
CLS
call ssh user@xxx.xxx.xxx.xxx
cmd /k
:WEB2
CLS
call ssh user@xxx.xxx.xxx.xxx
cmd /k

Jenkins Publish Over SSH Plugin on macOS limits transfers to 130 MB and freezes

In a Jenkins test project I have this in Execute shell:
dd if=/dev/urandom of=ios_512MB bs=531628032 count=1
and I have checked and configured "Send files or execute commands over SSH after the build runs".
When I run it I see:
Started by user X
Building remotely on ios in workspace /data/workspace/test
[test] $ /bin/sh -xe /var/folders/9b/s86tztx90bb9c_73gtynfzx80000gn/T/hudson8582983867531973712.sh
+ dd if=/dev/urandom of=ios_512MB bs=531628032 count=1
1+0 records in
1+0 records out
531628032 bytes transferred in 44.384345 secs (11977828 bytes/sec)
SSH: Connecting from host [jenkins2.local]
SSH: Connecting with configuration [jenkins.builds] ...
I can see this connection in the network traffic, and then it stops. On the SFTP server I have:
ls -lh
-rw------- 1 10048 10047 130M Apr 16 09:56 ios_512MB
On Windows/Ubuntu everything works well. How can I fix it?
Fixed it by doing the transfer in Execute shell instead:
echo "mkdir ${short}/${date}
mkdir ${short}/${date}/${RANDSTR}
put ${WORKSPACE}${location}${n}${format} ${short}/${date}/${RANDSTR}/
put ${WORKSPACE}${location}${n}.html ${short}/${date}/${RANDSTR}/" | sftp -o StrictHostKeyChecking=no -P 2222 -i ~/.ssh/id_rsa jenkins.builds@sftp.example.com
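The same upload can also be driven through sftp's -b option, which reads commands from a batch file and aborts on the first one that fails (useful in CI). A sketch with the same placeholders and connection settings as above:
batch=$(mktemp)
cat > "$batch" <<EOF
mkdir ${short}/${date}
mkdir ${short}/${date}/${RANDSTR}
put ${WORKSPACE}${location}${n}${format} ${short}/${date}/${RANDSTR}/
put ${WORKSPACE}${location}${n}.html ${short}/${date}/${RANDSTR}/
EOF
sftp -o StrictHostKeyChecking=no -P 2222 -i ~/.ssh/id_rsa -b "$batch" jenkins.builds@sftp.example.com
rm -f "$batch"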

Running ssh command and keeping connection

Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$> ssh user@server.com 'ls'
The ls command is executed on the remote computer but ssh quits and I cannot continue in my remote session.
Is there a way of keeping the connection open? The reason I am asking is that I want to set up my ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek@host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
You can try using process substitution for the init file of bash. In the example below, I define a function myfunc:
myfunc () {
echo "Running myfunc"
}
which I transform into a properly-escaped one-liner echoed in the <(...) construct for process substitution, used as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced but myfunc is defined and properly usable in an interactive session.
It might prove a little difficult for more complex bash functions, but it works.
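If you also want your normal remote environment, the init file can source .bashrc first and then add the extra definitions. A sketch along the same lines (localhost and myfunc as in the example above):
ssh -t localhost 'bash --init-file <( echo "
    [ -f ~/.bashrc ] && source ~/.bashrc
    myfunc() { echo \"Running myfunc\" ; }
" )'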