How to use ssh to run a local command after connection and quit after this local command is executed? - ssh

I wish to use SSH to establish a temporary port forward, run a local command and then quit, closing the ssh connection.
The command has to be run locally, not on the remote site.
For example, consider a server in a DMZ where you need to allow an application from your machine to connect to port 8080, but you have only SSH access.
How can this be done?

Assuming you're using OpenSSH from the command line....
SSH can open a connection that will sustain the tunnel and remain active for as long as possible:
ssh -fNT -Llocalport:remotehost:remoteport targetserver
You can alternatively have SSH launch something on the server that runs for some period of time; the tunnel will be open for that time. The SSH connection should remain after the remote command exits for as long as the tunnel is still in use. If you'll only use the tunnel once, then specify a short "sleep" to let the tunnel expire after use.
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 10
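For the scenario in the question (an application on your machine needing to reach port 8080 behind the DMZ host), a minimal sketch might look like the following; the host names and the curl check are placeholders, not part of the original:
ssh -f -L 8080:remotehost:8080 targetserver sleep 10
# hypothetical local command that uses the tunnel; ssh exits once the
# forwarded connection closes and the sleep has run out
curl http://localhost:8080/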
If you want to be able to kill the tunnel from a script running on the local side, then I recommend you background it in your shell, then record the pid to kill later. Assuming you're using an operating system that includes Bourne shell....
#!/bin/sh
# Background ssh with the shell (omit -f here; otherwise ssh forks again and
# $! would record a pid that exits right after authentication)
ssh -Llocalport:remotehost:remoteport targetserver sleep 300 &
sshpid=$!
# Do your stuff within 300 seconds
kill $sshpid
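If the local work might fail partway through, a trap keeps the teardown from being skipped; a minimal sketch along the same lines, with the same placeholder ports:
#!/bin/sh
# The trap kills the tunnel on any exit path of the script.
ssh -Llocalport:remotehost:remoteport targetserver sleep 300 &
sshpid=$!
trap 'kill $sshpid 2>/dev/null' EXIT
# Do your stuff within 300 seconds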
If backgrounding your ssh using the shell is not to your liking, you can also use advanced ssh features to control a backgrounded process. As described here, the SSH features ControlMaster and ControlPath are how you make this work. For example, add the following to your ~/.ssh/config:
host targetserver
ControlMaster auto
ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Now, your first connection to targetserver will set up a control socket, so that you can do things like this:
$ ssh -fNT -Llocalport:remoteserver:remoteport targetserver
$ ssh -O check targetserver
Master running (pid=23450)
$ <do your stuff>
$ ssh -O exit targetserver
Exit request sent.
$ ssh -O check targetserver
Control socket connect(/home/sorin/.ssh/cm_socket/sorin@192.0.2.3:22): No such file or directory
Obviously, these commands can be wrapped into your shell script as well.
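A minimal sketch of such a wrapper, assuming the config stanza above and placeholder localport/remotehost/remoteport values:
#!/bin/sh
# Open the tunnel through the control master, do the local work, tear it down.
ssh -fNT -Llocalport:remotehost:remoteport targetserver
ssh -O check targetserver || exit 1
# ... do your stuff against localhost:localport ...
ssh -O exit targetserver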

You could use a script similar to this (untested):
#!/bin/bash
# Start ssh as a coprocess; its stdin/stdout are available via ${COPROC[@]}.
coproc ssh -L 8080:localhost:8080 user@server
# Run the local command while the tunnel is up.
./run-local-command
# Tell the remote shell to exit, which closes the tunnel.
echo exit >&${COPROC[1]}
wait

Related

run noninteractive ssh command in background immediately suspends the job

(1) I can run the following command and get the output successfully
ssh server hostname
(2) If I run it in background (not to background hostname, but to background ssh)
ssh server hostname &
and do nothing other than wait, I can get the output
(3) However, if before it finishes I type any key to the terminal, the job immediately turns into suspended state
[ZSH] suspended (tty input) ssh server hostname
[BASH] Stopped ssh server hostname
What is the reason for this and how to solve it?
I just use hostname as an example. You can try using sleep 5 instead if the program returns too quickly. The actual program I want to run lasts for minutes.
Use ssh -T -f server hostname as the manual page states:
-f requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background. This
implies -n. The recommended way to start X11 programs at a
remote site is with something like ssh -f host xterm.
If the ExitOnForwardFailure configuration option is set to “yes”,
then a client started with -f will wait for all remote port forwards
to be successfully established before placing itself in the background.
-T Disable pseudo-tty allocation.
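Putting the two flags together for a long-running remote command, a sketch (the sleep stand-in and the output file name are placeholders):
# -T disables tty allocation and -f backgrounds ssh after authentication,
# so stray keystrokes in the terminal no longer suspend the job.
ssh -T -f server 'sleep 300 && hostname' > output.txt 2>&1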

Enable keepalives in Plink

We are using Plink for a tunnel to a MySQL. We are using it in this format:
plink.exe -L [Port of our client]:[my-sql server host name]:3306 [bridge server ssh username]@[bridge server IP] -i [private key]
We cannot find an option to prevent the connection to be closed, a sort of keepalive.
How could we achieve this?
Instead of a keepalive that plink manages internally, another option is to use the shell that is created on the host to keep sending short bits of data on the wire. This can be done through a very simple shell script such as:
while true;
do echo 0;
sleep 30s;
done
This very simple bash script will write the character 0 every 30 seconds to the screen.
A full example of the whole command line when invoking plink:
plink -P 443 [user@]host.com -R *:80:127.0.0.1:80 -C -T while true; do echo 0; sleep 30s; done
Plink does not have any command-line option for keepalives.
All you can do is configure a stored session in the PuTTY GUI with the keepalive on and then re-use that session in Plink using the -load switch.
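For instance, assuming a session saved in the PuTTY GUI under the (placeholder) name mysql-bridge with "Seconds between keepalives" set, the tunnel from the question could be started like this:
plink.exe -load "mysql-bridge" -N -i [private key] -L [Port of our client]:[my-sql server host name]:3306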

Creating SSH tunnel without running the ssh command

Establishing an SSH tunnel can be done from the command line by explicitly running
ssh -N -f -L 18888:192.168.224.143:8888 username@192.168.224.143
or defining tunnel in ~/.ssh/config file
Host tunnel
HostName 192.168.224.143
IdentityFile ~/.ssh/mine.key
LocalForward 18888 192.168.224.143:8888
User username
and then running,
ssh -f -N tunnel
Is there a way to start this tunnel without running the ssh -f -N tunnel command explicitly?
I would like to establish this tunnel whenever my machine boots up, but I do not want to add it to an init script. Can it be done with the SSH configuration itself?
No. SSH configuration is not designed to start anything for you automatically. You need to add it to your startup applications or an init script/systemd service if you want it started automatically once the network is up.
I also recommend using autossh, which will take care of re-establishing the tunnel if it fails for some reason.
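As a sketch, assuming autossh is installed and the Host tunnel stanza above is in place, the command that your startup mechanism would need to run could be as simple as:
# autossh re-spawns ssh if the tunnel drops; -M 0 disables the extra
# monitoring port and relies on ssh's own ServerAlive* keepalives instead.
autossh -M 0 -f -N tunnel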

Plink never completes a private key ssh connection, doesn't timeout, key isn't refused. Putty works fine

My ultimate goal is to use MyEnTunnel to set up a tunnel between a Windows server at location A, and a BSD machine at location B so that I can access a database server running at location B locally at A. (localhost:3054 ======> bsdmachine:3050) MyEnTunnel is essentially a Windows Service wrapper for plink.
We use a private key for ssh access at location B. PuttyGen was used to convert the private key into a .ppk file to be compatible with putty, plink, etc. Putty connects to the BSD machine using the .ppk with no problems whatsoever.
I copied the command line string MyEnTunnel is using to establish the connection, pasted it into a directory with the latest version of putty, plink, etc. (in case MyEnTunnel's plink.exe is outdated), and it still failed.
plink.exe 192.168.0.233 -N -ssh -2 -P 916 -l "root" -C -i "keyfile.ppk" -L 3054:192.168.0.208:3050
The BSD machine has several jails running; .233 is the host and accepts SSH connections, while .208 is a jail with a server listening on 3050 that will not accept ssh connections.
I use tunnels so rarely, I always forget the proper order of things and when I'm supposed to use -R and -L, so I tried the 16 possibilities. ;-) I then started plink with the bare options:
plink.exe 192.168.0.233 -N -ssh -2 -P 916 -l "root" -i "keyfile.ppk"
Putty, with these settings, connects without a hitch. Plink reports:
Using username "root".
And proceeds to do nothing forever.
What am I doing wrong, and what would establish the tunnel with the local listening port 3054, and the target port 3050 at 192.168.0.208?
You used the -N flag, which tells plink not to start a remote shell or command, so once the connection is up there is nothing left to print. If you add the -v flag you can see all the activity of the forward/tunnel.
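In other words, the original command line was most likely already working; adding -v just makes the tunnel activity visible, e.g.:
plink.exe 192.168.0.233 -v -N -ssh -2 -P 916 -l "root" -C -i "keyfile.ppk" -L 3054:192.168.0.208:3050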

SSH to multiple hosts at once

I have a script which loops through a list of hosts, connecting to each of them with SSH using an RSA key, and then saving the output to a file on my local machine - this all works correctly. However, the commands to run on each server take a while (~30 minutes) and there are 10 servers. I would like to run the commands in parallel to save time, but can't seem to get it working. Here is the code as it is now (working):
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh"
done
How can I speed this up?
You should add & to the end of the ssh call so it runs in the background:
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
I tried using & to send the SSH commands to the background, but I abandoned this because after the SSH commands are completed, the script performs some more commands on the output files, which need to have been created.
Using & made the script skip directly to those commands, which failed because the output files were not there yet. But then I learned about the wait command which waits for background commands to complete before continuing. Now this is my code which works:
for host in $HOSTS; do
  echo "Connecting to $host"..
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
wait
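Since the question also mentions saving each server's output to a local file, here is a hedged variation of the same loop (the report_$host.txt naming is just an illustration):
for host in $HOSTS; do
  echo "Connecting to $host"..
  # Run each report in parallel, capturing its output locally per host.
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" > "report_$host.txt" 2>&1 &
done
# Wait for every background ssh to finish before post-processing the files.
wait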
Try massh http://m.a.tt/er/massh/. This is a nice tool to run ssh across multiple hosts.
The Hypertable project has recently added a multi-host ssh tool. This tool is built with libssh and establishes connections and issues commands asynchronously and in parallel for maximum parallelism. See Multi-Host SSH Tool for complete documentation. To run a command on a set of hosts, you would run it as follows:
$ ht ssh host00,host01,host02 /data/reports/formatted_report.sh
You can also specify a host name or IP pattern, for example:
$ ht ssh 192.168.17.[1-99] /data/reports/formatted_report.sh
$ ht ssh host[00-99] /data/reports/formatted_report.sh
It also supports a --random-start-delay <millis> option that will delay the start of the command on each host by a random time interval between 0 and <millis> milliseconds. This option can be used to avoid thundering herd problems when the command being run accesses a central resource.
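For example, assuming the option takes the delay in milliseconds as described (exact syntax may differ; check the tool's documentation), a staggered run might look like:
$ ht ssh --random-start-delay 5000 host[00-99] /data/reports/formatted_report.sh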