I wish to tunnel a port over SSH from a remote host. I wish to implement this as an oclif plugin; I want the user experience to look like this:
laptop$ give-jupyter
http://localhost:4040/
laptop$ kill-jupyter
laptop$
...and that should be relatively straightforward; I just need to maintain a pidfile, right? Something like:
import child_process from 'child_process';

// Spawn ssh detached from our process group, with stdio ignored,
// so it keeps running after this CLI process exits
const childProcess = child_process.spawn('ssh', [/* flags */], {detached: true, stdio: 'ignore'});
childProcess.unref();
writeToSomePidfile(childProcess.pid);
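(The kill-jupyter side would then just read the pidfile back; in shell terms, a minimal sketch, with the pidfile path being a made-up placeholder:)
# Sketch of the teardown: stop the tunnel recorded in the pidfile
kill "$(cat ~/.give-jupyter.pid)" && rm -f ~/.give-jupyter.pid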
However, all of this is basically beside the point. The problem is that I have to figure those flags out! Well, okay; this works:
laptop$ ssh machine -L 4040:localhost:4040
machine$
...however, it also opens a shell on the remote end. No problem, man ssh says that's what -N is for:
laptop$ ssh machine -LN 4040:localhost:4040
That's great, but it's now taking my user's shell hostage. Fine, let's just send the process to the background:
laptop$ ssh machine -LN 4040:localhost:4040 &
laptop$ f
f: file not found
laptop$ fg
^C
The background version of ssh enters a race condition with the shell over STDIN, and everything is positively terrible. Fine, man ssh says that's what -n is for:
laptop$ ssh machine -nLN 4040:localhost:4040 &
laptop$
Job 1, 'ssh machine -nNL 40…' has ended
...well, that's just great: ssh now quits immediately, and so does the tunnel.
man ssh mentions that -f should enable some sort of background mode, but ssh -fnN doesn't do it either; ssh still quits immediately.
If I can't have nice things, maybe I can approximate them with a command that will run forever even with no STDIN. Server Fault suggests:
laptop$ ssh machine -nL 4040:localhost:4040 tail -f /dev/null &
laptop$
Still no good?! Fine:
laptop$ ssh machine -nL 4040:localhost:4040 sleep infinity &
laptop$
That seems to work, at the low, low cost of one tiny process; a far sight better than other iterations I had tried while writing this question, mostly involving yes...
Is there however a... less kludgy way to run an SSH tunnel in the background? Bonus points: I need this to work on OSX laptops too...
I'd personally create a file on the system that binds to the tunnel, using an -S control socket. It's easier than dealing with PIDs, for sure.
Open the tunnel:
ssh -M -S ~/jupyter-tunnel -o "ExitOnForwardFailure yes" -fN machine -L 4040:localhost:4040
Close the tunnel:
ssh -S ~/jupyter-tunnel -O exit machine
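(To script around this, -O check tells you whether the master behind the socket is still alive; a small sketch:)
# Exit status is 0 while the tunnel's master connection is alive
if ssh -S ~/jupyter-tunnel -O check machine 2>/dev/null; then
  echo "tunnel is up"
else
  echo "tunnel is down"
fi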
Related
I want to establish a stable ssh tunnel between two machines. I have been using autossh for this in the past. However, the present setup does not allow me to perform local port forwarding (this is disabled in sshd_config on both sides for security reasons). As a consequence, autossh seems to get confused: it cannot set up the double, local-and-remote port forwarding it uses to "ping itself", so it appears to reset the ssh tunnel periodically. So I am considering instead a "pure ssh" solution, something like:
while true; do
  echo "start tunnel..."
  ssh -v -o ServerAliveInterval=120 -o ServerAliveCountMax=2 -R remote_port:localhost:local_port user@remote
  echo "ssh returned, there was a problem. sleep a bit and retry..."
  sleep 15
  echo "... ready to retry"
done
My question is: are there any guarantees or stability features that I "used to have" with autossh but will not have with the new solution? Anything I should be aware of? This solution should check that the server is alive and communicating, thanks to the two -o options, and restart the tunnel if needed, right?
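(A sketch of the same loop with two extra hardening options I'd consider: -N, since no remote shell is needed, and ExitOnForwardFailure, so ssh exits, and the loop retries, if the remote side cannot set up the forwarding. Placeholders as above.)
while true; do
  echo "start tunnel..."
  # -N: no remote command needed, just the forwarding.
  # ExitOnForwardFailure: make ssh exit (so the loop retries)
  # if the remote end refuses to set up the -R forwarding.
  ssh -N -o ServerAliveInterval=120 -o ServerAliveCountMax=2 \
      -o ExitOnForwardFailure=yes \
      -R remote_port:localhost:local_port user@remote
  echo "ssh returned, sleeping a bit before retrying..."
  sleep 15
done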
I want to create a port forwarding using ssh's -L option. The problem is that I use connection sharing to the remote host. So, depending on whether there is already a connection providing a master, I need either
ssh -O forward -L ... $remotehost
(if there is already a master) or
ssh -N -L ... $remotehost
I could use something like:
if ssh -O check $remotehost 2>/dev/null; then
  ssh -O forward -L ... $remotehost
else
  ssh -N -L ... $remotehost
fi
But this is racy, and from C code it would be easier if there were an option that makes ssh automatically start a master if there is none yet. For "normal" invocations you could use -o "ControlMaster auto", but this doesn't do the right thing here. I fail to find such an option in the docs, however, and wonder if I missed something.
So my question is: is there a catch-all command that adds a port forward independently of the connection-sharing settings, and that maybe even works if multiplexing isn't enabled at all?
ssh -N -L ... $remotehost doesn't seem to do anything at all if an already established connection is used. Is this a bug?
(Of course ssh -S none -N -L ... $remotehost works, but the obvious downside is that the maybe already existing connection isn't used then.)
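(One way to narrow, though not eliminate, the race would be to drop the separate check and branch on the exit status of -O forward itself; a sketch, keeping the placeholders from above:)
# Try to add the forward to an existing master; if that fails
# because no master is running, fall back to a dedicated connection.
ssh -O forward -L ... "$remotehost" 2>/dev/null ||
  ssh -N -L ... "$remotehost"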
It seems that this was old knowledge: ssh -N -L ... $remotehost does the right thing in my setup. Probably fixed since I last checked this problem for real... No points for a "well researched question" :-)
Checking the OpenSSH changelog, I didn't find this problem mentioned.
I have a strange SSH problem.
If I run this
ssh user@remote 'md5sum file.txt'
I get back the result as expected, but if I run this
ssh user@remote 'cat file.txt'
then it just sits there.
I'll attribute this problem to a bad network connection. I've started using Mosh, and it works magnitudes better than ssh.
I'm trying to set up an automatic rsync backup (using cron) over an ssh tunnel, but I am getting the error "Connection to localhost closed by remote host." I'm running Ubuntu 12.04. I've searched for help and tried many solutions, such as adding ALL:ALL to /etc/hosts.allow, checking for #MaxStartups 10:30:60 in sshd_config, setting UsePrivilegeSeparation no in sshd_config, and creating /var/empty/sshd, but none have fixed the problem.
I have autossh running to make sure the tunnel is always there:
autossh -M 25 -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
This seems to be running fine, and I've been able to use the tunnel for various rsync tasks, and in fact the first time I ran the following rsync task via cron it succeeded:
rsync -av --delete-after /tank/Documents/ peteman@10.0.1.5://Volumes/TowerBackup/tank/Documents/
with the status of each file and the output
sent 7331634 bytes received 88210 bytes 40215.96 bytes/sec
total size is 131944157313 speedup is 17782.61
Ever since that first success, every attempt gives me the following output
building file list ... Connection to localhost closed by remote host.
rsync: connection unexpectedly closed (8 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]
An rsync operation of a smaller subdirectory works as expected. I'd appreciate any ideas on what could be the problem.
It seems the issue is related to autossh. If I create my tunnel via ssh instead of autossh, it works fine. I suspect I could tweak the environment variables that affect the autossh configuration, but for my purposes I've solved the problem by wrapping the rsync command in a script that first opens a tunnel via ssh, executes the backup, then kills the ssh tunnel, thereby eliminating the need for the always-open tunnel created by autossh:
#!/bin/sh
#Start SSH tunnel
ssh -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
#execute backup commands
rsync -a /tank/Documents/ peteman@localhost://Volumes/TowerBackup/tank/Documents/ -e "ssh -p 2222"
#Kill SSH tunnel
pkill -f "ssh.*destination.address"
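(With the wrapper script in place, the cron side is just an ordinary crontab entry; the schedule and both paths below are placeholders.)
# Run the backup wrapper nightly at 02:00, logging its output
0 2 * * * $HOME/bin/backup-documents.sh >> $HOME/backup.log 2>&1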
I wish to use SSH to establish a temporary port forward, run a local command and then quit, closing the ssh connection.
The command has to be run locally, not on the remote site.
For example, consider a server in a DMZ: you need to allow an application on your machine to connect to its port 8080, but you have only SSH access.
How can this be done?
Assuming you're using OpenSSH from the command line....
SSH can open a connection that will sustain the tunnel and remain active for as long as possible:
ssh -fNT -Llocalport:remotehost:remoteport targetserver
You can alternately have SSH launch something on the server that runs for some period of time; the tunnel will stay open for that long. The SSH connection should remain open after the remote command exits, for as long as the tunnel is still in use. If you'll only use the tunnel once, specify a short "sleep" so the tunnel expires after use.
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 10
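(For instance, a one-shot fetch through the tunnel; the port numbers and the curl URL are made up for illustration.)
# ssh forks after setup (-f), runs "sleep 10" on the server, and the
# tunnel stays up while in use; the local curl connects in that window.
ssh -f -L8080:remotehost:8080 targetserver sleep 10
curl http://localhost:8080/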
If you want to be able to kill the tunnel from a script running on the local side, then I recommend you background it in your shell, then record the pid to kill later. Assuming you're using an operating system that includes Bourne shell....
#!/bin/sh
# Background ssh via the shell (no -f, so that $! is the ssh pid;
# with -f, ssh forks itself and $! would point at the exited parent)
ssh -Llocalport:remotehost:remoteport targetserver sleep 300 &
sshpid=$!
# Do your stuff within 300 seconds
kill $sshpid
If backgrounding your ssh using the shell is not to your liking, you can also use ssh's connection-sharing features to control a backgrounded process: the ControlMaster and ControlPath options are how you make this work. For example, add the following to your ~/.ssh/config:
host targetserver
  ControlMaster auto
  ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Now, your first connection to targetserver will set up a control, so that you can do things like this:
$ ssh -fNT -Llocalport:remoteserver:remoteport targetserver
$ ssh -O check targetserver
Master running (pid=23450)
$ <do your stuff>
$ ssh -O exit targetserver
Exit request sent.
$ ssh -O check targetserver
Control socket connect(/home/sorin/.ssh/cm_socket/sorin@192.0.2.3:22): No such file or directory
Obviously, these commands can be wrapped into your shell script as well.
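(For example, a minimal wrapper under the config above; run-local-command is a stand-in for whatever local work you need to do.)
#!/bin/sh
# Open the tunnel in the background through the control master
ssh -fNT -Llocalport:remotehost:remoteport targetserver
# Make sure the tunnel is torn down however the script exits
trap 'ssh -O exit targetserver' EXIT
./run-local-command   # placeholder for the local task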
You could use a script similar to this (untested):
#!/bin/bash
# Run ssh as a coprocess so we can write to its stdin later
coproc ssh -L 8080:localhost:8080 user@server
# Do the local work while the tunnel is up
./run-local-command
# Tell the remote shell to exit, which closes the tunnel
echo exit >&"${COPROC[1]}"
wait