How to specify RemoteForward in the ssh config file?

I'm trying to set up an SSH tunnel with remote port forwarding. The idea is to have a VPS act as a means to ssh into remotely deployed systems (which currently include a Raspberry Pi). Everything seems to work, but I run into issues when trying to move all arguments into the ~/.ssh/config file.
What does work is setting the HostName, User, Port and IdentityFile. However, setting the RemoteForward parameter does not seem to work.
The following works:
ssh -R 5555:localhost:22 ssh-tunnel
However, when using the following lines in the config file:
Host ssh-tunnel
...
RemoteForward 5555 localhost:22
The following command returns the message "Bad remote forwarding specification 'ssh-tunnel'"
ssh -R ssh-tunnel

Obviously, I found the answer almost immediately after posting the question. The -R flag requires you to give the remote forwarding specification on the command line. Because remote forwarding is set in the config file, you shouldn't add it to the command at all. However, something confusing occurs: aside from setting up the tunnel, you also ssh into the remote server. To avoid this, add the -f and -N flags. This results in the following command:
ssh -f -N ssh-tunnel
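For reference, a complete config entry might look like this (a minimal sketch; the host name, user, and key path are placeholder values):
Host ssh-tunnel
    HostName vps.example.com
    User tunneluser
    Port 22
    IdentityFile ~/.ssh/id_rsa
    RemoteForward 5555 localhost:22
With that in place, ssh -f -N ssh-tunnel picks up everything, including the forwarding, from the config file.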

Related

SSH to GitHub not working

SSH has been working fine for the last few weeks since I got my new PC. I've had no problems but today I started getting:
ssh: connect to host github.com port 22: resource temporarily unavailable
I did some googling and found that there is a common issue with WSL which sometimes causes this, but I'm unable to SSH from my bash shell, or from cmd/powershell.
This is the part that confuses me: if I do ssh -T git@192.30.253.113 I am prompted for the password to my key, it successfully authenticates and responds with "Hi alexmk92! You've successfully authenticated".
Great, that at least proves that my firewall isn't blocking SSH on port 22. But why does git@github.com throw the resource error? My initial thought was that this could be a DNS problem.
So I tried to configure my network adapter to use Google's DNS servers (8.8.8.8 and 8.8.4.4); I even configured the IPv6 DNS servers just in case. Following this I did an ipconfig /flushdns, attempted to connect via git@github.com again and BAM, the same result; however git@192.30.253.113 still works.
I'm guessing another potential cause is that github.com is behind a load balancer and one of the IPs in the cluster could be blacklisted somewhere on my machine? I'm just pulling guesses out of thin air now; any help would be greatly appreciated, this is driving me insane.
After some further Googling it turned out that my machine did not have a hosts entry for github.com and it was unable to automatically resolve it.
In Windows Subsystem for Linux I created a ssh config file
touch ~/.ssh/config
(for some reason the base distro of Ubuntu 18.04 from the Windows Store didn't have one). I then had to make sure the file permissions were correct:
chmod 755 ~/.ssh/config
Once the file was created, I edited it with
sudo nano ~/.ssh/config
and added github.com as a Host.
Host github.com
Hostname ssh.github.com
Port 22
Upon saving, I ran
sudo /etc/init.d/ssh restart
and attempted
ssh -T git@github.com
Everything now seems to be working.
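Side note: GitHub also documents SSH over the HTTPS port (ssh.github.com on port 443) for networks where port 22 is filtered; the Host block above would then look like this sketch:
Host github.com
    Hostname ssh.github.com
    Port 443
    User git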
In my case my ISP did not allow SSH, so it was not working from either cmd or WSL. I got around it using a VPN.
To have a successful SSH connection to GitHub, your SSH key has to be imported into GitHub:
1. Open Git Bash or a terminal
2. Run the command ssh-keygen
3. Choose all default options
4. A private and a public key get generated in the folder <user_home>/.ssh/
5. Log in to GitHub.com
6. Navigate to account settings
7. Choose "SSH and GPG keys" from the side navigation bar
8. Click "New SSH key"
9. Copy the public key content from <user_home>/.ssh/id_rsa.pub and save it there
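A minimal command sketch of those steps, assuming the default RSA key name:
ssh-keygen                    # accept the defaults
cat ~/.ssh/id_rsa.pub         # paste this output into GitHub's "New SSH key" form
ssh -T git@github.com         # verify; GitHub should greet you by username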

Replicating a PuTTY multihop tunnel in bash

I'm having a problem replicating my PuTTY SSH tunneling with Cmder bash (on a Windows machine).
1. I want to access a web interface on port 7183 on server_2. To get there I have to go through jump_server first and tunnel twice, as from the jump_server only port 22 is visible.
Steps with putty:
1. connect to jump_server with tunnel (L22 server_2:22) using username_1
2. connect to localhost with tunnel (L7183 localhost:7183) using username_2
After that, I'm able to access the web interface when I type localhost:7183 into the browser on my local machine.
Now I'm trying to reproduce this in Cmder, but I haven't been able to do it with one big command, nor with 2 separate commands:
ssh -L 7183:localhost:7183 username_1@jump_server ssh -L 22:localhost:22 -N username_2@server_2 -vvv
This is only the last command I tried, after interchanging ports and hosts without success.
2. Is the syntax different when I want to open port 12345 on my local machine and have it forwarded to port 21050 on server_2 or that would be remote tunneling?
Finally managed to achieve question 1 with:
ssh username_1@jump_server -L 22:server_2:22 -N -vvv
ssh -L 7183:localhost:7183 username_2@localhost
Now I'm able to access the web interface from server_2 on my localhost:7183.
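With OpenSSH 7.3 or newer, the two commands can likely be collapsed into one using the -J (ProxyJump) flag; a sketch using the same names:
ssh -J username_1@jump_server -L 7183:localhost:7183 -N username_2@server_2
This binds local port 7183 and forwards it through the jump host straight to server_2's port 7183, with no tunnel on port 22 needed.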

X11 forwarding of a GUI app running in docker

First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (so allow any clients to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: how can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: the first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
- The VNC server runs on machine A on the host (not inside a docker container).
- Now you just have to figure out how to get a GUI for inside a docker container (which is a much more trivial undertaking).
- If the docker container was NOT started from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus correctly pointed out, you also have to set the --net=host option to make this work.
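Combined, the command might look like this (a sketch; the image and application names are placeholders):
docker run --rm -it --net=host \
    -e DISPLAY=$DISPLAY \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    some-image some-gui-app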
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*So far so good, right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
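Put together as shell commands, the procedure might look like this (the IP 192.168.1.10, display number 10, and cookie are example values):
# on machine A, logged in via ssh -X/-Y
echo $DISPLAY                  # e.g. localhost:10.0
xauth list                     # note the MIT-MAGIC-COOKIE-1 line for your display
# inside the container: re-add the cookie with the host IP instead of <hostname>/unix
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 <cookie-from-xauth-list>
export DISPLAY=192.168.1.10:10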
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file however is important, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
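A docker run sketch to go with that (the image name is a placeholder; mounting the X socket is the usual companion step, as in the question's setup):
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-gui-image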
Usually it works via https://stackoverflow.com/a/61060528/429476.
But if you are running docker as a different user than the one used to ssh -X into the server, then copying the Xauthority alone did not help; it worked only along with volume-mapping the file.
Example: I sshed into the server as the alex user, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
Copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476 made it work:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
If you have an openssh-client greater than version 7.3, you can use the following command:
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead (Google says the -X is not needed in the proxy command, but I am suspicious):
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it when you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net=host and X11UseLocalhost yes (a composed command follows this list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
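Composed into one command, that might look like this (a sketch; the user and image names are placeholders):
docker run --rm -it --net=host \
    -u someuser \
    -e DISPLAY=$DISPLAY \
    -v $HOME/.Xauthority:/home/someuser/.Xauthority \
    some-image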
What follows is explanation of how everything works and other approaches to try
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listen port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts/ProxyCommand, the keys between the host and the container will again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container. In that case you can only have one active session per user, since new sessions will invalidate the previous ones by modifying the host's .Xauthority such that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost port 6000+X.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test it.
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container, to localhost:X.Y on the host, where ssh is waiting to forward x traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p flag.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are piping directly to the Unix domain socket on your local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1 or export it inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In original terminal run xclients
If the container is on --net=bridge and you can't use docker ports, enable sshd in the container and use the jump hosts method.

Creating SSH tunnel without running the ssh command

Establishing an SSH tunnel can be done from the command line by explicitly running:
ssh -N -f -L 18888:192.168.224.143:8888 username@192.168.224.143
or defining tunnel in ~/.ssh/config file
Host tunnel
HostName 192.168.224.143
IdentityFile ~/.ssh/mine.key
LocalForward 18888 192.168.224.143:8888
User username
and then running,
ssh -f -N tunnel
Is there a way to start this tunnel without running the ssh -f -N tunnel command explicitly?
I would like to establish this tunnel whenever my machine boots up. Do not want to add it in init script. Can it be done with SSH configuration itself?
No. SSH configuration is not designed to start something for you automatically. You need to add it to your startup applications or an init script/systemd service if you want it started automatically once the network is up.
I also recommend using autossh, which will take care of re-establishing the tunnel if it fails for some reason.
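As a sketch of that combination, a systemd user unit wrapping autossh could look like this (the unit name and autossh path are assumptions; it reuses the tunnel Host entry from the config above):
# ~/.config/systemd/user/ssh-tunnel.service
[Unit]
Description=Persistent SSH tunnel via autossh
After=network-online.target

[Service]
# -M 0 disables autossh's monitoring port; keepalives detect dead connections instead
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/bin/autossh -M 0 -N -o ServerAliveInterval=30 -o ServerAliveCountMax=3 tunnel
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
Enable it with systemctl --user enable --now ssh-tunnel.service. Note the unit runs ssh in the foreground (-N without -f) so systemd can supervise and restart it.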

Tunnelling VNC through two ssh hops

I've long sought a solution to tunnel to a machine behind a firewall, passing VNC (or other ports) through, like the one explained in this old usenet post, which I'll recap here:
I have to log through an intermediate machine, something like:
local $ ssh interim
interim $ ssh remote
remote $ ...any commands...
This works fine. But now I am trying to tunnel a vnc session from remote to local and I can't find the magic incantation, using either one or two steps.
I recently found a wonderfully simple and adaptable solution: simply tunnel the ssh to the target system through the connection to the firewall. Like such:
local $ ssh -L 2222:remote:22 interim
interim $ ...no need to do anything here...
In another local console you connect to localhost on port 2222, which is actually your remote destination:
local $ ssh -C -p 2222 -L 5900:localhost:5900 localhost
remote $ ...possibly start your VNC server here...
In yet another local console:
local $ xtightvncviewer :0
It's that simple. You can add any port forwarding you want to the 2nd command (-L localport:localhost:remoteport), just as if there weren't any intermediate firewall. For instance, for RDP: -L 3389:localhost:3389.
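For what it's worth, on OpenSSH 7.3 or newer the same result should be achievable in a single command with the -J jump-host flag, skipping the intermediate port 2222 entirely (a sketch using the host names above):
ssh -C -J interim -L 5900:localhost:5900 remote
Then point the VNC viewer at localhost as before.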