How to do ssh jump over two jump hosts in command line

I can't get a connection chain with an ssh one-liner to work.
Chain:
My PC -> jumphost -> Bastion -> my app X host(sharing subnet with Bastion)
- Jumphost expects private key A
- Bastion and X host both expect private key B
my pc> ssh -i /path_to_priv_key_for_X/id_rsa -o StrictHostKeyChecking=no -o
"ProxyCommand ssh -p 22 -W %h:%p -o \"ProxyCommand ssh -p 24 -W %h:%p
-i /path_to_key_jump/id_rsa jumphostuser@jumphostdomain\" -i
/path_to_bastion_key/id_rsa bastionuser@ip_to_bastion" myappuser@subnet_ip
Above does not work, but
ssh -i /path_to_bastion_key/id_rsa -o "ProxyCommand ssh -p 24 -W
%h:%p -i /path_to_key_jump/id_rsa jumphostuser@jumphostdomain"
bastionuser@ip_to_bastion
works, so I can reach the bastion with a one-liner, but adding the app X host to the command chain does not work. I wonder why?
I can access the app X host manually, step by step, like this:
mypc> ssh -p 24 -i path_to_key_jump/id_rsa jumphostuser@jumphostdomain
jumphost> ssh -i /path_to_bastion_key/id_rsa bastionuser@ip_to_bastion
bastion> ssh myappuser@subnet_ip
myapp>
How can I make two hops over two jump hosts, each requiring a different key, on the command line without an ssh config file?

Something which works surprisingly well for me is ssh with the -J option:
-J destination
Connect to the target host by first making a ssh connection
to the jump host described by destination and then establishing a TCP
forwarding to the ultimate destination from there.
In fact, it's a feature of -J that I was not aware of for a very long time:
Multiple jump hops may be specified separated by comma characters.
So a multi-hop like PC -> jump server 1 -> jump server 2 -> target server (in my example: PC -> vpn -> vnc -> ece server) can be done with one combo:
$ ssh -J vpn,scs694@tr200vnc rms@tr001tbece11
Of course, it is most handy to have ssh keys set up for password-less connections (PC -> vpn, vpn -> vnc and vnc -> target).
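Applied to the chain in the question above, a -J one-liner could look like this (a sketch only: it assumes both private keys have been loaded into ssh-agent first, since a single -i on the command line is not handed on to the intermediate hops):
# load the keys for the individual hops into the agent
ssh-add /path_to_key_jump/id_rsa /path_to_bastion_key/id_rsa /path_to_priv_key_for_X/id_rsa
# jump host on port 24, then bastion, then the app host
ssh -J jumphostuser@jumphostdomain:24,bastionuser@ip_to_bastion myappuser@subnet_ip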
I hope it will help,
Jarek

To add to the above: my use case was a triple-hop to a database server, which looked like Server 1 (Basic Auth) --> Server 2 (Token) --> Server 3 (Basic Auth) --> DB Server (Port Forward).
After quite a few hours of turmoil, the solution was:
ssh -v -4 -J username@server1,username@server2 -N username@Server3 -L 1122:dbserver:{the_database_port_number}
Then I was able to just have the DB client hit localhost:1122, where 1122 can be any free port number on your localhost.
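For completeness, pointing a client at the forward might look like this (a sketch only: mysql and dbuser are placeholders, since the answer doesn't say which database or client is used):
# connect through the local end of the tunnel
mysql -h 127.0.0.1 -P 1122 -u dbuser -p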

How to remotely capture traffic across multiple SSH hops?

I want to debug another machine on my network but have to pass through one or more SSH tunnels to get there.
Currently:
# SSH into one machine
ssh -p 22 me@some_ip -i ~/.ssh/00_id_rsa
# From there, SSH into the target machine
# Note that this private key lives on this machine
ssh -p 1234 root@another_ip -i ~/.ssh/01_id_rsa
# Capture debug traffic on the target machine
tcpdump -n -i eth0 -vvv -s 0 -XX -w tcpdump.pcap
But then it's a pain to successively copy that .pcap out. Is there a way to write the pcap directly to my local machine, where I have wireshark installed?
You should use ProxyCommand to chain ssh hosts and pipe the output of tcpdump directly into Wireshark. To achieve that, create the following ssh config file:
Host some_ip
    IdentityFile ~/.ssh/00_id_rsa

Host another_ip
    Port 1234
    ProxyCommand ssh -o 'ForwardAgent yes' some_ip 'ssh-add ~/.ssh/01_id_rsa && nc %h %p'
I tested this with full paths, so be careful with ~.
To see the live capture you should use something like
ssh another_ip "tcpdump -s0 -U -n -w - -i eth0 'not port 1234'" | wireshark -k -i -
If you want to just dump the pcap locally, you can redirect stdout to a filename of your choice.
ssh another_ip "tcpdump -n -i eth0 -vvv -s 0 -XX -w -" > tcpdump.pcap
See also:
https://serverfault.com/questions/337274/ssh-from-a-through-b-to-c-using-private-key-on-b
https://serverfault.com/questions/503162/locally-examine-network-traffic-of-remote-machine/503380#503380
How can I have tcpdump write to file and standard output the appropriate data?

SSH tunnel to Database from two level of jump server with different keys

I have a database server on AWS and from my PC I have to access that database using ssh tunneling for the scenario below.
PC --> Jump1 [x.pem, port:22] --> Jump2 [y.pem, port:443] --> mysqldb:3306
For this kind of scenario, a config file is the best way to do it.
Run
$ touch ~/.ssh/config
Add host entries to the config file:
Host <Host_Name>
    HostName <URL/IP of Jump2>
    User <>
    Port <>
    IdentityFile <yyy.pem>
    StrictHostKeyChecking no
    ProxyCommand ssh -i <xxx.pem> <user>@<IP/DNS of Jump1> nc %h %p 2> /dev/null
and then to create a tunnel
$ ssh -L <local_port>:<DB_URL>:<DB_port> <Host_Name>
that's it.
Now you can connect to DB using
localhost:<local_port>
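Filled in for the scenario above, the config entry and tunnel could look roughly like this (a sketch only: jump2, ec2-user, the key paths and the DB endpoint are placeholders, not values from the question):
Host jump2
    HostName <IP of Jump2>
    User ec2-user
    Port 443
    IdentityFile ~/keys/y.pem
    StrictHostKeyChecking no
    ProxyCommand ssh -i ~/keys/x.pem -p 22 ec2-user@<IP of Jump1> nc %h %p 2> /dev/null
and then:
# local port 3307 -> mysqldb:3306, through both jumps
$ ssh -N -L 3307:<mysqldb endpoint>:3306 jump2
$ mysql -h 127.0.0.1 -P 3307 -u dbuser -p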
If you already have your public keys in authorized_keys on the respective hosts,
then you can use the -J option, like this:
ssh -J user1@host1 user2@host2
If you have more than one jump host, you can concatenate them inside the -J option like this:
ssh -J user1@host1,user2@host2,user(n-1)@host(n-1) userN@hostN
I also use port forwarding, so it carries your port data all the way to the last host and then connects to the specific site, like this:
ssh -L 8080:microsoft.com:80 -J user1@host1 user2@host2
It will create an unencrypted connection only from host2 to microsoft.com:80.
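To verify such a forward, you could hit the local end with curl (a usage sketch; the Host header is set explicitly because the remote virtual host expects its own name):
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: microsoft.com' http://localhost:8080/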

Ansible percent expand

I have an ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly, however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent expressions are not expanded by Ansible, but by ssh later on.
Sorry, forgot to add the most important part
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify it on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
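Putting that together with the original task, a sketch of the flush with the port and remote user supplied explicitly (this assumes the standard ansible_port and ansible_user inventory variables of recent Ansible; older versions named them ansible_ssh_port and ansible_ssh_user):
# a sketch only: the user is passed too so the %r part of the ControlPath
# expands to the remote user (vagrant) instead of the local one
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ ansible_port }} -o User={{ ansible_user }} {{ inventory_hostname }}
  delegate_to: 127.0.0.1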

ssh -L forward multiple ports

I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the target of a secured machine, and PORT represents the ports I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After performing this, I can access through localhost:PORT.
The main problem occurred now that I actually have 4 of these ports that I have to forward.
My solution is to open 4 shells and constantly search my history backwards to find exactly which ports need to be forwarded, and then run this command, one in each shell (having to fill in passwords etc.).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make it easier to do this?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly as NaN answered: you specify multiple -L arguments. I do this all the time. Here is an example of multi-port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: -L 8822:REMOTE_IP_1:22 is the same as -L localhost:8822:REMOTE_IP_1:22; the bind address defaults to localhost if you don't specify it.
Now with this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this into a script or automating it if you have many different hosts/ports to forward, each to certain specific ones.
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
    Hostname 10.122.0.3
    User username
    LocalForward PORT_1 IP:PORT_1
    LocalForward PORT_2 IP:PORT_2
    LocalForward PORT_3 IP:PORT_3
    LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
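If you only want the forwards without an interactive shell, the same config entry can be used with -N (and -f to push it into the background); a usage sketch:
ssh -f -N all-port-forwards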
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  for i in ${@:2}
  do
    echo Forwarding port $i
    ssh -N -L $i:localhost:$i $1 &
  done
}
Usage example:
pfwd hostname {6000..6009}
jbchichoko and yuval have given viable solutions. But jbchichoko's answer isn't flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with Ctrl+C because they run in the background. My solution below fixes both flaws:
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap multiple ports
function fsshmap() {
  echo -n "-L 1$1:127.0.0.1:$1 " > $HOME/sh/sshports.txt
  for ((i=($1+1);i<$2;i++))
  do
    echo -n "-L 1$i:127.0.0.1:$i " >> $HOME/sh/sshports.txt
  done
  line=$(head -n 1 $HOME/sh/sshports.txt)
  cline="ssh "$3" "$line
  echo $cline
  eval $cline
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, both I and my team members need access to 3 ports of an unreachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see the -f and -N params) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used user root but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server with that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP) and ports 9006 and 9100 on the reachable machine correspond to the same ports of the target one (they host two web services in my case).
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100 ) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is facilitating the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for reference.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by using two ssh -L connected with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C and finally login to the remote work machine Host-D. This is not usually the case but might happen sometime. It's an extension of Situation 2 and the same logic can be applied on more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C and port_D can be arbitrary numbers, except for the common port numbers listed here. In Situation 1, port_A and port_B can be the same to simplify the procedure.
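As a concrete illustration of Situation 2, with arbitrary port numbers filled in (8888, 8889 and 8890 are just examples):
ssh -L 8888:localhost:8889 user@Host-B -t ssh -L 8889:localhost:8890 user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=8890
Then browse to http://localhost:8888/ on Host-A.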
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
- first, it creates a single background process and not one per port
- it defines an alias that allows you to kill your tunnels
- it binds only to 127.0.0.1, which is a little more secure
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
  TUNNEL="ssh -N "
  echo Port forwarding for ports:
  for i in ${@:2}
  do
    echo " - $i"
    TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
  done
  TUNNEL="$TUNNEL $1"
  $TUNNEL &
  PID=$!
  alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you would be able to connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
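For example (a sketch; 1080 is an arbitrary free local port and the host names are placeholders):
# open a SOCKS5 proxy on local port 1080 via the ssh server
ssh -D 1080 -N user@jumphost
# point a SOCKS-aware client at it, e.g. curl:
curl --socks5-hostname 127.0.0.1:1080 http://internal-service:8080/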
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
# start
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
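You can also ask the control master whether it is still alive before stopping it, using the same socket variables as above:
# check
$ ssh -S $SOCKET -O check $HOST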
I've developed loco to help with ssh forwarding. It can be used to expose remote ports 5000 and 7000 locally at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done using parallel execution with xargs -P 0.
Create a file (here called port-forward) with the port bindings, e.g.:
localhost:8080:localhost:8080
localhost:9090:localhost:9090
then run
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
or you can do a one-liner
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: the ssh port-forwards are independent of each other, avoiding a single point of failure.
Cons: each ssh port-forward is forked separately, which is somewhat inefficient.
Second, it can be done using the curly-bracket expansion feature in bash:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: a single fork == efficient.
Cons: closing this one ssh process closes all of the forwards == a single point of failure.
You can use this zsh function (it probably works with bash, too; put it in ~/.zshrc):
ashL () {
  local a=() i
  for i in "$@[2,-1]"
  do
    a+=(-L "${i}:localhost:${i}")
  done
  autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "$a[@]"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db@114.39.161.24 {6000..6050} # Forwards the whole range. This is simply shell syntax sugar.

How to pass ssh arguments into a ProxyCommand

I have been struggling with setting up a ProxyCommand to ssh through multiple hops. The issue I am having is integrating the arguments from my normal ssh statement into the config file. I want to connect to IP2 via IP1. My username is greg and I am connecting using an RSA key. This is the one-liner that will connect me:
ssh -A -t -p 22 -i ~/.ssh/private_key greg@IP1 ssh -A -t greg@IP2
I have tried a bunch of different config setups and currently I am using:
Host ezConnect
    ProxyCommand ssh %h nc IP2 22
    HostKeyAlias IP2
    HostName IP1
    User greg
I know the issue is that it does not include the arguments I need, but wherever I try to put them, it seems to break.
The reason I'm doing this is that I need to use a DB GUI (Navicat) to connect through a gateway server, and the UI doesn't support a straight-up ssh command.
Any help would be appreciated.
I figured it out, so here is the correct config file:
Host ezCon
    Hostname **IP2**
    User greg
    ProxyCommand ssh -l greg -p 22 -i ~/.ssh/private_key **IP1** -W %h:%p
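With that entry saved in ~/.ssh/config, the whole chain then reduces to a single command (a usage note):
ssh ezCon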