Dynamic port forwarding fails after turning off and on Google Cloud virtual machine (compute engine) - ssh

I'm connecting to my Spark cluster's master node with dynamic port forwarding so that I can open the Jupyter notebook web interface on my local machine.
I followed the instructions from this Google Cloud Dataproc tutorial: https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook
I created an SSH tunnel with the following command, as advised:
gcloud compute ssh --zone=<cluster-zone> --ssh-flag="-D" --ssh-flag="10000" --ssh-flag="-N" "<cluster-name>-m"
And opened the web interface:
<browser executable path> \
"http://<cluster-name>-m:8123" \
--proxy-server="socks5://localhost:10000" \
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost" \
--user-data-dir=/tmp/
It worked perfectly fine the first time I tried.
However, once I turned my Google Compute Engine instance off and turned it on again after a while, the exact same commands no longer work, giving the error message below:
debug1: Connection to port 10000 forwarding to socks port 0 requested.
debug2: fd 8 setting TCP_NODELAY
debug3: fd 8 is O_NONBLOCK
debug3: fd 8 is O_NONBLOCK
debug1: channel 2: new [dynamic-tcpip]
debug2: channel 2: pre_dynamic: have 0
debug2: channel 2: pre_dynamic: have 3
debug2: channel 2: decode socks5
debug2: channel 2: socks5 auth done
debug2: channel 2: pre_dynamic: need more
debug2: channel 2: pre_dynamic: have 0
debug2: channel 2: pre_dynamic: have 19
debug2: channel 2: decode socks5
debug2: channel 2: socks5 post auth
debug2: channel 2: dynamic request: socks5 host cluster-1-m port 8123 command 1
channel 2: open failed: connect failed: Connection refused
debug2: channel 2: zombie
debug2: channel 2: garbage collecting
debug1: channel 2: free: direct-tcpip: listening port 10000 for cluster-1-m port 8123, connect from ::1 port 49535 to ::1 port 10000, nchannels 3
debug3: channel 2: status: The following connections are open:
Waiting for help :D

The Jupyter notebook server is not relaunched after a reboot. You'll need to restart the notebook manually once the machine has booted, e.g.:
gcloud compute ssh <cluster-name>-m
nohup /usr/local/bin/miniconda/bin/jupyter notebook --no-browser > /var/log/jupyter_notebook.log 2>&1 &
Once the notebook server is up and running, you should be able to access the web UI through the proxy.
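Before retrying the proxy, it can help to confirm that the notebook server is actually listening on the master node; a quick check (a sketch, using the notebook port 8123 from the question):
# on <cluster-name>-m: verify something is listening on the notebook port
sudo ss -tlnp | grep 8123
# or fetch the notebook page locally and look at the status line
curl -sI http://localhost:8123 | head -1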
Note: In general, Dataproc does not support stopping or restarting the entire cluster.

Related

SSH port forwarding occasionally fails

I'm using SSH port forwarding to get to a DB behind a firewall. I use the following command (it forwards remote port 5432 to local port 5430):
ssh -i privatekey -v -N -A \
ec2-user@host -fNT -4 -L \
5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432
This command always returns exit code 0, but approximately once in ten runs it doesn't actually open the tunnel, and I get a connection refused error when I try to connect to localhost:5430.
I've checked the debug output and noticed one difference. The unsuccessful runs' debug output ends with this:
debug1: channel 0: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: forking to background
while the successful runs have three more lines after the "forking to background" line:
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
So I assume SSH fails to "enter interactive session". Is there a way to fight this bug and make the port forwarding command reliable?
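One possible mitigation (not from the original thread; a sketch assuming OpenSSH's ExitOnForwardFailure option) is to make ssh exit non-zero when it cannot set up the requested forwardings, and retry in a loop:
# with ExitOnForwardFailure=yes, ssh terminates (non-zero exit) if it
# cannot set up all requested forwardings instead of silently continuing,
# so a simple retry loop may make the tunnel reliable
for i in 1 2 3; do
  ssh -i privatekey -o ExitOnForwardFailure=yes -fNT -4 \
    -L 5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432 \
    ec2-user@host && break
  sleep 2
done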

Connection Refused: ssh to headless Raspberry Pi 3 B+

I downloaded Raspbian Lite and flashed it to an SD card with Etcher.
As per Raspbian's headless SSH tutorial, I created an empty ssh file in /boot:
touch /Volumes/boot/ssh
Then I connected an Ethernet cable from the Pi to an Apple AirPort Extreme.
When the Pi is booted, AirPort Utility on my MacBook shows 10.0.1.9 as a new device on the network.
From the MacBook:
$ ssh -vvv pi@10.0.1.9
OpenSSH_7.8p1, LibreSSL 2.6.2
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug2: resolve_canonicalize: hostname 10.0.1.9 is address
debug2: ssh_connect_direct
debug1: Connecting to 10.0.1.9 [10.0.1.9] port 22.
debug1: connect to address 10.0.1.9 port 22: Connection refused
ssh: connect to host 10.0.1.9 port 22: Connection refused
I've done this multiple times, recreating the ssh file at each boot and redownloading and reflashing the Raspbian image; it always fails.
I'm 100% certain that 10.0.1.9 is the Pi's local IP, because I've attempted this around 10 times and the address only appears when the Pi is on and the Ethernet cable is plugged in.
Run these commands to regenerate the host keys needed for SSH remote access:
sudo rm -r /etc/ssh/ssh*key
sudo dpkg-reconfigure openssh-server
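As a quick sanity check afterwards (a sketch, assuming a systemd-based Raspbian), verify that the SSH daemon is up and listening:
sudo systemctl status ssh    # should report active (running)
sudo ss -tlnp | grep :22     # should show sshd listening on port 22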

Kerberos: getting a ticket using SSH tunneling

I have to kinit as a certain principal locally, using its keytab.
Since the Kerberos KDC is on a remote server, which I can only reach over a VPN, I need to use SSH to access the server and tunnel to the service.
For this I did the following:
Copied the krb5.conf from the remote server and replaced my local one with it
Copied the keytab of interest
Created the tunnels, since I need access to the service:
ssh -L1088:localhost:88 -L10749:localhost:749 remote_server
Changed the local krb5.conf to:
admin_server = localhost:10749
kdc = localhost:1088
But when I try to kinit
KRB5_TRACE=/dev/stdout kinit -kt ${PRINCIPAL_KEYTAB}.keytab ${PRINCIPAL_NAME}
[12332] 1504171391.121253: Getting initial credentials for ${PRINCIPAL_NAME}
[12332] 1504171391.123940: Looked up etypes in keytab: des, des-cbc-crc, aes128-cts, rc4-hmac, aes256-cts, des3-cbc-sha1
[12332] 1504171391.124027: Sending request (227 bytes) to ${DOMAIN}
[12332] 1504171391.124613: Resolving hostname localhost
[12332] 1504171391.124988: Sending initial UDP request to dgram ::1:1088
[12332] 1504171391.125070: Sending initial UDP request to dgram 127.0.0.1:1088
[12332] 1504171391.125120: Initiating TCP connection to stream ::1:1088
[12332] 1504171391.125165: Terminating TCP connection to stream ::1:1088
[12332] 1504171391.125186: Initiating TCP connection to stream 127.0.0.1:1088
[12332] 1504171391.125216: Terminating TCP connection to stream 127.0.0.1:1088
kinit: Cannot contact any KDC for realm '${DOMAIN}' while getting initial credentials
I retried with ssh -vvv and got:
debug1: Connection to port 1088 forwarding to localhost port 88 requested.
debug2: fd 15 setting TCP_NODELAY
debug2: fd 15 setting O_NONBLOCK
debug3: fd 15 is O_NONBLOCK
debug1: channel 7: new [direct-tcpip]
debug3: send packet: type 90
debug1: Connection to port 1088 forwarding to localhost port 88 requested.
debug2: fd 16 setting TCP_NODELAY
debug2: fd 16 setting O_NONBLOCK
debug3: fd 16 is O_NONBLOCK
debug1: channel 8: new [direct-tcpip]
debug3: send packet: type 90
I tried tcpdump; locally I can see connection attempts, but I cannot find any packets arriving on the other side.
(I have edited all other information out of the krb5.conf above.)
What am I missing here, or is this possible at all?
PS:
netstat says the ports exist and are open on both machines.
I have no problem running kinit on the server itself.
PPS:
From what I can see, the KDC is actually listening on UDP port 88, not TCP. Could this be the problem?
I eventually resolved it using socat and ssh as follows (with the help of several tutorials).
We receive UDP packets on port 1088, but ssh tunnels only TCP, so with socat we can "transform" them:
locally$ socat -T15 udp4-recvfrom:1088,reuseaddr,fork tcp:localhost:1089
Now we create an ssh tunnel for that port to the remote server:
locally$ ssh -L1089:localhost:1089 remote_server
After that we transform the TCP packets arriving at 1089 back to UDP and redirect them to the KDC at port 88 via
server$ socat tcp4-listen:1089,reuseaddr,fork UDP:localhost:88
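Putting it together, the path is: kinit → UDP port 1088 (local socat) → TCP port 1089 (ssh tunnel) → TCP port 1089 on the remote side (socat) → UDP port 88 (KDC).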
Instead of having to tunnel UDP traffic as well, you could force Kerberos to use only TCP, as follows:
[realms]
MY.REALM = {
kdc = tcp/localhost:1088
master_kdc = tcp/localhost:1088
admin_server = tcp/localhost:1749
}
And now set up your TCP ssh tunnel as before:
ssh -L1088:kdc.server:88 -L1749:kdc.server:749 ssh.hop
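With the TCP-only config in place, you can test without replacing the system-wide file by pointing KRB5_CONFIG at the edited copy (the file path here is illustrative):
KRB5_CONFIG=$HOME/krb5-tunnel.conf KRB5_TRACE=/dev/stdout kinit -kt ${PRINCIPAL_KEYTAB}.keytab ${PRINCIPAL_NAME}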

Why does the forwarded SSH port still seem open even when the endpoint is down?

I set up an SSH port forwarding tunnel to a remote server RemoteServerSSH and forward port 55555 to a non-existent device (this is what I am trying to test).
$ hostname
MyMachine
Setting the forwarding tunnel
$ ssh -q -N -p 22 -vvv \
-i ~/.ssh/MyKey \
-o Compression=yes \
-o ServerAliveInterval=3 \
-o ServerAliveCountMax=3 \
-L *:55555:RemoteDownItem:9100 user@RemoteServerSSH
Testing the tunnel
When I telnet to the device directly, I get the correct behavior (not connected). However, when I try to reach it through the tunnel, telnet says it's connected:
$ telnet RemoteDownItem 9100 # Not Connected = OK
$ telnet MyMachine 55555 # Connected! Why? Should be the same as above
When I measure the telnet connection time, it is instantaneous (1 ms!).
It is the SSH client that answers me; the connection does not cross the SSH tunnel! Why?
Verbose
...
debug1: Local connections to *:55555 forwarded to remote address 10.220.9.183:9100
debug3: channel_setup_fwd_listener: type 2 wildcard 1 addr NULL
debug1: Local forwarding listening on 0.0.0.0 port 55555.
debug2: fd 4 setting O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug1: channel 0: new [port listener]
debug3: sock_set_v6only: set socket 5 IPV6_V6ONLY
debug1: Local forwarding listening on :: port 55555.
debug2: fd 5 setting O_NONBLOCK
debug3: fd 5 is O_NONBLOCK
debug1: channel 1: new [port listener]
debug2: fd 3 setting TCP_NODELAY
debug3: packet_set_tos: set IP_TOS 0x10
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Connection to port 55555 forwarding to 10.220.9.183 port 9100 requested.
debug2: fd 6 setting TCP_NODELAY
debug2: fd 6 setting O_NONBLOCK
debug3: fd 6 is O_NONBLOCK
debug1: channel 2: new [direct-tcpip]
Question
Is there an SSH parameter to forward the telnet connection directly to the endpoint?
Project Related Question
Telnet connect to non-existing address
Consider how a tunnel works. When you run something like ssh -L *:55555:RemoteDownItem:9100 user@host, the port forward is handled like this:
1. The local ssh instance binds to TCP port 55555 and listens for connections.
2. An "originator" connects to port 55555 on the local system. The local ssh instance accepts the TCP connection.
3. The local ssh instance sends a "direct-tcpip" request through the SSH connection to the remote ssh server.
4. The remote ssh server attempts to connect to host "RemoteDownItem" port 9100.
At step 4, if the ssh server is able to connect to the target of the tunnel, then the ssh client and server will each relay data between the originator and the target through the direct-tcpip channel.
Alternatively, at step 4, the server may not be able to make the TCP connection to the target. Or the server may be configured not to permit forwarding requests. In either case, it will respond to the client with an error (or a message saying the channel is closed).
At this point, the only thing that the local ssh instance can do is to close the TCP connection to the originator. From the perspective of the originator, it successfully connected to a "server" (the ssh client), and then the "server" almost immediately closed the connection.
The OpenSSH software doesn't contain any logic to handle this in a more sophisticated way. And handling it in a more sophisticated way may be difficult. Consider:
The remote SSH server has no idea whether it can connect to "RemoteDownItem" port 9100 until it tries. So it's problematic for ssh to figure out in advance that the port forward won't work.
Even if one attempt to connect to the target fails, the next attempt might succeed. So it's problematic for ssh to assume the port forward won't work, just because one attempt failed.
The remote SSH server could successfully connect to the target, and then the target could immediately close the TCP connection. So the ssh server, ssh client, and originator all have to handle this behavior anyway.
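You can watch this happen from the originator's side (a sketch, using nc in place of telnet): the TCP connect to the local listener succeeds instantly, and the connection is then dropped once the direct-tcpip open fails.
# connect to the local forwarded port; the ssh client accepts immediately
$ nc -v MyMachine 55555
# nc reports the TCP connection as succeeded, then the connection is
# closed almost immediately, once the remote ssh server reports that
# it cannot reach RemoteDownItem:9100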

Trying to make an SSH Tunnel

I configured a bastion server on AWS in my public subnet.
I can SSH directly to the EC2 instance inside the private subnet, using the bastion host.
I can connect to the bastion host and check that port 7474 on the private EC2 instance is open:
nc -v -z -w 5 10.0.3.102 7474; echo $?
Connection to 10.0.3.102 7474 port [tcp/*] succeeded!
0
I want to create an SSH tunnel from localhost (my home machine) to an EC2 instance on the private network:
ssh -v -C -N -L 9000:PRIVATE_MDM:7474 BASTION
But I am getting:
open failed: administratively prohibited: open failed
Authenticated to 52.32.240.40 ([52.32.240.40]:22).
debug1: Local connections to LOCALHOST:9000 forwarded to remote address PRIVATE_MDM:7474
debug1: Local forwarding listening on ::1 port 9000.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 9000.
debug1: channel 1: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 3: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
channel 3: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42685 to 127.0.0.1 port 9000, nchannels 4
debug1: channel 3: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42686 to 127.0.0.1 port 9000, nchannels 3
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42687 to 127.0.0.1 port 9000, nchannels 3
The BASTION machine has port forwarding disabled via the AllowTcpForwarding option. If you want port forwarding to work, you need to enable this option on that machine.
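For reference, this is the setting to change on the bastion (a sketch; editing sshd_config requires root, and the sshd service name may differ by distribution):
# in /etc/ssh/sshd_config on BASTION
AllowTcpForwarding yes
# then reload the daemon, e.g.: sudo systemctl reload sshd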
EDIT: Now I see the flaw there. Can you add a description of what you are trying to achieve? Forwarding an unused local port to an unused remote port does not make sense. You either forward an existing service on the remote side to your local port (with -L, local port forwarding), or the other way round, a local service to a remote port (with -R, remote port forwarding). Without this, you can't proceed further.
SOLUTION: The difference between the nc and ssh commands in the examples is the use of a direct IP address versus a hostname. The BASTION was not able to resolve PRIVATE_MDM, which caused the problem.
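In other words, substituting the instance's IP address from the nc test for the hostname makes the same tunnel work (a sketch, using the address from the question):
ssh -v -C -N -L 9000:10.0.3.102:7474 BASTION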