Kerberos: getting a ticket through SSH tunneling - ssh

I have to kinit locally as a certain principal using its keytab. The Kerberos KDC is on a remote server that I can only reach over VPN via SSH, so I need to tunnel the KDC service through SSH.
For this I did the following:
Copied krb5.conf from the remote server and replaced my local one with it.
Copied the keytab I need.
Since I need access to the service, created the tunnels:
ssh -L1088:localhost:88 -L10749:localhost:749 remote_server
Changed the local krb5.conf to point at the tunnels:
admin_server = localhost:10749
kdc = localhost:1088
But when I try to kinit:
KRB5_TRACE=/dev/stdout kinit -kt ${PRINCIPAL_KEYTAB}.keytab ${PRINCIPAL_NAME}
[12332] 1504171391.121253: Getting initial credentials for ${PRINCIPAL_NAME}
[12332] 1504171391.123940: Looked up etypes in keytab: des, des-cbc-crc, aes128-cts, rc4-hmac, aes256-cts, des3-cbc-sha1
[12332] 1504171391.124027: Sending request (227 bytes) to ${DOMAIN}
[12332] 1504171391.124613: Resolving hostname localhost
[12332] 1504171391.124988: Sending initial UDP request to dgram ::1:1088
[12332] 1504171391.125070: Sending initial UDP request to dgram 127.0.0.1:1088
[12332] 1504171391.125120: Initiating TCP connection to stream ::1:1088
[12332] 1504171391.125165: Terminating TCP connection to stream ::1:1088
[12332] 1504171391.125186: Initiating TCP connection to stream 127.0.0.1:1088
[12332] 1504171391.125216: Terminating TCP connection to stream 127.0.0.1:1088
kinit: Cannot contact any KDC for realm '${DOMAIN}' while getting initial credentials
I retried with ssh -vvv added and got:
debug1: Connection to port 1088 forwarding to localhost port 88 requested.
debug2: fd 15 setting TCP_NODELAY
debug2: fd 15 setting O_NONBLOCK
debug3: fd 15 is O_NONBLOCK
debug1: channel 7: new [direct-tcpip]
debug3: send packet: type 90
debug1: Connection to port 1088 forwarding to localhost port 88 requested.
debug2: fd 16 setting TCP_NODELAY
debug2: fd 16 setting O_NONBLOCK
debug3: fd 16 is O_NONBLOCK
debug1: channel 8: new [direct-tcpip]
debug3: send packet: type 90
I tried tcpdump: locally I can see the connection attempts, but I cannot find any packets arriving on the other side.
I removed all other information from krb5.conf.
What am I missing here, or is this possible at all?
PS:
netstat says the ports exist and are open on both machines.
I have no problem running kinit on the server itself.
PPS:
From what I can see, the KDC is actually listening on UDP port 88, not TCP. Could this be the problem?

I resolved it after all by using socat and ssh (and several tutorials) as follows:
kinit sends UDP packets to port 1088, but ssh tunnels only TCP, so with socat we can "transform" them:
locally$ socat -T15 udp4-recvfrom:1088,reuseaddr,fork tcp:localhost:1089
Now we create an SSH tunnel for that port to the remote server:
locally$ ssh -L1089:localhost:1089 remote_server
After that we convert the TCP packets arriving at 1089 back to UDP and redirect them to the KDC at port 88 via
server$ socat tcp4-listen:1089,reuseaddr,fork UDP:localhost:88
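With all three relays in place, the same kinit command as above should now reach the KDC through the tunnel:
KRB5_TRACE=/dev/stdout kinit -kt ${PRINCIPAL_KEYTAB}.keytab ${PRINCIPAL_NAME}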

Instead of having to tunnel UDP traffic as well, you could force Kerberos to use only TCP, as follows:
[realms]
    MY.REALM = {
        kdc = tcp/localhost:1088
        master_kdc = tcp/localhost:1088
        admin_server = tcp/localhost:1749
    }
And now set up your TCP SSH tunnel as before:
ssh -L1088:kdc.server:88 -L1749:kdc.server:749 ssh.hop
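If your client is MIT Kerberos, another way to prefer TCP globally (an addition of mine, not from the original answer) is the udp_preference_limit setting in [libdefaults]; with a value of 1, the library tries TCP before UDP for essentially every request:
[libdefaults]
    # MIT krb5: use TCP first for any message larger than this many bytes
    udp_preference_limit = 1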

Related

WSL2 SSH RemoteForward connect back

I'm trying to use rsync on my dev server to download files to my local machine after checking out a branch on the dev server.
Before using WSL2, I used to be able to do the following:
Remote server
rsync -ave "ssh -p 22001" --delete --exclude-from ~/rsync_exclude_list.txt ~/as/ alex@localhost:/home/alexmk92/code/project
Local SSH config
Host dev-tunnel
HostName dev.server.co.uk
User as
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
RemoteForward 22001 localhost:22
Host dev
HostName dev.server.co.uk
User as
RequestTTY yes
RemoteCommand cd as; bash
I can then run these with ssh dev and ssh -fvN dev-tunnel. If, from the remote server, I type ssh -p 22001 alex@localhost, then I get:
debug1: remote forward success for: listen 22001, connect localhost:22
debug1: All remote forwarding requests processed
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 22001, originator 127.0.0.1 port 34472
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=5
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connection failed: Connection refused
connect_to localhost port 22: failed.
debug1: channel 1: free: 127.0.0.1, nchannels 2
I'm guessing this is because WSL2 no longer runs on localhost and is instead isolated inside a Hyper-V virtual machine. That probably means Windows is receiving the request on localhost:22 (where no SSH server is running) and then hangs up the connection.
How can I forward the request to my WSL2 SSH process?
It is possible to add a port mapping to WSL2 machines using the following PowerShell script:
$port = 3000;
$addr = '0.0.0.0';
# Ask the WSL2 VM for its eth0 address
$remoteaddr = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteaddr -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ) {
    $remoteaddr = $matches[0];
} else {
    echo "Error: ip address of WSL 2 cannot be found";
    exit;
}
# Drop any stale mapping for this port, then forward Windows $addr:$port to the WSL2 VM
Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr"
Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteaddr"
echo "Success: Port mapping added!";
Of course, you need to change the port and maybe the IP address (first two lines).
You may need to run the script as administrator.
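To verify the mapping afterwards, you can list the active portproxy rules (standard netsh subcommand, added here as a quick check):
netsh interface portproxy show v4tov4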

Why does the forwarded SSH port still seem open even when the endpoint is down?

I set up an SSH port-forwarding tunnel to a remote server RemoteServerSSH and forward port 55555 to a non-existent device (this is what I am trying to test).
$ hostname
MyMachine
Setting the forwarding tunnel
$ ssh -q -N -p 22 -vvv \
-i ~/.ssh/MyKey \
-o Compression=yes \
-o ServerAliveInterval=3 \
-o ServerAliveCountMax=3 \
-L *:55555:RemoteDownItem:9100 user@RemoteServerSSH
Testing the tunnel
When I telnet to the device directly, I get the correct behavior (not connected). However, when I try to reach it through the tunnel, telnet says it's connected:
$ telnet RemoteDownItem 9100 # Not Connected = OK
$ telnet MyMachine 55555 # Connected! Why? should be same as above
When I measure the telnet connection time, it is instantaneous (1 ms!).
It is the SSH client that answers me; the connection never crosses the SSH tunnel. Why?
Verbose
...
debug1: Local connections to *:55555 forwarded to remote address 10.220.9.183:9100
debug3: channel_setup_fwd_listener: type 2 wildcard 1 addr NULL
debug1: Local forwarding listening on 0.0.0.0 port 55555.
debug2: fd 4 setting O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug1: channel 0: new [port listener]
debug3: sock_set_v6only: set socket 5 IPV6_V6ONLY
debug1: Local forwarding listening on :: port 55555.
debug2: fd 5 setting O_NONBLOCK
debug3: fd 5 is O_NONBLOCK
debug1: channel 1: new [port listener]
debug2: fd 3 setting TCP_NODELAY
debug3: packet_set_tos: set IP_TOS 0x10
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Connection to port 55555 forwarding to 10.220.9.183 port 9100 requested.
debug2: fd 6 setting TCP_NODELAY
debug2: fd 6 setting O_NONBLOCK
debug3: fd 6 is O_NONBLOCK
debug1: channel 2: new [direct-tcpip]
Question
Is there an SSH parameter to forward the telnet connection directly to the endpoint?
Project Related Question
Telnet connect to non-existing address
Consider how a tunnel works. When you run something like ssh -L *:55555:RemoteDownItem:9100 user@host, the port forward is handled like this:
1. The local ssh instance binds to TCP port 55555 and listens for connections.
2. An "originator" connects to port 55555 on the local system. The local ssh instance accepts the TCP connection.
3. The local ssh instance sends a "direct-tcpip" request through the SSH connection to the remote ssh server.
4. The remote ssh server attempts to connect to host "RemoteDownItem" port 9100.
At step 4, if the ssh server is able to connect to the target of the tunnel, then the ssh client and server will each relay data between the originator and the target through the direct-tcpip channel.
Alternatively, at step 4, the server may not be able to make the TCP connection to the target, or the server may be configured not to permit forward requests. In either case, it will respond to the client with an error (or a message saying the channel is closed).
At this point, the only thing that the local ssh instance can do is to close the TCP connection to the originator. From the perspective of the originator, it successfully connected to a "server" (the ssh client), and then the "server" almost immediately closed the connection.
The OpenSSH software doesn't contain any logic to handle this in a more sophisticated way. And handling it in a more sophisticated way may be difficult. Consider:
The remote SSH server has no idea whether it can connect to "RemoteDownItem" port 9100 until it tries. So it's problematic for ssh to figure out in advance that the port forward won't work.
Even if one attempt to connect to the target fails, the next attempt might succeed. So it's problematic for ssh to assume the port forward won't work, just because one attempt failed.
The remote SSH server could successfully connect to the target, and then the target could immediately close the TCP connection. So the ssh server, ssh client, and originator all have to handle this behavior anyway.
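If the underlying goal is to test whether the endpoint behind the tunnel is alive, one workaround (a sketch of mine, not part of the original answer; it assumes nc is installed on RemoteServerSSH) is to run the check on the remote side, where the TCP connection to the target is actually made:
$ ssh user@RemoteServerSSH nc -z -w 5 RemoteDownItem 9100; echo $?
A non-zero exit status means the remote server could not connect, which is exactly the failure the local telnet never sees.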

Dynamic port forwarding fails after turning a Google Cloud virtual machine (Compute Engine) off and on

I'm connecting to my Spark cluster's master node with dynamic port forwarding so that I can open the Jupyter notebook web interface on my local machine.
I followed the instructions from this Google Cloud Dataproc tutorial: https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook
I created the SSH tunnel with the following command, as advised:
gcloud compute ssh --zone=<cluster-zone> --ssh-flag="-D" --ssh-flag="10000" --ssh-flag="-N" "<cluster-name>-m"
And opened the web interface:
<browser executable path> \
"http://<cluster-name>-m:8123" \
--proxy-server="socks5://localhost:10000" \
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost" \
--user-data-dir=/tmp/
It worked perfectly fine the first time I tried.
However, after I turned my Google Compute Engine instance off and turned it back on a while later, the exact same commands no longer work, giving the error message below:
debug1: Connection to port 10000 forwarding to socks port 0 requested.
debug2: fd 8 setting TCP_NODELAY
debug3: fd 8 is O_NONBLOCK
debug3: fd 8 is O_NONBLOCK
debug1: channel 2: new [dynamic-tcpip]
debug2: channel 2: pre_dynamic: have 0
debug2: channel 2: pre_dynamic: have 3
debug2: channel 2: decode socks5
debug2: channel 2: socks5 auth done
debug2: channel 2: pre_dynamic: need more
debug2: channel 2: pre_dynamic: have 0
debug2: channel 2: pre_dynamic: have 19
debug2: channel 2: decode socks5
debug2: channel 2: socks5 post auth
debug2: channel 2: dynamic request: socks5 host cluster-1-m port 8123 command 1
channel 2: open failed: connect failed: Connection refused
debug2: channel 2: zombie
debug2: channel 2: garbage collecting
debug1: channel 2: free: direct-tcpip: listening port 10000 for cluster-1-m port 8123, connect from ::1 port 49535 to ::1 port 10000, nchannels 3
debug3: channel 2: status: The following connections are open:
Waiting for help :D
The Jupyter notebook server is not relaunched after a reboot. You'll need to manually restart the notebook yourself once the machine has booted, e.g.:
gcloud compute ssh <cluster-name>-m
nohup /usr/local/bin/miniconda/bin/jupyter notebook --no-browser > /var/log/jupyter_notebook.log 2>&1 &
Once the notebook server is up and running, you should be able to access the web UI through the proxy.
Note: In general, Dataproc does not support stopping or restarting the entire cluster.
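If you do stop and start the master anyway, one way to have the notebook come back on boot (a sketch, assuming cron is available on the image and the paths above are correct) is an @reboot crontab entry:
# add via `crontab -e` on <cluster-name>-m; /tmp is used because /var/log usually needs root
@reboot /usr/local/bin/miniconda/bin/jupyter notebook --no-browser > /tmp/jupyter_notebook.log 2>&1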

Trying to make an SSH tunnel

I configured a bastion server on AWS in my public subnet.
I can SSH directly to the EC2 instance inside the private subnet using the bastion host.
I can connect to the bastion host and check that port 7474 on the private EC2 instance is open:
nc -v -z -w 5 10.0.3.102 7474; echo $?
Connection to 10.0.3.102 7474 port [tcp/*] succeeded!
0
I want to create an SSH tunnel from localhost (my home machine) to an EC2 instance on the private network:
ssh -v -C -N -L 9000:PRIVATE_MDM:7474 BASTION
But I am getting:
open failed: administratively prohibited: open failed
Authenticated to 52.32.240.40 ([52.32.240.40]:22).
debug1: Local connections to LOCALHOST:9000 forwarded to remote address PRIVATE_MDM:7474
debug1: Local forwarding listening on ::1 port 9000.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 9000.
debug1: channel 1: new [port listener]
debug1: Requesting no-more-sessions#openssh.com
debug1: Entering interactive session.
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 3: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
channel 3: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42685 to 127.0.0.1 port 9000, nchannels 4
debug1: channel 3: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42686 to 127.0.0.1 port 9000, nchannels 3
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42687 to 127.0.0.1 port 9000, nchannels 3
The BASTION machine has port forwarding forbidden via the AllowTcpForwarding option. If you want port forwarding to work, you need to allow this option on that machine.
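A minimal sketch of that change, assuming the stock OpenSSH server layout on the bastion:
# in /etc/ssh/sshd_config on BASTION
AllowTcpForwarding yes
# then reload sshd, e.g. sudo systemctl reload sshd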
EDIT: Now I see the flaw there. Can you describe what you are trying to achieve? Forwarding an unused local port to an unused remote port does not make sense. You either forward an existing service on the remote side to your local port (local port forwarding, -L), or your local service to a remote port (remote port forwarding, -R). Without this, you can't proceed further.
SOLUTION: The difference between the nc and ssh commands in the examples is the use of a direct IP address versus a hostname. The BASTION host was not able to resolve PRIVATE_MDM, which caused the problem.
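In other words, pointing the forward at the IP the bastion can actually reach (the one used in the nc test above) should work:
ssh -v -C -N -L 9000:10.0.3.102:7474 BASTION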

Port 22: Connection Refused when you attempt to ssh in

On CentOS 7 I am faced with the error below:
ssh -vvv ##.###.###.###
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ##.###.###.### [##.###.###.###] port 22.
debug1: connect to address ##.###.###.### port 22: Connection refused
ssh: connect to host ##.###.###.### port 22: Connection refused
Bizarrely, I have been getting this issue for a while now when I try to SSH into my server. I have checked iptables, which looks fine, and checked the SSH config file, which also looks fine, but clearly something is wrong. How can I solve this?
If the connection is refused, it means the sshd daemon is not running or something is actively rejecting the connection. Can you log in to the server locally or via a console?
Try running the following as root on the target server:
lsof -i :22
or, on the source machine, see if you can connect to the SSH port:
telnet targethost 22
You should get something like the following:
# telnet localhost 22
Trying ::1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.9
Switching from firewalld to iptables had caused this issue. Thus, I needed to add the relevant rules to iptables to allow outbound and inbound connections on port 22.
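A sketch of the kind of rules implied, assuming the default CentOS 7 iptables-services setup (adjust to your existing policy):
iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
service iptables save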