Use ssh over a port-forwarded connection

My organisation makes us connect to our AWS environments using a "bastion" host, so my OpenSSH config file (~/.ssh/config) looks a bit like this:
Host bastion.*.c1.some.com
User bastionuser
ProxyCommand none
StrictHostKeyChecking no
ForwardAgent yes
Host *.c1.some.com 12.345.* 456.12.1.*
User awsuser
StrictHostKeyChecking no
ForwardAgent yes
ProxyCommand ~/.ssh/proxy_command.sh %h %p
I want to use an ssh client built into the CLion IDE to connect to my AWS environment but it does not support this kind of configuration.
Can I set up a port forward using OpenSSH and then establish an ssh connection over that tunnel from within CLion?

I was able to set up a port forward using PuTTY, and afterwards I was able to establish a second ssh connection over that port forward using IntelliJ. For some reason I couldn't establish the second ssh connection over an OpenSSH port forward, perhaps because the Git Bash environment is sandboxed or something?
Presumably this will also work with any other SSH client that doesn't support tunneling out of the box.
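For reference, the OpenSSH equivalent of the PuTTY port forward would look something like this (the target hostname, region label, and local port 2222 are placeholders patterned on the config above, not values from the original setup):
ssh -N -L 2222:target.c1.some.com:22 bastionuser@bastion.region.c1.some.com
With the tunnel up, the IDE's SSH client can be pointed at host localhost, port 2222, user awsuser, and the connection rides through the bastion.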

Related

Ubuntu Jump Host in Open Telekom Cloud not working as expected

Currently, I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host in the setup phase and for operational purposes that allows spontaneous access to service frontends via ssh proxy jumps without permanently routing them to the public net.
Basic setup works fine so far - I can access the jump host with ssh, and can access the internal machines from there with ssh when I put the private key onto the jump host. So, cloud-wise the security seems to be fine. The key pair is generated with ed25519; I use the same key for the jump host and internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted ssh and sshd services.
My current local ssh config looks like this:
Host otc
User ubuntu
Hostname <FloatingIP-Address>
Port 22
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
IdentityFile ~/.ssh/ssh_access
ControlPath ~/.ssh/cm-%r@%h:%p
ControlMaster auto
ControlPersist 10m
Host 10.*
User ubuntu
Port 22
IdentityFile=~/.ssh/ssh_access
ProxyJump otc
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
I can use ssh otc to reach the jump host with this.
What I would expect is that I could use e.g. ssh 10.0.0.56 to reach an internal host without further ado. As well I should be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I managed that successfully on other hosting scenarios in the public cloud.
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
Journal on the Jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the Jump Host.
replaced ProxyJump configuration with ProxyCommand
So I am at the end of my knowledge. Does anyone have a hint about what else could be the reason? Any help welcome!
OK, the cause is found (but not yet fully explained).
My local ssh config allows multiplexed connections (ControlMaster auto), which creates a unix socket file at the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Apparently the persistent tunnel was not affected by the restarted daemon on the jump host and simply refused to follow the new directive.
This cost me two days. On the bright side, I learned a lot about ssh :o
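For anyone hitting the same thing: a stale multiplexed connection can be inspected and shut down cleanly instead of deleting the socket file by hand, using ssh's own control commands (the host alias otc is from the config above):
ssh -O check otc
ssh -O exit otc
-O check asks the master process whether it is still alive; -O exit tells it to close, after which a fresh connection picks up the new sshd directives.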

Tunnel NETCONF for dynamic host over SSH tunnel

I have a requirement to tunnel NETCONF (typically TCP-22) connections over a jumphost, but for a dynamic host.
I understand I can do remote SSH tunneling for defined hosts, e.g.:
ssh -R 2201:jumphost:22 rtr1
ssh -R 2202:jumphost:22 rtr2
But I'd like to be able to connect to a dynamic host, by tunneling over a jumphost, something like:
ssh -R 2201:jumphost:22 *
And then to be able to make a NETCONF connection such as:
connect rtrN port 2201
Is this doable via SSH tunneling? I don't want to use dynamic SSH tunnels, as I'd have to specify a proxy port whenever I make the connection, which I can't really do at connection time.
I've actually figured out how to do this in case anyone is interested:
In SSH config file:
Host *.*
ProxyCommand ssh user@jump nc %h %p
Then anything you SSH to, will forward over the jump connection, then nc to the host.
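With that stanza in place, any hostname containing a dot is transparently proxied through the jump host, e.g. (the router name is a placeholder):
ssh admin@rtr7.example.net
On jump hosts running a reasonably recent OpenSSH, stdio forwarding avoids the need for nc on the jump host; a sketch of the same stanza:
Host *.*
ProxyCommand ssh user@jump -W %h:%p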

Connecting to a remote server from local machine via ssh-tunnel

I am running Ansible on my machine, and my machine does not have ssh access to the remote machine: port 22 connections originating from the local machine are blocked by the institute firewall. But I have access to a machine (ssh-tunnel) through which I can log in to the remote machine. Is there a way to run an Ansible playbook from the local machine against the remote hosts?
In other words, is it possible to make Ansible/ssh connect to the remote machine via ssh-tunnel without actually logging in to ssh-tunnel, so that the connection just passes through the tunnel?
The other option is to install Ansible on ssh-tunnel itself and run plays from there, but that is not the desired solution.
Please let me know if this is possible.
There are two ways to achieve this without installing Ansible on the ssh-tunnel machine.
Solution#1:
Use these variables in your inventory:
[remote_machine]
remote ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='username' ansible_ssh_private_key_file='/home/user/private_key'
Hope you understand the above parameters; if you need help, please ask in the comments.
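Note that Solution #1 assumes a tunnel is already listening on local port 2222 and forwarding to the remote machine's sshd; a minimal sketch of such a tunnel (hostnames are placeholders):
ssh -f -N -L 2222:<remote-machine>:22 username@<ssh-tunnel-machine>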
Solution#2:
Create ~/.ssh/config file and add the following parameters:
####### Access to the Private Server through ssh-tunnel/bastion ########
Host ssh-tunnel-server
HostName x.x.x.x
StrictHostKeyChecking no
User username
ForwardAgent yes
Host private-server
HostName y.y.y.y
StrictHostKeyChecking no
User username
ProxyCommand ssh -q ssh-tunnel-server nc -q0 %h %p
Hope that helps; if you need any help, feel free to ask.
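With that config in place, ssh private-server should work directly from the local machine, and since Ansible drives the same ssh, the inventory can reference the alias as-is; a sketch:
[remote_machine]
private-server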
No need to install Ansible on the jump or remote servers; Ansible is an ssh-only tool. :-)
First make sure you can get it working directly with an SSH tunnel.
From the local machine (Local_A), you can log in to the remote machine (Remote_B) via the jump box (Jump_C).
Log in to server Local_A:
ssh -f user@remote_B -L 2000:Jump_C:22 -N
The options are:
-f tells ssh to background itself after it authenticates, so you don't have to sit around running something on the remote server for the tunnel to remain alive.
-N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources.
-L [bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
There will be a password challenge unless you have set up DSA or RSA keys for a passwordless login.
There are lots of documents teaching you how to do the ssh tunnel.
Then try below ansible command from Local_A:
ansible -vvvv remote_B -m shell -a 'hostname -f' --ssh-extra-args="-L 2000:Jump_C:22"
You should see the remote_B hostname. Let me know the result.
Let's say you can ssh into x.x.x.x from your local machine, and ssh into y.y.y.y from x.x.x.x, while y.y.y.y is the target of your ansible playbook.
inventory:
[target]
y.y.y.y
playbook.yml
---
- hosts: target
  tasks: ...
Run:
ansible-playbook --ssh-common-args="-o ProxyCommand='ssh -W %h:%p root@x.x.x.x'" -i inventory playbook.yml
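If you don't want to pass the flag on every run, the same ProxyCommand can live in the Ansible configuration file instead; a sketch using the same jump host:
[ssh_connection]
ssh_args = -o ProxyCommand="ssh -W %h:%p root@x.x.x.x"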

How to use ansible with two factor authentication?

I have enabled two-factor authentication for ssh using Duo Security (using this playbook: https://github.com/CoffeeAndCode/ansible-duo).
How can I use Ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor auth for the deployment user is a possible solution, but it creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will setup two users: ansible will be the user Ansible will use. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
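For example, as an inventory entry (a sketch using the values above; the group and host names are placeholders):
[tunneled]
target ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible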
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when it is done through the localhost, so only connections that are tunneled are allowed
Scale
This will not scale well for multiple servers, due to the need to open a separate tunnel for each machine, which requires manual action. However, if you've chosen 2-factor authentication for your servers you're already willing to do some manual action to connect to each server, and this solution will only add a little overhead with some script-wrapping.
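A hypothetical wrapper along those lines, opening the tunnel, running the play, and tearing the tunnel down again (the playbook name and socket path are placeholders):
#!/bin/sh
# open the 2FA tunnel to the target host, backgrounded with a control socket
ssh -f -N -M -S /tmp/tunnel.sock -L 8022:127.0.0.1:22 ansible_tunnel@"$1"
# run the play against the tunneled port
ansible-playbook -i inventory site.yml -e ansible_ssh_host=localhost -e ansible_ssh_port=8022 -e ansible_ssh_user=ansible
# close the tunnel
ssh -S /tmp/tunnel.sock -O exit ansible_tunnel@"$1"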
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I can use ansible with ssh and 2FA using the ControlMaster feature of ssh and ansible.
My local ssh client is configured to dump a ControlPath socket for multiplexing connection. Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using a regular ssh command (ssh user@server), and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window; alternatively, keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that, if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the socket persists, and every further connection hangs. You must manually close the stale connection using ssh -O exit user@server. This is the only known drawback of this method.
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About ssh multiplexing (the old blog post that introduced me to it): https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/
Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since Ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts) but not in the context of 2FA to a bastion host.
With this approach you don't need any special sshd config changes: the bastion server can keep AuthenticationMethods publickey,keyboard-interactive as its only authentication method setting, with publickey alone for all the other servers you reach by proxying through the bastion. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; internal hosts rely on agent forwarding for public key authentication and don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (so sibling of ansible.cfg) called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because of the ProxyCommand passing the local path to this config file. Unfortunately I don't think there's an ssh variable that maps to the current config file being used so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path for this.
The one gotcha is that it makes running Ansible more complex. Unfortunately, from what I can tell, Ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, Ansible will print Verification code: once for every private server it's connecting to, but it's not actually listening for the input, so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file at ~/.ssh/ansible-*, and the local ssh master process will close and remove that socket after the configured ControlPersist time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.

Using telnet to connect to an ssh-based server

Is it possible to use tunneling to connect to an ssh server via telnet? I'm using an API that can only telnet to a host, but that host will only accept ssh connections. If it is possible, what do I need to do to set that up?
Use netcat and ssh
$ nc -l -p 12345 -c "ssh someone@remotehost.com"
Make sure that you have RSA key auth set up, since you cannot enter a password.
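The API can then telnet to the listening port on the machine where nc is running (port 12345 from the example above):
telnet localhost 12345
Note that -c (run a command per connection) is a netcat-traditional option and is not available in every netcat variant; OpenBSD nc, for instance, has no exec option.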
I think what would work would be to run a telnet server on a local port on the remote host and use ssh to forward it locally, where the API could connect to it; but that's just a bit silly.