How to use ansible with two factor authentication? - ssh

I have enabled two factor authentication for ssh using Duo Security (using this playbook: https://github.com/CoffeeAndCode/ansible-duo).
How can I use Ansible to manage the server now? The SSH calls fail at the fact-gathering step because of this. I want the person running the playbook to enter the two-factor code before the playbook runs.
Disabling two-factor auth for the deployment user is a possible solution, but it creates a security hole which I would like to avoid.

It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible uses. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
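Before restarting, it's worth validating the modified config, since a syntax error in sshd_config can lock you out of the machine:
sudo sshd -t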
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
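Put together, a run against a single target might look like this (the hostname and playbook name are placeholders; passing the variables with -e is just one way to set them):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@server1.example.com   # leave running; 2FA happens here
ansible-playbook site.yml -e ansible_ssh_user=ansible -e ansible_ssh_port=8022 -e ansible_ssh_host=localhost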
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only from localhost, so only connections that come through the tunnel are allowed
Scale
This will not scale well to multiple servers, because a separate tunnel must be opened for each machine, which requires manual action. However, if you've chosen 2-factor authentication for your servers, you're already willing to do some manual action to connect to each server, and this solution only adds a little overhead with some script-wrapping.
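That script-wrapping could be as small as this sketch (the host argument and playbook name are placeholders); using a control socket lets the script tear the tunnel down cleanly when the run finishes:
#!/bin/sh
host="$1"
sock="$HOME/.ssh/tunnel-$$"
ssh -f -N -M -S "$sock" -L 8022:127.0.0.1:22 "ansible_tunnel@$host"   # prompts for the second factor
ansible-playbook site.yml -e ansible_ssh_user=ansible -e ansible_ssh_port=8022 -e ansible_ssh_host=localhost
ssh -S "$sock" -O exit "ansible_tunnel@$host"                         # close the tunnel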
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey

I can use ansible with ssh and 2FA using the ControlMaster feature of ssh and ansible.
My local ssh client is configured to create a ControlPath socket for connection multiplexing, and Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using a regular ssh command (ssh user@server), and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window; alternatively, keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
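A typical sequence, then (user and server are placeholders):
ssh user@server            # 2FA prompted here; this becomes the master connection
ansible all -m ping        # re-uses the socket, no new authentication
ssh -O check user@server   # optional: confirm the master is still alive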
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the socket persists, and every subsequent connection hangs. You must manually close the stale connection, using ssh -O exit user@server. This is the only known drawback of this method.
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About ssh multiplexing (an old blog post that introduced me to it): https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/

Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since Ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts), but not in the context of 2FA to a bastion host.
With this approach you don't need any special sshd config changes: you'll want AuthenticationMethods publickey,keyboard-interactive as the only authentication-method setting on the bastion server, and publickey alone for all the other servers you reach through the bastion. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; internal hosts rely on agent forwarding for public-key authentication and don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (a sibling of ansible.cfg) called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because of the ProxyCommand passing the local path to this config file. Unfortunately I don't think there's an ssh variable that maps to the current config file being used so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path for this.
The one gotcha is that it makes running ansible more complex. Unfortunately, from what I can tell, ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print out Verification code: once for every private server it's connecting to, but it isn't actually listening for the input, so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the local ssh master process will close & remove that socket after the configured idle time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.
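If they don't, these checks can help isolate the problem (the 10.0.x.x address is a placeholder for a real host on the subnet):
ssh -F ./ssh.config -O check bastion-persistent-connection   # is the master connection up?
ssh -F ./ssh.config 10.0.1.5 hostname                        # does the proxied path work end to end?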

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation: I want to access a server through a jumpbox/bastion host.
So, I log in as a normal user on the jumpbox, change to the root user, and then log in to the remote server as root. I don't have direct root access on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root before logging in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run directly on it interactively (like su - root, enter password, then ssh...). That's not really a bastion; that's just a server you're logging into and running commands on. It's not filling an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (you shouldn't ssh as root ever, really).
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
User root
IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host works as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you originally tried, without issue.
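With the bastion behaving as a real proxy, the Ansible side then reduces to something like this sketch (the user and host names are placeholders; become replaces the ssh-as-root step):
ansible_ssh_common_args: '-o ProxyJump=user@jumpbox'
ansible_user: deploy
ansible_become: true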

Ubuntu Jump Host in Open Telekom Cloud not working as expected

Currently, I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host in the setup phase and for operational purposes that allows spontaneous access to service frontends via ssh proxy jumps without permanently routing them to the public net.
Basic setup works fine so far: I can access the jump host with ssh, and from there I can access the internal machines with ssh once I put the private key onto the jump host. So, cloud-wise, the security seems to be fine. The key pair is generated with ed25519, and I use the same key for the jump host and the internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted ssh and sshd services.
My current local ssh config looks like this:
Host otc
User ubuntu
Hostname <FloatingIP-Address>
Port 22
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
IdentityFile ~/.ssh/ssh_access
ControlPath ~/.ssh/cm-%r@%h:%p
ControlMaster auto
ControlPersist 10m
Host 10.*
User ubuntu
Port 22
IdentityFile=~/.ssh/ssh_access
ProxyJump otc
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
I can use ssh otc to reach the jump host.
What I would expect is that I could then use e.g. ssh 10.0.0.56 to reach an internal host without further ado. I should likewise be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I managed it successfully in other public-cloud hosting scenarios.
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
Journal on the Jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the Jump Host.
replaced ProxyJump configuration with ProxyCommand
So I am at the end of my knowledge. Does anyone have a hint what else could be the reason? Any help welcome!
OK, the cause is found (but not yet fully explained).
My local ssh config allows multiplexed connections (ControlMaster auto), which caused the creation of a unix socket file for the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Evidently, the persistent tunnel was not affected by the restarted daemon on the jump host and simply kept refusing to follow the new directive.
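As an aside, deleting the socket file by hand isn't strictly necessary: with the ControlPath configured as above, the lingering master process can also be told to shut down (using the Host alias from the config):
ssh -O exit otc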
This cost me two days. On the bright side, I learned a lot about ssh :o

Setting up pyCharm pro for remote ssh

I have PyCharm Pro and I need to set up a remote repo on it via ssh. There are instructions available for doing this, but my requirements are slightly different from what I have seen covered. My organisation has ssh set up via Cloudflare, and all those configs are already in .ssh/config. It includes hostname, IdentityFile, username, and so on. If I need to access my remote machine, I just run ssh <HOST-NAME> from the terminal. My config looks like this -
Host <HOST-NAME>
HostName <HOST-NAME>.<ORGANISATION>.net
ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
IdentityFile ~/.ssh/id_rsa
User <USER-NAME>
The way to set up PyCharm usually involves specifying host, port, username, etc., so that PyCharm can essentially run
ssh username@host_ip_address. However, I just need it to run ssh <HOST-NAME> so that it can pick up everything else from the config file that is already set up.
Is there a way to make this happen?
I use PyCharm Pro with my own .ssh/config, a jump-proxy pseudo hostname, and certificates, so my setup is slightly different. I run "ssh pseudo-hostname", so no user either.
In PyCharm I set a valid username and port (default 22), because in my case they just get passed on.
Your setup ignores username and port, no? If so, you can use placeholder values.

SFTP, SSH & SSH Tunneling

I would like to understand the concept of SSH tunneling in detail, as I am learning a few things around this topic. I have gone through some details in public forums but still have a few questions.
An SFTP service is running on a remote server, and I have been given credentials to connect to it. I am using a GUI like WinSCP to connect to the remote server. What's the role of SSH tunneling here?
The remote SFTP server admin asked me to generate an RSA public key on my machine, and it was added to the remote server. Now I can connect to the server directly from an SSH terminal without a password. What's the role of SSH tunneling here?
Is tunneling implicit, or does it need to be invoked explicitly in certain circumstances?
Please clarify.
SSH tunneling, SSH console sessions, and SFTP sessions are functionally unrelated things.
They can be used simultaneously within a single session, but usually that is not the case, so do not try to find any relation or role of tunneling in your ssh/sftp sessions.
It does not make sense to mix ssh tunneling with multiple ssh/sftp sessions.
Basically, you would use a dedicated ssh session for tunneling and extra sessions for console work and file transfers.
So what the heck is SSH tunneling?
Quite often both parties (you and the server) reside in different networks, where arbitrary network connections between those networks are impossible.
For example, the server can see on its network workstation nodes and service nodes which are not visible to the outside network due to NAT.
The same is valid for the user who initiates the connection to the remote server:
you (the ssh client) can see your local resources (workstation nodes and server nodes) but can't see nodes on the remote server's network.
Here comes ssh tunneling.
An SSH tunnel is NOT a tool to assist ssh-related things like remote console sessions and secure file transfers; it's quite the other way around: the ssh protocol assists you in building a transport for tunneling generic TCP connections, the same way a TCP proxy works. Once such a pipe is built and in action, it does not know what is being transferred through it.
The concept is similar to a TCP proxy.
A TCP proxy runs on a single node, so it serves both as the acceptor of connections and as the initiator of outgoing connections.
In the case of SSH tunneling, the TCP-proxy concept is split into two halves: one of the nodes participating in the ssh session performs the role of listener (acceptor of connections), and the second node performs the role of proxy (i.e., initiates the outgoing connections).
When you establish an SSH session to the remote server, you can configure two types of tunnels, which remain active while your ssh connection is active.
Most ssh clients use notations like
R [IP1:]PORT1:IP2:PORT2
L [IP1:]PORT1:IP2:PORT2
The most confusing/hard part to understand in this whole ssh tunneling business is the L and R markers/switches (or whatever they are).
The letters L and R can confuse beginners quite a lot, because there are actually 6(!!!) parties in this game (each with its own point of view of what is local and what is remote):
you
the ssh server
your neighbors who want to expose their ports to anyone who can see the server
your neighbors who want to connect to any service the server can see
anyone who can see the server and wants to connect to a service your neighbor provides (the opposite side/socket of case #3)
any service on the server's local network that wants to be exposed to your LAN (the opposite side/socket of case #4)
In terms of the ssh client, these tunnel types are:
"R" tunnel (the server listens) - YOU expose network services from your LOCAL LAN to the remote LAN (you instruct the sshd server to start listening on ports at the remote side and route all incoming connections back to you)
"L" tunnel (you listen) - the server exposes resources of its REMOTE LAN to your LAN (your ssh client starts listening on ports on your workstation; your neighbors can access remote network services by connecting to ports on your workstation; the server makes outgoing connections to its local services on behalf of your ssh client)
So SSH tunneling is about providing access to the service which typically is inaccessible due to network restrictions or limitations.
And here is a simple counter-intuitive rule to remember while creating tunnels:
to open access to a Remote service you use the -L switch,
and
to open access to a Local service you use the -R switch.
examples of "R" tunnels:
Jack is your coworker (a backend developer), and he develops server-side code on his workstation with IP address 10.12.13.14. You are the team lead (or sysadmin) who organizes working conditions. You are sitting in the same office as Jack and want to expose his web server to the outside world through the remote server.
So you connect to the ssh server with the following command:
ssh me@server1 -g -R 80:ip-address-of-jack-workstation:80
In such a case, anyone on the Internet can access Jack's current version of the website by visiting http://server1/
Suppose there are many IoT Linux devices (like Raspberry Pis) in the world, sitting in multiple home networks and thus not accessible from outside.
They could connect to a home server and expose their own port 22 to that server, so an admin is able to connect to all of them.
So the RPi devices could connect to the server in such a way:
RPi device #1
ssh rpi1@server -R 10122:localhost:22
RPi device #2
ssh rpi2@server -R 10222:localhost:22
RPi device #3
ssh rpi3@server -R 10322:localhost:22
and the sysadmin, while on the server, could connect to any of them:
ssh localhost -p 10122 # to connect to the first device
ssh localhost -p 10222 # to connect to the second device
ssh localhost -p 10322 # to connect to the third device
The admin on the remote premises has blocked outgoing ssh connections, and you want the production server to contact bitbucket through your connection...
#TODO: add example
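In the meantime, a sketch of what that example would probably look like (the port number and repository path are arbitrary): you publish bitbucket's ssh port onto the production server through your own connection, then point git at the tunnel:
ssh me@production-server -R 2222:bitbucket.org:22
# then, on production-server:
git clone ssh://git@localhost:2222/team/repo.git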
Typical pitfalls in ssh tunneling:
mapping a remote service to a local privileged port
ssh me@server -L 123:hidden-smtp-server:25 # fails
# bind fails due to privileged ports
# we try sudo ssh, to allow the ssh client to bind to a privileged local port
sudo ssh me@server -L 123:hidden-smtp-server:25 # fails again
# this usually results in rejected public keys, because ssh now looks for the key in /root/.ssh/id_rsa
# so you need to coerce ssh into using your own key while running under the root account
sudo ssh me@server -i /home/me/.ssh/id_rsa -L 123:hidden-smtp-server:25
exposing a service from the local network to anyone through the public server:
the typical command would be
ssh me@server -R 8888:my-home-server:80
# quite often no one can connect to server:8888, because sshd binds only to localhost.
# To make it work you need to edit the /etc/ssh/sshd_config file to enable GatewayPorts (the line needs to be GatewayPorts yes).
my tunnel works great on my computer, but only for me; I would like my coworkers to access my tunnel as well
the typical working command you start with would be
ssh me@server -L 1234:hidden-smtp-server:25
# by default ssh binds to the loopback interface (127.0.0.1), and that is the reason why no one else can use such a tunnel.
# you need to use the -g switch, and possibly specify the bind interface manually:
ssh me@server -g -L 0.0.0.0:1234:hidden-smtp-server:25

How to reuse an ssh connection

I'm creating a small script to update some remote servers (2+)
I am making multiple connections to each server; is there a way I can reuse the SSH connections so I don't have to open too many at once?
If you open the first connection with -M:
ssh -M $REMOTEHOST
subsequent connections to $REMOTEHOST will "piggyback" on the connection established by the master ssh. Most noticeably, further authentication is not required. See man ssh_config under "ControlMaster" for more details. Use -S to specify the path to the shared socket; I'm not sure what the default is, because I configure connection sharing using the configuration file instead.
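For illustration, the flag-based variant looks like this (the socket path is arbitrary):
ssh -M -S ~/.ssh/ctl.sock $REMOTEHOST            # master connection; authenticates as usual
ssh -S ~/.ssh/ctl.sock $REMOTEHOST uptime        # piggybacks on the master, no re-authentication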
In my .ssh/config file, I have the following lines:
host *
ControlMaster auto
ControlPath ~/.ssh/ssh_mux_%h_%p_%r
This way, I don't have to remember to use -M or -S; ssh figures out if a sharable connection already exists for the host/port/username combination and uses that if possible.
This option is available in OpenSSH since 2004.
I prefer the method described at Puppet Labs https://puppetlabs.com/blog/speed-up-ssh-by-reusing-connections
Add these lines to ~/.ssh/config and run mkdir ~/.ssh/sockets
Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
Read the full blog post for more useful information about what these do and the idiosyncrasies of ssh when used like this. I highly recommend reading the blog or you may find things don't work as you expect.
Alternatively, you can do it this way:
ssh_conn="ssh -t -o ControlPath=~/.ssh/master-$$ -o ControlMaster=auto -o ControlPersist=60"
$ssh_conn user@server
ControlPath=~/.ssh/master-$$ sets up a control path for the ssh connection, limiting connection reuse to the current shell (via the $$ PID)
ControlMaster=auto allows the connection session to be shared using the ControlPath
ControlPersist=60 sets the amount of time (in seconds) the idle connection should remain open after the last session closes
For modern-distro setups that have a /run/user/$UID/ directory for just-this-boot runtime files,
controlmaster auto
controlpath /run/user/%i/ssh-%C
controlpersist 900
at the top of the config (where no match or host restrictions are in effect) will make all ssh sessions that share host, port, and remote username use a single connection. I keep addkeystoagent yes and identityfile ~/.ssh/id_ed25519 up there too, so ssh doesn't offer all my keys to every host.
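Spelled out in one place, that top-of-file block might look like this (the key path is whatever you actually use):
controlmaster auto
controlpath /run/user/%i/ssh-%C
controlpersist 900
addkeystoagent yes
identityfile ~/.ssh/id_ed25519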