Ansible multi-hop design - ssh

I would like to run an Ansible playbook on a target host, passing through multiple intermediate hosts (the control node can only reach the target through a chain of transient jump hosts).
I partially solved the issue by creating an ssh_config file in the Ansible project directory:
Host IP_HOST_N
    HostName IP_HOST_N
    ProxyJump Username1@IP_HOST_1:22,Username2@IP_HOST_2:22
    User UsernameN
and adding the following to the ansible.cfg in the Ansible project directory:
[ssh_connection]
ssh_args= -F "ssh_config"
The problem is that I need to automatically supply the SSH username and password for each transient host and for the target host, and I don't know how to automate this task. Moreover, Python may not be installed on every transient node.
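For reference, the kind of per-hop ssh_config I am aiming for is sketched below (host aliases and key paths are placeholders). As far as I understand, the transient hops only need sshd, not Python, because Ansible runs its modules on the final target only; and since plain ssh cannot take passwords non-interactively, per-hop keys (or sshpass) seem to be the usual way to automate the logins:
# Sketch with placeholder names: every key lives on the control node,
# because ProxyJump authenticates each hop from the local client.
Host hop1
    HostName IP_HOST_1
    User Username1
    IdentityFile ~/.ssh/hop1_key

Host hop2
    HostName IP_HOST_2
    User Username2
    IdentityFile ~/.ssh/hop2_key
    ProxyJump hop1

Host target
    HostName IP_HOST_N
    User UsernameN
    IdentityFile ~/.ssh/target_key
    ProxyJump hop2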

I found a reasonably good workaround. In the scenario below, we create an SSH tunnel up to the transient host that can directly reach the target host, and we bind a local port with the -L flag:
ssh -J user_1@transient_host1:port_1 -p port_2 user_2@transient_host2 -L LOCAL_PORT:TARGET_HOST_IP:TARGET_HOST_PORT
Then we can reach the target host directly through the local binding:
ssh user_target_host@localhost -p LOCAL_PORT
In this way, we can run Ansible playbooks from the local host, configuring the Ansible connection variables accordingly:
ansible_host: localhost
ansible_user: user_target_host
ansible_port: LOCAL_PORT
ansible_password: password_target_host
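Putting it together, the equivalent inventory entry looks like this (note that ansible_password requires sshpass on the control node, and since every tunnelled target shows up as localhost, you may also need to relax host-key checking):
[target]
target ansible_host=localhost ansible_port=LOCAL_PORT ansible_user=user_target_host ansible_password=password_target_host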

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation. I want to access a server through a jumpbox/bastion host.
So I log in as a normal user on the jumpbox, then change to the root user, and after that log in to the remote server as root. I don't have direct access to root on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root and then log in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run directly on it interactively (like su - root, enter password, then ssh ...). That's not really a bastion; that's just a server you're logging into and running commands on. It's not an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (really, you shouldn't ssh as root at all).
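A minimal sketch (login user and task are placeholders): the play connects as an unprivileged user and escalates only during task execution:
- hosts: remoteserver
  remote_user: user        # ordinary, unprivileged login user
  become: true             # escalate to root via sudo for the tasks below
  tasks:
    - name: Verify we are root during task execution
      command: whoami      # reports 'root' because of become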
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
    User root
    IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host is working as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you had tried to originally without issue.
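For example (host and user names assumed), the variable you tried originally then works unchanged as an inventory entry, combined with become for the root parts:
remote-server ansible_host=remote-server ansible_user=someuser ansible_ssh_common_args='-o ProxyJump=user@jumpbox'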

Kubespray with bastion and custom SSH port + agent forwarding

Is it possible to use Kubespray with Bastion but on custom port and with agent forwarding? If it is not supported, what changes does one need to do?
Always, since you can configure that at three separate levels: via the host user's ~/.ssh/config, via the entire playbook with group_vars, or as inline config (that is, on the command line or in the inventory file).
The ssh config is hopefully straightforward:
# or whatever pattern matches the target instances
Host 1.2.* *.example.com
    ProxyJump someuser@some-bastion:1234
    # and then the Agent should happen automatically, unless you mean
    # ForwardAgent yes
I'll speak to the inline config next, since it's a little simpler:
ansible-playbook -i whatever \
    -e '{"ansible_ssh_common_args": "-o ProxyJump=\"someuser@jump-host:1234\""}' \
    cluster.yml
or via the inventory in the same way:
master-host-0 ansible_host=1.2.3.4 ansible_ssh_common_args="-o ProxyJump='someuser@jump-host:1234'"
or via group_vars: either add it to an existing group_vars/all.yml, or, if that doesn't exist, create a group_vars directory containing the all.yml file as a child of the directory containing your inventory file.
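A minimal group_vars/all.yml for this contains just the one variable:
---
ansible_ssh_common_args: '-o ProxyJump="someuser@jump-host:1234"'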
If you have more complex ssh config than you wish to encode in the inventory/command-line/group_vars, you can also instruct the ansible-invoked ssh to use a dedicated config file via the ansible_ssh_extra_args variable:
ansible-playbook -e '{"ansible_ssh_extra_args": "-F /path/to/special/ssh_config"}' ...
In my case where I needed to access the hosts on particular ports, I just had to modify the host's ~/.ssh/config to be:
Host 10.40.45.102
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44057 root@example.com

Host 10.40.45.104
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44058 root@example.com
where 10.40.* are the internal IPs.

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key in playbook execution. (e.g. ansible-playbook -u username --private-key play.yml) As far as I understand, this isn't for me, as I'm calling the playbook via a Vagrantfile. I've also read articles which mention ssh forwarding. (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need.
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
    ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
          IdentityFile ~/.ssh/aptiko_ro_id_rsa
          User aptiko_ro
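With the key and config installed, a task on the controlled machine can then clone the repository as usual; a minimal sketch, with a hypothetical repository path:
- name: Clone the private repository (repository path is hypothetical)
  git:
    repo: git@bitbucket.org:myteam/myrepo.git
    dest: /opt/myrepo
    accept_hostkey: yes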

Connecting to a remote server from local machine via ssh-tunnel

I am running Ansible on my machine, and my machine does not have SSH access to the remote machine: port 22 connections originating from the local machine are blocked by the institute firewall. But I have access to a machine (ssh-tunnel) through which I can log in to the remote machine. Is there a way to run an Ansible playbook from my local machine against the remote hosts?
In other words, is it possible to make Ansible/SSH connect to the remote machine through the ssh-tunnel machine without actually logging in to it, so that the connection just passes through?
The other option would be to install Ansible on the ssh-tunnel machine and run the plays from there, but that is not a desirable solution.
Please let me know if this is possible.
There are two ways to achieve this without installing Ansible on the ssh-tunnel machine.
Solution#1:
Use these variables in your inventory:
[remote_machine]
remote ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='username' ansible_ssh_private_key_file='/home/user/private_key'
Hopefully the parameters above are self-explanatory; if you need help, please ask in the comments.
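Solution#1 assumes a tunnel is already listening on local port 2222; one way to open it (with x.x.x.x being the tunnel host and y.y.y.y the private server, as in Solution#2 below) is:
ssh -f -N -L 2222:y.y.y.y:22 username@x.x.x.x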
Solution#2:
Create ~/.ssh/config file and add the following parameters:
####### Access to the Private Server through ssh-tunnel/bastion ########
Host ssh-tunnel-server
    HostName x.x.x.x
    StrictHostKeyChecking no
    User username
    ForwardAgent yes

Host private-server
    HostName y.y.y.y
    StrictHostKeyChecking no
    User username
    ProxyCommand ssh -q ssh-tunnel-server nc -q0 %h %p
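On OpenSSH 7.3 or newer, the nc-based ProxyCommand can be replaced with the built-in ProxyJump option:
Host private-server
    HostName y.y.y.y
    StrictHostKeyChecking no
    User username
    ProxyJump ssh-tunnel-server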
Hope that helps; if you need anything else, feel free to ask.
There is no need to install Ansible on the jump or remote servers; Ansible is an SSH-only tool :-)
First make sure you can make it work directly with an SSH tunnel.
From the local machine (Local_A) you reach the remote machine (Remote_B) via the jump box (Jump_C).
Log in to Local_A and open the tunnel:
ssh -f user@Jump_C -L 2000:remote_B:22 -N
The options used are:
-f tells ssh to background itself after it authenticates, so you don't have to sit around running something on the remote server for the tunnel to remain alive.
-N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources.
-L [bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
There will be a password challenge unless you have set up DSA or RSA keys for a passwordless login.
There are lots of documents teaching you how to do the ssh tunnel.
Then try the ansible command below from Local_A, pointing the connection at the local end of the tunnel:
ansible -vvvv remote_B -m shell -a 'hostname -f' -e 'ansible_host=127.0.0.1 ansible_port=2000'
You should see the remote_B hostname. Let me know the result.
Let's say you can ssh into x.x.x.x from your local machine, and ssh into y.y.y.y from x.x.x.x, while y.y.y.y is the target of your ansible playbook.
inventory:
[target]
y.y.y.y
playbook.yml
---
- hosts: target
tasks: ...
Run:
ansible-playbook --ssh-common-args="-o ProxyCommand='ssh -W %h:%p root@x.x.x.x'" -i inventory playbook.yml

How to setup SSH connection with Ansible?

I am brand new to learning Ansible. Here is a pretty easy example.
I have computer A, where I will be running playbooks from.
And 10 other host machines that need to be configured. My question is: do I just need to put the public SSH key of my main computer on the 10 hosts in ~/.ssh/authorized_keys?
I guess my understanding of how to efficiently setup SSH connections between my main computer and all the clients is a little fuzzy. Any help would be appreciated here.
Create a file called hosts with this content:
[test-vms]
10.0.0.100 ansible_ssh_pass='password' ansible_ssh_user='username'
In the hosts file above, leave off ansible_ssh_pass='password' if you are using ssh keys. Then create a playbook with your commands and call it as shown below. The playbook needs to start with a hosts declaration:
---
- hosts: test-vms
  tasks:
    - name: "This is a test task"
      command: /bin/hostname
Finally, you call the playbook like this
ansible-playbook -i <hosts-file> <playbook.yaml>
Ansible simply uses SSH so you can either copy the public key as you describe or use password authentication using the --user and --ask-pass flags.
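If you go the key route, ssh-copy-id automates appending your public key to each host's ~/.ssh/authorized_keys; for example (host names and user assumed):
for h in host1 host2 host3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "user@$h"
done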
Yes. As far as connections to hosts go, Ansible sets up an SSH connection between the control machine and the host machines, so you have to accept the host machines' SSH fingerprints. You can skip the "Are you sure you want to continue connecting (yes/no/[fingerprint])" prompt (i.e., adding the fingerprints to ~/.ssh/known_hosts) by setting host_key_checking=false.
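That setting goes in ansible.cfg (it can also be set with the ANSIBLE_HOST_KEY_CHECKING environment variable):
[defaults]
host_key_checking = False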
I found this great video for initial Ansible Setup - https://youtu.be/-Q4T9wLsvOQ - maybe this can help!