Use of hosts in ansible_ssh_common_args - ssh

I'm using aws_ec2.yml to query AWS and create my inventory. I've set up the vars section of the inventory file as follows to allow use of a bastion host:
[dev:vars]
ansible_ssh_common_args="-o ProxyCommand=\"ssh -q username@1.2.3.4 -o IdentityFile=~/.ssh/keyfile -o Port=22 -W %h:%p\""
ansible_ssh_private_key_file=~/.ssh/keyfile
ansible_user=username
I was wondering if it would be possible to replace 1.2.3.4 in the ansible_ssh_common_args with a variable/hostname from aws_ec2.yml itself, specifically another host, let's say tag_Name_dev_bastion. Is it possible to use variables/hostnames from within the inventory file in the common_args itself?

It's been 2 months, so you've possibly resolved it by now (I hope)...
But in case you haven't, I've recently started making ample use of the aws_ec2 Dynamic Inventory, and this is exactly what I was puzzling over for some time.
So, say for example purposes, I have an AWS VPC with a single bastion EC2 instance in a public subnet and a single webserver EC2 instance in a private subnet; to access the private instance I need to jump through the public bastion.
I also have a dynamic inventory called inventory.aws_ec2.yml:
---
plugin: aws_ec2
regions:
  - eu-west-1
filters:
  tag:Group:
    - bastion
    - webserver_node
  instance-state-name: running
keyed_groups:
  - key: tags.Group
    separator: ''
hostnames:
  - network-interface.association.public-ip
  - network-interface.addresses.private-ip-address
You can get a nice high-level output of that using: ansible-inventory -i inventory.aws_ec2.yml --list
This output should include a bunch of hostvars for each host returned by your inventory, and an overview of the host groups and their hosts. Lovely stuff!
Using this information is possible with ✨magic variables✨
We know the data we want is in hostvars so we can go the following route:
project/group_vars/webserver_node.yml
---
# SSH / ProxyJump
ansible_user: username
ansible_ssh_private_key_file: ~/.ssh/aws_demo_key
ansible_ssh_common_args: >-
  -o ProxyCommand="ssh
  -o StrictHostKeyChecking=no
  -o UserKnownHostsFile=/dev/null
  -W %h:%p
  -q {{ ansible_user }}@{{ hostvars[groups["bastion"][0]]["public_ip_address"] }}"
Then when you run the playbook on the webserver_node host, it will jump correctly!
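For a quick end-to-end check, a minimal playbook such as the one below should then connect to the private host through the bastion transparently; the play and its ping task are only an illustration (not part of the original answer), run against the dynamic inventory with ansible-playbook -i inventory.aws_ec2.yml plus the playbook name.
---
- hosts: webserver_node
  gather_facts: false
  tasks:
    - name: Confirm the private instance is reachable through the bastion
      ping: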
Hope this helps!
I found my answer from this Serverfault answer

Related

Ansible ssh variable options coding issues

What I am trying to accomplish overall is to ssh into systems which are untouched by ansible and have them set up by ansible, including its account and ssh keys, adding them to the dynamic inventory... and so on and so forth. In this case, it's via a proxy jump. Unfortunately this means having to ssh into them using the ssh command and the shell module, as well as storing a password. Keep in mind I am on ansible 2.9, and this is a build environment, so passwords can be copied to files during build for use and then deleted at the end of the run, so this isn't a problem. If this succeeds, we can set up accounts and ssh keys, then delete the build files and everyone is happy.
I don't need all that much, I hope; I would just like to get one sticky piece of that working better. That part is the ssh options that are needed for a proxy-jump connection. ansible-controller doesn't have direct access to host p0, but the ecc67 host does. I have it working in the shell command no problem, but for whatever reason I can't shift it up to the ansible_ssh_common_args variable where it belongs.
Here is the working example of the task as it functions now:
- name: sshpass attempt with the raw module for testing.
  shell: sshpass -p "{{ access_var.ansible_ssh_pass_ssn }}" ssh -o 'ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p' bob@p0 "w; exit"
  register: output_1
The above works just fine and uses an undefined ansible_ssh_common_args. The nc is the netcat binary and is simply being passed options through the proxy command. Then we have the below playbook, in which I tried to complete my stated mission; however, it is not functional and fails at the sshpass task:
- name: Play that is testing for a successful proxyjump connection to p0 through ecc67.
  hosts: ansible-controller
  remote_user: bob
  become: no
  become_method: sudo
  gather_facts: no
  vars:
    ansible_connection: ssh
    ansible_ssh_common_args: '-o "ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p"'
  tasks:
    - name: Import the password file so that we have the bob account's password.
      include_vars:
        file: ~/project/copyable-files/dynamic-files/build/active-vars-repository/access.yml
        name: access_var
    - name: Set password for the bob account from the file value using previous operator input.
      set_fact:
        ansible_ssh_pass: "{{ access_var.ansible_ssh_pass_b }}"
        ansible_become_password: "{{ access_var.ansible_ssh_pass_b }}"
        cacheable: yes
    - name: sshpass attempt with the raw module for testing.
      shell: sshpass -p "{{ ansible_ssh_pass_b }}" ssh "{{ ansible_ssh_common_args }}" bob@p0 "hostname; exit"
      register: output_1
    - debug:
        var: output_1
The error I get when I attempt to use the above playbook with the reworked task and variables is as follows:
TASK [sshpass attempt with the raw module for testing.] ***********************************************
fatal: [ansible-controller]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: Killed by signal 1.", "unreachable": true}
The password is not the issue despite the error stating it is, though it's possible it's accessing something I don't expect. Is there any way to do what I want, heck, is there even just a better way to go about it that I didn't think of? Any suggestions would be helpful thanks!
From your description I understand that there is an issue with special characters in variables, quoting, templating, and debugging. Therefore I am explicitly not addressing the question "Is there ... a better way to go?".
To address the different topics, I've created the following minimal example playbook:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    ansible_ssh_pass: !unsafe "P4$$w0rd!_%&"
    ansible_ssh_common_args: !unsafe '-o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p"'
  tasks:
    - name: Debug task to show command content
      lineinfile:
        path: ssh.file
        create: true
        line: 'sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} user@test.example.com "hostname; exit"'
resulting in an output of
sshpass -p 'P4$$w0rd!_%&' ssh -o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p" user@test.example.com "hostname; exit"
... the content of ssh.file and what the shell would "see"
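Applied back to the question's task, a sketch with the same quoting (the user, hosts, and variable names are the ones from the question, and the two vars would be declared with !unsafe as above) would be:
- name: sshpass attempt with the raw module for testing.
  shell: sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} bob@p0 "hostname; exit"
  register: output_1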
Further Documentation
Advanced playbook syntax - Unsafe or raw strings for usage of !unsafe
The most common use cases include passwords that allow special characters
Using filters to manipulate data
You can use YAML single quote escaping ... Escaping single quotes within single quotes in YAML is done by doubling the single quote.
Using filters to manipulate data - Manipulating strings for usage of quote
To add quotes for shell usage ... | quote
Templating (Jinja2)
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables and facts.

Ansible dynamically add proxy host, then use proxy host to login to machine behind it, without ssh_config

So I would like to provision a proxy host (I can do this) and add it to the dynamic Ansible inventory via add_host (done).
Then, in the next play, run tasks on that proxy host to find another machine behind it, and update something on the Ansible side so it knows this new host's location and that it needs to be proxy-jumped via the current proxy host.
Then, in the next play, target this new machine behind the proxy host.
I am at a loss here; I was hoping to do it without all of these ssh_config changes... Is this possible? Has anyone done this? Thoughts?
I have an answer to my question. I think that it is a perfectly valid question: a lot of the documentation from Ansible semi-answers it, but it is not put into the context of being dynamic, nor is it stated that this can be done completely dynamically.
Pretext: using Terraform within Ansible to generate hosts, with the following configuration:
control_box (where ansible/terraform run from) ---> dynamically created bastion/proxy/jump_host ---> some_server (behind the bastion)
Playbook:
# Make the bastion host, and add it to the just_created group
- hosts: 127.0.0.1
  roles:
    - terraform_logic_add_host_logic

- hosts: just_created  # aka bastion
  tasks:
    - name: Include task list in play
      include: "get_the_private_ip_and_add_to_behind_bastion_group.yml"

# Login into behind_bastion group.....
- hosts: behind_bastion_group
  vars:
    - ansible_connection: ssh
    - ansible_ssh_common_args: '-o ProxyCommand="ssh -i {{ some_pem_key }} -o StrictHostKeyChecking=no -W %h:%p -q ec2-user@{{ the_bastion_ip }}"'
  tasks:
    - name: Include task list in play
      include: "do_stuff_finally.yml"
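The contents of get_the_private_ip_and_add_to_behind_bastion_group.yml aren't shown here; as a minimal sketch of the idea (the way the private IP is discovered and the variable names are hypothetical), it could look like:
# get_the_private_ip_and_add_to_behind_bastion_group.yml (sketch)
- name: Work out the private IP of the machine behind the bastion
  set_fact:
    discovered_private_ip: "10.0.1.25"  # in practice derived from facts, the EC2 API, Terraform output, etc.

- name: Add that machine to an in-memory group for the next play
  add_host:
    name: "{{ discovered_private_ip }}"
    groups: behind_bastion_group
    the_bastion_ip: "{{ ansible_host }}"  # pass the bastion's address along for the ProxyCommand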
I have done my research as well FYI:
Posts such as these do not show the complete end-to-end solution of doing all of this dynamically...
https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/
Ansible with a bastion host / jump box?
https://selivan.github.io/2018/01/29/ansible-ssh-bastion-host.html

Kubespray with bastion and custom SSH port + agent forwarding

Is it possible to use Kubespray with a bastion but on a custom port and with agent forwarding? If it is not supported, what changes does one need to make?
Always, since you can configure that at three separate levels: via the host user's ~/.ssh/config, via the entire playbook with group_vars, or as inline config (that is, on the command line or in the inventory file).
The ssh config is hopefully straightforward:
# or whatever pattern matches the target instances
Host 1.2.* *.example.com
    ProxyJump someuser@some-bastion:1234
    # and then the Agent should happen automatically, unless you mean
    # ForwardAgent yes
I'll speak to the inline config next, since it's a little simpler:
ansible-playbook -i whatever \
    -e '{"ansible_ssh_common_args": "-o ProxyJump=\"someuser@jump-host:1234\""}' \
    cluster.yaml
or via the inventory in the same way:
master-host-0 ansible_host=1.2.3.4 ansible_ssh_common_args="-o ProxyJump='someuser@jump-host:1234'"
or via group_vars, which you can either add to an existing group_vars/all.yml or, if it doesn't exist, create that group_vars directory (containing the all.yml file) as a child of the directory containing your inventory file:
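A sketch of such a group_vars/all.yml (user, host, and port are placeholders):
# group_vars/all.yml
ansible_ssh_common_args: "-o ProxyJump='someuser@jump-host:1234'"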
If you have more complex ssh config than you wish to encode in the inventory/command-line/group_vars, you can also instruct the ansible-invoked ssh to use a dedicated config file via the ansible_ssh_extra_args variable:
ansible-playbook -e '{"ansible_ssh_extra_args": "-F /path/to/special/ssh_config"}' ...
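That dedicated file can then carry the bastion details in plain ssh_config syntax; a sketch along these lines (the host pattern, user, and port are placeholders):
# /path/to/special/ssh_config
Host node-*
    ProxyJump someuser@jump-host:1234
    ForwardAgent yes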
In my case where I needed to access the hosts on particular ports, I just had to modify the host's ~/.ssh/config to be:
Host 10.40.45.102
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44057 root@example.com

Host 10.40.45.104
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44058 root@example.com
where 10.40.* are the internal IPs.

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key in playbook execution (e.g. ansible-playbook -u username --private-key play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
    ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often set up machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
          IdentityFile ~/.ssh/aptiko_ro_id_rsa
          User aptiko_ro
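With the key and the ssh config in place, a clone task like the following should then work without any agent forwarding (the repository URL and destination are hypothetical placeholders):
- name: Clone the private repository using the aptiko_ro key
  git:
    repo: git@bitbucket.org:myteam/myrepo.git
    dest: /srv/myrepo
    accept_hostkey: true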

Ansible percent expand

I have an ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly; however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent tokens are not expanded by Ansible, but by ssh later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify it on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{ansible_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
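For completeness, the flush task from the question would then look something like this (a sketch; ansible_port is populated from the localhost:2222 inventory entry):
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ansible_port}} {{inventory_hostname}}
  delegate_to: 127.0.0.1
  # by the same logic, %r only matches the socket path if the remote user is also passed (e.g. -l vagrant)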