ProxyCommand doesn't seem to work with Ansible in my environment - ssh

I've tried many combinations to get this to work but cannot for some reason. I am not using keys in our environment so passwords will have to do.
I've tried proxyjump and sshuttle as well.
It's strange: the ping module works, but when I try another module or a playbook it doesn't.
Rough set up is:
laptop running ubuntu with ansible installed
[laptop] ---> [productionjumphost] ---> [production_iosxr_router]
ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=/tmp/ansible-%r@%h:%p -F ssh.config
~/.ssh/config and ssh.cfg (both are configured the same way):
Host modeljumphost
  HostName modeljumphost.fqdn.com.au
  User user
  Port 22

Host productionjumphost
  HostName productionjumphost.fqdn.com.au
  User user
  Port 22

Host model_iosxr_router
  HostName model_iosxr_router
  User user
  ProxyCommand ssh -W %h:22 modeljumphost

Host production_iosxr_router
  HostName production_iosxr_router
  User user
  ProxyCommand ssh -W %h:22 productionjumphost
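For reference, the ProxyJump form of the last stanza (which behaves like the ProxyCommand -W variant above) would look roughly like this:

Host production_iosxr_router
  HostName production_iosxr_router
  User user
  ProxyJump productionjumphost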
inventory:
[local]
192.168.xxx.xxx
[router]
production_iosxr_router ansible_connection=network_cli ansible_user=user ansible_ssh_pass=password
[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password
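As an aside (a sketch of the usual INI syntax only, not necessarily the cause of the failure below): entries under an INI [group:vars] section are normally written as key=value pairs rather than YAML-style key: value, e.g.:

[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password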
playbook.yml:
---
- name: Network Getting Started First Playbook
  hosts: router
  gather_facts: no
  connection: network_cli

  tasks:
    - name: show version
      iosxr_command:
        commands: show version
I can run an ad-hoc Ansible command and a successful ping is returned:
ansible production_iosxr_router -i inventory -m ping -vvvvv
production_iosxr_router | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
running playbook: ansible-playbook -i inventory playbook.yml -vvvvv
production_iosxr_router | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "[Errno -2] Name or service not known"
}

Related

Vagrant multi vm ssh connection setup works on one but not the others

I have searched many similar issues but can't seem to figure out the one I'm having. I have a Vagrantfile with which I set up 3 VMs. I add a public key to each VM so I can run Ansible against the boxes after the vagrant up command (I don't want to use the Ansible provisioner). I forward all the SSH ports on each box.
I can vagrant ssh <server_name> onto each box successfully.
With the following:
ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@192.168.56.3 -p 2712 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.3 port 2712: Connection refused
ssh vagrant@192.168.56.4 -p 2713 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.4 port 2713: Connection refused
And
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2713 -i ~/.ssh/ansible <-- successful connection
Ansible can connect to the first one (vagrant@192.168.56.2) but not the other two. I can't seem to find out why it connects to one and not the others. Any ideas what I could be doing wrong?
The Ansible inventory:
{
  "all": {
    "hosts": {
      "kubemaster": {
        "ansible_host": "192.168.56.2",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2711
      },
      "kubenode01": {
        "ansible_host": "192.168.56.3",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2712
      },
      "kubenode02": {
        "ansible_host": "192.168.56.4",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2713
      }
    },
    "children": {},
    "vars": {}
  }
}
The Vagrantfile:
# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2

PRIV_IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2

# Vagrant configuration
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # default box
  config.vm.box = "ubuntu/jammy64"

  # automatic box update checking.
  config.vm.box_check_update = false

  # Provision master nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      # Name shown in the GUI
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
      # argo and traefik access
      node.vm.network "forwarded_port", guest: 8080, host: "#{8080}"
      node.vm.network "forwarded_port", guest: 9000, host: "#{9000}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder "sync_folder", "/vagrant_data", create: true, owner: "root", group: "root"
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end

  # Provision Worker Nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{NODE_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2711 + i}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end
end
Your Vagrantfile confirms what I suspected:
You define port forwarding as follows:
node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
That means port 22 of each guest is made reachable on the host at port 2710+i. For your 3 VMs, from the host's point of view, this means:
192.168.2.1:22 -> localhost:2711
192.168.2.2:22 -> localhost:2712
192.168.2.3:22 -> localhost:2713
As IP addresses for your VMs you have defined the range 192.168.2.0/24, but you try to access the range 192.168.56.0/24.
If a Private IP address is defined (for your 1st node e.g. 192.168.2.2), Vagrant implements this in the VM on VirtualBox as follows:
Two network adapters are defined for the VM:
NAT: this gives the VM Internet access
Host-Only: this gives the host access to the VM via IP 192.168.2.2.
For each /24 network, VirtualBox (and Vagrant) creates a separate VirtualBox Host-Only Ethernet Adapter, and the host is .1 on each of these networks.
What this means for you is that if you use an IP address from the 192.168.2.0/24 network, an adapter is created on your host that always gets the IP address 192.168.2.1/24, so you have the addresses 192.168.2.2 - 192.168.2.254 available for your VMs.
This means: your master's IP address collides with your host's!
But why does access to your first VM work?
ssh vagrant@192.168.56.1 -p 2711 -i ~/.ssh/ansible <-- successful connection
That is relatively simple: The network 192.168.56.0/24 is the default network for Host-Only under VirtualBox, so you probably have a VirtualBox Host-Only Ethernet Adapter with the address 192.168.56.1/24.
Because you have defined port forwarding in your Vagrantfile, the 1st VM is mapped to localhost:2711. If you now access 192.168.56.1:2711, this is your own host, i.e. localhost, and the SSH of the 1st VM is reachable there on port 2711.
So what do you have to do now?
Change the IP addresses of your VMs, e.g. use 192.168.2.11 - 192.168.2.13.
Access to the VMs is then possible as follows:

Node        via Guest-IP        via localhost
kubemaster  192.168.2.11:22     localhost:2711
kubenode01  192.168.2.12:22     localhost:2712
kubenode02  192.168.2.13:22     localhost:2713

Note: If you want to access via the guest IP address, use port 22; if you want to access via localhost, use the port 2710+i that you defined.
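Applied to the Vagrantfile quoted above, and assuming its loop arithmetic stays unchanged, that amounts to adjusting the three constants at the top; a minimal sketch:

PRIV_IP_NW = "192.168.2."   # host-only network; the host adapter will be 192.168.2.1
MASTER_IP_START = 10        # kubemaster  -> 192.168.2.11
NODE_IP_START = 11          # kubenode01/02 -> 192.168.2.12 / 192.168.2.13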

Ansible synchronize (rsync) fails

ansible.posix.synchronize, a wrapper for rsync, is failing with the message:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook
---
- name: Test rsync
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Populate scripts/common using copy
      copy:
        src: common/
        dest: /home/postgres/scripts/common
    - name: Populate scripts/common using rsync
      ansible.posix.synchronize:
        src: common/
        dest: /home/postgres/scripts/common
The "Populate scripts/common using copy" task executes with no problem.
Full error output
fatal: [<host>]: FAILED! => {
"changed": false,
"cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment#<host>t:/home/postgres/scripts/common",
"invocation": {
"module_args": {
"_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"_local_rsync_path": "rsync",
"_substitute_controller": false,
"archive": true,
"checksum": false,
"compress": true,
"copy_links": false,
"delay_updates": true,
"delete": false,
"dest": "pg_deployment#<host>:/home/postgres/scripts/common",
"dest_port": null,
"dirs": false,
"existing_only": false,
"group": null,
"link_dest": null,
"links": null,
"mode": "push",
"owner": null,
"partial": false,
"perms": null,
"private_key": null,
"recursive": null,
"rsync_opts": [],
"rsync_path": "sudo -u postgres rsync",
"rsync_timeout": 0,
"set_remote_user": true,
"src": "/opt/common/",
"ssh_args": null,
"ssh_connection_multiplexing": false,
"times": null,
"verify_host": false
}
},
"msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
"rc": 5
}
Notes:
User pg_deployment has passwordless sudo to postgres. This ansible playbook is being run inside a docker container.
After messing with it a bit more, I found that I can directly run the rsync command (not using ansible)
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d<N>. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, then why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed that if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres
(I chose 10 as an arbitrary file descriptor number) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests that the problem is whatever Ansible is using as the file descriptor. It isn't a huge problem, since I can just pass the password to sshpass as an environment variable in my ephemeral Docker container, but I would still like to know what is going on with Ansible here.
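For reference, -dN tells sshpass to read the password from an already-open file descriptor N, so a standalone reproduction outside Ansible has to open that descriptor first; a rough bash sketch (MY_SSH_PASS and the trimmed rsync arguments are placeholders):

# open file descriptor 10 on a here-string containing the password,
# then let sshpass read it from there while rsync runs over ssh
exec 10<<<"$MY_SSH_PASS"
sshpass -d10 /usr/bin/rsync --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no' common pg_deployment@<host>:/home/postgres/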
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
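One caveat worth noting (a sketch, not part of the original solution): sshpass -e reads the password from the SSHPASS environment variable, so that variable has to be present on the controller when the command task runs, e.g. via the task-level environment keyword; the variable name ansible_ssh_pass here is illustrative:

- name: Populate scripts/common
  # same sshpass -e /usr/bin/rsync command as in the solution above
  command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
  environment:
    SSHPASS: "{{ ansible_ssh_pass }}"
  delegate_to: 127.0.0.1
  become: no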

Ansible SSH error: Failed to connect to the host via ssh: ssh: connect to host 10.10.201.1 port 22: Connection refused

I have an Ansible server that I use to deploy a virtual machine on my VMware datacenter.
Once the VM is deployed, Ansible has to connect to it to add the sources list and install applications.
However, 9 times out of 10 Ansible does not connect to the VM:
[2019/10/07 07:04:30][INFO] ansible-playbook /etc/ansible/jobs/deploy_application.yml -e "{'vm_name': '_TEST01', 'datacenter': 'Demo-Center'}" --tags "apt_sources" -vvv
[2019/10/07 07:04:32][INFO] ansible-playbook 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/imperium/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: deploy_application.yml *************************************************************************************************
4 plays in /etc/ansible/jobs/deploy_application.yml
PLAY [Création de la VM depuis un template] **************************************************************************************
META: ran handlers
META: ran handlers
META: ran handlers
PLAY [Démarrage de la VM] ********************************************************************************************************
META: ran handlers
META: ran handlers
META: ran handlers
PLAY [Inventaire dynamique de la VM] *********************************************************************************************
META: ran handlers
TASK [include_vars] **************************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:51
ok: [10.10.200.100] => {
"ansible_facts": {
"vsphere_host": "10.10.200.100",
"vsphere_pass": "Demo-NX01",
"vsphere_user": "administrator#demo-nxo.local"
},
"ansible_included_var_files": [
"/etc/ansible/group_vars/vsphere_credentials.yml"
],
"changed": false
}
TASK [Récupération de l'adresse IP de la VM] *************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:52
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: imperium
<localhost> EXEC /bin/sh -c 'echo ~imperium && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626 `" && echo ansible-tmp-1570431871.22-272720170721626="` echo /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/vmware_guest_facts.py
<localhost> PUT /home/imperium/.ansible/tmp/ansible-local-28654XuKRxH/tmpuNQmT5 TO /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/ /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python2 /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/ > /dev/null 2>&1 && sleep 0'
ok: [10.10.200.100 -> localhost] => {
"changed": false,
"instance": {
"annotation": "2019/10/07 07:03",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsRunning",
"guest_tools_version": "10346",
"hw_cluster": "Cluster A",
"hw_cores_per_socket": 1,
"hw_datastores": [
"datastore 1"
],
"hw_esxi_host": "dc-lab-esxi01.datacenter-nxo.local",
"hw_eth0": {
"addresstype": "manual",
"ipaddresses": [
"10.10.201.1",
"fe80::250:56ff:fe91:4f39"
],
"label": "Network adapter 1",
"macaddress": "00:50:56:91:4f:39",
"macaddress_dash": "00-50-56-91-4f-39",
"portgroup_key": null,
"portgroup_portkey": null,
"summary": "App_Repo_Network"
},
"hw_eth1": {
"addresstype": "manual",
"ipaddresses": [
"10.10.200.200",
"fe80::250:56ff:fe91:a20c"
],
"label": "Network adapter 2",
"macaddress": "00:50:56:91:a2:0c",
"macaddress_dash": "00-50-56-91-a2-0c",
"portgroup_key": "dvportgroup-66",
"portgroup_portkey": "33",
"summary": "DVSwitch: 50 11 7a 5c cc a2 c6 a7-1b 91 16 ac e1 16 66 e9"
},
"hw_files": [
"[datastore 1] _TEST01/_TEST01.vmx",
"[datastore 1] _TEST01/_TEST01.nvram",
"[datastore 1] _TEST01/_TEST01.vmsd",
"[datastore 1] _TEST01/_TEST01.vmdk"
],
"hw_folder": "/Demo-Center/vm/Ansible",
"hw_guest_full_name": "Ubuntu Linux (64-bit)",
"hw_guest_ha_state": true,
"hw_guest_id": "ubuntu64Guest",
"hw_interfaces": [
"eth0",
"eth1"
],
"hw_is_template": false,
"hw_memtotal_mb": 4096,
"hw_name": "_TEST01",
"hw_power_status": "poweredOn",
"hw_processor_count": 2,
"hw_product_uuid": "42113a25-217d-29dc-57a8-564c66b40239",
"hw_version": "vmx-13",
"instance_uuid": "50110317-76f7-dd1b-1cd1-e1e7c7ecf6f9",
"ipv4": "10.10.201.1",
"ipv6": null,
"module_hw": true,
"snapshots": [],
"vnc": {}
},
"invocation": {
"module_args": {
"datacenter": "Demo-Center",
"folder": "Ansible",
"hostname": "10.10.200.100",
"name": "_TEST01",
"name_match": "first",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"properties": null,
"schema": "summary",
"tags": false,
"use_instance_uuid": false,
"username": "administrator#demo-nxo.local",
"uuid": null,
"validate_certs": false
}
}
}
TASK [debug] *********************************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:64
ok: [10.10.200.100] => {
"msg": "10.10.201.1"
}
TASK [Création d'un hôte temporaire dans l'inventaire] ***************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:65
creating host via 'add_host': hostname=10.10.201.1
changed: [10.10.200.100] => {
"add_host": {
"groups": [
"vm"
],
"host_name": "10.10.201.1",
"host_vars": {}
},
"changed": true
}
META: ran handlers
META: ran handlers
PLAY [Mise à jour des adresses des dépôts] ***************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:73
<10.10.201.1> ESTABLISH SSH CONNECTION FOR USER: ansible
<10.10.201.1> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<10.10.201.1> (255, '', 'ssh: connect to host 10.10.201.1 port 22: Connection refused\r\n')
fatal: [10.10.201.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: connect to host 10.10.201.1 port 22: Connection refused",
"unreachable": true
}
PLAY RECAP ***********************************************************************************************************************
10.10.200.100 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.10.201.1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
On the VM side, the below message is reported:
Connection closed by authenticating user ansible 10.10.201.1 port 53098 [preauth]
The command run by Ansible:
sshpass -d11 ssh -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
Without "sshpass -d11" the command working.
With the parameter "-d11", I have the below error:
root#IMPERIUM:~# sshpass -d11 ssh -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
Permission denied, please try again.
Can you help me?
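One pattern worth trying (a sketch only, borrowing the wait_for approach used in the EC2 example further down this page): block until the freshly cloned VM actually answers on port 22 before the play that connects over SSH, e.g.:

- name: Wait for SSH on the new VM
  wait_for:
    host: "{{ vm_ip }}"   # hypothetical variable holding the address registered via add_host (10.10.201.1 above)
    port: 22
    delay: 10
    timeout: 300
  delegate_to: localhost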

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The create_ec2_instance role creates the ec2hosts group that is used in the second play (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module creation. This works as desired. (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above, and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
"username_on_the_host": {
"changed": true,
"cmd": [
"whoami"
],
"delta": "0:00:00.009067",
"end": "2017-01-07 08:23:42.033551",
"rc": 0,
"start": "2017-01-07 08:23:42.024484",
"stderr": "",
"stdout": "brianbruggeman",
"stdout_lines": [
"brianbruggeman"
],
"warnings": []
}
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
"failed": true,
"invocation": {
"module_name": "apt"
},
"item": [
"software-properties-common",
"python-software-properties",
"devscripts",
"build-essential",
"libffi-dev",
"libssl-dev",
"vim"
],
"module_stderr": "sudo: a password is required\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Rarely, I connect as 'ansible' and then become the 'apache' user. So remote_user specifies the username to connect with, and become_user is the username after the connection.
PS Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see: Specify sudo password for Ansible
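A common safety net when editing /etc/sudoers with lineinfile is to let visudo validate the result before it is written; the same task in block form, as a sketch:

- name: nopasswd sudo for ansible user
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: '^{{ ansible_user }}'
    line: '{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'
    validate: '/usr/sbin/visudo -cf %s'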

Ansible - Failed to connect to the host via ssh

I am trying to provision an EC2 instance and install a LAMP server on it using Ansible from localhost. I have successfully provisioned the instance, but I was not able to install Apache, PHP and MySQL due to this error: "Failed to connect to the host via ssh.".
OS: El Capitan 10.11.6
Ansible: 2.0.2.0
Here is the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - "vars/{{ project_name }}.yml"
    - "vars/vpc_info.yml"
  tasks:
    - name: Provision
      local_action:
        module: ec2
        region: "xxxxxx"
        vpc_subnet_id: "xxxxxx"
        assign_public_ip: yes
        key_name: "xxxxxxx"
        instance_type: "t2.nano"
        image: "xxxxxxxx"
        wait: yes
        instance_tags:
          Name: "LAMP"
          class: "test"
          environment: "dev"
          project: "{{ project_name }}"
          az: a
        exact_count: 1
        count_tag:
          Name: "LAMP"
        monitoring: yes
      register: ec2a

- hosts: lamp
  roles:
    - lamp_server
The content of the ansible.cfg file:
[defaults]
private_key_file=/Users/nico/.ssh/xxxxx.pem
The inventory:
lamp ansible_ssh_host=<EC2 IP> ansible_user=ubuntu
The command used for running the playbook:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv
Output:
ESTABLISH SSH CONNECTION FOR USER: ubuntu
<xxxxxxxxxx> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/nico/.ssh/xxxxxxx.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/nico/.ansible/cp/ansible-ssh-%h-%p-%r xxxxxxx '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" )'"'"''
52.28.251.117 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I have read a lot of threads regarding this error, but nothing helped me. :(
Try running the playbook with the paramiko connection plugin:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv -c paramiko
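If the paramiko fallback works, the same choice can be made persistent in ansible.cfg instead of adding -c to every run; a minimal sketch for this Ansible 2.0-era setup:

[defaults]
private_key_file=/Users/nico/.ssh/xxxxx.pem
transport = paramiko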