How does Ansible serialise commands to execute on the remote host? - serialization

Without an agent on the target host, Ansible is able to perform tasks such as adding a user (-m user).
To understand this, I read this article, which says:
"Ansible works by connecting to your nodes and pushing out small programs, called "Ansible modules" to them. These programs are written to be resource models of the desired state of the system."
My interpretation of this point is that the user module is a Python module located on the control server, and that this module is serialized over the wire to the target host after running the ansible command with the -m user option.
Does Ansible serialize these programs (the user module source code) over SSH to execute them on the remote host?
Does this serialization involve the SSH agent forwarding technique?

When Ansible executes a module in your playbook, it serializes the code it needs to run, together with the supplied parameters, into a local Python file named <local user home>/.ansible/tmp/ansible-local-<current-run-hash>/tmp<some-other-hash>.
This file is uploaded to the remote host as <remote_user home dir>/.ansible/tmp/ansible-tmp-<current-run-hashed-id>/AnsiballZ_<module_name>.py using the declared connection for this host (ssh, docker, local...).
The Python file is then executed on the remote host through that connection, the result is fetched back to the local machine, and the file is cleaned up.
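If you want to look at one of these generated AnsiballZ payloads yourself, you can tell Ansible to keep the remote temp files instead of deleting them. A minimal sketch, assuming a host called myhost (the module and the temp path are only placeholders):
ANSIBLE_KEEP_REMOTE_FILES=1 ansible myhost -m stat -a "path=/etc/hosts" -vvv
# then, on the target host, the wrapper can unpack its embedded module source
# into a debug_dir/ next to it for inspection:
python ~/.ansible/tmp/ansible-tmp-<hash>/AnsiballZ_stat.py explode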
You can see exactly how all this is executed using the -vvv option to ansible-playbook (or ansible if you are sending ad-hoc commands). Here is an example of running the stat module against a docker host on my local machine.
The task:
- name: Check if SystemD service is installed
  stat:
    path: /etc/systemd/system/nexus.service
  register: nexus_systemd_service_file
Running with -vvv. The file copy to remote starts at line 7.
TASK [nexus3-oss : Check if SystemD service is installed] **********************
task path: /projects/ansible/nexus3-oss/tasks/main.yml:13
<nexus3-oss-debian-stretch> ESTABLISH DOCKER CONNECTION FOR USER: root
<nexus3-oss-debian-stretch> EXEC ['/usr/bin/docker', b'exec', b'-i', 'nexus3-oss-debian-stretch', '/bin/sh', '-c', "/bin/sh -c 'echo ~ && sleep 0'"]
<nexus3-oss-debian-stretch> EXEC ['/usr/bin/docker', b'exec', b'-i', 'nexus3-oss-debian-stretch', '/bin/sh', '-c', '/bin/sh -c \'( umask 77 && mkdir -p "` echo /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721 `" && echo ansible-tmp-1555848182.1761565-31974482443721="` echo /deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721 `" ) && sleep 0\'']
Using module file /home/localuser/.local/lib/python3.6/site-packages/ansible/modules/files/stat.py
<nexus3-oss-debian-stretch> PUT /home/localuser/.ansible/tmp/ansible-local-30458wt820190/tmpq2vjarrv TO /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721/AnsiballZ_stat.py
<nexus3-oss-debian-stretch> EXEC ['/usr/bin/docker', b'exec', b'-i', 'nexus3-oss-debian-stretch', '/bin/sh', '-c', "/bin/sh -c 'chmod u+x /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721/ /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721/AnsiballZ_stat.py && sleep 0'"]
<nexus3-oss-debian-stretch> EXEC ['/usr/bin/docker', b'exec', b'-i', 'nexus3-oss-debian-stretch', '/bin/sh', '-c', '/bin/sh -c \'http_proxy=\'"\'"\'\'"\'"\' https_proxy=\'"\'"\'\'"\'"\' no_proxy=\'"\'"\'\'"\'"\' /usr/bin/python /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721/AnsiballZ_stat.py && sleep 0\'']
<nexus3-oss-debian-stretch> EXEC ['/usr/bin/docker', b'exec', b'-i', 'nexus3-oss-debian-stretch', '/bin/sh', '-c', "/bin/sh -c 'rm -f -r /home/deployuser/.ansible/tmp/ansible-tmp-1555848182.1761565-31974482443721/ > /dev/null 2>&1 && sleep 0'"]
ok: [nexus3-oss-debian-stretch] => {
"changed": false,
"invocation": {
"module_args": {
"checksum_algorithm": "sha1",
"follow": false,
"get_attributes": true,
"get_checksum": true,
"get_md5": null,
"get_mime": true,
"path": "/etc/systemd/system/nexus.service"
}
},
"stat": {
"atime": 1555848116.0796735,
"attr_flags": "",
"attributes": [],
"block_size": 4096,
"blocks": 8,
"charset": "us-ascii",
"checksum": "f1de2c2bc91adc019e58f83a29c970d1d79d5cc9",
"ctime": 1553622777.8884165,
"dev": 77,
"device_type": 0,
"executable": false,
"exists": true,
"gid": 0,
"gr_name": "root",
"inode": 22997,
"isblk": false,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": true,
"issock": false,
"isuid": false,
"mimetype": "text/plain",
"mode": "0644",
"mtime": 1553622777.3485653,
"nlink": 1,
"path": "/etc/systemd/system/nexus.service",
"pw_name": "root",
"readable": true,
"rgrp": true,
"roth": true,
"rusr": true,
"size": 248,
"uid": 0,
"version": "687353",
"wgrp": false,
"woth": false,
"writeable": true,
"wusr": true,
"xgrp": false,
"xoth": false,
"xusr": false
}
}

Related

Ansible synchronize (rsync) fails

ansible.posix.synchronize, a wrapper for rsync, is failing with the message:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook
---
- name: Test rsync
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Populate scripts/common using copy
      copy:
        src: common/
        dest: /home/postgres/scripts/common
    - name: Populate scripts/common using rsync
      ansible.posix.synchronize:
        src: common/
        dest: /home/postgres/scripts/common
The "Populate scripts/common using copy" task executes with no problem.
Full error output
fatal: [<host>]: FAILED! => {
"changed": false,
"cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment#<host>t:/home/postgres/scripts/common",
"invocation": {
"module_args": {
"_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"_local_rsync_path": "rsync",
"_substitute_controller": false,
"archive": true,
"checksum": false,
"compress": true,
"copy_links": false,
"delay_updates": true,
"delete": false,
"dest": "pg_deployment#<host>:/home/postgres/scripts/common",
"dest_port": null,
"dirs": false,
"existing_only": false,
"group": null,
"link_dest": null,
"links": null,
"mode": "push",
"owner": null,
"partial": false,
"perms": null,
"private_key": null,
"recursive": null,
"rsync_opts": [],
"rsync_path": "sudo -u postgres rsync",
"rsync_timeout": 0,
"set_remote_user": true,
"src": "/opt/common/",
"ssh_args": null,
"ssh_connection_multiplexing": false,
"times": null,
"verify_host": false
}
},
"msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
"rc": 5
}
Notes:
User pg_deployment has passwordless sudo to postgres. This ansible playbook is being run inside a docker container.
After messing with it a bit more, I found that I can run the rsync command directly (not using Ansible):
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment#<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d#. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, then why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed that if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment#<host>:/home/postgres
(I chose a random number, 10, for the file descriptor) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests that the problem is whatever Ansible is using as the file descriptor. It isn't a huge problem, since I can just pass the sshpass password as an environment variable in my Docker container (it's ephemeral), but I would still like to know what is going on with Ansible here.
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment#<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
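For completeness: the sshpass -e variant only works if the SSHPASS environment variable is actually set for that task. One way to provide it, sketched here with an illustrative rsync_password variable (not part of the original playbook), is the environment keyword:
- name: Populate scripts/common
  command: sshpass -e /usr/bin/rsync --archive common pg_deployment#<host>:/home/postgres/scripts
  environment:
    SSHPASS: "{{ rsync_password }}"   # illustrative variable, e.g. loaded from Ansible Vault
  delegate_to: 127.0.0.1
  become: no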

Unable to assign default value to Ansible registered variable based on condition

I am trying to get the modification date using an Ansible shell module command:
stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3
Note: the return code of the shell command, i.e. starttime.rc, will always indicate success (0) whether or not any output is returned, because of the pipe to cut -d in the command.
I wish to display the time, i.e. the result of the shell module, if it returns output; otherwise display "Server NOT RUNNING".
Here is my playbook:
- hosts: test_test
  any_errors_fatal: true
  user: user1
  gather_facts: false
  tasks:
    - shell: "stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3"
      register: starttime
    - name: Status of Server
      set_fact:
        starttime: "{{ starttime | default('Server NOT RUNNING') }}"
    - debug:
        msg: "STARTTIME:{{ starttime }}"
Below is the verbose output where I'm not getting the expected results.
TASK [shell] ************************************************************************************************************************************************
changed: [10.9.9.111] => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"cmd": "stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3",
"delta": "0:00:00.118151",
"end": "2019-11-08 10:46:28.345448",
"invocation": {
"module_args": {
"_raw_params": "stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2019-11-08 10:46:28.227297",
"stderr": "stat: cannot stat â/proc/1178/statâ: No such file or directory",
"stderr_lines": [
"stat: cannot stat â/proc/1178/statâ: No such file or directory"
],
"stdout": "",
"stdout_lines": []
}
TASK [Status of Server] ****************************************************************************************************************************
task path: /app/script/condition_test.yml:14
ok: [10.9.9.111] => {
"ansible_facts": {
"starttime": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"cmd": "stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3",
"delta": "0:00:00.118151",
"end": "2019-11-08 10:46:28.345448",
"failed": false,
"rc": 0,
"start": "2019-11-08 10:46:28.227297",
"stderr": "stat: cannot stat â/proc/1178/statâ: No such file or directory",
"stderr_lines": [
"stat: cannot stat â/proc/1178/statâ: No such file or directory"
],
"stdout": "",
"stdout_lines": []
}
},
"changed": false
}
TASK [debug] ************************************************************************************************************************************************
task path: /app/script/condition_test.yml:19
ok: [10.9.9.111] => {
"msg": "STARTTIME:{'stderr_lines': [u'stat: cannot stat \\u2018/proc/1178/stat\\u2019: No such file or directory'], u'changed': True, u'end': u'2019-11-08 10:46:28.345448', u'stdout': u'', u'cmd': u\"stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3\", u'rc': 0, u'start': u'2019-11-08 10:46:28.227297', 'failed': False, u'stderr': u'stat: cannot stat \\u2018/proc/1178/stat\\u2019: No such file or directory', u'delta': u'0:00:00.118151', 'stdout_lines': [], 'ansible_facts': {u'discovered_interpreter_python': u'/usr/bin/python'}}"
}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************
10.9.9.111 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Can you please suggest how I can handle this?
Note: the return code of the shell command, i.e. starttime.rc, will always indicate success (0) whether or not any output is returned, because of the pipe to cut -d in the command.
One can easily circumvent that with set -o pipefail (which may require updating your shell: to use bash, or another "modern" shell)
- shell: "set -o pipefail; stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3"
register: starttime
Another perfectly reasonable approach would be to actually test that the file exists:
- shell: |
    if [ ! -e {{fn}} ]; then exit 1; fi
    stat {{fn}} | grep Modify | cut -d' ' -f2,3
  vars:
    fn: /proc/1178/stat
  register: starttime
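Combining the pipefail idea with a check on the return code, the follow-up tasks could look like the sketch below (failed_when: false and the explicit bash executable are additions so the play keeps going when the process is absent; adapt to your setup):
- shell: "set -o pipefail; stat /proc/1178/stat | grep Modify | cut -d' ' -f2,3"
  args:
    executable: /bin/bash
  register: starttime
  failed_when: false   # do not abort the play when the pipeline fails
- name: Status of Server
  set_fact:
    starttime: "{{ starttime.stdout if starttime.rc == 0 else 'Server NOT RUNNING' }}"
- debug:
    msg: "STARTTIME:{{ starttime }}"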

Ansible SSH error: Failed to connect to the host via ssh: ssh: connect to host 10.10.201.1 port 22: Connection refused

I have an Ansible server that I use to deploy a virtual machine on my VMware datacenter.
Once the VM is deployed, Ansible has to connect to it to add the sources list and install applications.
However, 9 times out of 10 Ansible does not connect to the VM:
[2019/10/07 07:04:30][INFO] ansible-playbook /etc/ansible/jobs/deploy_application.yml -e "{'vm_name': '_TEST01', 'datacenter': 'Demo-Center'}" --tags "apt_sources" -vvv
[2019/10/07 07:04:32][INFO] ansible-playbook 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/imperium/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: deploy_application.yml *************************************************************************************************
4 plays in /etc/ansible/jobs/deploy_application.yml
PLAY [Création de la VM depuis un template] **************************************************************************************
META: ran handlers
META: ran handlers
META: ran handlers
PLAY [Démarrage de la VM] ********************************************************************************************************
META: ran handlers
META: ran handlers
META: ran handlers
PLAY [Inventaire dynamique de la VM] *********************************************************************************************
META: ran handlers
TASK [include_vars] **************************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:51
ok: [10.10.200.100] => {
"ansible_facts": {
"vsphere_host": "10.10.200.100",
"vsphere_pass": "Demo-NX01",
"vsphere_user": "administrator#demo-nxo.local"
},
"ansible_included_var_files": [
"/etc/ansible/group_vars/vsphere_credentials.yml"
],
"changed": false
}
TASK [Récupération de l'adresse IP de la VM] *************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:52
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: imperium
<localhost> EXEC /bin/sh -c 'echo ~imperium && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626 `" && echo ansible-tmp-1570431871.22-272720170721626="` echo /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/vmware_guest_facts.py
<localhost> PUT /home/imperium/.ansible/tmp/ansible-local-28654XuKRxH/tmpuNQmT5 TO /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/ /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python2 /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/AnsiballZ_vmware_guest_facts.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/imperium/.ansible/tmp/ansible-tmp-1570431871.22-272720170721626/ > /dev/null 2>&1 && sleep 0'
ok: [10.10.200.100 -> localhost] => {
"changed": false,
"instance": {
"annotation": "2019/10/07 07:03",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsRunning",
"guest_tools_version": "10346",
"hw_cluster": "Cluster A",
"hw_cores_per_socket": 1,
"hw_datastores": [
"datastore 1"
],
"hw_esxi_host": "dc-lab-esxi01.datacenter-nxo.local",
"hw_eth0": {
"addresstype": "manual",
"ipaddresses": [
"10.10.201.1",
"fe80::250:56ff:fe91:4f39"
],
"label": "Network adapter 1",
"macaddress": "00:50:56:91:4f:39",
"macaddress_dash": "00-50-56-91-4f-39",
"portgroup_key": null,
"portgroup_portkey": null,
"summary": "App_Repo_Network"
},
"hw_eth1": {
"addresstype": "manual",
"ipaddresses": [
"10.10.200.200",
"fe80::250:56ff:fe91:a20c"
],
"label": "Network adapter 2",
"macaddress": "00:50:56:91:a2:0c",
"macaddress_dash": "00-50-56-91-a2-0c",
"portgroup_key": "dvportgroup-66",
"portgroup_portkey": "33",
"summary": "DVSwitch: 50 11 7a 5c cc a2 c6 a7-1b 91 16 ac e1 16 66 e9"
},
"hw_files": [
"[datastore 1] _TEST01/_TEST01.vmx",
"[datastore 1] _TEST01/_TEST01.nvram",
"[datastore 1] _TEST01/_TEST01.vmsd",
"[datastore 1] _TEST01/_TEST01.vmdk"
],
"hw_folder": "/Demo-Center/vm/Ansible",
"hw_guest_full_name": "Ubuntu Linux (64-bit)",
"hw_guest_ha_state": true,
"hw_guest_id": "ubuntu64Guest",
"hw_interfaces": [
"eth0",
"eth1"
],
"hw_is_template": false,
"hw_memtotal_mb": 4096,
"hw_name": "_TEST01",
"hw_power_status": "poweredOn",
"hw_processor_count": 2,
"hw_product_uuid": "42113a25-217d-29dc-57a8-564c66b40239",
"hw_version": "vmx-13",
"instance_uuid": "50110317-76f7-dd1b-1cd1-e1e7c7ecf6f9",
"ipv4": "10.10.201.1",
"ipv6": null,
"module_hw": true,
"snapshots": [],
"vnc": {}
},
"invocation": {
"module_args": {
"datacenter": "Demo-Center",
"folder": "Ansible",
"hostname": "10.10.200.100",
"name": "_TEST01",
"name_match": "first",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"properties": null,
"schema": "summary",
"tags": false,
"use_instance_uuid": false,
"username": "administrator#demo-nxo.local",
"uuid": null,
"validate_certs": false
}
}
}
TASK [debug] *********************************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:64
ok: [10.10.200.100] => {
"msg": "10.10.201.1"
}
TASK [Création d'un hôte temporaire dans l'inventaire] ***************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:65
creating host via 'add_host': hostname=10.10.201.1
changed: [10.10.200.100] => {
"add_host": {
"groups": [
"vm"
],
"host_name": "10.10.201.1",
"host_vars": {}
},
"changed": true
}
META: ran handlers
META: ran handlers
PLAY [Mise à jour des adresses des dépôts] ***************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************
task path: /etc/ansible/jobs/deploy_application.yml:73
<10.10.201.1> ESTABLISH SSH CONNECTION FOR USER: ansible
<10.10.201.1> SSH: EXEC sshpass -d11 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<10.10.201.1> (255, '', 'ssh: connect to host 10.10.201.1 port 22: Connection refused\r\n')
fatal: [10.10.201.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: connect to host 10.10.201.1 port 22: Connection refused",
"unreachable": true
}
PLAY RECAP ***********************************************************************************************************************
10.10.200.100 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.10.201.1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
On the VM side, the below message is reported:
Connection closed by authenticating user ansible 10.10.201.1 port 53098 [preauth]
The command run by Ansible:
sshpass -d11 ssh -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
Without "sshpass -d11" the command working.
With the parameter "-d11", I have the below error:
root#IMPERIUM:~# sshpass -d11 ssh -o ControlMaster=auto -o ControlPersist=60s -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/imperium/.ansible/cp/3c7b71383c 10.10.201.1 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
Permission denied, please try again.
Can you help me?

Why is Ansible still able to connect to the node without SSH

I created two Ubuntu Docker containers, one as the control node and another as the managed (slave) node. I ran
ansible all -m service -a "name=ssh state=stopped"
and it shows
172.18.0.3 | CHANGED => {
"changed": true,
"name": "ssh",
"status": {
"enabled": {
"changed": false,
"rc": null,
"stderr": null,
"stdout": null
},
"stopped": {
"changed": true,
"rc": 0,
"stderr": "",
"stdout": " * Stopping OpenBSD Secure Shell server sshd\n ...done.\n"
}
}
}
Then I tried to SSH in manually and it failed, because the OpenSSH server had stopped, which is fine. Then I ran another Ansible command to start it:
# ansible all -m service -a "name=ssh state=started"
172.18.0.3 | CHANGED => {
"changed": true,
"name": "ssh",
"status": {
"enabled": {
"changed": false,
"rc": null,
"stderr": null,
"stdout": null
},
"started": {
"changed": true,
"rc": 0,
"stderr": "",
"stdout": " * Starting OpenBSD Secure Shell server sshd\n ...done.\n"
}
}
}
I am quite amazed: how was Ansible able to connect to the node when I had already stopped the node's SSH service? Is there some alternative method, other than SSH, that Ansible uses to connect to the node?
Ansible can connect to targets through a variety of protocols.
Take a look at the list of connection plugins.
In your case, for Docker containers, it uses the Docker API.
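As a rough illustration (the container name below is made up), a host only needs ansible_connection pointed at the docker plugin for Ansible to go through the Docker daemon instead of SSH:
# inventory.yml - minimal sketch
all:
  hosts:
    my_container:
      ansible_connection: docker   # connect via docker exec / the Docker API, not SSH
      ansible_user: root
With such an inventory, ansible -i inventory.yml all -m ping keeps working even while sshd is stopped inside the container.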

Packer.io fails using puppet provisioner: /usr/bin/puppet: line 3: rvm: command not found

I'm trying to build a Vagrant box file using Packer.io and Puppet.
I have this template as a starting point:
https://github.com/puphpet/packer-templates/tree/master/centos-7-x86_64
I added the Puppet provisioner after the shell provisioner:
{
"type": "puppet-masterless",
"manifest_file": "../../puphpet/puppet/site.pp",
"manifest_dir": "../../puphpet/puppet/nodes",
"module_paths": [
"../../puphpet/puppet/modules"
],
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E bash {{end}}/usr/bin/puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
}
}
}
When I start building the image like this:
packer-io build -only=virtualbox-iso template.json
Then I get this error:
==> virtualbox-iso: Provisioning with Puppet...
virtualbox-iso: Creating Puppet staging directory...
virtualbox-iso: Uploading manifest directory from: ../../puphpet/puppet/nodes
virtualbox-iso: Uploading local modules from: ../../puphpet/puppet/modules
virtualbox-iso: Uploading manifests...
virtualbox-iso:
virtualbox-iso: Running Puppet: echo 'vagrant' | sudo -S -E bash /usr/bin/puppet apply --verbose --modulepath='/tmp/packer-puppet-masterless/module-0' --manifestdir='/tmp/packer-puppet-masterless/manifests' --detailed-exitcodes /tmp/packer-puppet-masterless/manifests/site.pp
virtualbox-iso: /usr/bin/puppet: line 3: rvm: command not found
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Puppet exited with a non-zero exit status: 127
If I log into the box via a tty, I can run both the rvm and puppet commands as the vagrant user.
What did I do wrong?
I am trying out the exact same route as you are:
Use the relevant scripts from this repo for provisioning the VM.
Use the puppet scripts from a puphpet.com configuration to further provision the VM using the puppet-masterless provisioner in Packer.
Still working on it, not a successful build yet, but I can share the following:
Inspect line 50 of puphpet/shell/install-puppet.sh. So the puppet command will trigger rvm to be executed.
Inspect your Packer output during provisioning. You will read something along the lines of:
...
Creating alias default for ruby-1.9.3-p551
To start using RVM you need to run `source /usr/local/rvm/scripts/rvm` in all
your open shell windows, in rare cases you need to reopen all shell windows.
Cleaning up rvm archives
....
Apparently the command source /usr/local/rvm/scripts/rvm is needed for each user that needs to run rvm. It is executed and added to the bash profiles in the script puphpet/shell/install-ruby.sh. However, this does not seem to affect the context/scope of Packer's puppet-masterless execute_command, which is the reason for the line /usr/bin/puppet: line 3: rvm: command not found in your output.
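A quick, illustrative way to confirm this on the built box is to run puppet once with and once without sourcing rvm first:
sudo bash -c 'source /usr/local/rvm/scripts/rvm; puppet --version'   # works
sudo bash -c 'puppet --version'                                      # rvm: command not found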
My current way forward is the following configuration in template.json (the Packer template); the second and third lines will help you get beyond the point where you are currently stuck:
{
"type": "puppet-masterless",
"prevent_sudo": true,
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
"manifest_file": "./puphpet/puppet/site.pp",
"manifest_dir": "./puphpet/puppet",
"hiera_config_path": "./puphpet/puppet/hiera.yaml",
"module_paths": [
"./puphpet/puppet/modules"
],
"facter": {
"ssh_username": "vagrant",
"provisioner_type": "virtualbox",
"vm_target_key": "vagrantfile-local"
}
},
Note the following things:
Running puppet as the vagrant user will probably not complete provisioning, due to permission issues. In that case we need a way to run source /usr/local/rvm/scripts/rvm under sudo and have it affect the scope of the puppet provisioning command.
The puphpet.com output scripts have /vagrant/puphpet hardcoded in their puppet scripts (e.g. the first line of puphpet/puppet/nodes/Apache.pp). So you might need a Packer file provisioner to upload files to your VM before you execute puppet-masterless, in order for it to find the dependencies in /vagrant/.... My packer.json configuration for this:
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"mkdir /vagrant",
"chown -R vagrant:vagrant /vagrant"
]
},
{
"type": "file",
"source": "./puphpet",
"destination": "/vagrant"
},
Puppet will need some Facter variables as they are expected in the puphpet/puppet/nodes/*.pp scripts. Refer to my template.json above.
As said, no success with a complete Puppet provisioning on my side yet, but the above got me beyond the point where you are currently stuck. Hope it helps.
Update:
I replaced my old execute_command for the puppet provisioner
"execute_command": "source /usr/local/rvm/scripts/rvm && {{.FacterVars}}{{if .Sudo}} sudo -E{{end}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
with a new one
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\""
This will ensure puppet (rvm) is running as root and finishes provisioning successfully.
As an alternative to my other answer, I hereby provide my steps and configuration to get this provisioning scenario working with packer & puphpet.
Assuming the following to be in place:
./: a local directory acting as your own repository, containing:
./ops/: a directory ops inside it, which holds the Packer scripts and required files
./ops/template.json: the Packer template used to build the VM
./ops/template.json expects the following to be in place:
./ops/packer-templates/: a clone of this repo
./ops/ubuntu-14.04.2-server-amd64.iso: the iso for the ubuntu you want to have running in your vm
./puphpet: the output of walking through the configuration steps on puphpet.com (so this is one level up from ops)
The contents of template.json:
{
"variables": {
"ssh_name": "vagrant",
"ssh_pass": "vagrant",
"local_packer_templates_dir": "./packer-templates/ubuntu-14.04-x86_64",
"local_puphput_dir": "../puphpet",
"local_repo_dir": "../",
"repo_upload_dir": "/vagrant"
},
"builders": [
{
"name": "ubuntu-14.04.amd64.virtualbox",
"type": "virtualbox-iso",
"headless": false,
"boot_command": [
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
"debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
"hostname={{ .Name }} ",
"fb=false debconf/frontend=noninteractive ",
"keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
"initrd=/install/initrd.gz -- <enter>"
],
"boot_wait": "10s",
"disk_size": 20480,
"guest_os_type": "Ubuntu_64",
"http_directory": "{{user `local_packer_templates_dir`}}/http",
"iso_checksum": "83aabd8dcf1e8f469f3c72fff2375195",
"iso_checksum_type": "md5",
"iso_url": "./ubuntu-14.04.2-server-amd64.iso",
"ssh_username": "{{user `ssh_name`}}",
"ssh_password": "{{user `ssh_pass`}}",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo '{{user `ssh_pass`}}'|sudo -S bash 'shutdown.sh'",
"guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
"virtualbox_version_file": ".vbox_version",
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "2048"],
["modifyvm", "{{.Name}}", "--cpus", "4"]
]
}
],
"provisioners": [
{
"type": "shell",
"execute_command": "echo '{{user `ssh_pass`}}'|sudo -S bash '{{.Path}}'",
"scripts": [
"{{user `local_packer_templates_dir`}}/scripts/base.sh",
"{{user `local_packer_templates_dir`}}/scripts/virtualbox.sh",
"{{user `local_packer_templates_dir`}}/scripts/vagrant.sh",
"{{user `local_packer_templates_dir`}}/scripts/puphpet.sh",
"{{user `local_packer_templates_dir`}}/scripts/cleanup.sh",
"{{user `local_packer_templates_dir`}}/scripts/zerodisk.sh"
]
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"mkdir {{user `repo_upload_dir`}}",
"chown -R vagrant:vagrant {{user `repo_upload_dir`}}"
]
},
{
"type": "file",
"source": "{{user `local_repo_dir`}}",
"destination": "{{user `repo_upload_dir`}}"
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"rm -fR {{user `repo_upload_dir`}}/.vagrant",
"rm -fR {{user `repo_upload_dir`}}/ops"
]
},
{
"type": "puppet-masterless",
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
"manifest_file": "{{user `local_puphput_dir`}}/puppet/site.pp",
"manifest_dir": "{{user `local_puphput_dir`}}/puppet",
"hiera_config_path": "{{user `local_puphput_dir`}}/puppet/hiera.yaml",
"module_paths": [
"{{user `local_puphput_dir`}}/puppet/modules"
],
"facter": {
"ssh_username": "{{user `ssh_name`}}",
"provisioner_type": "virtualbox",
"vm_target_key": "vagrantfile-local"
}
},
{
"type": "shell",
"execute_command": "sudo bash '{{.Path}}'",
"inline": [
"echo '{{user `repo_upload_dir`}}/puphpet' > '/.puphpet-stuff/vagrant-core-folder.txt'",
"sudo bash {{user `repo_upload_dir`}}/puphpet/shell/important-notices.sh"
]
}
],
"post-processors": [
{
"type": "vagrant",
"output": "./build/{{.BuildName}}.box",
"compression_level": 9
}
]
}
Narration of what happens:
execute the basic provisioning of the VM using the scripts that are used to build puphpet boxes (first shell provisioner block)
create a directory /vagrant in the VM and set permissions for vagrant user
upload local repository to /vagrant (important as puphpet/puppet expects it to exist at that location in its scripts)
remove some unneeded stuff from /vagrant after upload
start puppet provisioner with custom execute_command and facter configuration
process the remaining provisioning scripts (to be extended with the exec once/always and start once/always files)
Note: you might need to prepare some more things before the puppet provisioner kicks off. E.g. I need a directory in place that will be the docroot of a vhost in Apache; see the sketch right after this note. Use shell provisioning to complete the template for your own puphpet configuration.
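For instance, such a docroot can be created with one more inline shell provisioner placed before the puppet-masterless block; a sketch (the /var/www/myproject path is just an example, not from the template above):
{
  "type": "shell",
  "execute_command": "sudo bash '{{.Path}}'",
  "inline": [
    "mkdir -p /var/www/myproject",
    "chown -R vagrant:vagrant /var/www/myproject"
  ]
}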