Fetch a file from task in same Ansible playbook - automation

How do I transfer a file I have created from a previous task in my Ansible playbook? Here is what I got so far:
- name: Create Yum Report
  shell: |
    cd /tmp
    yum history info > $(hostname -s)_$(date "+%d-%m-%Y").txt
  register: after_pir

- name: Transfer PIR
  fetch:
    src: /tmp/{{ after_pir }}
    dest: /tmp/
However, I receive this error message when I run my playbook.
TASK [Transfer PIR] ************************************************************************************************************
failed: [x.x.x.x] (item=after_pir) => {"ansible_loop_var": "item", "changed": false, "item": "after_pir", "msg": "the remote file does not exist, not transferring, ignored"}
I have tried different fetch, synchronize, and pull approaches, but I'm not sure what the issue is.

One way to do that:
- name: Create Yum Report
  command: yum history info
  register: yum_report

- name: Dump report on local disk for each host
  copy:
    content: "{{ yum_report.stdout }}"
    dest: "/tmp/{{ inventory_hostname_short }}-{{ '%d-%m-%Y' | strftime }}"
  delegate_to: localhost
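If the report should instead stay on the remote host and be pulled back with fetch, as in the original attempt, a minimal sketch is to echo the generated filename from the shell task and feed it to fetch (the /tmp location and task names are carried over from the question):
- name: Create Yum Report on the managed node
  shell: |
    report="$(hostname -s)_$(date '+%d-%m-%Y').txt"
    yum history info > "/tmp/${report}"
    echo "${report}"     # last line of stdout is the filename
  register: after_pir

- name: Transfer PIR back to the controller
  fetch:
    src: "/tmp/{{ after_pir.stdout_lines | last }}"
    dest: /tmp/
    flat: yes            # keep just the file, no per-host directory tree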

Ansible ssh error: mux_client_read_packet: read header failed: Broken pipe Received exit status from master

I have a script /wd/remoteuser/stopALL.sh on the remote host, i.e. 10.0.0.211; it takes 3 seconds to complete and has permissions 775 for remoteuser.
Note: /wd/remoteuser/stopALL.sh does not exist on the host where Ansible runs.
I wish to trigger the stop script on the remote host from my Ansible host.
Below is how I run my Ansible playbook.
ansible-playbook /app/playbook/ovs.yml -i /app/playbook/ovs.hosts -t stop -f 5 -e Environment=PROD -e Country=SRILANKA -vvvv
cat /app/playbook/ovs.yml
---
- name: Play 1- check for login and mount point
  hosts: "*{{ Country }}_{{ Environment }}"
  user: "{{ USER }}"
  any_errors_fatal: true
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no -o ConnectTimeout=90 -o ServerAliveInterval=50
    ansible_ssh_private_key_file: /app/ssh_keys/id_rsa
  gather_facts: false
  tasks:
    - name: Execute backup stop1 script
      tags: stop,restart
      script: "{{ stopscript }}"
      args:
        chdir: "{{ stopscript | dirname }}"
      register: stopscriptoutput

    - name: Debug stopscript
      tags: stop,restart
      debug:
        msg: "{{ stopscriptoutput.stdout }}"
cat /app/playbook/ovs.hosts
[APP_SRILANKA_PROD]
10.0.0.211 USER=remoteuser stopscript=/wd/remoteuser/stopALL.sh countrydet=SRILANKA evt=PROD
Output:
<10.0.0.211> (0, '', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,diffie-hellman-group14-sha1,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,gss-gex-sha1-,gss-group14-sha1-]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 190236\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
fatal: [10.0.0.211]: FAILED! => {
    "changed": false,
    "msg": "Could not find or access '/wd/remoteuser/stopALL.sh' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"
}
NO MORE HOSTS LEFT *****************************************************************************************************************************************************
PLAY RECAP *************************************************************************************************************************************************************
10.0.0.211 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I get this ssh read header failed: Broken pipe error even if I use the shell module as shown below.
- name: Execute backup stop1 script
  tags: stop
  shell: "sleep 90; {{ stopscript }}; sleep 90"
  register: stopscriptoutput
Kindly suggest how I can resolve the SSH broken pipe error and get the script to execute remotely.
I set the proxy in the environment variables and it started working.
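Separately, the "Could not find or access ... on the Ansible Controller" failure in the output comes from the script module itself: script copies a script from the controller to the target before running it, so /wd/remoteuser/stopALL.sh would have to exist on the controller. If the script only lives on the remote host, a minimal sketch that runs it in place with command instead would be:
- name: Execute backup stop script already present on the remote host
  tags: stop,restart
  command: "{{ stopscript }}"   # runs the remote copy; nothing is copied from the controller
  args:
    chdir: "{{ stopscript | dirname }}"
  register: stopscriptoutput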

Ansible not reporting distribution info on Ubuntu 20.04?

Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to ansible 2.9.7.
After some research into detecting the Ubuntu 20.04 release, we got it working on Ansible 2.5.1 by using the LSB facts instead:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"

    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"

    - debug:
        msg: "{{ ub18 }}"

    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"

    - debug:
        msg: "{{ ub20 }}"

Multiple Ansible archive(s) are not created

I'm trying to create 2 archives out of 2 folders by using the archive module.
Unfortunately it's not working, and no error is reported.
My tasks looks like this:
tasks:
  - name: create a tarball of logfiles
    archive:
      path: "{{ item.path }}"
      dest: /tmp/{{ ansible_hostname }}_{{ item.name }}_{{ ansible_date_time.date }}.tar.gz
    register: ausgabe
    with_items:
      - { name: 'xxxxxx', path: '/opt/jira/xxx/xxxxxx' }
      - { name: 'xxxxxxx', path: '/opt/jira/xxxx/xxxxxxx' }
Output:
TASK [create a tarball of logfiles] ************************************************************************************************************************************************
ok: [xxxxxxx] => (item={u'path': u'/opt/jira/xxx/xxxx', u'name': u'xxxxx'})
ok: [xxxxxxx] => (item={u'path': u'/opt/jira/xxx/xxxx', u'name': u'xxxxxx'})
The tar.gz files are not created.
Can somebody help me on this?
Thx
Harry
Whenever you are using variables or templating in your playbook, make sure you quote the values properly with double quotes (").
I modified your archive module's parameters and got the required result.
archive:
  dest: "/tmp/{{ ansible_hostname }}_{{ item.name }}_{{ ansible_date_time.date }}.tar.gz"
  path: "{{ item.path }}"
Output:
myHost_xxxxxx_2018-06-12.tar.gz
myHost_xxxxxxx_2018-06-12.tar.gz
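For completeness, a corrected version of the whole task from the question might look like this (placeholder names and paths kept as-is):
- name: create a tarball of logfiles
  archive:
    path: "{{ item.path }}"
    dest: "/tmp/{{ ansible_hostname }}_{{ item.name }}_{{ ansible_date_time.date }}.tar.gz"
  register: ausgabe
  with_items:
    - { name: 'xxxxxx', path: '/opt/jira/xxx/xxxxxx' }
    - { name: 'xxxxxxx', path: '/opt/jira/xxxx/xxxxxxx' }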

"Failed to connect to the host via ssh" error Ansible

I am trying to run the following playbook on Ansible:
- hosts: localhost
  connection: local
  remote_user: test
  gather_facts: no
  vars_files:
    - files/aws_creds.yml
    - files/info.yml
  tasks:
    - name: Basic provisioning of EC2 instance
      ec2:
        assign_public_ip: no
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        region: "{{ aws_region }}"
        image: "{{ standard_ami }}"
        instance_type: "{{ free_instance }}"
        key_name: "{{ ssh_keyname }}"
        count: 3
        state: present
        group_id: "{{ secgroup_id }}"
        wait: no
        #delete_on_termination: yes
        instance_tags:
          Name: Dawny33Template
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: launched
      with_items: "{{ ec2.instances }}"

    ## Here lies the SSH code
    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

- name: Configure instance(s)
  hosts: launched
  become: True
  gather_facts: True
  #roles:
  #  - my_awesome_role
  #  - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      ec2:
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
I am getting the following error:
TASK [setup] *******************************************************************
fatal: [52.32.183.176]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.32.183.176' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.255.16]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.255.16' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.253.51]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.253.51' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
My ansible.cfg file already has the following:
[defaults]
host_key_checking = False
Yet, the playbook run is failing. Can someone help me with what I am doing wrong?
The answer has to lie in:
Permission denied (publickey).
You got past host key checking - your problem is with authentication.
Are you intending to use key-based authentication? If so, does
ssh <host> -l <ansible_user>
work for you, or does it produce a password prompt?
Are you trying to use password authentication? If so, it looks like your node does not allow it.
Edit:
Adding -vvvv to your ansible-playbook command enables SSH debugging.
Is SSH set up properly? The logs indicate your public key isn't working.
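If key-based authentication is the intent, one common fix is to tell Ansible which login user and private key to use for the dynamically added hosts. This is only a sketch: the ec2-user login and the key path are assumptions, not values from the question:
- name: Add new instance to host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: launched
    ansible_user: ec2-user                                        # assumption: the AMI's default login user
    ansible_ssh_private_key_file: "~/.ssh/{{ ssh_keyname }}.pem"  # assumption: local path of the key pair passed as key_name
  with_items: "{{ ec2.instances }}"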

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role create_ec2_instance creates the ec2hosts group used in the second part of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module invocation, and it works as desired (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Rarely, I connect as 'ansible' and then become the 'apache' user. So remote_user specifies the username to connect with, and become_user is the username after the connection.
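Applied to the second play from the question above, that would look roughly like this ('admin' is the remote user already used there; become provides the root privileges the failing apt task needs):
- hosts: ec2hosts
  remote_user: admin     # user to connect as
  become: true
  become_user: root      # user after privilege escalation
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project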
PS Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see here: Specify sudo password for Ansible
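When touching /etc/sudoers with lineinfile it is safer to let visudo validate the change; a sketch of the same task with validation added:
- name: nopasswd sudo for ansible user
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: '^{{ ansible_user }}'
    line: '{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'
    validate: 'visudo -cf %s'   # reject the edit if the resulting sudoers file is invalid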