Use copy module in playbook with variables

I am trying to write a playbook that will let me copy any file on my local machine to any location on the destination machine. Permissions aren't an issue because the copy source and destination are within the permissions of the user.
In all examples I see for the copy module, the source, and destination are hard-coded into the playbook as such:
tasks:
  - name: stuff
    copy:
      src: /my/path/file.name
      dest: /my/remote/path/file.name
That is all well and good, except it hardcodes things that I want to set on the command line with --extra-vars. My need is to be able to define the source file and destination on the command line so I can copy any file to any destination with the same playbook and without modifications to the playbook.
How do I set up the playbook to accept variables for both src and dest, so I can use this sort of command line to call it?
shell> ansible-playbook -e "host_list=myhosts srcvar=/my/path/file.name destvar=/my/remote/path/file.name" playbook.yml
I've tried using the Jinja2 notation src: "{{ srcvar }}" and dest: "{{ destvar }}" in the playbook and then calling it on the command line with
-e "srcvar=/my/path/file.name destvar=/my/remote/path/file.name"
, but it gives this error under TASK [Copy files]:
fatal: [test_server]: FAILED! => {"msg": "'str object' has no attribute 'files'"}

The project below works as expected
shell> tree .
.
├── ansible.cfg
├── hosts
└── playbook.yml
0 directories, 3 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
collections_path = $HOME/.local/lib/python3.9/site-packages/
inventory = $PWD/hosts
roles_path = $PWD/roles
remote_tmp = ~/.ansible/tmp
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
[myhosts]
test_11
test_13
[myhosts:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.8
ansible_perl_interpreter=/usr/local/bin/perl
The playbook
shell> cat playbook.yml
- hosts: "{{ host_list }}"
  tasks:
    - copy:
        src: "{{ srcvar }}"
        dest: "{{ destvar }}"
shell> ll /tmp/file.name
-rw-r--r-- 1 admin admin 3248 Feb 4 03:26 /tmp/file.name
Running the playbook copied the file to the remote hosts:
shell> ansible-playbook -e "host_list=myhosts srcvar=/tmp/file.name destvar=/tmp/file.name" playbook.yml
PLAY [myhosts] *******************************************************************************
TASK [copy] **********************************************************************************
changed: [test_11]
changed: [test_13]
PLAY RECAP ***********************************************************************************
test_11: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shell> ssh admin@test_11 ls -la /tmp/file.name
-rw-r--r-- 1 root wheel 3248 Feb 4 02:35 /tmp/file.name
shell> ssh admin@test_13 ls -la /tmp/file.name
-rw-r--r-- 1 root wheel 3248 Feb 4 02:35 /tmp/file.name
The play is idempotent
shell> ansible-playbook -e "host_list=myhosts srcvar=/tmp/file.name destvar=/tmp/file.name" playbook.yml
PLAY [myhosts] *******************************************************************************
TASK [copy] **********************************************************************************
ok: [test_11]
ok: [test_13]
PLAY RECAP ***********************************************************************************
test_11: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes:
You can use the -e option multiple times, making the command easier to read and (probably) less error-prone:
shell> ansible-playbook -e "host_list=myhosts" -e "srcvar=/tmp/file.name" -e "destvar=/tmp/file.name" playbook.yml
You can put the parameters into a YAML file
shell> cat params.yml
host_list: myhosts
srcvar: /tmp/file.name
destvar: /tmp/file.name
and use it on the command line with the @ prefix:
shell> ansible-playbook -e @params.yml playbook.yml
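If you want the playbook to fail with a clear message when one of the variables is missing, a minimal sketch could pre-check them with assert (the assert task and its message are my addition, not part of the original playbook):
- hosts: "{{ host_list }}"
  tasks:
    - assert:
        that:
          - srcvar is defined
          - destvar is defined
        fail_msg: "Pass srcvar and destvar with -e"
    - copy:
        src: "{{ srcvar }}"
        dest: "{{ destvar }}"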

Related

ansible playbook / bash script

Hello,
First of all, sorry for my poor English; I'm French and I used Google Translate.
I'm using an Ansible playbook to run a bash script. In my test environment it works, but not in my production environment, and I don't understand why.
The script scans for violations, checks the status of the services, and at the end generates a report.
In Ansible the task completes, but the report is not generated, whereas when I run the script directly from the terminal of my VM (hosted on AWS), the script works and generates the report for me.
Can you help me, please?
- name: run the scan to generate deviation report
  become: yes
  command: sh /<path>
I tried several commands but I always have the same result. I tried the ansible script module and launched it in debug mode with bash -x
I tested it like below.
Script:
root@swarm01:/myworkspace/ansible# cat hello.sh
#!/usr/bin/sh
touch abc
Playbook:
root@swarm01:/myworkspace/ansible# cat test.yml
- name: simple script
  hosts: localhost
  tasks:
    - name: simple script
      command: sh hello.sh
root@swarm01:/myworkspace/ansible# ansible-playbook test.yml
PLAY [simple script] ***********************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
ok: [localhost]
TASK [simple script] ***********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It worked for me; I see an abc file was created.
I suggest using the script module.
This will allow the playbook to run the script on remote systems as well as on localhost. It's probably best to make sure you have the shebang on the first line of the script.
- name: simple script
  hosts: localhost
  tasks:
    - name: simple script
      script:
        cmd: hello.sh
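If the report is still not generated on the production host, it can help to capture what the script prints; a minimal sketch (the register/debug task and the variable name script_out are my addition, not from the original answer):
- name: simple script
  hosts: localhost
  tasks:
    - name: run the script and keep its output
      script:
        cmd: hello.sh
      register: script_out
    - name: show what the script printed
      debug:
        var: script_out.stdout_lines
Also check where the report lands: command and script run from the remote user's home directory unless chdir is set, so with become: yes a report written to a relative path may end up in a different directory than when you run the script manually.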

Using Ansible authorized_key module to copy SSH key fails with sshpass needed error

I am trying to copy my .pub key file located in ~/.ssh/mykey.pub to one of the remote hosts using Ansible.
I have a very simple playbook containing this task:
- name: SSH-copy-key to target
  hosts: all
  tasks:
    - name: Copying local SSH key to target
      ansible.posix.authorized_key:
        user: '{{ ansible_user_id }}'
        state: present
        key: "{{ lookup('file', '~/.ssh/mykey.pub') }}"
Because the host is 'new', I add the --ask-pass parameter so that I am asked for the SSH password on the first connection attempt.
However, I receive an error saying that I need to install the sshpass program.
The following is returned:
➜ ansible ansible-playbook -i inventory.yaml ssh.yaml --ask-pass
SSH password:
PLAY [SSH-copy-key to target] ********************************************************************
TASK [Gathering Facts] ***************************************************************************
fatal: [debian]: FAILED! => {"msg": "to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program"}
PLAY RECAP ***************************************************************************************
debian : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
➜ ansible
I am running Ansible from a MacBook. I also tried replacing the key value with the URL of my GitHub account, but the same error appears.
key: https://github.com/myuseraccount.keys
Any ideas?
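Not from the original thread, but two common ways around this on macOS are to install sshpass (it is not in the default Homebrew formulas, so it typically comes from a third-party tap or is built from source) or to use the paramiko connection plugin, which handles password prompts without sshpass, roughly like this:
➜ ansible ansible-playbook -i inventory.yaml ssh.yaml --ask-pass -c paramiko
Depending on the Ansible version, the paramiko Python package may need to be installed first (pip install paramiko).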

Adding file with specific content from secretsmanager into machine on startup using AWS CloudFormation

I am using CloudFormation to provision a Linux instance. During startup I want to add a file to a specific folder whose content comes from a SecretString stored in Secrets Manager.
I tried to add the file using UserData and Metadata; however, instead of writing the correct content from Secrets Manager into the file, it just writes the reference string as-is, i.e. the location of the content rather than the content itself. This is my code:
Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /home/ansible/.ssh/authorized_keys:
          content: !Sub |
            '{{ resolve:secretsmanager:
            arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
            SecretString:
            secretstringpath }}'
          mode: "000644"
          owner: "ansible"
          group: "ansible"
Properties:
  UserData:
    Fn::Base64: !Sub |
      #!/bin/bash -xe
      yum update -y
      groupadd -g 110 ansible
      adduser ansible -g ansible
      mkdir /home/ansible/.ssh
      yum install -y aws-cfn-bootstrap
      /opt/aws/bin/cfn-init -v \
        --stack ${AWS::StackName} \
        --resource LinuxEC2Instance \
        --region ${AWS::Region}
      cat /home/ansible/.ssh/authorized_keys
The cat command prints this:
{{ resolve:secretsmanager:
arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
SecretString:
secretstringpath }}
i.e. the literal reference string instead of the secret's content.
How do I ensure that the actual secret content is added to the file?
You cannot have dynamic references to Secrets Manager within AWS::CloudFormation::Init.
It could be as simple as switching your quotation marks from single to double:
Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /home/ansible/.ssh/authorized_keys:
          content: !Sub |
            "{{ resolve:secretsmanager:
            arn:aws:secretsmanager:eu-central-1:account:secret:secretname:
            SecretString:
            secretstringpath }}"
          mode: "000644"
          owner: "ansible"
          group: "ansible"
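If dynamic references still do not resolve inside AWS::CloudFormation::Init, an alternative sketch is to fetch the secret in the UserData script with the AWS CLI instead. This is my suggestion, not from the original answers; it assumes the instance profile allows secretsmanager:GetSecretValue, that the AWS CLI is present on the AMI, and it reuses the placeholder ARN from the question:
      aws secretsmanager get-secret-value \
        --secret-id arn:aws:secretsmanager:eu-central-1:account:secret:secretname \
        --region ${AWS::Region} \
        --query SecretString --output text > /home/ansible/.ssh/authorized_keys
      chown ansible:ansible /home/ansible/.ssh/authorized_keys
      chmod 644 /home/ansible/.ssh/authorized_keys
If the secret stores JSON, the specific key (secretstringpath in the question) would still have to be extracted, for example with jq.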

Ansible ssh fails with error: Data could not be sent to remote host

I have an Ansible playbook that executes a shell script on the remote host 10.8.8.88 as many times as the number of files provided as a parameter:
ansible-playbook test.yml -e files="file1,file2,file3,file4"
The playbook looks like below:
- name: Call ssh
  shell: ~./execute.sh {{ item }}
  with_items: "{{ files.split(',') }}"
This works fine for a smaller number of files, say 10 to 15. But I happen to have 145 files in the argument.
This is when the execution broke and the play failed mid-way with the error message below:
TASK [shell] *******************************************************************
[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: entrycurrdb.stdout.find("{{ BASEPATH
}}/{{ vars[(item | splitext)[1].split('.')[1]] }}/{{ item | basename }}") == -1
and actualfile.stat.exists == True
[WARNING]: sftp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.8.8.88]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"10.8.8.88\". Make sure this host can be reached over ssh: ", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.8.8.88 : ok=941 changed=220 unreachable=1 failed=0 skipped=145 rescued=0 ignored=0
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have the latest Ansible, and the pipelining and SSH settings in ansible.cfg are at their defaults.
I have the following questions:
How can I resolve the above issue?
I guess this could be due to a network issue. Is it possible to run an endless SSH ping to the remote server for testing purposes, to see whether the ansible command line breaks? It would help me prove my case. A sample command that keeps pinging the remote over SSH is what I'm looking for.
Is it possible to force Ansible to retry the SSH connection a few times in case of such failures, so that it may connect during the retries? If so, I would appreciate knowing where and how that can be set in the playbook as a vars variable rather than in the ansible.cfg file.
https://docs.ansible.com/ansible/2.4/intro_configuration.html#retries
Something similar to:
vars:
  ansible_ssh_private_key_file: "{{ key1 }}"
Many Thanks !!
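A hedged sketch for the retry part: the ssh connection plugin exposes its retry count as a variable, so setting it at play level should work without touching ansible.cfg (the value 5 is just an example):
vars:
  ansible_ssh_retries: 5
For the "keep pinging over SSH" test, a plain shell loop outside Ansible is enough, for example (replace <user> with the account the playbook connects with):
while true; do ssh <user>@10.8.8.88 true || echo "failed at $(date)"; sleep 1; done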

Problems with ansible 2.7.9 using host_vars (undefined variable)

Ansible 2.7.9 is not using host_vars
I've set up a very simple environment with 3 hosts, mainly for testing purposes. I have the hosts:
- ansible1 (this is where I store the code)
- ansible2
- ansible3
My inventory:
[ansible@ansible1 ~]$ cat /etc/ansible/hosts
[common]
ansible1
ansible2
ansible3
My ansible.cfg looks like this:
[ansible@ansible1 ~]$ cat /etc/ansible/ansible.cfg
[defaults]
roles_path = /etc/ansible/roles
inventory = /etc/ansible/hosts
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining = False
[accelerate]
[selinux]
[colors]
I have defined a master playbook called common.yml which calls the common role:
[ansible@ansible1 ~]$ ls /etc/ansible/roles/
common common.retry common.yml
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common.yml
--- # Playbook for webservers
- hosts: common
  roles:
    - common
[ansible@ansible1 ~]$
The tasks/main.yml:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/tasks/main.yml
- name: test ansible1
  lineinfile:
    dest: /tmp/ansible.txt
    create: yes
    line: "{{ myvar }}"
- name: set ansible2
  lineinfile:
    dest: /tmp/ansible2.txt
    create: yes
    line: "hi"
[ansible@ansible1 ~]$
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
Then I placed some info at /etc/ansible/host_vars:
[ansible@ansible1 ~]$ ls /etc/ansible/hosts_vars/
ansible2.yml
[ansible@ansible1 ~]$ cat /etc/ansible/hosts_vars/ansible2.yml
myvar: "myvar from host_vars"
[ansible@ansible1 ~]$
This works great with the playbook:
[ansible@ansible1 ~]$ ansible-playbook /etc/ansible/roles/common.yml --limit ansible2
PLAY [common] ******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [ansible2]
TASK [common : test ansible1] **************************************************
changed: [ansible2]
TASK [common : set ansible2] ***************************************************
changed: [ansible2]
PLAY RECAP *********************************************************************
ansible2 : ok=3 changed=2 unreachable=0 failed=0
I see the file with the content of myvar:
[root@ansible2 ~]# cat /tmp/ansible.txt
value of myvar from common/vars
[root@ansible2 ~]#
But then I don't understand why it is not taking the value from /etc/ansible/hosts_vars/ansible2.yml; in fact, if I comment out the line in /etc/ansible/roles/common/vars/main.yml, it says the variable is undefined:
[ansible@ansible1 ansible]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
This is as expected: the role's vars/main.yml gets sourced automatically while executing the playbook; consider this file to hold global variables.
The reason ansible2.yml is not getting sourced is that Ansible expects you to source it explicitly while executing.
You can use the code below for that (generic):
---
- name: play
  hosts: "{{ hosts }}"
  tasks:
    - include_vars: "{{ hosts }}.yml"
Trigger:
ansible-playbook -i inventory --extra-vars "hosts=ansible2"
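As a side note (not from the original answer): files in a directory named exactly host_vars, placed next to the inventory or the playbook, are picked up automatically without include_vars; the ls output above shows hosts_vars, which Ansible does not treat specially. A minimal layout sketch, assuming the standard /etc/ansible inventory path:
[ansible@ansible1 ~]$ tree /etc/ansible
/etc/ansible
├── hosts
└── host_vars
    └── ansible2.yml
Here ansible2.yml is loaded automatically for the host ansible2.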
Ansible uses this precedence order for variables, from least to most important:
role defaults
inventory file or script group vars
inventory group_vars/all
playbook group_vars/all
inventory group_vars/*
playbook group_vars/*
inventory file or script host vars
inventory host_vars/*
playbook host_vars/*
host facts
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
role (and include_role) params
include params
include_vars
set_facts / registered vars
extra vars (always win precedence)
So it is better to avoid roles/vars here, because it takes precedence over host_vars; I should instead use roles/defaults, which has a lower precedence.
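A minimal sketch of that change, assuming the same role layout and variable name (the defaults file path is the standard one, not shown in the original post):
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/defaults/main.yml
myvar: "value of myvar from common/defaults"
With the variable defined in defaults/main.yml, the value from host_vars for ansible2 overrides it (inventory host_vars sit much higher in the list above), while hosts without a host_vars file fall back to the default.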