New to Ansible. I'm trying to get this result:
variable tgt_wls_pwd = the value of env1.wls_pwd, i.e. 1234
variable tgt_apps_pwd = the value of env1.apps_pwd, i.e. 5678
The unencrypted password file vault.yml:
env1:
  wls_pwd: 1234
  apps_pwd: 5678
Playbook, run as:
ansible-playbook tgt-app-stop.yml --extra-vars="target_clone=env1"
- name: Stop Application Tier(s) process
  hosts: "{{ target_clone }}-app01"
  any_errors_fatal: true
  remote_user: ansible
  become: yes
  become_user: install
  roles:
    - oraapp-stop
  vars_files:
    - vault.yml
  tasks:
    - set_fact:
        target_clone: "{{ target_clone }}"
      vars:
        # tgt_wls_pwd: "{{ target_clone }}||{{ wls_pwd }}"
        # tgt_apps_pwd: "{{ target_clone }}||{{ apps_pwd }}"
        # tgt_wls_pwd: "{{ target_clone ['{{ wls_pwd }}'] }}"
        # tgt_apps_pwd: "{{ target_clone ['{{ apps_pwd }}'] }}"
        tgt_wls_pwd: "{{ target_clone.wls_pwd }}"
        tgt_apps_pwd: "{{ target_clone.apps_pwd }}"
I've tried quite a few permutations. target_clone is an extra variable passed to the playbook at runtime.
Thanks.
You'll need the vars lookup plugin. See
shell> ansible-doc -t lookup vars
For example, given the file
shell> cat vault.yml
env1:
  wls_pwd: 1234
  apps_pwd: 5678
and the inventory
shell> cat hosts
env1-app01
The playbook
shell> cat tgt-app-stop.yml
- hosts: "{{ target_clone }}-app01"
  gather_facts: false
  vars_files:
    - vault.yml
  vars:
    tgt_wls_pwd: "{{ lookup('vars', target_clone).wls_pwd }}"
    tgt_apps_pwd: "{{ lookup('vars', target_clone).apps_pwd }}"
  tasks:
    - debug:
        msg: |
          tgt_wls_pwd: {{ tgt_wls_pwd }}
          tgt_apps_pwd: {{ tgt_apps_pwd }}
gives
shell> ansible-playbook tgt-app-stop.yml -e "target_clone=env1"
PLAY [env1-app01] ****************************************************************************
TASK [debug] *********************************************************************************
ok: [env1-app01] =>
  msg: |-
    tgt_wls_pwd: 1234
    tgt_apps_pwd: 5678
PLAY RECAP ***********************************************************************************
env1-app01: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
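By the way, on recent Ansible releases indexing the vars magic variable should give the same result as the lookup. This is only a variation, not verified here:

  vars:
    tgt_wls_pwd: "{{ vars[target_clone].wls_pwd }}"
    tgt_apps_pwd: "{{ vars[target_clone].apps_pwd }}"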
Related
An answer on Stack Overflow suggests using - debug: var=vars or - debug: var=hostvars to print all the variables used by an Ansible playbook.
Using var=hostvars did not print all of the variables, but I did get them all printed when I added the following task to the top of the main.yml file of the role executed by my playbook:
- name: print all variables
  debug:
    var: vars
The problem is that the values of the variables printed out are not fully evaluated if they are dependent on the values of other variables. For example, here is a portion of what gets printed out:
"env": "dev",
"rpm_repo": "project-subproject-rpm-{{env}}",
"index_prefix": "project{{ ('') if (env=='prod') else ('_' + env) }}",
"our_server": "{{ ('0.0.0.0') if (env=='dev') else ('192.168.100.200:9997') }}",
How can I get Ansible to print out the variables fully evaluated like this?
"env": "dev",
"rpm_repo": "project-subproject-rpm-dev",
"index_prefix": "project_dev",
"our_server": "0.0.0.0",
EDIT:
After incorporating the tasks section from the answer into my playbook file and removing the roles section, my playbook looks like the following (install-vars.yml contains some variable definitions):
- hosts: all
  become: true
  vars_files:
    - install-vars.yml
  tasks:
    - debug:
        msg: |-
          {% for k in _my_vars %}
          {{ k }}: {{ lookup('vars', k) }}
          {% endfor %}
      vars:
        _special_vars:
          - ansible_dependent_role_names
          - ansible_play_batch
          - ansible_play_hosts
          - ansible_play_hosts_all
          - ansible_play_name
          - ansible_play_role_names
          - ansible_role_names
          - environment
          - hostvars
          - play_hosts
          - role_names
        _hostvars: "{{ hostvars[inventory_hostname].keys() }}"
        _my_vars: "{{ vars.keys()|
                      difference(_hostvars)|
                      difference(_special_vars)|
                      reject('match', '^_.*$')|
                      list|
                      sort }}"
When I try to run the playbook, I get this failure:
shell> ansible-playbook playbook.yml
SSH password:
SUDO password[defaults to SSH password]:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.100.111]
TASK [debug] *******************************************************************
fatal: [192.168.100.111]: FAILED! => {"failed": true, "msg": "lookup plugin (vars) not found"}
to retry, use: --limit @/usr/local/project-directory/installer-1.0.0.0/playbook.retry
PLAY RECAP *********************************************************************
192.168.100.111 : ok=1 changed=0 unreachable=0 failed=1
The minimal playbook below
shell> cat pb.yml
- hosts: localhost
  gather_facts: false
  vars:
    test_var1: A
    test_var2: "{{ test_var1 }}"
  tasks:
    - debug:
        var: vars
reproduces the problem you described. For example,
shell> ansible-playbook pb.yml | grep test_var
test_var1: A
test_var2: '{{ test_var1 }}'
Q: How can I print out the actual values of all the variables used by an Ansible playbook?
A: You can get the actual values of the variables when you evaluate them. For example,
shell> cat pb.yml
- hosts: localhost
  gather_facts: false
  vars:
    test_var1: A
    test_var2: "{{ test_var1 }}"
  tasks:
    - debug:
        msg: |-
          {% for k in _my_vars %}
          {{ k }}: {{ lookup('vars', k) }}
          {% endfor %}
      vars:
        _special_vars:
          - ansible_dependent_role_names
          - ansible_play_batch
          - ansible_play_hosts
          - ansible_play_hosts_all
          - ansible_play_name
          - ansible_play_role_names
          - ansible_role_names
          - environment
          - hostvars
          - play_hosts
          - role_names
        _hostvars: "{{ hostvars[inventory_hostname].keys() }}"
        _my_vars: "{{ vars.keys()|
                      difference(_hostvars)|
                      difference(_special_vars)|
                      reject('match', '^_.*$')|
                      list|
                      sort }}"
gives the playbook's vars, fully evaluated:
  msg: |-
    test_var1: A
    test_var2: A
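As for the failure in the EDIT above: lookup plugin (vars) not found indicates an Ansible release older than 2.5, where the vars lookup was introduced. If upgrading is not an option, indexing the vars magic dictionary directly might work as a substitute. A sketch, not verified on such old releases:

    - debug:
        msg: |-
          {% for k in _my_vars %}
          {{ k }}: {{ vars[k] }}
          {% endfor %}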
Looking for an answer to the same question, I found the following solution from this link:
- name: Display all variables/facts known for a host
  debug:
    var: hostvars[inventory_hostname]
  tags: debug_info
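With the tag in place, a run can be limited to just this task (the playbook name here is assumed):
shell> ansible-playbook site.yml --tags debug_info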
If the user does not pass the dest_path parameter, or if dest_path is empty (i.e. contains only whitespace), then dest_path should default to the source_file value.
Below is my playbook:
---
- name: "Play 1"
  hosts: localhost
  any_errors_fatal: false
  gather_facts: false
  tasks:
    - set_fact:
        dest_path: "{{ dest_path | default(source_file) }}"
    - set_fact:
        dest_path: "{{ source_file }}"
      when: dest_path | length == 0
    - debug:
        msg: "DESTINATION PATH IS: {{ dest_path }} and the SOURCE PATH is: {{ source_file }}"
This is how you run this playbook:
ansible-playbook -i /web/playbooks/allmwhosts.hosts /web/playbooks/test.yml -e '{ source_file: /web/bea_apps/CURRENT }' -e dest_path=
In the above example, when the user does not specify any value for dest_path, I'm expecting dest_path to be source_file, i.e. /web/bea_apps/CURRENT.
However, as you can see in the output below that is not the case:
Output:
PLAY [Play 1] *******************
TASK [set_fact] **************************************************************************************
ok: [localhost]
TASK [set_fact] **************************************************************************************
ok: [localhost]
TASK [debug] *****************************************************************************************
ok: [localhost] => {
"msg": "DESTINATION PATH IS: and the SOURCE PATH is: /web/bea_apps/CURRENT"
}
PLAY RECAP *******************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Can you please suggest?
The extra variable you pass on the command line overrides the dest_path variable you try to set with set_fact, because extra vars have the highest precedence in Ansible. Below is the working code; in set_fact I replaced dest_path with path.
---
- name: "Play 1"
  hosts: localhost
  any_errors_fatal: false
  gather_facts: false
  tasks:
    - set_fact:
        path: "{{ source_file }}"
      when: (dest_path is not defined) or (dest_path | length == 0)
    - set_fact:
        path: "{{ dest_path }}"
      when: (dest_path is defined) and (dest_path | length != 0)
    - debug:
        msg: "DESTINATION PATH IS: {{ path }} and the SOURCE PATH is: {{ source_file }}"
I have a group of remote hosts called dest_nodes which has a set_fact variable called mailsub.
I wish to get the value of mailsub in a different localhost play, and I use map('extract') for the same.
However, I don't get the value assigned by set_fact when I debug MAILSUBISX in the localhost play.
Below is my playbook:
- name: "Play 2 Configure Destination nodes"
hosts: dest_nodes
any_errors_fatal: True
tasks:
- set_fact:
mailsub: "WebSphere | on IP and NUMBER OF PROFILES"
- name: "Play 3-Construct inventory"
hosts: localhost
any_errors_fatal: True
gather_facts: no
tasks:
- debug:
msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') }}"
- set_fact:
mailfrom: "creator#myshop.com"
Here is the error received when I run the playbook and debug the variable on a different host:
Output:
"msg": "MAILSUBISX:<generator object do_map at 0x7fba4b5698c0>"
I also tried the following:
- debug:
    msg: "MAILSUBISX:{{ hostvars[groups['dest_nodes']]['mailsub'] }}"
and
- debug:
    msg: "MAILSUBISX:{{ hostvars['dest_nodes']['mailsub'] }}"
But they too do not work.
This is how dest_nodes is constructed:
---
- name: "Play 1-Construct inventory"
  hosts: localhost
  any_errors_fatal: True
  gather_facts: no
  tasks:
    - name: Add hosts
      include_tasks: "{{ playbook_dir }}/gethosts1.yml"
      loop:
        - 01
        - 02
        - 03
      loop_control:
        loop_var: my_result
cat gethosts1.yml
---
- name: Generate JSON data
  add_host:
    hostname: "{{ item }}_{{ my_result }}"
    groups: dest_nodes
    ansible_host: "{{ item }}"_NUMBER }}"
  with_items: "{{ SERVER_IP.split(',') }}"
Can you please suggest?
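The <generator object do_map at 0x7fba4b5698c0> in the debug output is the usual symptom of printing a lazy Jinja2 map under Python 3; materializing it with the list filter should display the values. A sketch, untested against this generated inventory:

    - debug:
        msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') | list }}"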
I need to deploy TICK.
How do you use variables in kapacitor.conf?
For example: username = "{{ admin }}"
I have a kapacitor.conf with variables to replace, and I have a file default.yml with variables.
Kapacitor.conf
username = "{{ admin }}"
password = "{{ admin_password }}"
default.yml
---
admin: admin
admin_password: admin
An option would be to use lineinfile. Given the variables
> cat default.yml
username: admin
password: admin_password
the playbook below
- hosts: localhost
  vars_files:
    - default.yml
  tasks:
    - lineinfile:
        path: Kapacitor.conf
        regexp: "^{{ item.key }}:"
        line: "{{ item.key }}:{{ item.value }}"
        create: yes
      loop:
        - {key: 'admin', value: "{{ username }}"}
        - {key: 'admin_password', value: "{{ password }}"}
gives:
> cat Kapacitor.conf
admin:admin
admin_password:admin_password
The next option (for some, the first) would be the template module.
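A minimal sketch of the template approach (the file name kapacitor.conf.j2 and destination path are assumptions): keep the config as a Jinja2 template and render it with the variables from the question's default.yml.

    - hosts: localhost
      vars_files:
        - default.yml
      tasks:
        - template:
            src: kapacitor.conf.j2   # a copy of the question's Kapacitor.conf, e.g. username = "{{ admin }}"
            dest: kapacitor.conf

With admin: admin and admin_password: admin from default.yml, the rendered file would contain username = "admin" and password = "admin".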
I am using Ansible to create several EC2 instances, copy files into those newly created servers, and run commands on those servers. The issue is that after creating the servers I still have to type yes at the following SSH prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have Ansible skip this prompt and answer yes automatically? For reference, here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2
    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"
    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true
    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid editing that file and instead put something in my playbook or on the command line when I run it.
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea that only assumes the network isn't MitMed when you first create the servers is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
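Adapted to the playbook above, which registers the result as ec2 and connects over private IPs, the task might look roughly like this (a sketch; it would go right after the Wait for SSH to come up task):

    - name: accept new ssh fingerprints
      shell: ssh-keyscan -H {{ item.private_ip }} >> ~/.ssh/known_hosts
      with_items: "{{ ec2['tagged_instances'] }}"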