How to query an Ansible dictionary with dynamic variables

How do I query an Ansible dictionary with dynamic variables?
I want to use Ansible to read the serial number from the iDRAC of a Dell server, and then set the address according to that serial number.
My source code:
---
- hosts: all
  name: set iDRAC Ipaddr
  gather_facts: False
  vars:
    svctag_test: xxx30S2
    network_configs:
      xxx30S2:
        ip: 192.168.192.86
  tasks:
    - name: get dell server service-tag
      raw: racadm getsvctag
      register: svctag
    - name: show svctag
      debug:
        msg="{{ svctag }}"
    - name: show network
      debug:
        msg="{{ network_configs[svctag_test].ip }}"
    - name: set idrac ip svctag to vars
      set_fact:
        SVCTAG: "{{ svctag.stdout_lines }}"
    - name: show SVCTAG
      debug:
        msg="{{ SVCTAG }}"
    - name: show network 2
      debug:
        msg="{{ network_configs[SVCTAG].ip }}"
        #msg="{{ network_configs[SVCTAG] }}"
        #msg="{{ hostvars[inventory_hostname][network_configs][SVCTAG] }}"
        #msg="{{ lookup('vars', network_configs )[SVCTAG]}}"
    - name: set dell server idrac ip from service-tag
      raw: racadm config -g cfgLanNetworking -o cfgNicIpAddress "{{ network_configs[SVCTAG].ip }}"

- name: set idrac ip svctag to vars
  set_fact:
    SVCTAG: "{{ svctag.stdout_lines }}"

svctag.stdout_lines is a list, not a string. Try:

- name: set idrac ip svctag to vars
  set_fact:
    SVCTAG: "{{ svctag.stdout_lines[0] }}"
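Since raw command output can also carry trailing whitespace or carriage returns, another common pattern (a sketch, not from the original answer) is to take the whole stdout string and trim it before using it as a dictionary key:

```yaml
- name: set idrac ip svctag to vars
  set_fact:
    # svctag.stdout is the whole command output as one string;
    # trim strips the trailing newline that racadm may emit
    SVCTAG: "{{ svctag.stdout | trim }}"

- name: show network for this service tag
  debug:
    msg: "{{ network_configs[SVCTAG].ip }}"
```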


Retrieve variable values from a var file in an Ansible playbook

Need some help to retrieve child values from a var file.
Below is my var file:
api_user: root
api_password: !vault |
          $ANSIBLE_VAULT;1.2;AES256;isi33835326238346338613761366531376662613865376263656262
          6138
Huston:
  onefs_host: 10.88.55.00
Phoenix:
  onefs_host: 10.76.52.01
Below is my playbook
---
- name: isi_increase
  hosts: localhost
  connection: local
  vars_files:
    - isilonvars.yml
  tasks:
    - name: Print
      debug:
        msg:
          - "{{ Huston.onefs_host }}"
          - "{{ api_user }}"
          - "{{ api_password }}"
This code works perfectly:
TASK [Print] ********************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        "10.88.55.00",
        "root",
        "Sh00tm3!"
    ]
}
But as per my requirement, I have to retrieve the onefs_host IP according to a location passed into my playbook. I am using extra vars here: -e "location=Huston"
- name: Print
  debug:
    msg:
      # - "{{ wcc.onefs_host }}"
      - "{{ {{location}}.onefs_host }}"
      - "{{ api_user }}"
      - "{{ api_password }}"
I am getting the below error.
fatal: [localhost]: FAILED! => {"msg": "template error while templating string: expected token ':', got '}'. String: {{ {{isilon_location}}.onefs_host }}"}
Can you try this way:
- name: Print
  debug:
    msg:
      - "{{ vars[location]['onefs_host'] }}"
      - "{{ api_user }}"
      - "{{ api_password }}"
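The reason the original attempt fails is that Jinja2 does not nest {{ }} delimiters: inside an expression, location is already evaluated as a variable, so you index the vars namespace with it instead. Two equivalent forms (a sketch, assuming the play is run with -e "location=Huston"):

```yaml
- name: Print
  debug:
    msg:
      # index the vars dict with the runtime value of 'location'
      - "{{ vars[location]['onefs_host'] }}"
      # or use the vars lookup plugin
      - "{{ lookup('vars', location).onefs_host }}"
```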

Ansible "msg": "Unable to connect to vCenter or ESXi API at IP on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)"

I'm running a playbook against a host and getting this error:
"msg": "Unable to connect to vCenter or ESXi API at 192.11.11.111 on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)"
We are using vCenter 6.5. I have a playbook that should let the Ansible controller talk to the vSphere vCenter. I exported the trusted root SSL certificates from the vSphere home page, copied them over to my Ansible controller, and installed them with:
sudo mv 9dab0099.0.crt 9dab0099.r0.crl 11ec582d.0.crt /etc/pki/ca-trust/source/anchors
sudo update-ca-trust force-enable
sudo update-ca-trust extract
What I have tried so far:
- able to telnet to the host on 443
- able to ping the host continuously
- tried changing validate_certs to: no
- tried changing validate_certs to: yes
- tried changing validate_certs to: false
My playbook:
- name: Add an additional cpu to virtual machine server
  hosts: '{{ target }}'
  tasks:
    - name: Login into vCenter and get cookies
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
    - name:
      uri:
        url: https://{{ vcenter_hostname }} #/rest/com/vmware/cis/session
        force_basic_auth: yes
        validate_certs: no
        method: POST
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
      #register: login
    - name: Stop virtual machine
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
        state: "poweredoff"
    - name: reconfigure CPU and RAM of VM
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
        state: "present"
        validate_certs: "false"
        folder: "{{ vm_folder }}"
        hardware:
          memory_gb: "{{ memory }}"
          num_cpus: "{{ cpu }}"
          scsi: "lsilogic"
ESXi firewall rules are open.
I reproduced the error with Python 2.7.5 and Python 3.6; the newest version of pyvmomi is installed.
Can someone point me in the right direction from here?
Try putting
validate_certs: no
into each of the tasks.
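Concretely, the first vmware_guest task in the playbook above has no validate_certs at all, so it falls back to the module default of verifying the certificate. A sketch of that task with the option added at the module-parameter level:

```yaml
- name: Login into vCenter and get cookies
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no   # skip SSL verification for this task
    folder: "{{ vm_folder }}"
    cluster: "{{ vcenter_cluster }}"
    datacenter: "{{ vcenter_datacenter }}"
    name: "{{ vm_name }}"
```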

How can we access a set_fact of a remote host from localhost in Ansible

I have a remote host group called dest_nodes which has a set_fact called mailsub.
I wish to get the value of mailsub in a different localhost play, and I use map('extract') for the same.
However, I don't get the value assigned by set_fact when I debug MAILSUBISX on localhost.
Below is my playbook:
- name: "Play 2 Configure Destination nodes"
  hosts: dest_nodes
  any_errors_fatal: True
  tasks:
    - set_fact:
        mailsub: "WebSphere | on IP and NUMBER OF PROFILES"

- name: "Play 3-Construct inventory"
  hosts: localhost
  any_errors_fatal: True
  gather_facts: no
  tasks:
    - debug:
        msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') }}"
    - set_fact:
        mailfrom: "creator#myshop.com"
Here is what I get when I run the playbook and debug the variable on the other host:
Output:
"msg": "MAILSUBISX:<generator object do_map at 0x7fba4b5698c0>"
I also tried the following:

- debug:
    msg: "MAILSUBISX:{{ hostvars[groups['dest_nodes']]['mailsub'] }}"

and

- debug:
    msg: "MAILSUBISX:{{ hostvars['dest_nodes']['mailsub'] }}"

But they do not work either.
This is how dest_nodes is constructed:
---
- name: "Play 1-Construct inventory"
  hosts: localhost
  any_errors_fatal: True
  gather_facts: no
  tasks:
    - name: Add hosts
      include_tasks: "{{ playbook_dir }}/gethosts1.yml"
      loop:
        - 01
        - 02
        - 03
      loop_control:
        loop_var: my_result

cat gethosts1.yml
---
- name: Generate JSON data
  add_host:
    hostname: "{{ item }}_{{ my_result }}"
    groups: dest_nodes
    ansible_host: "{{ item }}"_NUMBER }}"
  with_items: "{{ SERVER_IP.split(',') }}"
Can you please suggest?
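The debug output itself hints at the likely fix: under Python 3, Jinja2's map() returns a lazy generator rather than the values, which is why the message shows <generator object do_map>. One common remedy (an assumption based on that output, not part of the original thread) is to materialize the generator with the list filter:

```yaml
- debug:
    msg: "MAILSUBISX:{{ groups['dest_nodes'] | map('extract', hostvars, 'mailsub') | list }}"
```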

Iterating over variables in Ansible

I'm trying to use the same set of variables for the various modules in my play (with some slight variations, as you will see).
It seemed logical to include them as vars at the top of my play, but I then have trouble referring to them later on. So far I've done this:
- name: destruction instance sur GCP
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: ansible-test-248409
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /google/service-accounts/ansible-test-248409-fbadc808948d.json
    zone: europe-west1-b
    region: europe-west1
    machine_type: n1-standard-1
    machines:
      - webserver-1
      - webserver-2
      - webserver-3
      - devops-1
      - devops-2
  tasks:
    - name: destruction des machines
      gcp_compute_instance:
        name: "{{ machines }}"
        state: absent
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
    - name: destruction des disques
      gcp_compute_disk:
        name: "{{ machines }}-disk"
        state: absent
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
Which gives me this error message:
[WARNING]: The value ['webserver-1', 'webserver-2', 'webserver-3', 'devops-1', 'devops-2'] (type list) in a string field was
converted to u"['webserver-1', 'webserver-2', 'webserver-3', 'devops-1', 'devops-2']" (type string). If this does not look like what
you expect, quote the entire value to ensure it does not change.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Invalid JSON response with error: <HTML>\n<HEAD>\n<TITLE>Bad Request</TITLE
>\n</HEAD>\n<BODY BGCOLOR=\"#FFFFFF\" TEXT=\"#000000\">\n<H1>Bad Request</H1>\n<H2>Error 400</H2>\n</BODY>\n</HTML>\n"}
Using 'lookup' or 'query' doesn't work either. Can anyone see what I'm missing?
Use the with_items: option:
tasks:
  - name: destruction des machines
    gcp_compute_instance:
      name: "{{ item }}"
      state: absent
      zone: "{{ zone }}"
      project: "{{ gcp_project }}"
      auth_kind: "{{ gcp_cred_kind }}"
      service_account_file: "{{ gcp_cred_file }}"
      scopes:
        - https://www.googleapis.com/auth/compute
    with_items: "{{ machines }}"
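The same pattern applies to the disk task, and newer Ansible spells with_items as loop. A sketch (the -disk suffix follows the naming in the original playbook):

```yaml
- name: destruction des disques
  gcp_compute_disk:
    name: "{{ item }}-disk"   # one disk per machine name
    state: absent
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    scopes:
      - https://www.googleapis.com/auth/compute
  loop: "{{ machines }}"
```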

Ansible - How to ssh into an instance without the 'authenticity of host' prompt?

I am using Ansible to create several EC2 instances, copy files into those newly created servers, and run commands on them. The issue is that after creating the servers I still have to enter yes at the following SSH prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have Ansible ignore this prompt and just answer yes automatically? For reference, here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2
    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"
    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true
    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt; however, I want to avoid doing this, and would rather put something into my playbook or on the command line when I run it.
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea, which only assumes the network isn't MITMed when you first create the servers, is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
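If the goal is only to skip the prompt for a single run without editing ansible.cfg, host key checking can also be disabled through an environment variable on the command line (this carries the same man-in-the-middle risk as the config-file option):

```shell
# disables the host key prompt for this invocation only
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook consul-server.yml
```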