Include variables generated in first play of playbook - automation

I have a playbook which consists of two plays:
1: Create inventory file and variables file on localhost
2: Use the variables in commands on generated inventory
Example playbook:
---
- name: Generating inventory and variables
  hosts: localhost
  vars_files:
    - variables.yml # variables file used for automating
  tasks:
    - name: Creating inventory template
      template:
        src: hosts.j2
        dest: "./inventories/{{location}}/hosts"
        mode: 0777
        force: yes
      ignore_errors: yes
      run_once: true
    - meta: refresh_inventory
    - name: Creating predefined variables from a template
      template:
        src: predefined-vars.yml.j2
        dest: "./variables/predefined-vars.yml"

- name: Setting vlan to network devices
  remote_user: Ansible
  hosts: all
  vars_files:
    - variables.yml # variables file used for automating
    - variables/predefined-vars.yml
  tasks:
    - name: configure Junos ROUTER for vlan
      include_tasks: ./roles/juniper/tasks/add_vlan_rt.yml
      when:
        - inventory_hostname in groups['junos_routers']
        - groups['junos_routers'] | length == 1
        - location == inventory_name
This gives an undefined variable error (for a variable created in the first play).
Is there a way to do this? I use this for generating variables like router_port_name and so on; these variables depend on the location and dedicated server, which are defined in variables.yml.
Any help is really appreciated.
Thanks
EDIT: However, I have noticed that this playbook:
---
- hosts: localhost
  gather_facts: false
  name: 1
  vars_files:
    - variables.yml
  tasks:
    - name: Creating predefined variables from a template
      template:
        src: predefined-vars.yml.j2
        dest: "./variables/predefined-vars.yml"

- name: Generate hosts file
  hosts: all
  vars_files:
    - variables.yml
    - ./variables/predefined-vars.yml
  tasks:
    - name: test
      debug: msg="{{ router_interface_name }}"
shows the variables created in the first play.
The difference I see is that the first playbook reads all the variable files used anywhere in the playbook (even predefined-vars.yml, which is created in the first play and used in the second) at the start of the first play, while the second playbook reads variables.yml in the first play and only reads predefined-vars.yml at the start of the second play.
Any ideas how to make the first playbook behave the same way?

So I have found the solution to the problem, based on the documentation and suggestions from other people.
What I understood about the problem:
A playbook reads all the variables (for all plays) provided via vars_files into a cache at startup, so if I include my predefined-vars.yml in vars_files and then change the file in the first play, later plays will not pick up the changes because they use the cached copy.
Thus I had to add another task to the second play, which reads (loads into the cache) my newly generated file for that play:
- name: Include predefined vars
  include_vars: ./variables/predefined-vars.yml
  run_once: true
Hope this helps you!
Still have no idea why second play shows the variables...
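For reference, with that task added the second play of the first playbook could look roughly like this (a sketch assembled from the snippets above; nothing new beyond the include_vars task):

```yaml
- name: Setting vlan to network devices
  hosts: all
  remote_user: Ansible
  vars_files:
    - variables.yml
  tasks:
    # Load the file generated by the first play at execution time,
    # instead of listing it in vars_files (which is read from cache).
    - name: Include predefined vars
      include_vars: ./variables/predefined-vars.yml
      run_once: true
    - name: configure Junos ROUTER for vlan
      include_tasks: ./roles/juniper/tasks/add_vlan_rt.yml
      when:
        - inventory_hostname in groups['junos_routers']
        - groups['junos_routers'] | length == 1
        - location == inventory_name
```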

How to dynamically set the hosts field in Ansible playbooks with a variable generated during execution?

I am trying to test something at home with the variables mechanism Ansible offers, which I am about to implement in one of my projects at work. I've been searching for a while now, but it seems I can't get it working that easily, even with others' solutions here and there.
I will represent my project logic at work by demonstrating with my test directory and file structure at home. Here's the case: I have the following playbooks:
main.yaml
pl1.yaml
pl2.yaml
Contents of ./main.yaml:
- import_playbook: /home/martin/ansible/pl1.yaml
- import_playbook: /home/martin/ansible/pl2.yaml
Contents of ./pl1.yaml:
- name: Test playbook 1
  hosts: localhost
  tasks:
    - name: Discovering the secret host
      shell: cat /home/martin/secret
      register: whichHostAd
    - debug:
        msg: "{{ whichHostAd.stdout }}"
    - name: Discovering my hostname
      shell: hostname
      register: myHostnameAd
    - set_fact:
        whichHost: "{{ whichHostAd.stdout }}"
        myHostname: "{{ myHostnameAd.stdout }}"
        cacheable: yes

- name: Test playbook 1 part 2
  hosts: "{{ hostvars['localhost']['ansible_facts']['whichHost'] }}"
  tasks:
    - name: Structuring info
      shell: hostname
      register: secretHostname
    - name: Showing the secret hostname
      debug:
        msg: "{{ secretHostname.stdout }}"
Contents of ./pl2.yaml:
- name: Test Playbook 2
  hosts: "{{ whichHost }}"
  tasks:
    - name: Finishing up
      shell: echo "And here am i again.." && hostname
    - name: Showing var myHostname
      debug:
        msg: "{{ myHostname.stdout }}"
The whole idea is to have a working variable in the hosts field between the plays. How do we do that?
The playbook does not run at all if I don't define the whichHost variable as an extra arg, and that's OK, I can do it each time, but during execution I would like that variable to be manageable and changeable. In the test case above, I want whichHost to be usable everywhere across the plays/playbooks included in main.yaml, specifically to reflect the output of the first task in pl1.yaml (the whichHostAd.stdout variable), so I can determine the host I am about to target in pl2.yaml.
According to the docs, I should be able to at least access it with hostvars (as in my playbook), but this is the output I get when I try the above example:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'dict object' has no attribute 'whichHost'
The error appears to have been in '/home/martin/ansible/pl1.yaml': line 22, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Test playbook 1 part 2
^ here
set_fact also does not seem to be very helpful. Any help will be appreciated!
Ok, I've actually figured it out pretty fast.
So, we definitely need a set_fact task holding the actual data/output:
- hosts: localhost
  tasks:
    - name: Saving variable
      set_fact:
        whichHost: "{{ whichHostAd.stdout }}"
After that, when you want to use the variable in other hosts and plays, you have to reference the host and the fact:
"{{ hostvars['localhost']['whichHost'] }}"
Like in my test above, but without ['ansible_facts'].
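Putting the answer together, a minimal two-play version (using the paths and task names from the question) might look like this sketch:

```yaml
- name: Test playbook 1
  hosts: localhost
  tasks:
    - name: Discovering the secret host
      shell: cat /home/martin/secret
      register: whichHostAd
    - name: Saving variable
      set_fact:
        whichHost: "{{ whichHostAd.stdout }}"

# Any later play (even in another playbook imported by main.yaml)
# can now target the discovered host through hostvars:
- name: Test Playbook 2
  hosts: "{{ hostvars['localhost']['whichHost'] }}"
  tasks:
    - name: Finishing up
      shell: echo "And here am i again.." && hostname
```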

Ansible playbook for Mapping all Routers and Switches in the network using CDP Neighbor

Hello, I need help with writing a play to find all routers and switches in the network.
Environment:
Ansible 2.8
Python 2.7
Test network in eve-ng
Router and Switches have ios
Problem Statement:
Start at the core switch and by using cdp neighbors traverse all the paths till the last switch/router inside the domain.
The depth of the network is unknown.
Output:
JSON containing a hierarchical ordering of network devices.
{
  A: {A1, A2},
  C: {C1, C5: {C5i: {..}, C5j}}
}
My Attempt:
---
- name: Backup show run (enable mode commands)
  hosts: ["testrouter"]
  gather_facts: false
  connection: network_cli
  vars:
    ansible_network_os: ios
    grand_parent: ["testrouter"]
  tasks:
    - name: CDP for "{{ inventory_hostname }}"
      register: all_facts
      ios_facts:
        gather_subset: all
    - name: filter cdp neighbors for all facts
      debug:
        msg: "Child of {{ inventory_hostname }} is {{ item.value[0].host }}"
      loop: "{{ lookup('dict', all_facts.ansible_facts.ansible_net_neighbors) }}"
      when: item.value|length == 1
    - name: Remove previous grand_parent
      set_fact:
        root: "['{{ parent[0] }}']"
      when: parent|length == 2
    - name: Add the latest host as grand_parent
      set_fact:
        root: "{{ parent + [ inventory_hostname ] }}"
      when: parent|length == 1
I have written this script in Python using netmiko previously, but now we have a requirement for it to be written in Ansible.
Problems:
I don't know how to modify hosts dynamically as I discover new hosts via CDP neighbors.
Plus, I need recursion to explore to an unknown depth.
Also, since I am learning Ansible for the first time, I am worried I will overcomplicate things and write bloated code.
Thank you for your time.
What you are doing here is programming. You are trying to write a program using a tool that is less suited for programming than any programming language out there. Maybe Brainfuck is worse, but I'm not sure.
There is no good answer to your question of 'how to do this complicated business logic with Ansible', just as there is no good answer to 'how to tighten a nut with a hammer'.
What you need to do (either):
Write a stand-alone application and use it in conjunction with Ansible (via a REST API, inventory, stdin/stdout, you name it).
Write a module for Ansible. You get JSON on stdin and answer with JSON on stdout. There are Ansible helpers for Python, but you are free to use any language for a module.
Write a lookup plugin for Ansible. This is more tricky, and you need to keep it operational as Ansible evolves.
I advise you to go for No. 1 or 2.
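To illustrate option 2, here is a bare-bones sketch of the old-style module contract (JSON args in, JSON result out). The start_host and neighbors argument names are illustrative assumptions; a real module would build the neighbor map from actual CDP output rather than receive it as an argument:

```python
import json


def walk_neighbors(start, neighbors, seen=None):
    """Build a nested dict from a host -> [neighbor, ...] map, skipping
    hosts already visited so physical loops do not recurse forever."""
    seen = set(seen or ()) | {start}
    return {child: walk_neighbors(child, neighbors, seen)
            for child in neighbors.get(start, []) if child not in seen}


def run_module(stdin_text):
    """Old-style module contract: JSON arguments in, JSON result out."""
    args = json.loads(stdin_text or "{}")
    start = args.get("start_host", "core")
    neighbors = args.get("neighbors", {})  # host -> list of CDP neighbors
    return json.dumps({"changed": False,
                       "topology": {start: walk_neighbors(start, neighbors)}})

# Wiring for real use as a module script:
# import sys; print(run_module(sys.stdin.read()))
```

This keeps the recursion (with loop protection) in plain Python, where it is trivial, instead of fighting Ansible's task model.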

Why can't I access vars loaded via file in Ansible?

Why can't I access these Ansible file variables from within the Ansible task?
I've tried vars_files as well, with combinations of calling global.varname and global[varname]:
- hosts: localhost
  gather_facts: True
  remote_user: root
  - include_vars: site_vars.yml
  tasks:
    - digital_ocean:
        state: present
        command: droplet
        unique_name: yes
        name: do_16_04_common
        api_token: "{{HOST_PROVIDER}}"
global_vars.yml:
global:
  WORKER_TAG_PREFIX:"dev"
  HOST_PROVIDER:"heroku-prod"
Error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'WORKER_TAG_PREFIX'"}
Firstly, your vars file is broken - it requires spaces between : and the values (and you don't need quotes for strings in this example):
global:
  WORKER_TAG_PREFIX: dev
  HOST_PROVIDER: heroku-prod
The above is the reason for the included error, but the code also has a syntax error, which should be thrown first:
The correct syntax to include vars files at the play level is to define a vars_files key containing a list of files:
- hosts: localhost
  gather_facts: True
  remote_user: root
  vars_files:
    - site_vars.yml
  tasks:
    # ...
On the other hand, include_vars is a module (action) name for a task.
If you wanted to use it, you should add it to the tasks list:
- hosts: localhost
  gather_facts: True
  remote_user: root
  tasks:
    - include_vars: site_vars.yml
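With either fix in place, the nested values can then be read through the top-level global key, for example (a sketch; both dot and bracket notation work):

```yaml
- debug:
    msg: "{{ global.WORKER_TAG_PREFIX }} / {{ global['HOST_PROVIDER'] }}"
```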

Ansible include roles that have been defined in hostvars

I am trying to do the following:
define appropriate host roles in hostvars
create a role to call ONLY the roles that relate to a specific host and have been defined in a variable in hostvars
Is there a way to do this?
eg:
host_vars/hostname_one/main.yml:
roles_to_install:
  - role_one
  - role_two
  - ...
run_all_roles.yml
---
- hosts: '{{ TARGET }}'
  become: yes
  ...
  roles:
    - { role: "roles_to_install" }
Obviously this does not work.
Is there a way to make ansible-playbook -i <hosts_file> run_all_roles.yml -e "TARGET=hostname_one" work?
This is not how you should be approaching your roles and inventories.
Instead, if you put your hosts in the inventory in appropriate groups you can use the hosts parameter of the playbook to drive what gets installed where.
For example I might have a typical web application that is running on NGINX with some application specific things (such as a Python environment), but is also fronted by some NGINX servers that may serve static content and there could also be a typical database.
My inventory might then look like this:
[frontend-web-nodes]
web-1.example.org
web-2.example.org
[application-nodes]
app-1.example.org
app-2.example.org
[database-nodes]
database.example.org
Now, I can create a playbook for my database role that installs and configures some database, and set hosts: database-nodes to make sure the play (and so the role(s) it runs) only targets the database.example.org box.
So something like this:
- name: database
  hosts: database-nodes
  roles:
    - database
For my frontend and application web nodes I have a shared dependency on installing and configuring NGINX but my application servers also need some other things. So my front end web nodes can be configured with a simple play like this:
- name: frontend-web
  hosts: frontend-web-nodes
  roles:
    - nginx
While for my application nodes I might either have something like this:
- name: application
  hosts: application-nodes
  roles:
    - nginx
    - application
Or I could just do this:
- name: application
  hosts: application-nodes
  roles:
    - application
And in my roles/application/meta/main.yml define a dependency on the nginx role:
dependencies:
  - role: nginx
As I commented, the solution was easier than expected:
---
- hosts: '{{ TARGET }}'
  become: yes
  vars_files:
    - ./vars/main.yml
  roles:
    - { role: "roleA", when: "'roleA' in roles_to_install" }
    - { role: "roleB", when: "'roleB' in roles_to_install" }
  ...
Assuming that a correct roles_to_install var is defined inside host_vars/$fqdn/main.yml like so:
---
roles_to_install:
  - roleA
  - roleB
  - ...
Thank you for your assistance, guys.
What about this:
playfile.yml:
- hosts: all
  tasks:
    - when: host_roles is defined
      include_role:
        name: "{{ role_item.name }}"
      loop: "{{ host_roles }}"
      loop_control:
        loop_var: role_item
hostvars_file.yml:
host_roles:
  - name: myrole1
    myrole1_var1: "myrole1_value1"
    myrole1_var2: "myrole1_value2"
  - name: myrole2
    myrole2_var1: "myrole2_value1"
    myrole2_var2: "myrole2_value2"
but then your host_roles would be run during task execution; normally, roles are executed before tasks.
Alternatively, why not have a role for this:
roles/ansible.hostroles/tasks/main.yml:
---
# tasks file for ansible.hostroles
- when: host_roles is defined
  include_role:
    name: "{{ role_item.name }}"
  loop: "{{ host_roles }}"
  loop_control:
    loop_var: role_item
playfile.yml:
- hosts: all
  roles:
    - ansible.hostroles

Ansible gets variable only from second execution

I've noticed some strange behavior of the Ansible copy module when it works with variables.
So, I have:
1. Config.yml:
- hosts: temp
  vars_prompt:
    - name: server_name
      prompt: "Enter server number: 1, 2, 3..."
      private: no
      default: 5
    - name: server_role
      prompt: "Enter server role: app, admin"
      private: no
      default: admin
    - name: server_type
      prompt: "Enter server type: stage, prod"
      private: no
      default: stage
  pre_tasks:
    - name: Types and roles
      set_fact:
        servername: "{{ server_name }}"
        serverrole: "{{ server_role }}"
        servertype: "{{ server_type }}"
  vars_files:
    - "vars/variables"
  roles:
    - configs
"Configs" role with main.yml:
---
- set_fact: folder=server
  when: serverrole == "app"
- set_fact: folder=admin-server
  when: serverrole == "admin"
- set_fact: stageorprod=stage01
  when: servertype == "stage"
- set_fact: stageorprod=prod
  when: servertype == "prod"
- set_fact: fast={{ stageorprod }}/{{ folder }}/{{ servername }}
- name: Base copying admin-server
  copy: src=admin-server/config dest=/home/tomcat/config/{{ fast }}/
  when: serverrole == "admin"
Config files in ansible/roles/configs/files/admin-server/config.
When I run the playbook with the default values of the variables (5, admin, stage), I get:
TASK: [configs | set_fact fast={{stageorprod}}/{{folder}}/{{servername}}] *****
ok: [testcen04] => {"ansible_facts": {"fast": "stage01/admin-server/5"}, "item": ""}
TASK: [configs | Base copying admin-server] ***********************************
failed: [testcen04] => {"failed": true, "item": "", "md5sum": "cb2547d6235c078cfda365a5fb3c27c3",
"path": "/home/tomcat/config/stage01/admin-server/config", "state": "absent"}
msg: path /home/tomcat/config/stage01/admin-server/config does not exist
When I run this task one more time with the same values, everything goes OK. But if I change some variable, the error appears again.
I have noticed that other modules, like template, work fine in the same playbook with these variables. Maybe something is wrong with copy?
As you can see, the variable fast gets the right values, but somehow the value of servername disappeared.
Your question is quite ambiguous as to how you are executing your playbook and what you are trying to accomplish with the prompted variables. If you are trying to spin up servers, it's likely better to declare them in the inventory without prompting. If you are trying to access specific ones, you should likely be using groups to limit their range.
If you are using Ansible to generate a set of hosts, you will likely want to store this information somewhere consistent, probably in instance tags, a key-value store such as Redis, a database, or in files, before you spin up the hosts and bootstrap them. Then run a second playbook to include the role.
If you are not in a public cloud and for some reason cannot tag instances or group them in inventory, you can also try using facts.d to set the facts on the server and have them persist across runs, not just plays. Note that once you write to facts.d files, you should re-run the setup module to gather facts again. Even though I use a public cloud, I often still make use of facts.d.
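As a sketch of the facts.d approach (the file name server.fact and its keys are illustrative assumptions): one task writes a JSON fact file on the managed host, and after re-running setup it is available under ansible_local on every subsequent run:

```yaml
- name: Persist prompted values across runs
  copy:
    dest: /etc/ansible/facts.d/server.fact
    content: "{{ {'name': servername, 'role': serverrole, 'type': servertype} | to_json }}"

- name: Re-read local facts so ansible_local is populated this run
  setup:
    filter: ansible_local

- debug:
    msg: "{{ ansible_local.server.role }}"
```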