Tailor Ansible roles to environment - roles

I have a number of environments that require a bunch of text files to be tailored so that things like Mule speak to the right endpoints.
For this environment, this works:
ansible-playbook test03.yml
The only difference between an environment (from ansible's perspective) is the information held in ./roles/esb/vars/main.yml.
I've considered using svn to keep a vars/main.yml for each environment, so each time I need to configure an environment I check out roles and then vars/main.yml for that environment, before I run the command above.
To me, not an elegant solution. How can I do this better?
Directory structure
./test03.yml
./roles/esb/vars/main.yml
./roles/esb/tasks/main.yml
./roles/esb/templates/trp.properties.j2
./test03.yml
---
- hosts: test03-esb
  gather_facts: no
  roles:
    - esb
./roles/esb/vars/main.yml
---
jndiProviderUrl: 'jnp://mqendpoint.company.com:1099'
trp_endpoint_estask: 'http://tmfendpoint.company.com:8080/tmf/estask'
trp_endpoint_builderQ: 'jnp://mqendpoint.company.com:1099'
./roles/esb/tasks/main.yml
---
- name: replace variables in templates
  template: src=trp.properties.j2 dest=/path/to/mule/deploy/conf/trp.properties
./roles/esb/templates/trp.properties.j2
trp.endpoint.estask={{ trp_endpoint_estask }}
trp.endpoint.builderQ={{ trp_endpoint_builderQ }}

In order to use specific values for different environments, all you need to do is move your variables from the role itself into a variables file for each specific environment, e.g.
production
|- group_vars
|  `- servers
|- inventory
staging
|- group_vars
|  `- servers
|- inventory
development
|- group_vars
|  `- servers
|- inventory
roles
|- esb
|  |- tasks
|  |  `- main.yml
|  |- templates
|  |  `- trp.properties.j2
etc.
Inside each environment's group_vars/servers you can specify variables specific to that environment, e.g.
$ cat production/group_vars/servers
---
jndiProviderUrl: 'jnp://mqendpoint.company.com:1099'
trp_endpoint_estask: 'http://tmfendpoint.company.com:8080/tmf/estask'
trp_endpoint_builderQ: 'jnp://mqendpoint.company.com:1099'
$ cat staging/group_vars/servers
---
jndiProviderUrl: 'jnp://staging.mqendpoint.company.com:1099'
trp_endpoint_estask: 'http://staging.tmfendpoint.company.com:8080/tmf/estask'
trp_endpoint_builderQ: 'jnp://staging.mqendpoint.company.com:1099'
$ cat development/group_vars/servers
---
jndiProviderUrl: 'jnp://dev.mqendpoint.company.com:1099'
trp_endpoint_estask: 'http://dev.tmfendpoint.company.com:8080/tmf/estask'
trp_endpoint_builderQ: 'jnp://dev.mqendpoint.company.com:1099'
The Jinja2 template can then remain the same (it doesn't care where the variables come from after all).
You would then execute:
# Provision production
$ ansible-playbook $playbook -i production/inventory
# Provision staging
$ ansible-playbook $playbook -i staging/inventory
# Provision development
$ ansible-playbook $playbook -i development/inventory
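For completeness, each environment's inventory file just needs to place its ESB hosts into the servers group that the group_vars file above is named after; a minimal sketch, with hypothetical hostnames:

```ini
# production/inventory (hostnames are made up for illustration)
[servers]
esb01.company.com
esb02.company.com

# staging/inventory
# [servers]
# esb01.staging.company.com
```

Because the group is named servers in every environment, the matching group_vars/servers file is loaded automatically for whichever inventory you pass with -i.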

I tackle it this way
inv
|- prod
|- qa
Then within the prod/qa inventory files, I specify top level groups like so:
#inv/prod
[prod:children]
#all the rest of prod groups
#inv/qa
[qa:children]
#all the qa groups
Which allows me to have the following group_vars structure, where I keep all the relevant environment variables within these files (including 'env: qa' or 'env: prod'), so I can perform operations based on actual group membership, or based on the value of the env variable. Note that the variable name 'environment' is reserved and should not be used; you could also use the variable name 'stage' for this purpose.
inv
|- group_vars
|  |- qa
|  |- prod
When I run a playbook, I do it like this:
ansible-playbook appservers.yml -i inv/prod
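With an env value set in each environment's group_vars file, tasks can branch on it; a minimal sketch (the task and message are made up for illustration, and env is assumed to come from group_vars/qa as described above):

```yaml
# group_vars/qa would contain:  env: qa
# Then in a playbook or role task:
- name: only run this against QA
  debug:
    msg: "running against the QA endpoints"
  when: env == 'qa'
```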

Related

Ansible 2.8 Roles - using the vars/main directory

I am trying to split my Ansible role variables into multiple files - as per this answer, it should be possible to create a vars/main directory, and all the .yml files in that directory should be automatically loaded.
However, this does not seem to happen in my case.
My directory structure:
vars
└── main
├── gce_settings.yml
├── vsphere_settings.yml
└── vsphere_zone.yml
However, when I try to use a variable defined inside vsphere_settings.yml, Ansible complains that the variable is undefined:
{"msg": "The task includes an option with an undefined variable. The error was: 'vsphere_user' is undefined
If I move the variable declaration into vars/main.yml, everything works as expected. But, of course, I would prefer to separate my variables into multiple files.
I was unable to find any reference to this "feature" in the official Ansible documentation, and I do not know how I could troubleshoot it. Can anyone point me in the right direction?
My ansible version:
ansible 2.8.5 on Ubuntu 16.04
And before you ask: yes, I did make sure that main.yml was not present when trying to load vars/main/*.yml...
The example below
$ cat play.yml
- hosts: localhost
  roles:
    - role1
$ cat roles/role1/tasks/main.yml
- debug:
    var: var1
$ cat roles/role1/vars/main/var1.yml
var1: test_var1
gives
"var1": "test_var1"
TL;DR version: it is a bug in Ansible, caused by the presence of empty .yml files in vars/main. There is a PR out for it already; see here.
The actual result (as mentioned in the comments) depends on the order in which the files are processed (by default, my Ansible seems to process them in alphabetical order, but this might depend on the version or the underlying OS):
If the empty file is processed first, you get an error message: ERROR! failed to combine variables, expected dicts but got a 'NoneType' and a 'AnsibleMapping'
If the empty file is processed after other files, there is no error message, but all the variables that have been set up to this point are destroyed
More details: since I have no idea how to troubleshoot this in Ansible itself, I went to the code and started to add print statements to see what is happening.
In my case, I had some empty .yml files in the vars/main directory (files that I was planning to populate later on). Well... it looks like when the code encounters such an empty file, it destroys the entire dictionary that it has built so far.
Either I'm missing something very important, or this is a bug... Steps to reproduce:
create a role
create the vars/main directory, and populate it with some .yml files
add an empty .yml file (named so that it is processed after the other files)
try to print the variables from the first files
ansible# tree roles
roles
`-- test
    |-- tasks
    |   `-- main.yml
    `-- vars
        `-- main
            `-- correct_vars.yml

4 directories, 2 files
ansible# cat roles/test/vars/main/correct_vars.yml
myname: bogd
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "bogd"
}
...
ansible# echo > roles/test/vars/main/emptyfile.yml
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "VARIABLE IS NOT DEFINED!"
}
...
Later edit: yep, it's a bug... See here.
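Until the fix lands, one workaround (a sketch, based on the behaviour described above) is to avoid truly empty files in vars/main: a file you plan to populate later can hold an empty mapping, so the vars merge still sees a dict rather than a NoneType:

```yaml
# roles/test/vars/main/emptyfile.yml
# Placeholder to be filled in later. An empty mapping instead of a
# zero-byte file should keep the vars merge from hitting a NoneType.
{}
```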

How to dynamically load var files and combine them into one variable using ansible

I want to dynamically include var files in Ansible. Dynamically means that the user can provide a list as an extra-var; that list will be transformed into an array and defines the files to load. This is possible so far. What makes it hard is the fact that those files shall result in a single object holding the information.
This works and loads all the files in the folder and creates a variable (projects) out of the values specified there:
- name: Load project-specific Configuration
  include_vars:
    name: projects
    dir: "{{ project_vars_dir }}"
    extensions:
      - yml
To reach my goal and give the ability to specify which files to load, I tried the following:
- name: Load project-specific Configuration (requested projects only)
  include_vars:
    name: projects
    file: "{{ project_vars_dir }}/{{ item }}.yml"
  with_items: "{{ projectlist.split(',') | list }}"
I can now call my playbook and specify --extra-vars like so: --extra-vars projectlist=projectA,projectB
Loading these files works, but the last file always overwrites the projects variable. How can I combine it?
Many thanks in advance
This is a somewhat complex loop so you'll need 2 files and the include_tasks module:
In tasks.yml you put:
- include_vars:
    name: file_vars
    file: "{{ item }}"
- set_fact:
    all_vars: "{{ file_vars | combine(all_vars | default({})) }}"
In playbook.yml you put:
- hosts: all
  tasks:
    - include_tasks: tasks.yml
      loop: "{{ projectlist.split(',') | list }}"
Ansible is not meant to be used as a programming language so complex loops are hard to write elegantly. Ideally you should look for built-in modules which handle your use case (which isn't the case here, as far as I know), write your own custom module or look into prepackaged roles written by someone else.
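One caveat about the set_fact above: with Jinja2's combine(), keys from the argument override keys from the base, so "file_vars | combine(all_vars | default({}))" lets files loaded earlier in the loop win over later ones. If you want later files to take precedence instead, flip the order (a sketch):

```yaml
- set_fact:
    # later files in the loop override earlier ones
    all_vars: "{{ all_vars | default({}) | combine(file_vars) }}"
```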

How come data is not coming from my hiera yaml file?

I am using Puppet Enterprise 3.7.2 and on one of my nodes I create the file:
[root@vii-osc4-mgmt-001 ~]# cat /etc/profile.d/POD_prefix.sh
export FACTER_pod_prefix=vii-osc4
Then I rebooted that node and logged back in and verified that
the FACTER_pod_prefix gets set and facter pod_prefix outputs the
expected value.
[root@vii-osc4-mgmt-001 ~]# env | grep FACTER_pod_prefix
FACTER_pod_prefix=vii-osc4
[root@vii-osc4-mgmt-001 ~]# facter pod_prefix
vii-osc4
On my PE 3.7 Puppet master I created the file /var/lib/hiera/vii-osc4.yaml.
I created the /var/lib/hiera/vii-osc4.yaml from the /var/lib/hiera/defaults.yaml
file that I had been using like so:
# cp /var/lib/hiera/defaults.yaml /var/lib/hiera/vii-osc4.yaml
This file has a bunch of class parameter values. For example, there is this line in the file:
controller_vip_name: vii-osc4.example.com
Then I changed my hiera.yaml file to look like this:
[root@osc4-ppt-001 ~]# cat /etc/puppetlabs/puppet/hiera.yaml
---
:backends:
  - yaml
:hierarchy:
  - "%{pod_prefix}"
  - defaults
  - "%{clientcert}"
  - "%{environment}"
  - global
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /var/lib/hiera on *nix
  # - %CommonAppData%\PuppetLabs\hiera\var on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
Then I restarted my pe-httpd service like so (RHEL7):
# systemctl restart pe-httpd
Then I make a small change to the /var/lib/hiera/vii-osc4.yaml for example
I change the line ...
controller_vip_name: vii-osc4.example.com
... to ...
controller_vip_name: VII-osc4.example.com
But when I run puppet agent -t --noop on my node, vii-osc4-mgmt-001, I do not see the change
that I expected to see. If I make the change in the /var/lib/hiera/defaults.yaml and then
run puppet agent -t --noop on my node I do see the expected changes. What am I doing wrong here?
UPDATE: using /etc/facter/facts.d method of setting custom facts.
I looked into using /etc/facter/facts.d for what I am trying to do. What I am trying to do is set a custom fact "pod_prefix". I want to use this fact in my hiera.yaml like so ...
---
:backends:
  - yaml
:hierarchy:
  - "%{::pod_prefix}"
  - defaults
  - "%{clientcert}"
  - "%{environment}"
  - global
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /var/lib/hiera on *nix
  # - %CommonAppData%\PuppetLabs\hiera\var on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
... so that nodes that have pod_prefix set to vii-osc4 will obtain their class parameters from the file /var/lib/hiera/vii-osc4.yaml, and hosts that have pod_prefix set to ix-xyz will get their class params from /var/lib/hiera/ix-xyz.yaml. I do not see how creating the file /etc/facter/facts.d/pod_prefix.txt on my puppet master that contains something like this ...
# cat pod_prefix.txt
pod_prefix=vii-osc4
... could possibly be a solution to my problem. I guess I must be misunderstanding something here. Can someone help?
UPDATE 2.
The /etc/facter/facts.d/pod_prefix.txt file goes on my nodes.
I think my biggest problem is that just executing systemctl restart pe-httpd was not sufficient; things didn't start working until I did a full reboot of my puppet master. I need to go look at the docs and figure out the correct way to restart the "puppet master".
The very approach of managing custom facts through environment variables is quite brittle. In this case, I suspect it does not work because you changed the environment of login shells via /etc/profile.d. System services don't run in such shells, though.
A clean approach would be to define your fact value in /etc/facter/facts.d instead.
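Concretely, Facter picks up external facts from /etc/facter/facts.d on the agent node (not the master), either as plain key=value .txt files or as structured YAML; a minimal sketch using the fact from this question:

```yaml
# /etc/facter/facts.d/pod_prefix.yaml  (on each agent node)
pod_prefix: vii-osc4
```

Unlike the FACTER_* environment-variable approach, this works regardless of whether the agent runs from a login shell or as a system service.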

How to set an Ansible role's variables file relative to the host?

Here is the detail of my playbook:
Playbook tree
├─ devops
| ├─ roles
| | ├─ mongodb
| | ├─ haproxy
| | ├─ monit
| | | ├─ vars
| | | | └─ main.yml
| | | └─ ...
| | └─ ...
| ├─ hosts
| ├─ play1.yml
| └─ play2.yml
hosts
[play1]
...instructions...
[play2]
...instructions...
play1.yml
---
- hosts: play1
  user: root
  roles:
    - haproxy
    - monit
play2.yml
---
- hosts: play2
  user: root
  roles:
    - mongodb
    - monit
Question
I would like to use a different variables file for monit depending on the host (play1.yml or play2.yml). How can I do the trick?
Many thanks
According to http://docs.ansible.com/playbooks_best_practices.html#directory-layout the recommended layout is as follows:
production                # inventory file for production servers
stage                     # inventory file for stage environment

group_vars/
   group1                 # here we assign variables to particular groups
   group2                 # ""
host_vars/
   hostname1              # if systems need specific variables, put them here
   hostname2              # ""

library/                  # if any custom modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      # <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      # <-- handlers file
        templates/        # <-- files for use with the template resource
            ntp.conf.j2   # <------- templates end in .j2
        files/            #
            bar.txt       # <-- files for use with the copy resource
            foo.sh        # <-- script files for use with the script resource
        vars/             #
            main.yml      # <-- variables associated with this role
        defaults/         #
            main.yml      # <-- default lower priority variables for this role
        meta/             #
            main.yml      # <-- role dependencies

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""
Notice the host_vars/ directory. There you can include host-specific variables that your role can use later on.
Malo,
You should use "host_vars" and not hosts_vars
/host_vars/play1/mongodb.yml
Also, play1 should match the name of the host that you have configured in your hosts inventory.
Ansible allows you to separate your data from your code. This data is what you define in the form of variables.
When it comes to variables, there are precedence rules which come into play when you have the same variable defined in multiple places. The recommendation is:
Provide default variables in the role's defaults/ directory. That's what it is for: sane defaults.
Override those defaults from other places, such as host_vars. That's where you would put host-specific vars, and that's the answer to your question.
However, if you specify the same var in the role's vars/ directory, that takes higher precedence, so be careful about this one.
Apart from these, there are a few more precedence rules. However, the creators of Ansible recommend defining a variable in only one place. I personally would not follow that rule, and would use sane defaults plus host/group-specific vars.
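A sketch of that layering for the monit role in this question (the variable name is made up for illustration): the role ships a sane default, and a host_vars file overrides it for one host, which wins over defaults/ under the precedence rules above.

```yaml
# roles/monit/defaults/main.yml -- sane default for every host
monit_check_interval: 60

# host_vars/play1 -- override for the host named play1 in the inventory
monit_check_interval: 30
```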

ansible playbooks get which variable file by default if not defined

I have a devops directory containing Ansible's variables directory, playbooks, and an inventory directory.
The directory looks like this:
|-- group_vars
|   |-- all.yml
|   |-- development.yml
|   `-- staging.yml
|-- inventory
|   |-- staging
|   `-- development
|-- configure.yml
`-- deploy.yml
configure.yml and deploy.yml contain tasks that are applied to either staging or development machines, using variables in group_vars.
Now, if I call the ansible-playbook command with the staging inventory, how will it know which variable file to use? No vars_files directive is added to configure.yml or deploy.yml.
By the way, I am using an example from the company I work for, and the example works. I just want to know the magic that is happening: it uses the right variable file even though the var file is not included in configure.yml or deploy.yml.
Ansible uses a few conventions to load vars files:
group_vars/[group]
host_vars/[host]
So if you have an inventory file that looks like this:
[staging]
some-host.name.com
Then these files will be included (with an optional .yml or .yaml extension):
/group_vars/all
/group_vars/staging
/host_vars/some-host.name.com
I think this is the "magic" you are referring to.
You can find more on the subject here: http://docs.ansible.com/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
And here: http://docs.ansible.com/playbooks_best_practices.html
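Putting the convention together with this question's layout, a hypothetical sketch (group name and values are made up): an inventory file defining a staging group, and the group_vars file Ansible loads automatically for every host in it.

```yaml
# inventory/staging would contain (INI inventory):
#   [staging]
#   app01.staging.example.com
#
# group_vars/staging.yml -- picked up automatically for hosts in [staging]
app_env: staging
db_host: db.staging.example.com
```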