ansible-playbook not limiting hosts - roles

I've got a hosts file, specifying a server belonging to multiple groups:
[web]
192.168.45.37
[integration]
192.168.45.37
[database]
192.168.45.37
The different groups have different roles applied to them in the playbook:
- hosts: all
  roles:
    - { role: base, tags: ['base'] }
    - { role: logstash, tags: ['logstash'] }

- hosts: database
  roles:
    - { role: mysql, tags: ['database', 'mysql'] }
    - { role: mysql-backup, tags: ['database', 'mysql', 'backup'] }

- hosts: web
  roles:
    - { role: nginx, tags: ['web', 'nginx'] }
    - { role: ssl-certs, tags: ['web', 'ssl-certs'] }

- hosts: integration
  roles:
    - { role: jetty, tags: ['integration', 'jetty'] }
My problem is that when I run the playbook, trying to limit it to only the roles required by specifying the group with the --limit argument, e.g.
ansible-playbook -i hosts site.yml -l integration
It ends up running all of the plays against the server. Why does it do this? Can I get it to just run the set of plays/roles associated with that particular server group?

This is by design: under the covers, limits are implemented as a list of hosts, though the limit expression can be an arbitrarily complex combination of both hosts and groups. We don't exclude group definitions that aren't specified in the limit expression (it sounds like that's what you want); that would significantly hamper the utility of limit expressions for more complex use cases.
For example: if you had a play that targeted an intersection of two groups, "mysite:&myrole", I think the expectation would be that if you passed a limit expression of mysite, it would run. If we explicitly dropped hosts for group definitions that weren't specified in the limit expression, it wouldn't.
Tags are definitely the right thing to use here, and they can be specified at the play level for the role-specific stuff, so you don't have to repeat that part for each role/task underneath. The pre_tasks section should behave the same way with tags (i.e., the tasks need to be tagged to run, though make sure you know about "always"); if they don't, that's definitely an issue you should report.
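As a sketch of the play-level tagging described here (reusing the groups and roles from the question's site.yml; the exact tag names are illustrative):

```yaml
# Tagging at the play level applies the tag to every role/task in the play.
- hosts: integration
  tags: ['integration']
  roles:
    - jetty

# A run limited by both host group and tag then skips the other plays' tasks:
#   ansible-playbook -i hosts site.yml --limit integration --tags integration
```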

Ansible Tower with dynamic inventory issue: "The task includes an option with an undefined variable"

I use AWS EC2 dynamic inventory in my Ansible Tower and my instances are tagged with their environment. For example:
Key: Environment
Value: NonProd
This creates a group tag_Environment_NonProd which contains tagged hosts. Now I want to "set_fact" using this group:
- name: Determine nodes to join in NonProd
  hosts: tag_Group_Elasticsearch
  become: true
  tasks:
    - name: Setting nodes IPs
      set_fact:
        NonProd_list: "{{ groups['tag_Environment_NonProd'] | map('extract', hostvars, ['ansible_host']) | list }}"
I spin up NonProd-tagged instances only from time to time, so the tag_Environment_NonProd group is not present all the time, and that is the reason I am facing this issue.
I tried the following conditionals, but they didn't help:
when: tag_Environment_NonProd is defined
when: ('tag_Environment_NonProd' in group_names)
I also tried ignore_errors, but apparently it doesn't work with "undefined variable" either.
Does anybody have an idea how to resolve this?
Many thanks.
Dragan
The global approach is to make sure you always have a value when the variable is not defined. Use the default filter for that.
The following will set NonProd_list to an empty list when the group does not exist (or is empty). This way you fix your current error and you don't have to test later that the variable is set.
- name: Setting nodes IPs
  set_fact:
    NonProd_list: >-
      {{
        groups['tag_Environment_NonProd']
        | default([])
        | map('extract', hostvars, ['ansible_host'])
        | list
      }}
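If you would rather skip the task entirely when the group is absent, a guard on the groups magic variable also works. This sketch also illustrates why the original when: tag_Environment_NonProd is defined never matched: group names are keys of the groups dict, not top-level variables.

```yaml
- name: Setting nodes IPs
  set_fact:
    NonProd_list: "{{ groups['tag_Environment_NonProd'] | map('extract', hostvars, ['ansible_host']) | list }}"
  when: "'tag_Environment_NonProd' in groups"
```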

what are drone.io 0.8.5 plugin/gcr secrets' acceptable values?

I'm having trouble pushing to gcr with the following
gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  tags:
    - ${DRONE_BRANCH}
    - ${DRONE_COMMIT_SHA}
    - ${DRONE_BUILD_NUMBER}
  dockerfile: src/main/docker/Dockerfile
  secrets: [GOOGLE_CREDENTIALS]
  when:
    branch: [prod]
...where GOOGLE_CREDENTIALS works, but if it is named, say, GOOGLE_CREDENTIALS_DEV, it is not properly picked up. GCR_JSON_KEY works fine. I recall reading legacy documentation that spelled out the acceptable variable names, of which GOOGLE_CREDENTIALS and GCR_JSON_KEY were listed among other variants, but as of version 1 they've done some updates omitting that info.
So, question is, is the plugin capable of accepting whatever variable name or is it expecting specific variable names and if so what are they?
The Drone GCR plugin accepts the credentials in a secret named PLUGIN_JSON_KEY, GCR_JSON_KEY, GOOGLE_CREDENTIALS, or TOKEN (see code here)
If you stored the credentials in drone as GOOGLE_CREDENTIALS_DEV then you can rename it in the .drone.yml file like this:
...
secrets:
  - source: GOOGLE_CREDENTIALS_DEV
    target: GOOGLE_CREDENTIALS
...

Notify handler or register ansible variable when change is detected in include_role?

After lots of searching, I come to the conclusion that ansible (I use the latest stable as of now version, v2.5.3) most likely does not support registering variables or notifications from include_role and import_role statements.
There is a similar question here and the suggestion in one of the answers is: Each individual task within your include file can register variables, and you can reference those variables elsewhere.
However, if I follow this suggestion then I need to add extra unnecessary code in all of my included roles, just because I may need a workaround in a special server. Things can quickly get out of control and become messy, especially in the case of nested role inclusions (i.e. when an included role contains more included roles). Moreover, if I use roles from ansible-galaxy, I would want to stick to the upstream versions (treat the roles as external libraries), meaning that ideally I would not want to change the code of the role as it does not sound very intuitive to have to maintain forks of all the roles one has to use (otherwise the external roles/libraries pretty much lose their meaning).
So what is the suggested solution for such a problem when one wants to reuse code from external roles, and based on if any change happened by the called role do something? Am I thinking totally wrong here in terms of how I have implemented my ansible playbook logic?
Take a look at the following concrete example of what I'm trying to do:
I have split tasks that I want to reuse in smaller roles. In my common role I have an add-file.yml set of tasks that looks like this (roles/common/tasks/add-file.yml):
- name: Copying file "{{ file.src }}" to "{{ file.dest }}"
  copy:
    src: "{{ file.src }}"
    dest: "{{ file.dest }}"
    owner: "{{ file.owner | default(ansible_user_id) }}"
    group: "{{ file.group | default(ansible_user_id) }}"
    mode: "{{ file.mode | default('preserve') }}"
  when:
    file.state is not defined or file.state != 'absent'

- name: Ensuring file "{{ file.dest }}" is absent
  file:
    path: "{{ file.dest }}"
    state: "{{ file.state }}"
  when:
    - file.state is defined
    - file.state == 'absent'
This is basically a generic custom task to support state: absent for file copying until this bug gets fixed.
Then in another role (let's call this setup-XY) I do this in the file roles/setup-XY/tasks/main.yml:
- name: Copying X-file
  import_role:
    name: common
    tasks_from: add-file.yml
  vars:
    file:
      state: present
      src: X-file
      dest: /home/user/X-file
      mode: '0640'

- name: Ensuring Yline in Z-file
  lineinfile:
    dest: /etc/default/Z-file
    regexp: '^Yline'
    line: 'Yline=123'
Then in a third role (let's call it z-script) I want something like this in the file roles/z-script/tasks/main.yml:
- name: Setup-XY
  include_role:
    name: setup-XY
  register: setupxy

- name: Run Z script if setupXY changed
  shell: /bin/z-script
  when: setupxy.changed
Unfortunately the above doesn't work since the register: setupxy line registers a setupxy variable that always returns "changed": false. If I use the import_role instead of include_role, the variable is not registered at all (remains undefined).
Note that in the z-script role I want to run the /bin/z-script shell command whenever any change is detected in the role setup-XY, i.e. if the X-file or Z-file were changed, and in reality I might be having many more tasks in the setup-XY role.
Moreover, note that the z-script is unrelated to the setup-XY role (e.g. the z-script only needs to run in a particular problematic server) so the code for executing the z-script ideally should not be shipped together with (and pollute) the setup-XY role. Look at the setup-XY as the external/upstream role in this case.
You cannot do this. Think of roles as functions in other languages: you cannot rely on what happens inside them.
That is also why handlers are usable only in their current context (a role or a playbook) and you cannot cross-call them.
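For completeness, a sketch of the per-task register workaround mentioned in the question; the variable name add_file_result is illustrative, and this does require touching the shared role, which is exactly the trade-off the question wants to avoid:

```yaml
# roles/common/tasks/add-file.yml: register the result of the copy task
- name: Copying file "{{ file.src }}" to "{{ file.dest }}"
  copy:
    src: "{{ file.src }}"
    dest: "{{ file.dest }}"
  register: add_file_result

# roles/z-script/tasks/main.yml: registered variables persist per host,
# so they remain visible after the including role has finished.
- name: Run Z script if the copy reported a change
  shell: /bin/z-script
  when: add_file_result is defined and add_file_result is changed
```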

How to override environment variables in jenkins_job_builder at job level?

I am trying to find a way to inherit/override environment variables in jenkins jobs defined via jenkins-job-builder (jjb).
Here is one template that does not work:
#!/usr/bin/env jenkins-jobs test
- defaults: &sample_defaults
    name: sample_defaults

- job-template:
    name: 'sample-{product_version}'
    project-type: pipeline
    dsl: ''
    parameters:
      - string:
          name: FOO
          default: some-foo-value-defined-at-template-level
      - string:
          name: BAR
          default: me-bar

- project:
    defaults: sample_defaults
    name: sample-{product_version}
    parameters:
      - string:
          name: FOO
          value: value-defined-at-project-level
    jobs:
      - 'sample-{product_version}':
          product_version:
            - '1.0':
                parameters:
                  - string:
                      name: FOO
                      value: value-defined-at-job-level-1
            - '2.0':
                # this job should have:
                #   FOO=value-defined-at-project-level
                #   BAR=me-bar
Please note that it is key to be able to override these parameters at job or project level instead of template.
Requirements
* be able to add as many environment variables like this without having to add one JJB variable for each of them
* user should not be forced to define these at template or job levels
* those vars need to end up being exposed as environment variables at runtime, for pipelines and freestyle jobs.
* syntax is flexible but a dictionary approach would be highly appreciated, like:
vars:
  FOO: xxx
  BAR: yyy
The first thing to understand is how JJB prioritizes where it pulls variables from:
1. job-group section definition
2. project section definition
3. job-template variable definition
4. defaults definition
(This is not an exhaustive list, but it covers the features I use.)
From this list we can immediately see that if we want job-template variables to be overridable, then using the JJB defaults configuration is useless, as it has the lowest precedence when JJB is deciding where to pull from.
On the other side of the spectrum, job-groups have the highest precedence. Which unfortunately means that if you define a variable in a job-group with the intention of overriding it at the project level, then you are out of luck. For this reason I avoid setting variables in job-groups unless I want to enforce a setting for a set of jobs.
Declaring variable defaults
With that out of the way there are 2 ways JJB allows us to define defaults for a parameter in a job-template:
Method 1) Using {var|default}
In this method we can define the default along with the definition of the variable. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
However, this method falls apart if you need to use the same JJB variable in more than one place, as you will then have multiple places defining the default value for the template. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
    scm:
      - git:
          refspec: 'refs/heads/{branch|master}'
As you can see, we now have 2 places where we are declaring {branch|master}, which is not ideal.
Method 2) Defining the default variable value in the job-template itself
With this method we declare the default value of the variable in the job-template itself just once. I like to section off my job-templates like this:
- job-template:
    name: '{project-name}-verify'

    #####################
    # Variable Defaults #
    #####################
    branch: master

    #####################
    # Job Configuration #
    #####################
    parameters:
      - string:
          name: BRANCH
          default: '{branch}'
    scm:
      - git:
          refspec: 'refs/heads/{branch}'
In this case there are still 2 uses of branch in the job-template. However, we also provide the default value for the {branch} variable at the top of the template, just once. This will be the value that the job takes on if it is not passed in by a project using the template.
Overriding job-templates variables
When a project now wants to use a job-template I like to use one of 2 methods depending on the situation.
- project:
    name: foo
    jobs:
      - '{project-name}-merge'
      - '{project-name}-verify'
    branch: master
This is the standard way that most folks use, and it will set branch: master for every job-template in the list. However, sometimes you may want to provide an alternative value for only 1 job in the list. In this case the more specific declaration takes precedence.
- project:
    name: foo
    jobs:
      - '{project-name}-merge':
          branch: production
      - '{project-name}-verify'
    branch: master
In this case the verify job will get the value "master", but the merge job will instead get the branch value "production".

Is it possible to use an ansible role to set a _passed_ variable value?

My scenario consists of two playbooks:
playbook P1 uses two roles A, and B,
playbook P2 uses just the role A.
Now, the roles A and B need to perform a couple of additional steps, doing the same operations but storing the result in a different variable. Example:
# role A
- name: get the current_path
  shell: pwd
  register: A_path_variable

# role B, in a separate file
- name: get the current_path
  shell: pwd
  register: B_path_variable
The operations needed are the same - but the resulting variable name is different.
Is there some way in Ansible to tell a role to use a specific variable? E.g. one could separate the "shell: pwd" into a new role, then ask it to use "specific_variable" to register the final result.
You can use role parameters:
roles/util:
- shell: pwd
  register: util_output
playbook:
roles:
  - util
  - { role: role_a, my_variable_a: "{{ util_output }}" }
  - { role: role_b, my_variable_b: "{{ util_output }}" }
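Inside the parameterized roles, the passed-in value is then just an ordinary variable. A minimal sketch (assuming util_output was registered from shell: pwd as above, so it carries a stdout field):

```yaml
# roles/role_a/tasks/main.yml (illustrative)
- name: Show the path captured by the util role
  debug:
    msg: "util ran in {{ my_variable_a.stdout }}"
```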
After investigation, it turns out that variable resolution is simply string-based.
Roles A and B are executed in a single playbook, both invoking a role UTIL and expecting it to write to different variables. The role UTIL processes some data and writes the result to a variable.
Role A:
...
vars:
  output_var_name: my_variable_A
roles:
  - UTIL
Role B:
...
vars:
  output_var_name: my_variable_B
roles:
  - UTIL
UTIL, in the end:
- name: output result
  set_fact: "{{ output_var_name }}={{ local_variable }}"
Result: my_variable_A and my_variable_B are both set accordingly. My understanding is that "vars" is resolved before "roles", so the right value is in place at the right moment.