How to set constants in an Ansible role?

In an Ansible role, I need to define constants for some paths that users cannot change in their playbooks.
Here's my need:
the role will have an {{ app_base_path }} variable (changeable by the user), and then I want to set two constants:
app_instance_path: "{{ app_base_path }}/appinstance"
app_server_path: "{{ app_instance_path }}/appserver"
I need each value several times in my tasks, so I can't use a single variable for both.
What's the best way to do it?
Thanks.

As far as I know, Ansible has no constants.
You can do the following:
In the file <rolename>/defaults/main.yml:
---
# Don't change these variables
app_instance_path: "{{ app_base_path }}/appinstance"
app_server_path: "{{ app_instance_path }}/appserver"
And add an assertion task to the <rolename>/tasks/main.yml file:
---
# ...
- name: Check some constants
  assert:
    that:
      - "app_instance_path == app_base_path + '/appinstance'"
      - "app_server_path == app_instance_path + '/appserver'"
Furthermore, you can document for users that they should only set app_base_path and leave app_instance_path and app_server_path as they are.

Finally, I got it working with set_fact. Unfortunately, extra vars still take precedence over it in the variable precedence order, so my role execution can fail if the user defines extra_vars in their playbook...
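For reference, the set_fact variant could look like this inside the role's tasks (a minimal sketch using the variable names from the question):

```yaml
# roles/<rolename>/tasks/main.yml (sketch)
# set_fact wins over inventory and play vars,
# but extra vars passed with -e still override it
- name: Set derived path "constants"
  set_fact:
    app_instance_path: "{{ app_base_path }}/appinstance"
    app_server_path: "{{ app_instance_path }}/appserver"
```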


Problem passing user defined variables (JMeter Script)

I don't know how to pass User Defined Variables (from a JMeter .jmx script) via jenkins-taurus.yml (the Taurus/BlazeMeter configuration file).
It keeps pushing the fixed variables:
[1]: https://i.stack.imgur.com/igIK3.png
I need these fields (User Defined Variables) to be blank, and the values inside them to be pushed from the Taurus configuration file.
As you can see, I'm trying to pass the parameters through the Taurus configuration file (.yml):
[2]: https://i.stack.imgur.com/kMpRx.png
I need to know how to pass these variables inside the Taurus script.
Should I use user.{userDefinedParametersHere} or is there another kind of syntax?
This is necessary because the server URL and login/password could then be changed easily.
You're using an incorrect keyword: if you want to populate the User Defined Variables via Taurus, you should use variables, not properties:
---
execution:
- scenario:
    variables:
      foo: bar
      baz: qux
    script: test.jmx
It will create another instance of User Defined Variables called "Variables from Taurus".
If you additionally need to disable all existing User Defined Variables instances, you could do something like:
---
execution:
- scenario:
    variables:
      foo: bar
      baz: qux
    script: test.jmx
    # if you want to additionally disable User Defined Variables:
    modifications:
      disable:  # names of the tree elements to disable
        - User Defined Variables
If you have defined your variables at the Test Plan level, don't worry: just override them via Taurus and the script will use the "new" values (the ones you supply via the variables keyword).

Ansible: ignoring errors in include tasks

In my YAML I have an include of another YAML file. Can I add ignore_errors to it, like:
include: ../test.yml
ignore_errors: yes
or can it only go in the included playbook itself?
Thanks.
Playbook keywords can be applied to four objects: a play, a role, a block, or a task. ignore_errors can be applied to all of them.
Correct syntax
In your example, include is a task. The correct syntax is:
- include: ../test.yml
  ignore_errors: yes
Include is deprecated
Quoting from include - Include a play or task list
The include action was too confusing, dealing with both plays and tasks, being both dynamic and static. This module will be removed in version 2.8. As alternatives use include_tasks, import_playbook, import_tasks.
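Using the static replacement named in the deprecation notice, keywords such as ignore_errors are applied to every imported task (a minimal sketch, assuming the same ../test.yml from the question):

```yaml
# import_tasks is resolved statically, so ignore_errors
# propagates to each task inside ../test.yml
- import_tasks: ../test.yml
  ignore_errors: yes
```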

Notify handler or register ansible variable when change is detected in include_role?

After lots of searching, I have come to the conclusion that Ansible (I use the latest stable version as of now, v2.5.3) most likely does not support registering variables or notifications from include_role and import_role statements.
There is a similar question here and the suggestion in one of the answers is: Each individual task within your include file can register variables, and you can reference those variables elsewhere.
However, if I follow this suggestion then I need to add extra unnecessary code in all of my included roles, just because I may need a workaround in a special server. Things can quickly get out of control and become messy, especially in the case of nested role inclusions (i.e. when an included role contains more included roles). Moreover, if I use roles from ansible-galaxy, I would want to stick to the upstream versions (treat the roles as external libraries), meaning that ideally I would not want to change the code of the role as it does not sound very intuitive to have to maintain forks of all the roles one has to use (otherwise the external roles/libraries pretty much lose their meaning).
So what is the suggested solution for such a problem when one wants to reuse code from external roles, and based on if any change happened by the called role do something? Am I thinking totally wrong here in terms of how I have implemented my ansible playbook logic?
Take a look at the following concrete example of what I'm trying to do:
I have split tasks that I want to reuse in smaller roles. In my common role I have an add-file.yml set of tasks that looks like this (roles/common/tasks/add-file.yml):
- name: Copying file "{{ file.src }}" to "{{ file.dest }}"
  copy:
    src: "{{ file.src }}"
    dest: "{{ file.dest }}"
    owner: "{{ file.owner | default(ansible_user_id) }}"
    group: "{{ file.group | default(ansible_user_id) }}"
    mode: "{{ file.mode | default('preserve') }}"
  when: file.state is not defined or file.state != 'absent'

- name: Ensuring file "{{ file.dest }}" is absent
  file:
    path: "{{ file.dest }}"
    state: "{{ file.state }}"
  when:
    - file.state is defined
    - file.state == 'absent'
This is basically a generic custom task to support state: absent for file copying until this bug gets fixed.
Then in another role (let's call this setup-XY) I do this in the file roles/setup-XY/tasks/main.yml:
- name: Copying X-file
  import_role:
    name: common
    tasks_from: add-file.yml
  vars:
    file:
      state: present
      src: X-file
      dest: /home/user/X-file
      mode: '0640'

- name: Ensuring Yline in Z-file
  lineinfile:
    dest: /etc/default/Z-file
    regexp: '^Yline'
    line: 'Yline=123'
Then in a third role (let's call it z-script) I want something like this in the file roles/z-script/tasks/main.yml:
- name: Setup-XY
  include_role:
    name: setup-XY
  register: setupxy

- name: Run Z script if setupXY changed
  shell: /bin/z-script
  when: setupxy.changed
Unfortunately the above doesn't work since the register: setupxy line registers a setupxy variable that always returns "changed": false. If I use the import_role instead of include_role, the variable is not registered at all (remains undefined).
Note that in the z-script role I want to run the /bin/z-script shell command whenever any change is detected in the role setup-XY, i.e. if the X-file or Z-file were changed, and in reality I might be having many more tasks in the setup-XY role.
Moreover, note that the z-script is unrelated to the setup-XY role (e.g. the z-script only needs to run in a particular problematic server) so the code for executing the z-script ideally should not be shipped together with (and pollute) the setup-XY role. Look at the setup-XY as the external/upstream role in this case.
You cannot do this. You should understand that roles are like functions in other languages: you cannot rely on what happens internally.
That is why handlers are usable only in their current context, a role or a playbook, and you cannot call them across contexts.
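If modifying the called role is acceptable after all, the per-task workaround mentioned earlier in the question could be sketched like this (the registered variable names setupxy_copy and setupxy_line are hypothetical):

```yaml
# inside setup-XY: register each task's individual result
- name: Copying X-file
  copy:
    src: X-file
    dest: /home/user/X-file
  register: setupxy_copy

- name: Ensuring Yline in Z-file
  lineinfile:
    dest: /etc/default/Z-file
    regexp: '^Yline'
    line: 'Yline=123'
  register: setupxy_line

# in the calling role, after the include: run the script
# if any of the registered results reported a change
- name: Run Z script if setup-XY changed anything
  shell: /bin/z-script
  when: setupxy_copy is changed or setupxy_line is changed
```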

How to override environment variables in jenkins_job_builder at job level?

I am trying to find a way to inherit/override environment variables in jenkins jobs defined via jenkins-job-builder (jjb).
Here is one template that does not work:
#!/usr/bin/env jenkins-jobs test
- defaults: &sample_defaults
    name: sample_defaults

- job-template:
    name: 'sample-{product_version}'
    project-type: pipeline
    dsl: ''
    parameters:
      - string:
          name: FOO
          default: some-foo-value-defined-at-template-level
      - string:
          name: BAR
          default: me-bar

- project:
    defaults: sample_defaults
    name: sample-{product_version}
    parameters:
      - string:
          name: FOO
          value: value-defined-at-project-level
    jobs:
      - 'sample-{product_version}':
          product_version:
            - '1.0':
                parameters:
                  - string:
                      name: FOO
                      value: value-defined-at-job-level-1
            - '2.0':
                # this job should have:
                # FOO=value-defined-at-project-level
                # BAR=me-bar
Please note that it is key to be able to override these parameters at job or project level instead of template.
Requirements
* be able to add as many environment variables like this without having to add one JJB variable for each of them
* the user should not be forced to define these at template or job level
* those vars need to end up being exposed as environment variables at runtime, for both pipeline and freestyle jobs
* syntax is flexible, but a dictionary approach would be highly appreciated, like:
vars:
  FOO: xxx
  BAR: yyy
The first thing to understand is how JJB prioritizes where it pulls variables from, highest precedence first:
1. job-group section definition
2. project section definition
3. job-template variable definition
4. defaults definition
(This is not an exhaustive list, but it covers the features I use.)
From this list we can immediately see that if we want job-template variables to be override-able, then using the JJB defaults configuration is useless, as it has the lowest precedence when JJB decides where to pull from.
On the other side of the spectrum, job-groups have the highest precedence. Which unfortunately means that if you define a variable in a job-group with the intention of overriding it at the project level, you are out of luck. For this reason I avoid setting variables in job-groups unless I want to enforce a setting for a set of jobs.
Declaring variable defaults
With that out of the way there are 2 ways JJB allows us to define defaults for a parameter in a job-template:
Method 1) Using {var|default}
In this method we can define the default along with the definition of the variable. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
However, this method falls apart if you need to use the same JJB variable in more than one place, as you will have multiple places to define the default value for the template. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
    scm:
      - git:
          refspec: 'refs/heads/{branch|master}'
As you can see, we now have two places where we declare {branch|master}. Not ideal.
Method 2) Defining the default variable value in the job-template itself
With this method we declare the default value of the variable in the job-template itself, just once. I like to section off my job-templates like this:
- job-template:
    name: '{project-name}-verify'

    #####################
    # Variable Defaults #
    #####################
    branch: master

    #####################
    # Job Configuration #
    #####################
    parameters:
      - string:
          name: BRANCH
          default: '{branch}'
    scm:
      - git:
          refspec: 'refs/heads/{branch}'
In this case there are still two {branch} references in the job-template, but we also provide the default value for the {branch} variable at the top of the file, just once. This is the value the job takes on if it is not passed in by a project using the template.
Overriding job-template variables
When a project wants to use a job-template, I like to use one of two methods depending on the situation.
- project:
    name: foo
    jobs:
      - '{project-name}-merge'
      - '{project-name}-verify'
    branch: master
This is the standard way most folks use, and it sets branch: master for every job-template in the list. However, sometimes you may want to provide an alternative value for only one job in the list. In this case the more specific declaration takes precedence:
- project:
    name: foo
    jobs:
      - '{project-name}-merge':
          branch: production
      - '{project-name}-verify'
    branch: master
In this case the verify job will get the value "master", but the merge job will instead get the branch value "production".

Is it possible to use an ansible role to set a _passed_ variable value?

My scenario consists of two playbooks:
playbook P1 uses two roles, A and B,
playbook P2 uses just role A.
Now, roles A and B need to perform a couple of additional steps, doing the same operations but storing the result in a different variable. Example:
# role A
- name: get the current path
  shell: pwd
  register: A_path_variable

# role B, in a separate file
- name: get the current path
  shell: pwd
  register: B_path_variable
The operations needed are the same, but the resulting variable name is different.
Is there some way in Ansible to tell a role to use a specific variable? E.g. one could move the "shell: pwd" to a new role, then ask it to use "specific_variable" to register the final result.
You can use role parameters:
roles/util:
- shell: pwd
  register: util_output
playbook:
roles:
  - util
  - { role: role_a, my_variable_a: "{{ util_output }}" }
  - { role: role_b, my_variable_b: "{{ util_output }}" }
After investigation, it turns out that variable resolution is simply string-based.
Roles A and B, executed in a single playbook, both invoke a role UTIL, expecting it to write to different variables. Role UTIL should process some data and write the result to a variable.
Role A:
...
vars:
  output_var_name: my_variable_A
roles:
  - UTIL
Role B:
...
vars:
  output_var_name: my_variable_B
roles:
  - UTIL
UTIL, at the end:
- name: output result
  set_fact:
    "{{ output_var_name }}": "{{ local_variable }}"
Result: my_variable_B and my_variable_A are both set accordingly. My understanding is that "vars" is evaluated before "roles", so the right value is in place at the right moment.
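Put together, the pattern could look like this in a single playbook (a sketch; the role and variable names follow the answer above, and hosts/local_variable are placeholders):

```yaml
# site.yml (sketch): two plays reuse UTIL, each telling it
# which variable name to write its result into
- hosts: localhost
  vars:
    output_var_name: my_variable_A
  roles:
    - UTIL

- hosts: localhost
  vars:
    output_var_name: my_variable_B
  roles:
    - UTIL

# roles/UTIL/tasks/main.yml ends with:
# - name: output result
#   set_fact:
#     "{{ output_var_name }}": "{{ local_variable }}"
```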