Ansible "stat" module doesn't work with the when clause - module

I have a playbook that checks whether the endpoint is registered to Spacewalk using the stat module:
- name: "Check spacewalk registraton"
stat:
path: /usr/sbin/rhn_check
register: sw_registered
- debug:
msg: "{{ sw_registered }}"
Output is:
TASK [Check spacewalk registration] *********************************************
ok: [hostname]

TASK [debug] *******************************************************************
ok: [hostname] =>
  msg:
    changed: false
    failed: false
    stat:
      atime: 1670244246.6493175
      attr_flags: e
      attributes:
      - extents
      block_size: 4096
      blocks: 32
      charset: us-ascii
      checksum: 7b22e2e756706ef1b81e50cda7c41005e15441d7
      ctime: 1623819058.4283004
      dev: 64768
      device_type: 0
      executable: true
      **exists: true**
      gid: 0
      gr_name: root
      inode: 143991
      isblk: false
      ischr: false
      isdir: false
      isfifo: false
      isgid: false
      islnk: false
      isreg: true
      issock: false
      isuid: false
      mimetype: text/x-python
      mode: '0755'
      mtime: 1536233638.0
      nlink: 1
      path: /usr/sbin/rhn_check
      pw_name: root
      readable: true
      rgrp: true
      roth: true
      rusr: true
      size: 15291
      uid: 0
      version: '290956743'
      wgrp: false
      woth: false
      writeable: true
      wusr: true
      xgrp: true
      xoth: true
      xusr: true
So sw_registered.stat.exists has a value of true.
Further along in my role there are tasks based on this variable:
- name: "Yum update for RHEL6 and above using RedHat Satellite"
yum:
name: '*'
state: latest
exclude: rhn-client-tools
when: (ansible_distribution_major_version >= "6") and (sw_registered.stat.exists is not defined and sw_registered.stat.exists is false)
Output from that task is
TASK [QL-patching : Yum update for RHEL6 and above using RedHat Satellite] *****
skipping: [hostname]
I would expect that task to be skipped but the next task is:
- name: "Yum update for RHEL6 and above using spacewalk"
yum:
name: '*'
state: latest
disable_gpg_check: yes
when: (ansible_distribution_major_version >= "6") and (sw_registered.stat.exists is defined and sw_registered.stat.exists is true )
Output from that task is:
TASK [QL-patching : Yum update for RHEL6 and above using spacewalk] ************
skipping: [hostname]
I expect this task to be executed and not skipped. What am I missing here?

Based on the comment from Zeitounator, you may have a look at the following minimal example:
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

  - name: Test file
    stat:
      path: "/home/{{ ansible_user }}/test.file"
    register: result

  - name: Show result
    debug:
      msg: "{{ result.stat.exists }}"
resulting in an output of
TASK [Show result] ******
ok: [localhost] =>
  msg: false

TASK [Show result] ******
ok: [localhost] =>
  msg: true
depending on whether the file under test exists or not.
The key result.stat.exists will be defined in both cases, as long as the stat task executed successfully. This is because of the Return Values of the stat module. Therefore the Conditional task based on registered variables could be simplified to something like
- name: Show result
  debug:
    msg: "File exists."
  when: result.stat.exists
resulting in an output of
TASK [Show result] ******
ok: [localhost] =>
  msg: File exists.
if the file is available or skipped if not.
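Applied to the tasks from the question, a sketch of the Satellite/Spacewalk pair (assuming facts are gathered so ansible_distribution_major_version is available) could look like

- name: "Yum update for RHEL6 and above using RedHat Satellite"
  yum:
    name: '*'
    state: latest
    exclude: rhn-client-tools
  when:
    - ansible_distribution_major_version | int >= 6   # cast to int to avoid string comparison
    - not sw_registered.stat.exists

- name: "Yum update for RHEL6 and above using spacewalk"
  yum:
    name: '*'
    state: latest
    disable_gpg_check: yes
  when:
    - ansible_distribution_major_version | int >= 6
    - sw_registered.stat.exists

Casting with int also avoids the lexicographic string comparison in the original condition, where "10" >= "6" would evaluate to false.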
You may also want to Provide default values, as also mentioned, to catch corner cases such as an earlier task failing because of insufficient access rights, or the task not running at all because of Check mode. In such cases the result set could look like
TASK [Test file] ***************************
fatal: [localhost]: FAILED! => changed=false
  msg: Permission denied
...ignoring

TASK [Show result] *************************
ok: [localhost] =>
  msg:
    changed: false
    failed: true
    msg: Permission denied
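A minimal sketch of that default pattern (assuming the same registered variable result) could be

- name: Show result
  debug:
    msg: "File exists."
  when:
    - result.stat is defined               # stat key is absent if the task failed or never ran
    - result.stat.exists | default(false)  # stays false when exists is missing

so the condition simply stays false whenever the stat result is incomplete.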

Related

Ansible - set_facts Set a boolean value based on register.stdout_lines containing a string

How can I set a variable to True or False using set_fact, based on whether register.stdout_lines contains a specific string?
Ansible version - 2.9
This is my stdout_lines output
{
    "msg": [
        "● confluent-server.service - Apache Kafka - broker",
        " Loaded: loaded (/usr/lib/systemd/system/confluent-server.service; enabled; vendor preset: disabled)",
        " Drop-In: /etc/systemd/system/confluent-server.service.d",
        " └─override.conf",
        " Active: active (running) since Tue 2023-01-31 20:57:00 EST; 19h ago",
        " Docs: http://docs.confluent.io/",
        " Main PID: 6978 (java)",
        " CGroup: /system.slice/confluent-server.service",
    ]
}
And I want to set a variable server_running to True if the above output contains the string active (running) (which it does in the above case), otherwise it should be set to False.
I tried this, but it is not correct:
- name: success for start
  set_fact:
    start_success: >-
      "{{ confluent_status.stdout_lines | join('') | search(' active (running)') }}"
I want start_success above to have a true or false value.
I am not yet familiar with how to process Ansible output using filters, so I'm trying different things found on the net.
Can I set a variable to true or false based on whether a condition holds? How would I go about it?
stdout_lines is basically a list containing an item for each stdout line.
If you only need to access the output, you could easily use stdout instead.
The following example shows how to determine the value of a variable based on
a condition:
- name: success for start
  set_fact:
    start_success: >-
      {% if ' active (running)' in confluent_status.stdout %}True{% else %}False{% endif %}
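Since the membership test already yields a boolean, a shorter variant (a sketch using the same registered variable) would be

- name: success for start
  set_fact:
    start_success: "{{ ' active (running)' in confluent_status.stdout }}"   # the 'in' test returns a boolean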
It's also possible to set stdout_callback = yaml in ansible.cfg for a better formatted output.
You can use regex_search to check the string you're looking for.
Below I provide an example of a playbook.
- name: Check status
  hosts: localhost
  gather_facts: no
  vars:
    confluent_status:
      stdout_lines: [
        "● confluent-server.service - Apache Kafka - broker",
        " Loaded: loaded (/usr/lib/systemd/system/confluent-server.service; enabled; vendor preset: disabled)",
        " Drop-In: /etc/systemd/system/confluent-server.service.d",
        " └─override.conf",
        " Active: active (running) since Tue 2023-01-31 20:57:00 EST; 19h ago",
        " Docs: http://docs.confluent.io/",
        " Main PID: 6978 (java)",
        " CGroup: /system.slice/confluent-server.service",
      ]
  tasks:
    - name: set_status either to True or False
      set_fact:
        set_status: "{% if (confluent_status.stdout_lines | regex_search('active \\(running\\)')) %}True{% else %}False{% endif %}"
    - name: output set_status variable set in the previous task
      debug:
        msg: "{{ set_status }}"
    - name: just a debug that outputs directly the status so you can use the condition directly to any task if needed.
      debug:
        msg: "{% if (confluent_status.stdout_lines | regex_search('active \\(running\\)')) %}True{% else %}False{% endif %}"
Gives:
PLAY [Check status] ************************************************************************************************************************************************************************
TASK [set_status either to True or False] **************************************************************************************************************************************************
ok: [localhost]
TASK [output set_status variable set in the previous task] *********************************************************************************************************************************
ok: [localhost] => {
"msg": true
}
TASK [Check if status is True or false] ****************************************************************************************************************************************************
ok: [localhost] => {
"msg": true
}
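As a variation (a sketch, not from the answer above), Ansible's search test can replace the Jinja if/else entirely, since the test itself returns a boolean:

- name: set_status either to True or False
  set_fact:
    set_status: "{{ confluent_status.stdout_lines | join('') is search('active \\(running\\)') }}"   # search test yields True/False

Unlike the regex_search filter, which returns the matched text (or nothing), the search test yields True/False directly.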

Ansible loop with multiple register value

Would you please help me with this problem:
I have a playbook with multiple tasks; each task contains a loop and registers the output of the task. The last task uses lineinfile to create a CSV report based on the previous registers, something like below:
- name: information
  module:
    xxxx: xxxx
    xxxx: xxxxx
  loop:
    - xxxx
    - xxxx
  register: task1_info

- name: information
  module:
    xxxx: xxxx
    xxxx: xxxxx
  loop:
    - xxxx
    - xxxx
  register: task2_info

- name: information
  lineinfile:
    path: xxxx
    line: "{{ item.tags.Name }}, {{ item.lastName }}"
  loop:
    - task1_info.results
    - task2_info.results
If I use only one register at the end it works, but it does not loop through all the registers. The other option is to write a task after each register, which I don't think is reasonable!
I understand your use case to be that you'd like to append one list to another, in other words to merge two lists.
To do so you could use an approach like
---
- hosts: localhost
  become: false
  gather_facts: false

  vars:

    LIST_1:
      - 1
      - 2
      - 3

    LIST_2:
      - A
      - B
      - C

  tasks:

  - name: Info
    debug:
      msg: "{{ item }}"
    loop: "{{ LIST_1 + LIST_2 }}"
    loop_control:
      extended: true
      label: "{{ ansible_loop.index0 }}"
resulting in an output of
TASK [Info] ******************
ok: [localhost] => (item=0) =>
  msg: 1
ok: [localhost] => (item=1) =>
  msg: 2
ok: [localhost] => (item=2) =>
  msg: 3
ok: [localhost] => (item=3) =>
  msg: A
ok: [localhost] => (item=4) =>
  msg: B
ok: [localhost] => (item=5) =>
  msg: C
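Note that in the question the loop items are plain strings ('task1_info.results'), so they are never templated. To iterate over both registered result lists, the concatenation has to happen inside a Jinja expression. A sketch using the placeholder names from the question:

- name: information
  lineinfile:
    path: xxxx                                          # placeholder path from the question
    line: "{{ item.tags.Name }}, {{ item.lastName }}"
  loop: "{{ task1_info.results + task2_info.results }}"  # merge both registered result lists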
Credits to
Append list variable to another list in Ansible
Further Q&A
Combine two lists in Ansible when one list could be empty
Ansible: Merge two lists based on an attribute

How to alert via email in Ansible

I have set up a mail task in Ansible to send emails if yum update is marked as 'changed'.
Here is my current working code:
- name: Send mail alert if updated
  community.general.mail:
    to:
      - 'recipient1'
    cc:
      - 'recipient2'
    subject: Update Alert
    body: 'Ansible Tower Updates have been applied on the following system: {{ ansible_hostname }}'
    sender: "ansible.updates#domain.com"
  delegate_to: localhost
  when: yum_update.changed
This works great; however, every system that gets updated in a host group sends a separate email. Last night, for instance, a group of 20 servers updated and I received 20 separate emails. I'm aware of why this happens, but my question is how I would script this to put all the systems into one email. Is that even possible, or should I just alert that the group was updated and inform teams of which servers are in each group? (I'd prefer not to take the second option.)
Edit 1:
I have added the code suggested and am now unable to receive any emails. Here's the error message:
"msg": "The conditional check '_changed|length > 0' failed. The error was: error while evaluating conditional (_changed|length > 0): {{ hostvars|dict2items| selectattr('value.yum_update.changed')| map(attribute='key')|list }}: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'\n\nThe error appears to be in '/tmp/bwrap_1073_o8ibkgrl/awx_1073_0eojw5px/project/yum-update-ent_template_servers.yml': line 22, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Send mail alert if updated\n ^ here\n",
I am also attaching my entire playbook for reference:
---
- name: Update enterprise template servers
  hosts: ent_template_servers
  tasks:
    - name: Update all packages
      yum:
        name: '*'
        state: latest
      register: yum_update

    - name: Reboot if needed
      import_tasks: /usr/share/ansible/tasks/reboot-if-needed-centos.yml

    - name: Kernel Cleanup
      import_tasks: /usr/share/ansible/tasks/kernel-cleanup.yml

    - debug:
        var: yum_update.changed

    - name: Send mail alert if updated
      community.general.mail:
        to:
          - 'email#domain.com'
        subject: Update Alert
        body: |-
          Updates have been applied on the following system(s):
          {{ _changed }}
        sender: "ansible.updates#domain.com"
      delegate_to: localhost
      run_once: true
      when: _changed|length > 0
      vars:
        _changed: "{{ hostvars|dict2items|
                      selectattr('yum_update.changed')|
                      map(attribute='key')|list }}"
...
Ansible version is: 2.9.27
Ansible Tower version is: 3.8.3
Thanks in advance!
For example, the mail task below
- debug:
    var: yum_update.changed

- community.general.mail:
    sender: ansible
    to: root
    subject: Update Alert
    body: |-
      Updates have been applied to the following system:
      {{ _changed }}
  delegate_to: localhost
  run_once: true
  when: _changed|length > 0
  vars:
    _changed: "{{ hostvars|dict2items|
                  selectattr('value.yum_update.changed')|
                  map(attribute='key')|list }}"
TASK [debug] ***************************************************************
ok: [host01] =>
  yum_update.changed: true
ok: [host02] =>
  yum_update.changed: false
ok: [host03] =>
  yum_update.changed: true

TASK [community.general.mail] **********************************************
ok: [host01 -> localhost]
will send
From: ansible#domain.com
To: root#domain.com
Cc:
Subject: Update Alert
Date: Wed, 09 Feb 2022 16:55:47 +0100
X-Mailer: Ansible mail module
Updates have been applied to the following system:
['host01', 'host03']
Remove the condition below if you also want to receive empty lists
when: _changed|length > 0
Debug
'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'
Q: "What I could try?"
A: Some of the hosts are missing the variables yum_update. You can test it
- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
Either make sure that the variable is defined on all hosts or use json_query. This filter tolerates missing attributes, e.g.
- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true
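Alternatively (a sketch, not part of the original answer), selectattr itself can be made tolerant of the missing variable by first filtering on the defined test:

- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update', 'defined')|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true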
Q: "The 'debug' task prior to the 'mail' task gives me the same output. But it fails when the 'mail' task is executed."
A: Minimize the code and isolate the problem. For example, in the code below you can see
Variable yum_update.changed is missing on host03
The filter json_query ignores this
The filter selectattr fails
- debug:
    var: yum_update.changed

- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true

- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
gives
TASK [debug] **************************************************
ok: [host01] =>
  yum_update.changed: true
ok: [host02] =>
  yum_update.changed: false
ok: [host03] =>
  yum_update.changed: VARIABLE IS NOT DEFINED!

TASK [debug] **************************************************
ok: [host01] =>
  msg:
  - host01

TASK [debug] **************************************************
fatal: [host01]: FAILED! =>
  msg: |-
    The task includes an option with an undefined variable.
    The error was: 'ansible.vars.hostvars.HostVarsVars object'
    has no attribute 'yum_update'
Both filters give the same results if all variables are present
TASK [debug] **************************************************
ok: [host01] =>
  yum_update.changed: true
ok: [host02] =>
  yum_update.changed: false
ok: [host03] =>
  yum_update.changed: true

TASK [debug] **************************************************
ok: [host01] =>
  msg:
  - host01
  - host03

TASK [debug] **************************************************
ok: [host01] =>
  msg:
  - host01
  - host03

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'

plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters

provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221

package:
  exclude:
    - node_modules/**
    - venv/**

# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600

stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true

resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
it shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I tried different versions of serverless.
There is an error in the yamlParser file, which is provided by serverless-step-functions.
Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221

Ansible - variable changing its value even though the condition is not met

I have a role that I need to run for multiple values. For each task within the role I register a variable checkdeps (it's the same for all tasks within this role; during a run it always has at least one value/output. I need it like this because the path differs: "/opt/play/apps/default-ace", "default-device", etc.), and at the end I do an echo to view the output of checkdeps.stdout.
Below I've put one task that will output ok and one that will intentionally be skipped.
If I use the parameter dep: APK_PARSER in the playbook, what happens is: first checkdeps registers the output, and then in the second task the value of checkdeps is replaced with nothing! Even though the task is skipped because no dep parameter matches.
Why is the value of checkdeps replaced if the condition is not met?
- name: "output ok"
shell: "cd /opt/play/apps/default-ace && play deps {{ dep }}"
register: checkdeps
when: "dep == \"APK_PARSER\""
- name: "example to skip"
shell: "cd /opt/play/apps/default-device && play deps {{ dep }}"
register: checkdeps
when: "dep == \"I\" or dep == \"II\""
- name: "echo ok if Done!"
shell: "echo \"OK - {{ dep }} Dependencies {{ checkdeps.stdout }}\""
And it gives me an error:
One or more undefined variables: 'dict' object has no attribute 'stdout'
I've modified the last line without the stdout:
shell: "echo \"OK - {{ dep }} Dependencies {{ checkdeps }}\""
and it ran without error but gave the wrong output:
stdout:
OK - APK_PARSER Dependencies {u'skipped': True, u'changed': False}
Did the variable checkdeps register the "skipping: [...]"? Why is it changing its value if the condition is not met?
ansible stores the "log of ansible task execution", NOT the 'output of command executed'. This log which is a dict and one of the keys is stdout, which contains everything the executed command printed on stdout (output of command).
tasks:
  - debug: msg='one'
    register: o1
    when: True

  - debug: msg='two'
    register: o2
    when: False

  - debug: msg='o1={{o1}}'
  - debug: msg='o2={{o2}}'
It prints the following. 'skipped' & 'changed' are the two keys that would be present in the "log" when the task is not executed.
TASK: [debug msg='one'] *******************************************************
ok: [localhost] => {
    "msg": "one"
}

TASK: [debug msg='two'] *******************************************************
skipping: [localhost]

TASK: [debug msg='o1={{o1}}'] *************************************************
ok: [localhost] => {
    "msg": "o1={'msg': u'one', 'verbose_always': True, 'invocation': {'module_name': u'debug', 'module_args': u\"msg='one'\"}}"
}

TASK: [debug msg='o2={{o2}}'] *************************************************
ok: [localhost] => {
    "msg": "o2={u'skipped': True, u'changed': False}"
}
* term "task execution log" is invented by me for explanation and not ansible standard terminology.
Another suggestion that circulates, register: checkdeps set_when_task_skipped=false, does not actually work: register only accepts a variable name, and the variable is always (re)set, even when the task is skipped. To keep the skipped task from clobbering the earlier result, register it under its own name instead:

- name: "example to skip"
  shell: "cd /opt/play/apps/default-device && play deps {{ dep }}"
  register: checkdeps_device
  when: "dep == \"I\" or dep == \"II\""