I have a task that should run a list of SQL scripts. If there is an error during the execution of ANY of the scripts in the sequence, then the task should stop executing.
Given the following task, is there a way it can be modified to check the register's stdout for the current iteration of the loop, to see whether 'ERROR' is in the stdout?
- name: Build and run SQLPlus commands
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  with_items:
    - ["a.sql", "b.sql"]
  # failed_when: "'ERROR' in sh1.stdout_lines"
I was thinking something along the lines of the last commented line, but since sh1 is a register variable from a looping task, the output from each SQL script resides within the list results of sh1; so I'm not sure how to access the specific stdout of the command that was just executed.
I was thinking something along the lines of the last commented line,
Until now you were thinking correctly, so just uncomment the line:
- name: Build and run SQLPlus commands
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  with_items:
    - ["a.sql", "b.sql"]
  failed_when: "'ERROR' in sh1.stdout_lines"
but since sh1 is a register variable from a looping task, the output from each SQL script resides within the list results of sh1;
No. Within the task, the values in the sh1 dictionary are accessible in each iteration directly (without a list). The list sh1.results only becomes visible to subsequent tasks.
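For instance, a subsequent task could inspect each iteration's output through sh1.results (a minimal sketch):

- name: Show stdout of each SQLPlus run
  debug:
    msg: "{{ item.stdout }}"
  with_items: "{{ sh1.results }}"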
But failed_when on its own won't break the execution of the whole loop: even if one iteration fails, the remaining items still run, which is how Ansible was designed. So to realise the following...
If there is an error during the execution of ANY of the scripts in the sequence, then the task should stop executing.
You can use a workaround: save the task to a separate file and iterate the include task (see this answer).
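A minimal sketch of that workaround, assuming a reasonably recent Ansible where include_tasks can be looped (the file name run_script.yml is illustrative):

# main playbook: include the task file once per script;
# the play stops at the first include whose task fails
- name: Run SQL scripts one by one
  include_tasks: run_script.yml
  with_items:
    - "a.sql"
    - "b.sql"

# run_script.yml
- name: Build and run a single SQLPlus command
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  failed_when: "'ERROR' in sh1.stdout"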
I am struggling to figure out how to execute an API test using a pipeline where the command used can be modified using a loop. For example:
TEMPLATE.yaml
parameters:
  JobName: ''
  TestDirectory: '.\tests\smoke\'
  PositiveTest: ''
  NegativeTest: ''
  - name: environments
    type: object
    values:
      - dev01
      - dev02
      - test01
      - test02
jobs:
- job: ${{ parameters.JobName }}
  pool:
    name: pool
    demands:
    - Cmd
  variables:
    PosTest: ${{ parameters.PositiveTest }}
    NegTest: ${{ parameters.NegativeTest }}
    Directory: ${{ parameters.TestDirectory }}
  steps:
  - script: |
      call .\venv\Scripts\activate.bat
      cd $(Directory)
      python $(PosTest)
    displayName: 'Executing Positive Test Scenarios'
    condition: and(succeeded(), ne('${{ variables.PosTest }}', ''))
  - script: |
      call .\venv\Scripts\activate.bat
      cd $(Directory)
      python $(NegTest)
    displayName: 'Executing Negative Test Scenarios'
    condition: and(succeeded(), ne('${{ variables.NegTest }}', ''))
TEST_FILE.yaml
...
jobs:
# Get this file: templates\TEMPLATE.yml from the `build` repository (imported above)
- template: templates\api-test-build.yml#build
- ${{ each env in parameters.environments }}:
    parameters:
      TestDirectory: '.\tests\smoke\job_class'
      PositiveTest: 'python_test.py http://apient${{ env }}/arbitrary/api/path/name'
      NegativeTest: ''
This of course doesn't work (the each directive returns an error like "the first property must be template". If I move it up a line it then says "the first property must be a job" and this cycle of errors just continues...).
The idea is just that I have a loop that iterates through environment strings (top of the TEMPLATE.yaml example). The yaml file that references the template passes the command python_test.py http://apient<whatever env string the current iteration is on>/arbitrary/api/path/name for each iterated string (bottom of TEST_FILE.yaml) and the template just executes each of those api tests. At the end of a run there should be 4 environments that have been tested on.
This is just an idea I have, and I am still learning all the ins and outs of Azure DevOps YAML. If anyone knows how I can get this to work, any improvements I can make to the idea itself, or any other workarounds/solutions, that would be highly appreciated. Thank you!
You can try using a Multi-job configuration (matrix) in your pipeline.
When you want to run the same job with multiple configurations, the matrix strategy is a good choice.
For example, if in your pipeline you want to run jobs that share the same steps and input parameters but use different values for those parameters, you can set up a single job with the matrix strategy.
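A minimal sketch of what that could look like for the four environments (the job name, variable name, and displayName are illustrative):

jobs:
- job: ApiSmokeTests
  strategy:
    matrix:
      dev01:
        EnvName: 'dev01'
      dev02:
        EnvName: 'dev02'
      test01:
        EnvName: 'test01'
      test02:
        EnvName: 'test02'
  steps:
  - script: python python_test.py http://apient$(EnvName)/arbitrary/api/path/name
    displayName: 'Run smoke test against $(EnvName)'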
I need to call a bash script from my GitLab CI/CD pipeline. When I call it, the input parameter needs to change depending on whether or not this is a merge into master. Basically, what I want is this:
if master:

test:
  stage: test
  script:
    - INPUT="foo"
    - $(myscript.sh $INPUT)

if NOT master:

test:
  stage: test
  script:
    - INPUT=""
    - $(myscript.sh $INPUT)
I'm trying to figure out a way to set INPUT depending on which branch the pipeline is running on. I know there are rules as well as only/except, but they don't seem to allow you to set variables, only test them. I know the brute-force way is to just write this twice, once with "only master" and another with "except master", but I would really like to avoid that.
Thanks
I implement this kind of thing using YAML anchors to extend tasks.
I find it easier to read, and the customization can include other things, not only variables.
For example:
.test_my_software: &test_my_software
  stage: test
  script:
    - echo ${MESSAGE}
    - bash test.sh

test stable:
  <<: *test_my_software
  variables:
    MESSAGE: testing stable code!
  only:
    - /stable.*/

test:
  <<: *test_my_software
  variables:
    MESSAGE: testing our code!
  except:
    - /stable.*/
you get the idea...
Why not have two jobs to run the script and use rules to control when they run against master? Or, more compactly, branch inside a single job's script:
test:
  stage: test
  script:
    - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then export INPUT="foo"; else INPUT=""; fi
    - $(myscript.sh $INPUT)
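The snippet above keeps a single job and branches inside the shell. A sketch of the two-job variant with rules (the job names are illustrative) might look like this:

test:master:
  stage: test
  script:
    - $(myscript.sh "foo")
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'

test:other:
  stage: test
  script:
    - $(myscript.sh "")
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'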
How do we check a registered variable when only one of two conditional tasks, both registering the same variable, actually runs?
Below is my playbook that executes only one of the two shell modules.
- name: Check file
  shell: cat /tmp/front.txt
  register: myresult
  when: Layer == 'front'

- fail:
    msg: data was read from front.txt and print whatever
  when: myresult.rc != 0

- name: Check file
  shell: cat /tmp/back.txt
  register: myresult
  when: Layer == 'back'

- fail:
    msg: data was read from back.txt and print whatever
  when: myresult.rc != 0
Run the above playbook as
ansible-playbook test.yml -e Layer="front"
I get an error that says myresult does not have an attribute rc. What is the best way to print debug statements based on the condition that was met?
Note: I want the fail to terminate the execution of the play as soon as the condition is met, hence I believe ignore_errors with fail will not help.
Note: The shell modules can be any Unix command.
I tried myresult is changed but that too does not help. Can you please suggest a solution?
You may want to look at this logical grouping of tasks: blocks
- name: Check file
  block:
    - name: check file
      shell: cat /tmp/front.txt
      register: myresult
      ignore_errors: true

    - fail:
        msg: data was read from front.txt and print whatever
      when: myresult.rc != 0
  when: Layer == 'front'

- name: Check file
  block:
    - name: check file
      shell: cat /tmp/back.txt
      register: myresult
      ignore_errors: true

    - fail:
        msg: data was read from back.txt and print whatever
      when: myresult.rc != 0
  when: Layer == 'back'
When the variable Layer is set to 'front', it will execute the shell command for the front layer. But in the case where the file doesn't exist, the shell task will fail with a "no such file" error and stop the play, so I have put ignore_errors on the shell task. It will ignore the error and move on to the fail module.
Apologies for the lack of clarity; rewriting my ask:
I am struggling to get the appropriate start_index value passed on to the third task below, "Echo parameters". If user_defined_index is "" I want the "echo unique UIDs" task to execute and populate the start_index variable. Likewise, if user_defined_index is not "" I want the second task below to execute and populate the start_index variable. I essentially need to pass either A or B to the "Echo parameters" task.
The "Echo parameters" task expects to get some UIDs. The first task auto-generates UIDs based on what you see in the shell command; the second task allows the user to specify UIDs. So whichever when condition holds, that set of UIDs needs to be used by the third task. Using debug statements I have confirmed that both the "echo unique UIDs" and "Capture user defined UIDs" tasks work fine and the corresponding register variables hold the right data.
My issue is that the 3rd task only picks up values from the 1st task regardless, whether that is auto-generated values or blank iterations with skipped equal to true.
I need the correct corresponding value in start_index to be fed into the 3rd task.
- name: echo unique UIDs
  shell: echo $(((0x$(hostid) + $(date '+%s'))*100000 + {{ item[0] }}*100000 + {{ start_stress_index }}))
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_index
  when: user_defined_index == ""
  changed_when: False

- name: Capture user defined UIDs
  shell: echo '{{ user_defined_index }}' | tr , '\n'
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_index
  when: user_defined_index != ""
  changed_when: False

- name: Echo parameters
  command: echo --cfg='{{ start_index }}' --si={{ item[1].stdout }}
  with_together:
    - "{{ load_cfg }}"
    - "{{ start_index.results }}"
For the above, regardless of user_defined_index, the output from "echo unique UIDs" always gets passed through to the 3rd task. After googling I finally found a potential solution: the ternary filter:
https://github.com/ansible/ansible/issues/33827
I have modified my code to be:
- name: echo unique UIDs
  shell: echo $(((0x$(hostid) + $(date '+%s'))*100000 + {{ item[0] }}*100000 + {{ start_stress_index }}))
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_auto_index
  when: user_defined_index == ""
  changed_when: False

- name: Capture user defined UIDs
  shell: echo '{{ user_defined_index }}' | tr , '\n'
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_user_index
  when: user_defined_index != ""
  changed_when: False

- name: Echo parameters
  command: echo --cfg='{{ start_index }}' --si={{ item[1].stdout }}
  with_together:
    - "{{ load_cfg }}"
    - "{{ ((start_auto_index is not skipped)|ternary(start_auto_index,start_user_index))['results'] }}"
However, I still have the same issue as in my first example: when I run the above, I again only get output from start_auto_index sent to the 3rd task, "Echo parameters", no matter what I do with user_defined_index.
I hope this clarifies my question.
Explanation
The problem is that your tasks contain a loop, and in such a case Ansible returns separate statuses for the task as a whole and for each loop iteration.
With start_auto_index is not skipped you check the status of the whole task, but it is the individual iterations that get the "skipped" status.
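A quick way to see this is to print the per-iteration status (a minimal sketch):

- name: Show per-iteration skip status
  debug:
    msg: "iteration skipped: {{ item is skipped }}"
  with_items: "{{ start_auto_index.results }}"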
Solution
Since your condition on user_defined_index is constant across loop iterations, all iterations will have the same skipped status, so you can modify the condition in the Echo parameters task to check just one of them:
"{{ ((start_auto_index.results[0] is not skipped)|ternary(start_auto_index,start_user_index))['results'] }}"
Besides, in the Echo parameters task, with_together does not seem to serve any function, as you don't refer to item[0].
Ansible provides a failed_when directive, allowing users to specify certain failure conditions on their tasks, e.g. a certain string being found in stdout or stderr.
I am trying to do the opposite: I'd like my tasks not to fail if any of a set of strings is found in stdout or stderr. In other words, I'd like something approaching the functionality of a passed_when directive.
I still want it to pass normally when the return code is 0.
But if it would fail (rc != 0) then it should first check for the occurrence of some string.
I.e. if some string is found it passes no matter what.
My reasoning goes like this: there are many reasons why the task could fail, but some of these, depending on the output, I do not consider a failure in the current context.
Does anybody have a good idea how this can be achieved?
Have a look here:
Is there some Ansible equivalent to "failed_when" for success
- name: ping pong redis
  command: redis-cli ping
  register: command_result
  failed_when:
    - "'PONG' not in command_result.stderr"
    - "command_result.rc != 0"
It will not fail if the return code is 0, even when there is no 'PONG' in stderr.
It will not fail if there is 'PONG' in stderr, whatever the return code.
Multiple failed_when conditions are combined with a logical and, so the task passes if any condition in the list is false.
Your original question was phrased like this (using boolean logic to make it easier):
Succeed a command if a set of strings is found in stdout or stderr
Rephrasing your logic:
fail if a set of strings is NOT found in stdout or stderr. Using this logic it's easy to do with failed_when. Here's a snippet:
---
- name: Test failed_when as succeed_if
  hosts: localhost
  connection: local
  gather_facts: no

  tasks:
    - name: "'succeed_if' set of strings in stdout"
      command: /bin/echo succeed1
      register: command_result
      failed_when: "command_result.stdout not in ['succeed1',]"

    - name: "'succeed_if' set of strings in stdout (multiple values)"
      command: /bin/echo succeed2
      register: command_result
      failed_when: "command_result.stdout not in ['succeed1', 'succeed2']"

    - name: "'succeed_if' set of strings in stderr (multiple values)"
      shell: ">&2 /bin/echo succeed2"
      register: command_result
      failed_when: "command_result.stderr not in ['succeed1', 'succeed2']"

    - name: "'succeed_if' set of strings in stderr (multiple values) or rc != 0"
      shell: ">&2 /bin/echo succeed2; /bin/false"
      register: command_result
      failed_when: "command_result.stderr not in ['succeed1', 'succeed2'] and command_result.rc != 0"

# vim: set ts=2 sts=2 fenc=utf-8 expandtab list:
Also, the documentation you are probably looking for is Jinja2 Expressions.
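Applied to the original question, the pattern condenses into a single task (a sketch; the command and the 'known-harmless' marker string are hypothetical):

- name: Run a command, tolerating known-harmless failures
  command: /usr/local/bin/mycommand   # hypothetical command
  register: command_result
  # list entries in failed_when are ANDed: fail only when rc != 0
  # AND the marker string appears in neither stdout nor stderr
  failed_when:
    - command_result.rc != 0
    - "'known-harmless' not in command_result.stdout"
    - "'known-harmless' not in command_result.stderr"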