Steps marked with "trigger: status = failure" always run, even if status is success - drone.io

I want to send a notification if any of the build steps fails in Drone CI. I tried adding the following trigger at various levels, but it always runs, even in case of success.
The trigger I am trying is as follows:
trigger:
  status:
  - failure
I tried setting it up both inside and outside the steps, but it keeps getting triggered every time.

Try using when instead of trigger, like this:
- name: notify-me
  image: ...
  ...
  when:
    status: [ failure ]
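For context, here is a minimal sketch of a full pipeline with a failure-only notification step; the plugins/slack image and the slack_webhook secret are illustrative placeholders, so substitute whatever notifier you actually use:
kind: pipeline
type: docker
name: default

steps:
- name: build
  image: golang:1.20
  commands:
  - go build ./...

# runs only when an earlier step has failed
- name: notify-me
  image: plugins/slack
  settings:
    webhook:
      from_secret: slack_webhook
  when:
    status:
    - failure
The trigger section, by contrast, controls whether the whole pipeline runs at all; per-step conditions belong under when.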

Related

Run a fallback script when liquibase script fails in gradle

I'm using Liquibase with gradle in order to apply database changes.
I have three activities in runList:
runList='stop_job, execute_changes, start_job'
It works fine when there are no exceptions, but if something fails in the second activity (execute_changes), it stops there and does not execute the "start_job" activity.
Is it possible to introduce something like a fallback activity or "finally" block?
You could use failOnError: false. It defines whether the migration should fail if an error occurs while executing the changeset; the default value is true.
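For illustration, this is roughly how the attribute looks on a changeset, shown here in a YAML changelog; the id, author and the stop_job call are made up for the example:
databaseChangeLog:
- changeSet:
    id: stop-job
    author: example
    # the update continues to the next changeset even if this one errors
    failOnError: false
    changes:
    - sql:
        sql: CALL stop_job()
With the changesets that are allowed to fail marked this way, the update no longer aborts, so the later start_job activity still runs.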

Airflow/SQLAlchemy Error - Loading context has changed within a load/refresh handler

I am attempting to use clairvoyant's db-cleanup dag to clear metadata in our xcom table, but when I run it, I receive the following warning, printed thousands of times before I manually stop the job in order to not take down our mysql instance:
SAWarning: Loading context for <BaseXCom at 0x7f26f789b370> has changed within a load/refresh handler, suggesting a row refresh operation took place. If this event handler is expected to be emitting row refresh operations within an existing load or refresh operation, set restore_load_context=True when establishing the listener to ensure the context remains unchanged when the event handler completes.
The other cleanup tasks work fine, but it is the xcom table in particular I am having trouble with. We have hundreds or thousands of active DAGs, so the xcom table is being written to nearly every second or two. I think that is what is causing this error: the data is continually changing while it is being queried.
I have been unable to find the cause of this or any examples of how it can be resolved. I tried adding a "restore_load_context":True line as per the SQLAlchemy docs, but it did not work.
Here are the snippets I attempted to add to the database object and the cleanup task:
{
    "airflow_db_model": XCom,
    "age_check_column": XCom.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None,
    "restore_load_context": True
},
....
def cleanup_function(**context):
    logging.info("Retrieving max_execution_date from XCom")
    max_date = context["ti"].xcom_pull(
        task_ids=print_configuration.task_id, key="max_date"
    )
    max_date = dateutil.parser.parse(max_date)  # stored as iso8601 str in xcom
    airflow_db_model = context["params"].get("airflow_db_model")
    state = context["params"].get("state")
    age_check_column = context["params"].get("age_check_column")
    keep_last = context["params"].get("keep_last")
    keep_last_filters = context["params"].get("keep_last_filters")
    keep_last_group_by = context["params"].get("keep_last_group_by")
    restore_load_context = context["params"].get("restore_load_context")
To avoid pasting too much code here: apart from these snippets, I am using the same code as the db-cleanup DAG. Has anyone encountered this and found a way to resolve it?
I am very inexperienced with SQLAlchemy and am entirely unsure where else to place this code or how to go about it.
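For reference, judging from the snippet the extra key only ends up in context["params"]; in SQLAlchemy, restore_load_context is a keyword argument given when a load/refresh event listener is registered, not a value passed through a configuration dictionary. A rough sketch of what the SQLAlchemy docs describe (the listener and its body here are purely illustrative, not the handler that is actually emitting the warning):
from sqlalchemy import event
from airflow.models import XCom

# restore_load_context is passed to the listener registration itself,
# so the loading context is restored after the handler completes
@event.listens_for(XCom, "load", restore_load_context=True)
def on_xcom_load(target, context):
    # illustrative handler body; the warning points at whichever existing
    # handler performs a row refresh during the load
    pass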

Empty error message when processing SSAS Multidimensional Cube

When processing an SSAS Multidimensional Cube's dimensions from an SSIS package, the task repeatedly fails with an empty error message.
If I do a manual processing with default settings, everything seems to work. The default settings are:
Processing order: Parallel
Transaction mode: (Default)
Dimension errors: (Default)
Dimension key error log path: (Default)
Process affected objects: Do not process
The SSIS task runs with the following settings:
Processing order: Sequential
Transaction mode: All in one transaction
Dimension errors: (Default)
Dimension key error log path: (Default)
Process affected objects: Do not process
Today I ran the manual processing with the same settings as the task, and got the result shown in the image.
Can someone help me understand the meaning of the empty error message? And how do the processing order and transaction mode affect error messages?

How to create a negative test case in GitHub

I am working on a repository in GitHub and learning to use their Workflows and Actions to execute CI tests. I have created a simple workflow that runs against a shell script to test a simple mathematical expression y-x=expected_val. This workflow isn't that different from other automatic tests I have set up on code in the past, but I cannot figure out how to perform negative test cases.
on:
  push:
    branches:
    - 'Math-Test-Pass*'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: T1. Successful math test
      uses: ./.github/actions/mathTest
      with:
        OPERAND1: 3
        OPERAND2: 5
        ANSWER: 2
    - name: T2. Mismatch answer math test
      if: ${{ always() }}
      uses: ./.github/actions/mathTest
      with:
        OPERAND1: -3
        OPERAND2: 2
        ANSWER: 1
    - name: T3. Missing operand math test
      if: ${{ always() }}
      uses: ./.github/actions/mathTest
      with:
        OPERAND1: -3
        ANSWER: 5
    - name: T4. Another test should pass
      if: ${{ always() }}
      uses: ./.github/actions/mathTest
      with:
        OPERAND1: 6
        OPERAND2: 9
        ANSWER: 3
    - name: T5. Another test should pass
      uses: ./.github/actions/mathTest
      with:
        OPERAND1: 1
        OPERAND2: 9
        ANSWER: 8
Now, I expected tests T2 and T3 to fail, but I ran into two problems. First, I want all the steps to execute, and the errors thrown by T2 and T3 make the job status a failure. GitHub's default behavior is to not run any additional steps unless I force it with something like if: ${{ always() }}. This means that T3 and T4 only run because of that logic, and T5 doesn't run at all. See below.
The second problem is that while the mathTest action failed on T2 and T3, that was the intended behavior. It did exactly what it was supposed to do by failing. I wanted to show that improperly configuring the parameters makes the script fail. These negative tests shouldn't show up as failures, but as successes. The whole math test job should pass, to show that the script in question was raising the right errors as well as producing the right answers.
There is a third case that doesn't show here. I definitely don't want to use continue-on-error: if the script failed to throw an error, I want the test case to fail. There should be a failure, and then the rest of the tests should continue. My ideal solution would show a pass on T2 and T3 and still run T4 and T5. The same solution would also fail T2 or T3 if they didn't generate an error, and still run T4 and T5. I just don't know how to achieve that.
I have considered a couple of options, but I don't know what is usually done. I expect that while I could jury-rig something (e.g. pass the expected failure into the script as another parameter, or nest the testing in a second script that passes the parameters and catches the error), there is some standard way of doing this that I haven't considered. I'm looking for anyone who can tell me how it should be done.
I obtained an answer from the GitHub community that I want to share here.
https://github.community/t/negative-testing-with-workflows/116559
The answer is that the workflow should kick off a single test tool (there are several to choose from) instead of a series of individual actions, and that such tools can handle positive and negative testing on their own. The example given by the respondent is https://github.com/lee-dohm/close-matching-issues/blob/c65bd332c8d7b63cc77e463d0103eed2ad6497d2/.github/workflows/test.yaml#L16 which uses npm for testing.
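In other words, the workflow stays small and the assertions (including the expected-failure cases) live in the test suite. A rough sketch of that shape, assuming the math checks were rewritten as unit tests run through npm as in the linked repository:
on:
  push:
    branches:
    - 'Math-Test-Pass*'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v1
    - run: npm ci
    # the suite asserts that bad inputs make the script exit non-zero,
    # so a "negative" case passing is reported as a test success
    - run: npm test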

Is there a better way to reference sub-features so that this test finishes?

When running the following scenario, the tests finish, but execution hangs immediately afterwards and the Gradle test command never completes. The Cucumber report isn't built, so it hangs before that point.
It seems to be caused by having two call read() invocations to different scenarios, which both call a third scenario. That third scenario references the parent context to inspect the current request.
When that parent request is stored in a variable, the tests hang. When that variable is cleared before leaving that third scenario, the test finishes as normal. So something about holding a reference to that context hangs the tests at the end.
Is there a reason this doesn't complete? Am I missing some important code that lets the tests finish?
I've added * def currentRequest = {} at the end of the special-request scenario and that allows the tests to complete, but that seems like a hack.
This is the top-level test scenario:
Scenario: Updates user id
* def user = call read('utils.feature@endpoint=create-user')
* set user.clientAccountId = user.accountNumber + '-test-client-account-id'
* call read('utils.feature@endpoint=update-user') user
* print 'the test is done!'
The test scenario calls two different scenarios in the same utils.feature file:
utils.feature:
@ignore
Feature: /users

Background:
* url baseUrl

@endpoint=create-user
Scenario: create a standard user for a test
Given path '/create'
* def restMethod = 'post'
* call read('special-request.feature')
When method restMethod
Then status 201

@endpoint=update-user
Scenario: set a user's client account ID
Given path '/update'
* def restMethod = 'put'
* call read('special-request.feature')
When method restMethod
Then status 201
And match response == {"status":"Success", "message":"Update complete"}
Both of the util scenarios call the special-request feature with different parameters/requests.
special-request.feature:
@ignore
Feature: Builds a special request

Scenario: special-request
# The next line causes the test to sit for a long time
* def currentRequest = karate.context.parentContext.getRequest()
# Without the below clearing of currentRequest, the test never finishes
# De-referencing the parent context's request allows the test to finish
* def currentRequest = {}
Without currentRequest = {}, these are the last lines of output I get before the tests seem to stop:
12:21:38.816 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 8.48
1 < 201
1 < Content-Type: application/json
{
"status": "Success",
"message": "Update complete"
}
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.818 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] the test is done!
12:21:38.818 [pool-1-thread-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
<==========---> 81% EXECUTING [39s]
With currentRequest = {}, the test completes and the Cucumber report generates successfully, which is what I would expect to happen even without that line.
Two comments:
* karate.context.parentContext.getRequest()
Wow, these are internal APIs not intended for users; I would strongly advise passing values around as variables instead (see the sketch below). So all bets are off if you have trouble with that.
It does sound like you have a null pointer in the above (no surprise there).
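For example, the calling scenario can hand the data to special-request.feature as a call argument instead of reaching into the parent context; a minimal sketch, where the payload variable and the currentRequest argument name are just illustrative:
* def payload = { some: 'data' }
# the argument becomes a normal variable inside the called feature
* call read('special-request.feature') { currentRequest: '#(payload)' }
Inside special-request.feature the value then shows up as an ordinary variable:
* print 'received:', currentRequest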
There is a bug in 0.9.4 where, in some edge cases such as the things you are doing, pre-test life-cycle failures or failures in karate-config.js cause the parallel runner to hang. You should see something in the logs that indicates a failure; if not, do try to help us replicate this problem.
This should be fixed in the develop branch, so you could help if you can build from source and test locally. Instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
And if you still see a problem, please do this: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue