How to create a negative test case in GitHub

I am working on a repository in GitHub and learning to use Workflows and Actions to run CI tests. I have created a simple workflow that runs a shell script to test a simple mathematical expression, y - x = expected_val. This workflow isn't that different from other automated tests I have set up in the past, but I cannot figure out how to perform negative test cases.
on:
  push:
    branches:
      - 'Math-Test-Pass*'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: T1. Successful math test
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 3
          OPERAND2: 5
          ANSWER: 2
      - name: T2. Mismatch answer math test
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: -3
          OPERAND2: 2
          ANSWER: 1
      - name: T3. Missing operand math test
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: -3
          ANSWER: 5
      - name: T4. Another test should pass
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 6
          OPERAND2: 9
          ANSWER: 3
      - name: T5. Another test should pass
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 1
          OPERAND2: 9
          ANSWER: 8
Now, I expected tests T2 and T3 to fail, but I ran into two problems. First, I want all the steps to execute, but the errors thrown by T2 and T3 make the job status a failure. GitHub's default behavior is to not run any additional steps after a failure unless I force it with something like if: ${{ always() }}. This means that T3 and T4 only run because of that logic, and T5 doesn't run at all.
The second problem is that while the mathTest action failed on T2 and T3, that was the intended behavior: it did exactly what it was supposed to do by failing. I wanted to show that improperly configuring the parameters makes the script fail. These negative tests shouldn't show up as failures, but as successes. The whole math test job should pass, to show that the script raises the right errors as well as producing the right answers.
There is a third case that doesn't show here. I definitely don't want to use continue-on-error: if the script failed to throw an error, I want the test case to fail. There should be a failure, and then the rest of the tests should still continue. My ideal solution would show a pass on T2 and T3 and run T4 and T5. The same solution would also fail T2 or T3 if they didn't generate an error, and would still run T4 and T5. I just don't know how to achieve that.
I have considered a couple of options, but I don't know what is usually done. I expect that while I could jury-rig something (e.g. pass the expected failure into the script as another parameter, nest the testing in a second script that passes the parameters and catches the error, etc.), there is some standard way of doing this that I haven't considered. I'm looking for anyone who can tell me how it should be done.

I obtained an answer from the GitHub community that I want to share here.
https://github.community/t/negative-testing-with-workflows/116559
The answer is that the workflow should kick off a testing tool instead of a series of separate actions, and let that tool handle the positive/negative testing on its own. The example given by the respondent is https://github.com/lee-dohm/close-matching-issues/blob/c65bd332c8d7b63cc77e463d0103eed2ad6497d2/.github/workflows/test.yaml#L16 which uses npm to run the tests.
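As a rough sketch of that idea (not taken from the linked thread), the positive and negative cases can live in a single step whose script inverts the exit code for the cases that are supposed to fail. The script name mathTest.sh below is hypothetical, standing in for whatever the ./.github/actions/mathTest action wraps:

on:
  push:
    branches:
      - 'Math-Test-Pass*'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Math test suite (positive and negative cases)
        run: |
          # Positive case: the script must succeed with the right answer.
          ./mathTest.sh 3 5 2
          # Negative case: invert the exit code so an expected failure
          # counts as a pass, and an unexpected success fails the step.
          if ./mathTest.sh -3 2 1; then
            echo "Expected a failure for a wrong answer, but the script passed"
            exit 1
          fi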

Related

Running dbt tests as a pre-hook, prior to running a model

I want to prevent the scenario where my model runs, even though any number of source tables are (erroneously) empty. The phrase coming to mind is a "pre-hook," although I'm not sure that's the right terminology.
Ideally I'd run dbt run --select MY_MODEL and as a part of that, these tests for non-emptiness in the source tables would run. However, I'm not sure dbt works like that. Currently I'm thinking I'll have to apply these tests to the sources and run those tests (according to this document), prior to executing dbt run.
Is there a more direct way of having dbt run fail if any of these sources are empty?
Personally, the way I'd go about this would be to define your my_source.yml to have not_null tests on every column, using something like this docs example:
version: 2

sources:
  - name: jaffle_shop
    database: raw
    schema: public
    loader: emr # informational only (free text)
    loaded_at_field: _loaded_at # configure for all sources
    tables:
      - name: orders
        identifier: Orders_
        loaded_at_field: updated_at # override source defaults
        columns:
          - name: id
            tests:
              - not_null
          - name: price_in_usd
            tests:
              - not_null
And then in your run / build, use the following order of operations:
dbt test --select source:*
dbt build
In this circumstance, I'd highly recommend making your own variation on the generate_source macro from dbt-codegen, which automatically defines your sources with columns & not_null tests included.

dbt relationship test compilation error: test definition dictionary must have exactly one key

I'm a new user of dbt, trying to write a relationship test:
- name: PROTOCOL_ID
  tests:
    - relationships:
      to: ref('Animal_Protocols')
      field: id
I am getting this error:
Compilation Error
Invalid test config given in models/Animal_Protocols/schema.yml:
test definition dictionary must have exactly one key, got [('relationships', None), ('to', "ref('Animal_Protocols')"), ('field', 'id')] instead (3 keys)
#: UnparsedNodeUpdate(original_file_path='model...ne)
"unique" and "not-null" tests in the same file are working fine, but I have a similar error with "accepted_values".
I am using dbt cli version 0.21.0 with Snowflake on MacOS Big Sur 11.6.
You are very close! I'm 96% sure that this is an indentation issue -- the #1 pain point of working with YAML. The solution is that both to and field need to be indented below the relationships key as opposed to at the same level.
See the Tests dbt docs page for an example
- name: PROTOCOL_ID
  tests:
    - relationships:
        to: ref('Animal_Protocols')
        field: id
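The accepted_values error mentioned in the question is almost certainly the same indentation problem. A sketch of the fix, with a hypothetical column name and value list, looks like this (values indented under the accepted_values key):

- name: STATUS
  tests:
    - accepted_values:
        values: ['active', 'retired']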

Grokking the parallel-matrix

This seems to be a misuse of the new Parallel Matrix feature in GitLab 13.3 (https://docs.gitlab.com/ee/ci/yaml/#parallel-matrix-jobs).
I have a collection of parallel jobs for a set of services: build (docker image), test, release, delete, and so on, and the codebase is laid out so that each parallel service is in a separate sub-directory.
This way I can have a common template:
variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE/$LOCATION

.build-template:
  script:
    - docker build --tag $IMAGE_NAME:$CI_PIPELINE_ID-$CI_COMMIT_REF_SLUG --tag $IMAGE_NAME:latest $LOCATION
  stage: build
  when: manual
then multiple build jobs:
build-alpha:
  extends: .build-template
  variables:
    LOCATION: alpha

build-beta:
  extends: .build-template
  variables:
    LOCATION: beta
.... and repeat as needed.
I can then do the same thing for test, release, and delete jobs: a common template that takes just the one variable to distinguish the service.
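For illustration, a test template following the same convention might look like the sketch below; the test command run inside the container is a placeholder, not part of the original setup:

.test-template:
  script:
    - docker run --rm $IMAGE_NAME:$CI_PIPELINE_ID-$CI_COMMIT_REF_SLUG ./run-tests.sh
  stage: test
  when: manual

test-alpha:
  extends: .test-template
  variables:
    LOCATION: alpha

test-beta:
  extends: .test-template
  variables:
    LOCATION: beta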
Matrix to the rescue?
It would seem to me that
variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE/$LOCATION

build-services:
  parallel:
    matrix:
      - LOCATION: alpha
      - LOCATION: beta
  script:
    - docker build --tag $IMAGE_NAME:$CI_PIPELINE_ID-$CI_COMMIT_REF_SLUG --tag $IMAGE_NAME:latest $LOCATION
  stage: build
  when: manual
would be an ideal candidate for this matrix form.... but apparently matrix requires two variables.
Has anyone got a good solution for this multiple-parallel jobs problem?
would be an ideal candidate for this matrix form.... but apparently matrix requires two variables.
Not anymore.
See GitLab 13.5 (October 2020)
Allow one-dimensional parallel matrices
Previously, the parallel: matrix keyword, which runs a matrix of jobs in parallel, only accepted two-dimensional matrix arrays. This was limiting if you wanted to specify your own array of values for certain jobs.
In this release, you now have more flexibility to run your jobs the way that works best for your development workflow.
You can run a parallel matrix of jobs in a one-dimensional array, making your pipeline configuration much simpler. Thanks Turo Soisenniemi for your amazing contribution!
Here’s a basic example of this in practice that will run 3 test jobs for different versions of Node.js, but you can apply this approach to your specific use cases and easily add or remove jobs in your pipeline as well:
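(A sketch of such a one-dimensional matrix; the Node.js versions and commands here are assumptions, not the release note's exact example.)

test:
  # One job is spawned per NODE_VERSION value in the matrix below.
  image: node:$NODE_VERSION
  script:
    - npm ci
    - npm test
  parallel:
    matrix:
      - NODE_VERSION: ['14.1.0', '13.14.0', '12.16.3']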
See Documentation and Issue.
And with GitLab 13.10 (March 2021)
Use 'parallel: matrix' with trigger jobs
You can use the parallel: matrix keywords to run jobs multiple times in parallel, using different variable values for each instance of the job.
Unfortunately, you could not use it with trigger jobs.
In this release, we’ve expanded the parallel matrix feature to support trigger jobs as well, so you can now run multiple downstream pipelines (child or multi-project pipelines) in parallel, using different variable values for each downstream pipeline.
This lets you configure CI/CD pipelines that are faster and more flexible.
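A sketch of what parallel: matrix combined with a trigger job can look like for child pipelines; the job name, included file path, and variable names here are hypothetical:

deploy-stacks:
  stage: deploy
  trigger:
    # Each matrix entry launches a separate downstream (child) pipeline.
    include: path/to/child-pipeline.yml
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1, app2]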
See Documentation and Issue.
This is a known issue in GitLab.
There is a workaround using a "dummy" value for the second variable (I'm working with GitLab 13.3.1).
For your example, it gives this:
parallel:
  matrix:
    - LOCATION: ['alpha', 'beta']
      DUMMY: 'dummy'
Did you try:
parallel:
  matrix:
    - LOCATION: [alpha, beta]

steps marked with trigger: status = failure always run, even if status is success

I want to send a notification in case any of the build steps fails in Drone CI. I tried adding the following trigger at various levels, but it always runs, even in case of success.
The trigger that I am trying is as follows:
trigger:
  status:
    - failure
I tried setting it up inside and outside the steps, but it keeps getting triggered every time.
Try using when instead of trigger, like this:
- name: notify-me
  image: ...
  ...
  when:
    status: [ failure ]
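As a fuller sketch, assuming the Slack plugin and a secret named slack_webhook (both assumptions, not part of the answer above), a failure-only notification step might look like:

- name: notify-failure
  # Runs only when an earlier step in the pipeline has failed.
  image: plugins/slack
  settings:
    webhook:
      from_secret: slack_webhook
    channel: builds
  when:
    status:
      - failure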

karate.abort() in v0.9.4 results in Failed scenario in cucumber html reports

karate.abort() results in skipped steps. There was a previous fix for this. However, Cucumber reporting treats skipped tests as failed.
Is there any workaround where I can use karate.abort() and not have a failed scenario, as I am using it deliberately to skip some DB checks?
Or is there any alternative to karate.abort()?
Yes, we need some community help to resolve how third-party reports treat skipped steps. Please read this, and maybe you can be the one to find a solution: https://github.com/intuit/karate/issues/755#issuecomment-488710450
A workaround is to split into a second feature and then:
* if (condition) karate.call('second.feature')