I added the code below to dbt_project.yml:
tests:
  +store_failures: true
Is there any way to store only the failed tests in the table?
I want to prevent the scenario where my model runs even though one or more of its source tables are (erroneously) empty. The phrase coming to mind is a "pre-hook," although I'm not sure that's the right terminology.
Ideally I'd run dbt run --select MY_MODEL and, as part of that, tests for non-emptiness of the source tables would run. However, I'm not sure dbt works like that. Currently I'm thinking I'll have to apply these tests to the sources and run those tests (according to this document) prior to executing dbt run.
Is there a more direct way of having dbt run fail if any of these sources are empty?
Personally, the way I'd go about this would be to define your my_source.yml to have not_null tests on every column, using something like this docs example:
version: 2

sources:
  - name: jaffle_shop
    database: raw
    schema: public
    loader: emr # informational only (free text)
    loaded_at_field: _loaded_at # configure for all sources
    tables:
      - name: orders
        identifier: Orders_
        loaded_at_field: updated_at # override source defaults
        columns:
          - name: id
            tests:
              - not_null
          - name: price_in_usd
            tests:
              - not_null
And then in your run / build, use the following order of operations:
dbt test --select source:*
dbt build
In this circumstance, I'd highly recommend making your own variation on the generate_source macro from dbt-codegen, so that it automatically defines your sources with the columns and not_null tests included.
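For reference, here is a minimal sketch of how generate_source can be invoked as a run-operation, assuming the raw database / public schema from the example above and that dbt-codegen is installed as a package (argument names may vary slightly between package versions):
dbt run-operation generate_source --args '{"schema_name": "public", "database_name": "raw", "generate_columns": true}'
Your own variation of the macro could then append a not_null test under each generated column before you paste the output into my_source.yml.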
Even after my expect assertion fails, my script executes the next 'it' block. How can I stop execution of the script if any test case fails in WebdriverIO?
code:
it('6. Confirm flight IATA code and Airline name', async () => {
    await expect(cargodamagePage.flightIATA).toHaveValue('ERV')
    console.log("The flight IATA code is", await cargodamagePage.flightIATA.getValue());
})
There is a bail option you can set in your wdio.conf.js file. If you set it to 1, it will stop the whole run after a single test failure:
https://webdriver.io/docs/options#bail
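For illustration, a minimal sketch of the relevant part of a wdio.conf.js (everything else in your existing configuration stays as it is):
// wdio.conf.js
exports.config = {
    // Stop the whole run after the first failing test.
    // The default of 0 means "run everything regardless of failures".
    bail: 1,

    // ...the rest of your existing configuration (specs, capabilities, framework, etc.)...
}
With the default of 0, the runner keeps executing every remaining test even after a failure, which is the behavior you are seeing.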
I'm a new user of dbt, trying to write a relationship test:
  - name: PROTOCOL_ID
    tests:
      - relationships:
        to: ref('Animal_Protocols')
        field: id
I am getting this error:
Compilation Error
Invalid test config given in models/Animal_Protocols/schema.yml:
test definition dictionary must have exactly one key, got [('relationships', None), ('to', "ref('Animal_Protocols')"), ('field', 'id')] instead (3 keys)
#: UnparsedNodeUpdate(original_file_path='model...ne)
"unique" and "not-null" tests in the same file are working fine, but I have a similar error with "accepted_values".
I am using dbt cli version 0.21.0 with Snowflake on MacOS Big Sur 11.6.
You are very close! I'm 96% sure that this is an indentation issue -- the #1 pain point of working with YAML. The solution is that both to and field need to be indented below the relationships key as opposed to at the same level.
See the Tests dbt docs page for an example
  - name: PROTOCOL_ID
    tests:
      - relationships:
          to: ref('Animal_Protocols')
          field: id
I have a set of test scenarios (say 10) which I would like to execute against different countries (say 3).
A for loop is not preferred, as the execution time per scenario would be longer and each scenario's pass/fail result would have to be managed.
Creating a keyword for each test scenario and calling them per country leads to 3 different robot files, one per country, with 10 test cases each; adding or removing a scenario means updating all 3 files.
The Robot Framework data-driven, template-based approach appears to support only one test scenario per robot file (it uses a data file and dynamically executes each data entry as one test case); this leads to 10 robot files, one per test scenario, and any new test scenario means a new robot file.
Is there any way to include more test scenarios in the data-driven approach?
Is there any other approach you would suggest for iterative execution of scenarios against a data set, where each iteration's results are captured separately?
My first recommendation would be templates with FOR loops. This way you do not have to manage failures; each iteration is independent of the others, and every data set is executed with the template. Note that if one iteration fails, the whole test case will be marked as failed, but you will be able to check which iteration failed.
Here is the code for the above example:
*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Test Scenario 1
    [Template]    Test Scenario 1 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 2
    [Template]    Test Scenario 2 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 3
    [Template]    Test Scenario 3 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

*** Keywords ***
Test Scenario 1 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3 Template
    [Arguments]    ${country}
    Log    ${country}
The other option is based on this answer, and it generates test cases dynamically at run time. Only a small library that also acts as a listener is needed. This library can have a start_suite method that will be invoked with the suite(s) as Python object(s) (robot.running.model.TestSuite). You can then use this object along with Robot Framework's API to create new test cases programmatically.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, name, keyword, *args):
        tc = self.top_suite.tests.create(name=name)
        tc.keywords.create(name=keyword, args=args)

    def add_test_matrix(self, data_set, test_scenarios):
        for data in data_set:
            for test_scenario in test_scenarios:
                self.add_test_case(f'{test_scenario} - {data}', test_scenario, data)


globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have been changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
In the robot file add a Suite Setup in which you can call the Add Test Matrix keyword with the list of countries and test scenarios to generate a test case for each combination. This way there will be an individual test case for each country - test scenario pair while having everything in one single file.
test.robot:
*** Settings ***
Library    DynamicTestLibrary
Suite Setup    Generate Test Matrix

*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Placeholder test
    [Documentation]    Placeholder test to prevent empty suite error. It will be removed from execution during the run.
    No Operation

*** Keywords ***
Generate Test Matrix
    ${test scenarios}=    Create List    Test Scenario 1    Test Scenario 2    Test Scenario 3
    DynamicTestLibrary.Add Test Matrix    ${COUNTRIES}    ${test scenarios}

Test Scenario 1
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3
    [Arguments]    ${country}
    Log    ${country}
This will be the output on the console:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Test Scenario 1 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 2 - USA | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 3 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 1 - UK | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 2 - UK | PASS |
------------------------------------------------------------------------------
Test Scenario 3 - UK | PASS |
------------------------------------------------------------------------------
Test | FAIL |
6 critical tests, 4 passed, 2 failed
6 tests total, 4 passed, 2 failed
==============================================================================
I am working on a repository in GitHub and learning to use their Workflows and Actions to execute CI tests. I have created a simple workflow that runs against a shell script to test a simple mathematical expression y-x=expected_val. This workflow isn't that different from other automatic tests I have set up on code in the past, but I cannot figure out how to perform negative test cases.
on:
  push:
    branches:
      - 'Math-Test-Pass*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: T1. Successful math test
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 3
          OPERAND2: 5
          ANSWER: 2
      - name: T2. Mismatch answer math test
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: -3
          OPERAND2: 2
          ANSWER: 1
      - name: T3. Missing operand math test
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: -3
          ANSWER: 5
      - name: T4. Another test should pass
        if: ${{ always() }}
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 6
          OPERAND2: 9
          ANSWER: 3
      - name: T5. Another test should pass
        uses: ./.github/actions/mathTest
        with:
          OPERAND1: 1
          OPERAND2: 9
          ANSWER: 8
Now, I expected tests T.2 and T.3 to fail, but I ran into two problems. First, I want all the steps to execute, but the errors thrown by T.2 and T.3 make the job status a failure. GitHub's default response is to not run any additional steps unless I force it with something like if: ${{ always() }}. This means that T.3 and T.4 only run because of that logic, and T.5 doesn't run at all.
The second problem is that while the mathTest action failed on T.2 and T.3, that was the intended behavior: it did exactly what it was supposed to do by failing. I wanted to show that improperly configuring the parameters makes the script fail. These negative tests shouldn't show up as failures, but as successes; the whole math test should pass to show that the script in question was producing the right errors as well as the right answers.
There is a third case that doesn't show up here. I definitely don't want to use continue-on-error: if the script fails to throw an error, I want the test case to fail, and then the rest of the tests should still continue. My ideal solution would show a pass on T.2 and T.3 and run T.4 and T.5. The same solution would also fail on T.2 or T.3 if they didn't generate an exception, and still run T.4 and T.5. I just don't know how to do that.
I have considered a couple of options but I don't know what is usually done. I expect that while I could jury rig something (e.g. put the failure into the script as another parameter, nest the testing in a second script that passes the parameters and catches the error, etc.), there is some standard way of doing this that I haven't considered. I'm looking for anyone who can tell me how it should be done.
I obtained an answer from the GitHub community that I want to share here.
https://github.community/t/negative-testing-with-workflows/116559
The answer is that the workflow should kick off a single test tool (one of several available) instead of a chain of individual actions, and that the tool can handle positive/negative testing on its own. The example given by the respondent is https://github.com/lee-dohm/close-matching-issues/blob/c65bd332c8d7b63cc77e463d0103eed2ad6497d2/.github/workflows/test.yaml#L16 which uses npm for testing.
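As an illustration of that idea (not taken from the linked answer), the five steps above could be collapsed into a single step that runs one test script which knows which invocations are expected to succeed and which are expected to fail. Everything below is a hypothetical sketch: the mathtest.sh path and argument convention are assumptions, as is the idea that the script exits non-zero whenever the check fails.
#!/usr/bin/env bash
# run_math_tests.sh -- hypothetical runner; assumes ./.github/actions/mathTest/mathtest.sh
# exits 0 when OPERAND2 - OPERAND1 == ANSWER and exits non-zero otherwise.
failures=0

expect_pass() {
    if ./.github/actions/mathTest/mathtest.sh "$@"; then
        echo "PASS (succeeded as expected): $*"
    else
        echo "FAIL (expected success, got an error): $*"
        failures=$((failures + 1))
    fi
}

expect_fail() {
    if ./.github/actions/mathTest/mathtest.sh "$@"; then
        echo "FAIL (expected an error, got success): $*"
        failures=$((failures + 1))
    else
        echo "PASS (failed as expected): $*"
    fi
}

expect_pass 3 5 2      # T1. successful math test
expect_fail -3 2 1     # T2. mismatched answer must produce an error
expect_fail -3 "" 5    # T3. missing operand must produce an error
expect_pass 6 9 3      # T4
expect_pass 1 9 8      # T5

exit "$failures"
The workflow then needs only one step (for example, run: ./run_math_tests.sh), which goes red only when an expectation is violated, so the negative cases T.2 and T.3 report as passes while a script that stops raising errors would still fail the build.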