DBT - Secondary Role as pre-hook is not working in tests

In my dbt project (on Snowflake) I'm using a secondary role, which works fine when I run the models. But when I run the tests using dbt test I get:
Database Error in test not_null_stg_mytable_myid (models\staging\schema_staging.yml)
14:43:19 002037 (42601): SQL compilation error:
14:43:19 Failure during expansion of view 'STG_MYTABLE': SQL compilation error:
14:43:19 Database 'MYDB' does not exist or not authorized.
14:43:19 compiled Code at
target\run\myproject\models\staging\schema_staging.yml\not_null_stg_mytable_myid.sql
It seems that the secondary role I'm defining as a pre-hook is not applied while running the tests. Here is the config in dbt_project.yml:
models:
  myproject:
    +pre_hook:
      - "USE SECONDARY ROLES MY_SECONDARY_ROLE"
Any idea how I can set the secondary role for the tests? The following didn't work:
tests:
  myproject:
    +pre_hook:
      - "USE SECONDARY ROLES MY_SECONDARY_ROLE"
Best regards
Martin

You can't define pre- or post-hooks for tests. dbt simply ignores that config, which is why this works for your models but not your tests.
You can specify a role in your profiles.yml, and you can switch between targets at runtime, so if you only need privileges from a single role to run your tests, you could run dbt test -t test_target.
However, if you need privileges from both your primary and secondary roles to execute your tests, you'll have to refactor your roles and consolidate those privileges into a single role.
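For example, your profiles.yml could define an extra target whose role holds the consolidated privileges. This is a minimal sketch; the profile, account, user, and role names below are placeholders, not your actual config:

myprofile:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: myaccount
      user: myuser
      role: MY_PRIMARY_ROLE
      database: MYDB
      warehouse: MY_WH
      schema: MY_SCHEMA
    test_target:
      type: snowflake
      account: myaccount
      user: myuser
      role: MY_CONSOLIDATED_ROLE  # one role holding the combined privileges
      database: MYDB
      warehouse: MY_WH
      schema: MY_SCHEMA

Then dbt test -t test_target runs the tests under that role.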
You could also open an issue for this in dbt-snowflake! Seems like a useful feature to support using different roles in different contexts.

Related

Why is DBT running a model that is not being targeted explicitly in the DBT run statement?

I have a dbt project that is mostly comprised of models for views over Snowflake external tables. Every model view is triggered with a separate dbt run statement, concurrently:
dbt run --models model_for_view_1
I have one other model in the dbt project which materializes to a table that uses these views. I trigger this model in a separate DAG in Airflow using the same dbt run statement as above. It uses no ref or source statement that connects it to the views.
I noticed recently that this table model gets built by dbt whenever I build the view models. I thought it was because dbt was inferring that this was a referenced model, but after some experimentation, in which I even set the table model SQL to something like SELECT 1+1 as column1, it was still getting built. I have placed it in a different folder in the dbt project, renamed the file, etc. No joy. I have no idea why running the other models causes this unrelated model to be built. The only connection to the view models is that they share the same schema in the database. What is triggering this model to be built?
Selection syntax can be finicky, because there are many ways to select the models. From the docs:
The --select flag accepts one or more arguments. Each argument can be one of:
a package name
a model name
a fully-qualified path to a directory of models
a selection method (path:, tag:, config:, test_type:, test_name:)
(note that --models was renamed --select in v0.21, but --models has the same behavior)
So my guess is that your model_for_view_1 name isn't unique: it is shared either with your project name (which acts as a package in this case) or with the name of the directory it is in.
So if your project looks like:
models
|- some_name
   |- some_name.sql      # the view
   |- another_name.sql   # the table
dbt run --models some_name will run the code in both some_name.sql and another_name.sql, since it is selecting the directory called some_name.
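Assuming that layout, you could disambiguate with the path: selection method and run just the one file:

dbt run --models path:models/some_name/some_name.sql

(The path is relative to the project root; adjust it to wherever the file actually lives.)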
Would you be able to share a bit more context here?
Which version of dbt is your project on?
Would it be possible to share how the models look (with any sensitive information removed)?
It is rather difficult to tell what is triggering this unexpected behaviour without this information.

problems on connecting multiple databases with tortoise-orm

I should mention up front that I want to build a multi-tenant app (same models for multiple databases) with Tortoise. So, to test whether this actually works or not, I created a test project.
I put multiple connection params in the config dict, wrote the models, registered them in the modules, and, in order to test it, I wrote code that inserts data.
To Reproduce
I placed the code here https://dpaste.org/PMRx
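In case the paste expires, here is a minimal sketch of that kind of multi-connection setup; the connection names, credentials, and the Tournament model are assumptions for illustration, not the actual pasted code:

from tortoise import Tortoise, fields, run_async
from tortoise.models import Model

class Tournament(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()

CONFIG = {
    "connections": {
        "tenant_a": "postgres://user:pass@localhost:5432/tenant_a",
        "tenant_b": "postgres://user:pass@localhost:5432/tenant_b",
    },
    "apps": {
        "models": {"models": ["__main__"], "default_connection": "tenant_a"},
    },
}

async def main():
    await Tortoise.init(config=CONFIG)
    # generate_schemas only creates tables on each app's default connection,
    # so tenant_b must already contain the tables, and the connecting user
    # needs privileges on them (the "permission denied" error below is raised
    # by PostgreSQL itself, not by Tortoise).
    await Tortoise.generate_schemas()
    await Tournament.create(name="Cup A")                    # goes to tenant_a
    conn_b = Tortoise.get_connection("tenant_b")
    await Tournament.create(name="Cup B", using_db=conn_b)   # routed to tenant_b
    print(await Tournament.all().using_db(conn_b))

run_async(main())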
Expected behavior
It should save the model through the different databases' BaseDBAsyncClient instances and then fetch the records back, but instead it shows this error: tortoise.exceptions.OperationalError: permission denied for table tournament
Additional context
I saw NotImplemented areas inside tortoise-orm's source code. So what is the cause of the error? How can it be solved? Is it even possible to create such an app with Tortoise?

flyway database script logging

I am currently evaluating Flyway as a deployment option for our company. We run our database deployments on an Oracle database and currently spool the output from a sqlplus session for logging purposes. We use this to verify feedback such as whether objects were created successfully, whether packages, functions, etc. compiled without errors, the number of records entered, and so forth.
Is there similar logging functionality in Flyway? Currently the only logging we have found is in the server logs. We can tell from these logs that a script has completed successfully or has triggered an ORA error, but we are curious whether this is the extent of the database logging options or not.
Thank you,
We used the command line method for running Flyway and turned on debug output (-X). Along with a lot of other output, it also logs more information about the SQL migrations run (e.g. the content of repeatable migrations) and the number of records affected. This is not perfect, but it helped us a lot in capturing more information about what was applied.
See https://flywaydb.org/documentation/commandline/; the -X flag is documented for flyway itself rather than under each individual command.
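For example, a command line run that captures the debug output to a file for later review might look like this (the URL and credentials are placeholders):

flyway -X -url=jdbc:oracle:thin:@//dbhost:1521/ORCL -user=deployer -password=secret migrate > migrate.log 2>&1

The migrate.log file then contains the per-migration details mentioned above.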

Can Liquibase detect if it has already run?

I have a small set of scripts that manage the build/test/deployment of an app. Recently I decided I wanted to switch to Liquibase for db schema management. This script will be working both on developer machines, where it regularly blows away and rebuilds the database, and on deployed environments, where we will only be adding new changesets.
When this program first runs on a deployed environment I need to detect if Liquibase has run or not and then run changelogSync to sync with the existing tables.
Other than manually checking if the database changelog table exists is there a way for the Liquibase API to let me know that it has already run at least once?
I'm using the Java core library in Groovy
The easiest way is probably ((StandardChangeLogHistoryService) ChangeLogHistoryServiceFactory.getInstance().getChangeLogService(database)).hasDatabaseChangeLogTable()
The ChangeLogHistoryService interface returned by liquibase.changelog.ChangeLogHistoryServiceFactory doesn't have a method to check if the table exists, but the StandardChangeLogHistoryService implementation does.
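Putting that together with changelogSync, a rough Groovy sketch could look like the following; the JDBC connection and changelog path are assumptions, and factory access may differ between Liquibase versions:

import liquibase.Liquibase
import liquibase.Contexts
import liquibase.changelog.ChangeLogHistoryServiceFactory
import liquibase.changelog.StandardChangeLogHistoryService
import liquibase.database.DatabaseFactory
import liquibase.database.jvm.JdbcConnection
import liquibase.resource.ClassLoaderResourceAccessor

// jdbcConn is an already-open java.sql.Connection
def database = DatabaseFactory.instance
        .findCorrectDatabaseImplementation(new JdbcConnection(jdbcConn))

def history = ChangeLogHistoryServiceFactory.instance.getChangeLogService(database)
boolean hasRun = ((StandardChangeLogHistoryService) history).hasDatabaseChangeLogTable()

def liquibase = new Liquibase('changelog.xml', new ClassLoaderResourceAccessor(), database)
if (!hasRun) {
    // existing schema but no DATABASECHANGELOG table: mark changesets as applied
    liquibase.changeLogSync(new Contexts())
} else {
    liquibase.update(new Contexts())
}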

Can you run two test cases simultaneously in a Test Suite in Microsoft Test Manager 2010?

I am trying to create a unit test to run on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, with the client-side test depending on the server-side test working successfully.
When putting together a Test Suite in Test Manager, I want to be able to set both tests to the same order value (so they run at the same time), but the validation prevents assigning the same order value to both tests.
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I missed the notification about your replies to my question :-(
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (for server and client).
If so, you will not be able to run tests on both of them, or, to put it better, you will not have enough control over how the tests are run. Check "How to: Run automated tests on multiple computers at the same time".
Actually I posted a related question to "Visual Studio Development Forum", you could check the answers I got here: Is it possible to run test on several virtual machines, which belong to the same environment, using build-deploy-test workflow
That all means you will end up creating two environments each consisting of one machine (one for server and one for client).
But then you will not be able to reference both environments in your build definition, since you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
Create two lab environments
Create three build definitions
the first one will only build your test code
the second one will deploy last successful build from the first one and start tests on the server environment
the third one will deploy last successful build from the first one and start tests on the client environment.
Run the first build definition automatically at night
Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
Use a single test environment consisting of two machines and use different roles for them (server and client respectively).
In MTM create two Test Settings (one for the server role and one for the client role).
Create a bat file starting tests using tcm.exe tool (see How to: Run Automated Tests from the Command Line Using Tcm for more details).
You will need two tcm.exe calls, one for each Test Settings you have created.
Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously (a sketch of such a batch file follows this list).
Create a build definition using DefaultLabTemplate.
This definition will:
build test code
deploy them to both machines in your environment
run your batch script as the last deployment step
(you will have to make sure this script is located on the build machine or deploy it there or make it accessible from the build machine)
As I've said, I have not tried it yet.
The disadvantage of this approach is that you will not see the test part in the build log, since the tests will not be started by the means provided by the DefaultLabTemplate. So the build will not fail when tests fail.
But you will still be able to see test outcomes in MTM and will have test results for each machine.
But depending on what is more important to you (having test results, having a build definition that fails if tests fail, or having both) it could be a solution for you.
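For completeness, such a batch file might look roughly like this; the plan, suite, and config IDs, the settings names, and the collection URL are all made-up placeholders:

tcm run /create /title:"Server run" /planid:1 /suiteid:2 /configid:3 /settingsname:ServerSettings /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject
tcm run /create /title:"Client run" /planid:1 /suiteid:4 /configid:3 /settingsname:ClientSettings /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject

Because each tcm run /create call only queues a run and returns, the two runs start more or less together.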
Yes, you can, with a modified TestSettings file.
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
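The linked post is about running unit tests in parallel on a single multi-core machine: you edit the .testsettings file by hand and add a parallelTestCount attribute to the Execution element, roughly like this (a trimmed sketch, not a complete file):

<TestSettings name="Parallel" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- 0 means one test per CPU core; 2 caps it at two concurrent tests -->
  <Execution parallelTestCount="2">
    <!-- remaining execution settings unchanged -->
  </Execution>
</TestSettings>

Note that this parallelizes tests on one machine; it does not by itself coordinate runs across a client and a server machine.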