I have a dbt_project.yml like:
name: rdb
profile: rdb
source-paths: ['models']
version: "0.1"
models:
  rdb:
    schema: cin
    materialized: table
    post-hook: 'grant select on {{ this }} to rer'
on-run-end:
  # TODO: fix
  - 'grant usage on schema "{{ target.schema }}" to rer'
dbt has worked really well. But with the on-run-end entry, it fails with Compilation Error: 'target' is undefined. With that line commented out, it works fine.
Am I making a basic mistake? Thanks!
Your on-run-end hook should actually look like this:
on-run-end:
- "{% for schema in schemas %}grant usage on schema {{ schema }} to rer;{% endfor %}"
The dbt docs for the on-run-end context explain this in detail, but long story short: because a dbt run may touch tables in different schemas on the target database, there is no single target.schema value to apply the grant statement to. Instead, the context provides a list of schema names called schemas that you need to loop through. That list has one or more elements.
The target in dbt holds the configuration data for the adapter, such as account, user, port, or schema. this refers to the database object currently being written, and also includes a schema field. Finally, the on-run-end context provides the list of schemas so that you are not forced to issue redundant grant statements for each table or view, but can issue a single grant per schema.
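As a sketch of where each variable is available (the role rer comes from the question; the project layout is illustrative):

```yaml
models:
  rdb:
    # `this` is available in model-level hooks: one grant per relation
    post-hook: 'grant select on {{ this }} to rer'

on-run-end:
  # `schemas` is only available in the on-run-end context: one grant per schema
  - "{% for schema in schemas %}grant usage on schema {{ schema }} to rer;{% endfor %}"
```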
My hunch is that you do not need to quote the Jinja expression. Try:
on-run-end:
  - 'grant usage on schema {{ target.schema }} to rer'
See this for reference.
Related
I have one environment in which queries contain more than 100 tables. Now I need to run the same queries in a read-only environment, so I have to use the <schema_name>.<table_name> form there. Because the environment is read-only, I cannot create synonyms for all the tables.
Instead of writing the schema name as a prefix on each table, is there any shortcut? I am just guessing whether anything is possible. They all belong to the same schema.
Try this out. It will set your session's current schema to the specified one, and as a consequence you no longer need the <schema_name> prefix.
ALTER SESSION SET CURRENT_SCHEMA = <schema_name>;
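For example (schema and table names are illustrative):

```sql
ALTER SESSION SET CURRENT_SCHEMA = sales_owner;

-- Unqualified names now resolve against sales_owner:
SELECT COUNT(*) FROM orders;   -- same as sales_owner.orders
```

Note that the setting lasts only for the current session, and it does not grant any privileges; you still need SELECT rights on the underlying objects.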
We're using dbt to manage our data pipeline, with Postgres as our database. I'm creating some materialized views through a query (not in dbt), and it looks like whenever we run dbt run --full-refresh it drops those materialized views. Any idea why, and how to keep them from being dropped?
This answer came from Claire at DBT.
"If the materialized views depend on the upstream table, they will get dropped by the drop table my_table cascade statement"
This came from Jake at DBT.
"postgres views/materialized views are binding. There’s no opt-out and recreating them even in the same dbt run will result in periods where it’s not available."
https://www.postgresql.org/docs/9.3/rules-materializedviews.html
https://docs.getdbt.com/
As the previous answer stated, materialised views are dropped when a table is dropped because of the cascade.
A bridge towards higher uptime is to maintain separate tables that act as copies of the dbt tables being rebuilt, and to drop and refresh those copies only after each rebuild completes.
The downtime while the copies are being refreshed is probably worth the deterministic behaviour of knowing exactly when the tables are rebuilt, rather than having the materialised views disappear at some point during a long rebuild.
This is the macro I wrote to solve this problem. It creates a copy of the table under a slightly different name within a single transaction, allowing for 100% uptime.
{% macro create_table(table_name) %}
  {# Assumes table_name carries a 4-character suffix (e.g. "_tmp");
     the copy is created under the name with that suffix stripped. #}
  {% set target_name = table_name[:-4] %}
  {% set sql %}
    BEGIN;
    DROP TABLE IF EXISTS {{ target_name }};
    CREATE TABLE {{ target_name }} AS SELECT * FROM {{ table_name }};
    COMMIT;
  {% endset %}
  {% do run_query(sql) %}
  {% do log(table_name ~ " table created", info=True) %}
{% endmacro %}
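Assuming the macro is saved in the project's macros/ directory, it can be invoked ad hoc with dbt run-operation (the table name here is illustrative):

```
dbt run-operation create_table --args '{table_name: orders_tmp}'
```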
Liquibase asks for a fully qualified schemaName, for example on the indexExists tag. But what if I want to run a changeset on multiple schemas? Is there a way to indicate the current schema, or to set it dynamically when running an update?
My bad: schemaName is not a required attribute for createIndex after all.
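For reference, a minimal changeset that omits schemaName, so the index is created in the connection's default schema (all identifiers are illustrative):

```xml
<changeSet id="create-user-email-index" author="example">
    <createIndex indexName="idx_user_email" tableName="users">
        <column name="email"/>
    </createIndex>
</changeSet>
```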
In our test environment, the schema is prepended to the trigger DDL as one might expect. However, in our QA and PROD environments, the schema prefix doesn't show in the DDL. We always connect as the "SCHEMA" user, so it hasn't been a problem thus far. Is it worth updating the QA and PROD DDLs to include the schema prefix? If we never connect to the DB as a user/schema other than "SCHEMA", do we really have anything to worry about?
TEST DDL:
create or replace TRIGGER "SCHEMA"."MDATA_BIR_TRG"
BEFORE INSERT ON "SCHEMA"."METADATA"
FOR EACH ROW
BEGIN
---CODE HERE.
END;
QA DDL:
create or replace TRIGGER "MDATA_BIR_TRG"
BEFORE INSERT ON "METADATA"
FOR EACH ROW
BEGIN
---CODE HERE.
END;
I agree with omeinusch that the schema name is not that important (as long as the current schema is the same as the schema where the object is intended to reside). There is no need to recompile the trigger to make it fully qualified.
A common way to export an object's DDL is SQL Developer's export wizard, which lets you indicate whether the object's DDL is schema-qualified.
Directions to obtain DDL from the SQL Developer export wizard:
1. Right-click the object in the connection navigator and select Export.
2. Choose the characteristics of the export (include the schema by selecting the checkbox).
3. Make sure a file path is entered.
4. Click Next.
No, the schema is optional and only needed if you want to ensure that the handled object belongs to a specific schema. If you don't care, and always mean your current schema, you can omit it.
I'm getting back into NHibernate and I've noticed a new configuration property being used in examples: SchemaAutoAction. I can't seem to find documentation on what the various settings mean. The settings, and my guesses as to what they mean, are:
Recreate -- Drop and recreate the schema every time
Create -- If the schema does not exist, create it
Update -- Issue alter statements to make the existing schema match the model
Validate -- Blow up if the schema differs from the model
Is this correct?
SchemaAutoAction is the same as the schema-action mapping attribute.
As per docs:
The new 'schema-action' is set to none; this will prevent NHibernate from including this mapping in its schema export, as it would otherwise attempt to create a table for this view.
Similar, but not quite. SchemaAutoAction is analogous to the configuration property hbm2ddl.auto, and its values are:
Create: always create the database schema when a session factory is created;
Validate: when a session factory is created, check whether the database matches the mappings and throw an exception otherwise;
Update: when a session factory is created, issue DDL commands to update the database if it doesn't match the mappings;
Recreate: always create the database schema when a session factory is created, and drop it when the session factory is disposed.
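For the XML-configuration equivalent, the same behaviour is controlled by the hbm2ddl.auto property in the session-factory configuration (a sketch; property placement follows the usual hibernate.cfg.xml layout):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- one of: create | validate | update | create-drop -->
    <property name="hbm2ddl.auto">update</property>
  </session-factory>
</hibernate-configuration>
```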