I'm reworking a good part of our Analytics Data Warehouse and am trying to build things out in a much more modular way. We've swapped from an in-house transformation tool to dbt, and I'm trying to take advantage of the functionality it offers.
Previously, we classified our rental segments with a series of CASE statements that evaluate a few fields. In pseudocode:
CASE WHEN rental_rule_type <> 'monthly'
AND rental_length BETWEEN 6 AND 24
AND rental_day IN (0,1,2,3,4)
AND rental_starts IN (5,6,7,8,9,10,11)
THEN 'weekday_daytime_rental'
END
This obviously works. But it's ugly and hard to update. If we want to adjust this, we'll need to do so in the SQL rather than in a lookup table.
What I'd like to build is a simple lookup table holding these values, so the classification can be adjusted later without touching the SQL, but I'm not sure what the best approach is.
My current thought is to put these conditions in an Excel file, load it into the warehouse with dbt, and then join on those conditions. However, I'm not sure that would end up being cleaner logic; it would mean no hardcoded values in the code, but it would likely still result in a ton of ugly CASEs and joins.
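Something like the following sketch is what I have in mind (all table and column names are hypothetical; the IN-lists collapse to ranges since they're contiguous, and this assumes the warehouse supports non-equi join conditions):

-- seeds/rental_segment_rules.csv might hold one row per segment, e.g.:
-- segment,excluded_rule_type,length_low,length_high,day_low,day_high,start_low,start_high
-- weekday_daytime_rental,monthly,6,24,0,4,5,11

select
    r.*,
    s.segment
from {{ ref('rentals') }} as r
left join {{ ref('rental_segment_rules') }} as s
    on  r.rental_rule_type <> s.excluded_rule_type
    and r.rental_length between s.length_low and s.length_high
    and r.rental_day between s.day_low and s.day_high
    and r.rental_starts between s.start_low and s.start_high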
I think there are also some global variables I could define in dbt that might help with this?
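From the docs, project-level vars would look something like this (values made up):

# dbt_project.yml
vars:
  rental_length_low: 6
  rental_length_high: 24

# and then in a model:
#   rental_length between {{ var('rental_length_low') }} and {{ var('rental_length_high') }}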
Anyone approach something similar? Would love to hear some best practices.
Love the question here and actually tconbeer's answer as well.
However, if you want an answer that "favors readability over DRY-ness", there is another appropriate middle ground here which is generally regarded as a dbt best practice: model-defined sets.
Example:
{% set rental_rule_type = "monthly" %}
{% set rental_length_low = 6 %}
{% set rental_length_high = 24 %}
{% set rental_days = [0, 1, 2, 3, 4] %}
{% set rental_starts = [5, 6, 7, 8, 9, 10, 11] %}
with some_cte as (
select * from {{ ref('some_source') }}
)
select *,
CASE
WHEN rental_rule_type <> '{{ rental_rule_type }}'
AND rental_length BETWEEN {{ rental_length_low }} AND {{ rental_length_high }}
AND rental_day IN (
{% for rental_day in rental_days %}
{{rental_day}} {%- if not loop.last -%}, {%- endif -%}
{% endfor %}
)
AND rental_starts IN (
{% for rental_start in rental_starts %}
{{rental_start}} {%- if not loop.last -%}, {%- endif -%}
{% endfor %}
)
THEN 'weekday_daytime_rental'
END as rental_segment
from some_cte
Example 2 (equivalent but cleaner as suggested in comment below):
{% set rental_rule_type = "monthly" %}
{% set rental_length_low = 6 %}
{% set rental_length_high = 24 %}
{% set rental_days = [0, 1, 2, 3, 4] %}
{% set rental_starts = [5, 6, 7, 8, 9, 10, 11] %}
with some_cte as (
select * from {{ ref('some_source') }}
)
select *,
CASE
WHEN rental_rule_type <> '{{ rental_rule_type }}'
AND rental_length BETWEEN {{ rental_length_low }} AND {{ rental_length_high }}
AND rental_day IN ( {{ rental_days | join(", ") }} )
AND rental_starts IN ( {{ rental_starts | join(", ") }} )
THEN 'weekday_daytime_rental'
END as rental_segment
from some_cte
In this format, all the logic stays accessible to someone reading the model, but changing the logic is also much easier, since all the variables are confined to a single location.
It's also much easier to see at a glance that variables / macros are in play, versus a case statement buried deep in a chain of CTEs or in a more complex select statement after some CTEs.
Warning: I haven't compiled this, so I'm not 100% sure it will work as-is. It should get you started, though, if you go in this direction.
I've tried what you describe, and I've generally regretted it.
Logic really should be expressed as code, not data. It should be source-controlled, reviewable, and support multiple environments (dev and prod).
Seed files (with dbt seed) are kind-of data, and kind-of code, since they get checked into source control alongside the code. This at least solves the multiple environments problem, but it makes code review extremely difficult.
I'd recommend doing what software engineers do -- encapsulate the logic into easily-understandable and easily-testable components, and then compose those components in your model. Macros work pretty well for this.
For example, your case statement above could become a macro called is_weekday_daytime_rental():
{% macro is_weekday_daytime_rental(rental_rule_type, rental_length, rental_day, rental_starts) %}
CASE
WHEN {{ rental_rule_type }} <> 'monthly'
AND {{ rental_length }} BETWEEN 6 AND 24
AND {{ rental_day }} IN (0,1,2,3,4)
AND {{ rental_starts }} IN (5,6,7,8,9,10,11)
THEN true
ELSE false
END
{% endmacro %}
then you could call that macro in your model, like:
CASE
WHEN
{{ is_weekday_daytime_rental(
    'rental_rule_type',
    'rental_length',
    'rental_day',
    'rental_starts'
) }}
THEN 'weekday_daytime_rental'
WHEN ...
But let's do better. Assuming you're also going to have is_weekend_daytime_rental, then each of those component bits of logic should be its own macro that you can reuse:
{% macro is_weekday_daytime_rental(rental_rule_type, rental_length, rental_day, rental_starts) %}
CASE
WHEN {{ is_daily_rental(rental_rule_type, rental_length) }}
AND {{ is_weekday(rental_day) }}
AND {{ is_daytime(rental_starts) }}
THEN true
ELSE false
END
{% endmacro %}
where each component looks like:
{% macro is_weekday(day_number) %}
CASE
WHEN {{ day_number }} IN (0, 1, 2, 3, 4)
THEN true
ELSE false
END
{% endmacro %}
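For completeness, the other components might look like this (a sketch, using the same thresholds as the original case statement):

{% macro is_daily_rental(rule_type, length) %}
CASE
WHEN {{ rule_type }} <> 'monthly'
AND {{ length }} BETWEEN 6 AND 24
THEN true
ELSE false
END
{% endmacro %}

{% macro is_daytime(start_hour) %}
CASE
WHEN {{ start_hour }} IN (5, 6, 7, 8, 9, 10, 11)
THEN true
ELSE false
END
{% endmacro %}

Each component is trivially testable on its own, and the composed macro reads almost like a sentence.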
I want to run a macro in a COPY INTO statement to an S3 bucket. Apparently in Snowflake I can't do a dynamic path, so I'm solving this in a hacky way.
{% macro unload_snowflake_to_s3() %}
{# Get all tables and views from the information schema. #}
{%- set query -%}
select concat('COPY INTO #MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)');
{%- endset -%}
-- {%- set final_query = run_query(query) -%}
-- {{ dbt_utils.log_info(final_query) }}
-- {{ dbt_utils.log_info(final_query.rows.values()[0]) }}
{%- do run_query(final_query.columns.values()[0]) -%}
-- {% do final_query.print_table() %}
{% endmacro %}
Based on the macro above, what I'm trying to do is:
1. Use CONCAT to add the year to the bucket path, so the query becomes a string.
2. Run run_query() again on the concatenated query to actually execute the COPY INTO statement.
Output and error I got from dbt log:
09:06:08 09:06:08 + | column | data_type |
| ----------------------------------------------------------------------------------------------------------- | --------- |
| COPY INTO #MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table) | Text |
09:06:08 09:06:08 + <agate.Row: ('COPY INTO #MY_STAGE/year=2022/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)')>
09:06:09 Encountered an error while running operation: Database Error
001003 (42000): SQL compilation error:
syntax error line 1 at position 0 unexpected '<'.
I think the error is that I didn't extract the specific row and column from the result, which is in agate format. How can I convert/extract this to a string?
You might have better luck with dbt_utils.get_query_results_as_dict.
But you don't need to use your database to construct that path. The jinja context has a run_started_at variable that is a Python datetime object, so you can build your string in jinja, without hitting the database:
{% set yr = run_started_at.strftime("%Y") %}
{% set query = 'COPY INTO #MY_STAGE/year=' ~ yr ~ '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)' %}
Finally, depending on how you're calling this macro you probably want to gate this whole thing with an {% if execute %} flag, so dbt doesn't do the COPY when it's parsing your models.
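Putting both suggestions together, the macro might look something like this (a sketch, not compiled; the stage and table names are from your snippet):

{% macro unload_snowflake_to_s3() %}
{% if execute %}
    {# build the path in jinja instead of in the database #}
    {% set yr = run_started_at.strftime("%Y") %}
    {% set copy_sql = 'COPY INTO #MY_STAGE/year=' ~ yr ~ '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)' %}
    {% do run_query(copy_sql) %}
{% endif %}
{% endmacro %}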
You can use the dbt_utils.get_query_results_as_dict function to get rid of the agate part. Maybe after that your copy statement will work.
{%- set final_query = dbt_utils.get_query_results_as_dict(query) -%}
{{ log(final_query, true) }}
{% for keys, val in final_query.items() %}
{{ log(keys, true) }}
{{ log(val, true) }}
{% endfor %}
If you run it like this you will see ('COPY INTO #MY_STAGE/year=', year(current_date())...'). Lastly, strip the wrapping ('') with:
{%- set final_val = val | replace('(', '') | replace(')', '') | replace("'", '') -%}
(Be aware that these replace filters also strip the parentheses and quotes inside the COPY statement itself, so indexing into the list with val[0] is the safer way to get the plain string.)
That's it.
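Putting it all together, the whole macro might look like this (a sketch, untested; it takes the first element of each column's value list rather than string-replacing):

{% macro unload_snowflake_to_s3() %}
{% if execute %}
    {% set query %}
    select concat('COPY INTO #MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)')
    {% endset %}
    {% set results = dbt_utils.get_query_results_as_dict(query) %}
    {# each dict value is a list of column values; val[0] is the plain statement string #}
    {% for key, val in results.items() %}
        {% do run_query(val[0]) %}
    {% endfor %}
{% endif %}
{% endmacro %}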
In my dbt project, if I declare a jinja sql variable how can I pass it to a dbt_utils function?
For example this doesn't work:
{% set exclude_columns = ["col1", "col2", "col3"] %}
SELECT {{ dbt_utils.star(from=ref('table'), except=exclude_columns) }}
FROM {{ ref('table') }}
If I manually add columns to the "except" parameter, it works, but not with the variable. I tried {{ exclude_columns }} as well, with the same result.
I'm not sure why, but defining a variable in my dbt_project.yml and then referencing that variable works for me!
{% set exclude_columns = var('exclude_fields') %}
{{ dbt_utils.star(from=ref('my_table'), except=exclude_columns) }}
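For reference, that just means having something like this in dbt_project.yml (the var name exclude_fields matches the snippet above):

# dbt_project.yml
vars:
  exclude_fields: ["col1", "col2", "col3"]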
You can also check that exclude_columns is working with this log statement:
{{ log(exclude_columns, info=true) }}
You have to add quotation marks around the column names; otherwise Jinja will treat them as variables and fail silently, and the result will be a list of three Nones.
{% set exclude_columns = ["col1", "col2", "col3"] %}
During the creation of a model in dbt, I'm trying to construct an if/else statement with the following logic: if there is a table named "table_name" under "project_name.dataset", then use SELECT 1, else use SELECT 2.
As I understand it, this should be something like this:
{% if "table_name" in run_query("
SELECT
table_name
FROM project-name.dataset.INFORMATION_SCHEMA.TABLES
").columns[0].values() %}
SELECT
1
{% else %}
SELECT
2
{% endif %}
By the way, this all happens in BigQuery; that's why we use project-name.dataset.INFORMATION_SCHEMA.TABLES to get the names of all the tables under this project and dataset.
But unfortunately this approach doesn't work. It would be really great if somebody could help me, please.
Here is how I did it:
{% set tables_list = [] %}
{%- for row in run_query(
"
SELECT
*
FROM `project-name`.dataset_name.INFORMATION_SCHEMA.TABLES
"
) -%}
{% do tables_list.append(row.values()[2]) %} {# index 2 is table_name in INFORMATION_SCHEMA.TABLES #}
{%- endfor -%}
{% if "table_name" in tables_list %}
SELECT logic 1
{% else %}
SELECT logic 2
{% endif %}
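A more compact variant of the same idea (a sketch; note the backticks BigQuery needs around a hyphenated project name, and the execute guard so parsing doesn't hit the database):

{% set tables_query %}
select table_name from `project-name`.dataset_name.INFORMATION_SCHEMA.TABLES
{% endset %}
{% set tables_list = run_query(tables_query).columns[0].values() if execute else [] %}
{% if "table_name" in tables_list %}
SELECT logic 1
{% else %}
SELECT logic 2
{% endif %}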
I'm trying to use a dbt macro to transform survey results.
I have a table similar to:
| column1 | column2   |
| ------- | --------- |
| often   | sometimes |
| never   | always    |
| ...     | ...       |
I want to transform it into:
| column1 | column2 |
| ------- | ------- |
| 3       | 2       |
| 1       | 4       |
| ...     | ...     |
using the following mapping:
| category  | value |
| --------- | ----- |
| always    | 4     |
| often     | 3     |
| sometimes | 2     |
| never     | 1     |
To do so I have written the following dbt macro:
{% macro class_to_score(class) %}
{% if class == "always" %}
{% set result = 4 %}
{% elif class == "often" %}
{% set result = 3 %}
{% elif class == "sometimes" %}
{% set result = 2 %}
{% elif class == "never" %}
{% set result = 1 %}
{% endif -%}
{{ return(result) }}
{% endmacro %}
and then the following sql query:
{%- set class_to_score = class_to_score -%}
select
{{ set_class_to_score(column1) }} as column1_score,
from
table
However, I get the error:
Syntax error: SELECT list must not be empty at [5:1]
Anyone know why I am not getting anything back?
Thanks for the time you took to communicate your question. It's not easy! It looks like you're experiencing the number one misconception when it comes to dbt and Jinja:
Jinja isn't about transforming data, it's about composing a single SQL query that will be sent to the database. After everything inside jinja's curly brackets is rendered, you will be left with a query that can be sent to the database.
This notion does get complicated with dbt macros like run_query (docs), which will go to the database and fetch information, but the info you fetch can only be used to generate the SQL string.
Your example sounds like the way you'd do things in Python's pandas, where the transformation happens in memory. In dbt-land, only the string generation happens in memory, though sometimes we fetch some of the substrings from the database before composing the new query. It sounds like you'd like Jinja to look at every value in the column and make a substitution; what you actually need to do is generate a query that instructs the database to make the substitution. The way we do substitution in SQL is with CASE WHEN statements (see Mode's CASE docs for more info).
This is probably closer to what you want. Note it's probably better to make the likert_map object into a dbt seed table.
{% set likert_map =
{"4": "always", "3": "often", "2": "sometimes", "1": "never"} %}
SELECT
CASE column_1
{% for key, value in likert_map.items() %}
WHEN '{{ value }}' THEN {{ key }}
{% endfor %}
ELSE 0 END AS column_1_new,
CASE column_2
{% for key, value in likert_map.items() %}
WHEN '{{ value }}' THEN {{ key }}
{% endfor %}
ELSE 0 END AS column_2_new
FROM
table
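To sketch the seed suggestion from above (seed and table names are made up): the mapping lives in seeds/likert_map.csv and the model becomes a plain join.

-- seeds/likert_map.csv:
-- category,score
-- always,4
-- often,3
-- sometimes,2
-- never,1

select
    m1.score as column_1_score,
    m2.score as column_2_score
from my_table as t
left join {{ ref('likert_map') }} as m1 on t.column_1 = m1.category
left join {{ ref('likert_map') }} as m2 on t.column_2 = m2.category

This keeps the mapping editable as data while the transformation stays in the database.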
Here are some related questions about using a mapping dictionary to build a SQL query:
How to join two tables into a dictionary in dbt jinja
DBT - for loop issue with number as variable
I have created a macro that returns a table name from the INFORMATION_SCHEMA in Snowflake.
I have tables in Snowflake as follows:
------------
| TABLES |
------------
| ~one |
| ~two |
| ~three |
------------
I want to pass the table type, i.e. one, into the macro and get back the actual table name, i.e. ~one.
Here is my macro (get_table.sql) in dbt that takes in the parameter and returns the table name:
{%- macro get_table(table_type) -%}
{%- set table_result -%}
select distinct TABLE_NAME from "DEMO_DB"."INFORMATION_SCHEMA"."TABLES" where TABLE_NAME like '\~%{{table_type}}%'
{%- endset -%}
{%- set table_name = run_query(table_result).columns[0].values() -%}
{{ return(table_name) }}
{%- endmacro -%}
Here is my dbt model that calls the above macro:
{{ config(materialized='table',full_refresh=true) }}
select * from {{get_table("one")}}
But I am getting an error:
Compilation Error in model
'None' has no attribute 'table'
> in macro get_table (macros\get_table.sql)
I don't understand where the error is
You need to use the execute context variable to prevent this error, as described here:
https://discourse.getdbt.com/t/help-with-call-statement-error-none-has-no-attribute-table/602
Also be careful with your query: table names in Snowflake are stored in uppercase, so you're better off using ilike instead of like.
Another important point is that run_query(table_result).columns[0].values() returns a list, so I added an index at the end.
So here's the modified version of your macro, which I successfully ran in my test environment:
{% macro get_table(table_name) %}
{% set table_query %}
select distinct TABLE_NAME from "DEMO_DB"."INFORMATION_SCHEMA"."TABLES" where TABLE_NAME ilike '%{{ table_name }}%'
{% endset %}
{% if execute %}
{%- set result = run_query(table_query).columns[0].values()[0] -%}
{{return( result )}}
{% else %}
{{return( false ) }}
{% endif %}
{% endmacro %}
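Calling it from the model then works as in the question; during parsing the guarded macro simply returns false, and the real table name is resolved at run time:

{{ config(materialized='table', full_refresh=true) }}
select * from {{ get_table("one") }}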