Django template: compare two dates, are same month? - django-templates

I have two dates and I want to see if they have the same month. I was trying:
{% if {{event.date_to|date:"m"}} = {{event.date_from|date:"m"}} %}
<a>same!</a>
{%endif%}
I get an error when rendering: "Could not parse the remainder". date_to and date_from are both DateTimeField()s. At this point, I am thinking of doing the comparison in the view and passing an is_same_date value. However, I thought I'd first ask if it could be done in the template.

I figured it out. It was some silly typo-like things. This worked:
{% if event.date_from|date:"m" == event.date_to|date:"m" %}
It also works with other format strings, e.g. day and month together: "d m".
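For reference, a minimal sketch of the working template (assuming the view passes an event object with date_from and date_to):

{% if event.date_from|date:"m" == event.date_to|date:"m" %}
    <span>Same month!</span>
{% endif %}
{# compare day and month together #}
{% if event.date_from|date:"d m" == event.date_to|date:"d m" %}
    <span>Same day and month!</span>
{% endif %}

Note that this compares the formatted strings, so "m" alone will also match the same month in different years; include "Y" in the format string if the year matters.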

Related

dbt post hook relation "my_table" does not exist

I am building some models using dbt.
I have a model like so -
SELECT
    COALESCE(col1, col2) AS col,
    ....
FROM
    {{ source('db', 'tbl') }}
WHERE ....
This model has a config section calling a macro
{{- config(
    post_hook = [macro()],
    materialized = 'table'
) -}}
Within the macro I use {% if execute %} and I also log to check the execute value {{ log('Calling update macro with exec value = ' ~ execute) }}
When I run dbt compile, I do not expect the macro to fire, according to the documentation. However, it does: execute is actually set to true, which triggers the update and causes an error because the table doesn't exist yet. Am I missing something, or is this a dbt bug? I am confused!
Here's the line from the logs -
2021-09-15 20:48:16.864555 (Thread-1): Calling update macro with exec value = True
.. and the error is
relation "schema.my_table" does not exist
Appreciate any pointers someone might have, thanks
Ok, so here's what I found out about dbt.
When you dbt compile or dbt run for the first time, the tables do not exist in the database yet. However, both compile and run will query the database for the objects they reference and throw an error if those objects don't exist. So the select within my macro failed irrespective of the {% if execute %} guard.
I called adapter.get_relation() to check whether the table exists -
{%- set source_relation = adapter.get_relation(
    database=this.database,
    schema=this.schema,
    identifier=this.name) -%}
and used the check condition -
{% set table_exists = source_relation is not none %}
For an incremental run, the fix was easier -
{% if execute and is_incremental() %}
Now, my code is fixed :)
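Putting those pieces together, the guarded post-hook macro ends up shaped roughly like this (a sketch, not the exact code; the macro name and the update statement are illustrative):

{% macro update_macro() %}
    {{ log('Calling update macro with exec value = ' ~ execute) }}
    {#- only look up the relation and run the update when dbt is
        actually executing and the target table already exists -#}
    {%- set source_relation = adapter.get_relation(
        database=this.database,
        schema=this.schema,
        identifier=this.name) -%}
    {%- set table_exists = source_relation is not none -%}
    {% if execute and table_exists %}
        update {{ this }} set processed_at = current_timestamp
    {% endif %}
{% endmacro %}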

How to parse a variable as a source reference in dbt?

I am building a model where I am dynamically referencing the table name and schema name based on the results of a query.
{%- set query %}
select * from master_data.event_metadata where game='{{game_name}}'
{% endset -%}
{%- set results = run_query(query) -%}
{%- if execute %}
{%- set record = results.rows[0] -%}
{% else %}
{%- set record = [] -%}
{% endif -%}
Two of the values are in record.SCHEMA_NAME and record.TABLE_NAME. I can use something like
select
*
from
{{record.SCHEMA_NAME}}.{{record.TABLE_NAME}}
but I'd rather use the source() function instead so that my documentation and DAG stay clean. How can I pass record.SCHEMA_NAME and record.TABLE_NAME as string arguments? I need to have something like
select
*
from
{{ source(record.SCHEMA_NAME, record.TABLE_NAME) }}
When I try to run the above I get the below error:
Server error: Compilation Error in rpc request (from remote system)
The source name (first) argument to source() must be a string, got <class 'jinja2.runtime.Undefined'>
You might already have found a workaround or a solution for this, but just in case someone else runs into the same situation...
To convert the values to strings you can use the |string filter. For instance:
record.SCHEMA_NAME|string
record.TABLE_NAME|string
So your query would look something like this:
select * from {{ source(record.SCHEMA_NAME|string|lower, record.TABLE_NAME|string|lower) }}
Note that depending on the output of your query and how you defined the source file, you might have to lower or upper your values to match your source definition.
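For context, the two arguments must resolve to a source declared in a sources .yml file, for example (illustrative names):

version: 2
sources:
  - name: my_schema
    tables:
      - name: my_table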
Problem
Your record variable is the result of an execution (run_query(query)). When you run dbt compile or dbt run, dbt first performs a series of operations: it reads all the files of your project, generates a manifest.json file, and uses ref/source to build the DAG. At that point no SQL has been executed; in other words, execute == False.
In your example, even if you write record.SCHEMA_NAME|string you will not be able to retrieve the value of that variable, because nothing has been executed yet. And since you set record = [] when execute is false, you will get the message ... depends on a source named '.' which was not found, because at that point record is empty.
A workaround would be to wrap your model's query in a if execute block, something like this:
{% if execute %}
select * from {{ source(record.TABLE_SCHEMA|string|lower, record.TABLE_NAME|string|lower) }}
{% endif %}
With that approach, you will be able to dynamically set the source of your model.
But unfortunately, this will not work as you expected: wrapping your model's query in an if execute block prevents dbt from generating the DAG for that model.
In the end, this is the same as your first attempt, with the schema and table interpolated without the source function.
For more details, you can check the dbt documentation about the execute mode:
https://docs.getdbt.com/reference/dbt-jinja-functions/execute
I think you need to convert those two objects into their string representations first before passing them to the source macro.
Try this
select
*
from
{{ source(record.SCHEMA_NAME|string, record.TABLE_NAME|string) }}

dbt: sql_header macro limitations vs query-comment

dbt has a configuration setting, sql_header, that is ostensibly for injecting UDFs into a model statement at runtime. Unfortunately, it seems calling a macro is unsupported. In addition, ephemeral materializations are unaffected by this setting. I created a similar setting called sql_footer that injects at the end of a SQL statement, and it has similar limitations.
Would it be reasonable to tweak the query_header code to support injecting raw SQL in addition to comment blocks, say by adding an execution boolean to the config dictionary?
dbt/core/dbt/adapters/base/query_headers.py
def add(self, sql: str) -> str:
    if not self.query_comment:
        return sql
    if self.append:
        # replace last ';' with '<comment>;'
        sql = sql.rstrip()
        if sql[-1] == ';':
            sql = sql[:-1]
        return '{}\n{} {} {};'.format(sql, block_start, self.query_comment.strip(), block_end)

vs

        return '{}\n/* {} */;'.format(sql, self.query_comment.strip())
I understand any reticence about injecting SQL into SQL; my use cases are very much system-level configurations that a model developer would never come into contact with, and they would ideally be controlled through CI/CD. Our ETL has different implementations that require different staging filters depending on the environment. I'd prefer to inject a line or two of SQL rather than having to duplicate models for each implementation.
For example:
dbt_project.yml
models:
  - foo:
      query_comment:
        comment: "{{ var('ops_filter', default_filter()) }}"
        executable: True
        append: True
stg_foo.sql
with source as (
    select * from {{ source('foo') }}
)
select id
from source
### inject footer sql here ###
where $date_param between dbt_valid_to and dbt_valid_from
| where 1=1
| where dms_updated_at::date = $date_param
Any advice is appreciated, love this project!
Based on your use case, it sounds like you're interested in functionality along the lines of this older issue:
https://github.com/fishtown-analytics/dbt/issues/1096. We closed that issue in May due to lack of interest from the community, but that doesn't mean people don't run into this problem (and find dbtonic answers for it) today.
As I see it, the best answer is to include a macro {{ footer_sql() }} at the bottom of your models, which could then dynamically include (or not) your environment-specific logic:
{% macro footer_sql(date_param) %}
    {% if target.name == 'ci' %}
        where {{ date_param }} between dbt_valid_to and dbt_valid_from
    {% elif target.name == 'prod' %}
        where 1=1
    {% elif target.name == 'dev' %}
        where dms_updated_at::date = {{ date_param }}
    {% endif %}
{% endmacro %}
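A model would then call it at the bottom (a sketch; the parameter value is illustrative):

select id
from source
{{ footer_sql("'2021-09-15'") }}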
Last but not least, I just want to address a few of the things you mentioned:
Unfortunately, it seems calling a macro is unsupported.
You can absolutely include Jinja macros in set_sql_header calls, as long as those macros compile to SQL. This is how many users create UDFs on BigQuery.
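For example, following the pattern in the dbt docs (a sketch; the macro name and UDF body are illustrative):

-- macros/create_udfs.sql
{% macro create_yes_no_udf() %}
    create temporary function yes_no_to_boolean(answer string)
    returns boolean as (lower(answer) = 'yes');
{% endmacro %}

-- at the top of the model
{% call set_sql_header(config) %}
    {{ create_yes_no_udf() }}
{% endcall %}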
In addition, ephemeral materializations are un-impacted by this setting.
That's correct. The purpose of SQL headers is to interpolate SQL that will precede the create view as/create table as DDL; since ephemeral models aren't materialized as database objects, they have no DDL to precede.

WHERE Condition in the Kentico repeater

I have a list of events that are shown on an "Events" page using a repeater. Each event is a page type and has a field "EventStartDateTime". I want to show only those events whose start date is >= today. For now, in the WHERE field of the repeater, I am using the following condition:
(EventStartDateTime >= '{% DateTime.Now #%}') OR (EventStartDateTime = '')
But no data is visible on the page. Is this condition right?
Any help will be greatly appreciated. Thanks
I checked your code and it kinda worked for me, but I would still modify it like this:
(EventStartDateTime >= GETDATE()) OR (EventStartDateTime IS NULL)
The second part, where you compare a 'Date and time' field with an empty string, is not correct. So in your case, you won't get any data if 'EventStartDateTime' is not populated for every item. If that is not the case, try removing the WHERE condition and check whether you get data without it.
Best regards,
Dragoljub
Your statement will work with a macro like so:
(EventStartDateTime >= '{% CurrentDateTime.ToShortDateString() %}')
Your second condition will not work because your EventStartDateTime field is (I assume) a DateTime object, so checking for an empty string won't work. You need to check for NULL.
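Combining the two answers, the full corrected condition would look like this (a sketch, using the macro from the answer above):

(EventStartDateTime >= '{% CurrentDateTime.ToShortDateString() %}') OR (EventStartDateTime IS NULL)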

How to avoid repeating myself in Salt states?

We have two different environments, dev and production, managed by a single Salt server.
Something like this:
base:
  'dev-*':
    - users-dev
  'prod-*':
    - users-prod
The users-dev and users-prod states are pretty much the same, like this:
{% for user, data in pillar['users-dev'].items() %}
{{ user }}-user:
  user.present:
    < ...something... >

{{ user }}_ssh_auth:
  ssh_auth.present:
    < ...something... >
{% endfor %}
We did not want to duplicate the code so our initial idea was to do something like this:
{% users = pillar['users'].items() %}
include:
- users-common
and then refer to users in users-common. But this did not work: the proper Jinja syntax is {% set users = pillar['users'].items() %}, and even then, variables set this way do not carry across Salt state includes.
So, the question is how to do it properly?
All Jinja is evaluated before any of the states (including the include statements) are evaluated.
However, I would think you could just refer directly to pillar['users'].items() inside of users-common. Is it not allowing you to access the pillar from within that state?
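For example, users-common could read the pillar directly so both environments share one state file (a sketch; the pillar keys and state options are illustrative):

# users-common.sls
{% for user, data in pillar.get('users', {}).items() %}
{{ user }}-user:
  user.present:
    - shell: {{ data.get('shell', '/bin/bash') }}

{{ user }}_ssh_auth:
  ssh_auth.present:
    - user: {{ user }}
    - names: {{ data.get('ssh_keys', []) }}
{% endfor %}

Dev and prod minions would then receive different data under the same users pillar key via pillar targeting, so the state file itself never has to change.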