DBT fails with not found in location when using a seed - dbt

I'm trying to build a dbt project with a seed, very similar to this.
To create a temporary table, I run dbt seed and reference the resulting table as shown in the example in the link above.
I do some post-processing of the raw CSV file in data/ and save that SQL as processed_seed_file.sql in models/.
However, when I reference it in a macro that is part of another SQL file in models/,
it fails with the following error: processed_seed_file was not found in location EU
The macro looks as follows:
-- depends_on: {{ ref('processed_seed_file') }}
{{-
    config(
        materialized = 'ephemeral',
        verbose=True
    )
-}}
{%- call statement('start_statement', fetch_result=True) -%}
    SELECT DATE_SUB(MIN(start), INTERVAL CAST('{{ var("some_var") }}' AS INT64) WEEK)
    FROM {{ ref('processed_seed_file') }}
{%- endcall -%}
{%- set start_date = load_result('start_statement')['data'][0][0] -%}
Any hints on where I might be going wrong?
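For context, on BigQuery a "was not found in location EU" error usually means the dataset the seed was built in and the dataset the query runs against are in different locations. This is an assumption about the setup, not a confirmed diagnosis; a minimal profiles.yml sketch (all names hypothetical) that pins the location explicitly:

```yaml
# profiles.yml (hypothetical names) -- pin the BigQuery location so that
# seeds and models are built in datasets in the same region
my_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project
      dataset: analytics
      location: EU   # must match the location of every dataset the project reads
      threads: 4
```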

Related

Programmatically get the $(rev:.r) variable in yaml scripts

This is a follow-up to this question.
I'd like to put the number of my build in the description (and in other tasks); when using BuildId it works without any problem.
But if I use $(rev:.r), this variable is not interpreted, and I get an error saying that the version number is not correct (invalid characters such as $ and :).
Here's some code that works with BuildId but not with Rev:
variables:
- group: NumVersion
- name: upperversion
  ${{ if eq(parameters.VersionBuild, 'ReleaseProd') }}:
    value: $(1-VersionMajeur).$(2-VersionMineure-ReleaseProd)
  ${{ if eq(parameters.VersionBuild, 'Release') }}:
    value: $(1-VersionMajeur).$(2-VersionMineure-Release)
  ${{ if eq(parameters.VersionBuild, 'Develop') }}:
    value: $(1-VersionMajeur).$(2-VersionMineure-Dev)
- name: lowerversion
  ${{ if eq(parameters.TypeBuild, 'Feature') }}:
    value: 99.$(Build.BuildId)
  ${{ if eq(parameters.TypeBuild, 'Production') }}:
    value: $(3-VersionCorrective-Release).$(rev:.r)

name: $(upperversion).$(lowerversion)

stages:
- stage: Build
  jobs:
  - job: Prerequisites
    displayName: Prerequisites
    steps:
    - checkout: self
    - script: |
        echo '##vso[build.updatebuildnumber]$(upperversion).$(lowerversion)'
Did somebody encounter this? Thanks in advance!
According to the docs you can’t use this variable other than in the build number / name field.
In Azure DevOps $(Rev:r) is a special variable format that only works in the build number field. When a build is completed, if nothing else in the build number has changed, the Rev integer value increases by one.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops
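A minimal sketch of the supported usage, assuming the counter only needs to appear in the run number: put $(Rev:r) directly in the name field and read the expanded result back elsewhere through the predefined Build.BuildNumber variable:

```yaml
# $(Rev:r) is only expanded inside the pipeline's name (run number) field
name: $(upperversion).$(lowerversion).$(Rev:r)

steps:
# read the already-expanded run number via the predefined variable
- script: echo Run number is $(Build.BuildNumber)
```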

DevOps Pipeline Variables cannot be compared

I have a question about DevOps Pipelines.
I have created a pipeline, and during creation I added a variable with these values: "Name: RESIDENCE; value: ISS". So this value is defined outside of any script.
Pipeline Variable
Inside the .yml file I use this code:
variables:
- name: shuttle
  value: Columbia
- name: pipe_var
  value: $(RESIDENCE)
- name: location
  ${{ if eq(variables.pipe_var, 'ISS') }}:
    value: pretty_cool
  ${{ if eq(variables.pipe_var, 'MIR') }}:
    value: not_possible

steps:
- script: |
    echo space shuttle is: $(shuttle)
    echo residence is: $(pipe_var)
    echo place to be is: $(location)
But the pipeline output only shows:
space shuttle is: Columbia
residence is: ISS
place to be is:
So, as can be seen in the line "residence is: ISS", the value of the outside variable "RESIDENCE" is shown correctly. To show this value I take a detour through the variable "pipe_var". But when I try to compare this variable's value in the "if equal" lines, I get no result.
What am I doing wrong? Is there a special way to compare string values in a pipeline?
It would be nice if someone could give me a hint.
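One likely cause, stated as an assumption rather than a confirmed answer: ${{ if ... }} template expressions are evaluated at compile time, before the runtime macro syntax $(RESIDENCE) is expanded, so variables.pipe_var still holds the literal string $(RESIDENCE) when the comparison runs and neither branch matches. A sketch that compares the UI-defined variable directly at template time:

```yaml
variables:
- name: location
  # compare the UI-defined variable itself; at template-expansion time
  # pipe_var would still contain the unexpanded literal "$(RESIDENCE)"
  ${{ if eq(variables.RESIDENCE, 'ISS') }}:
    value: pretty_cool
  ${{ if eq(variables.RESIDENCE, 'MIR') }}:
    value: not_possible
```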

Azure Devops YAML: Looping Jobs With Different Variables Passed

I am struggling to figure out how to execute an API test using a pipeline where the command used can be modified using a loop. For example:
TEMPLATE.yaml
parameters:
- name: JobName
  default: ''
- name: TestDirectory
  default: '.\tests\smoke\'
- name: PositiveTest
  default: ''
- name: NegativeTest
  default: ''
- name: environments
  type: object
  default:
  - dev01
  - dev02
  - test01
  - test02

jobs:
- job: ${{ parameters.JobName }}
  pool:
    name: pool
    demands:
    - Cmd
  variables:
    PosTest: ${{ parameters.PositiveTest }}
    NegTest: ${{ parameters.NegativeTest }}
    Directory: ${{ parameters.TestDirectory }}
  steps:
  - script: |
      call .\venv\Scripts\activate.bat
      cd $(Directory)
      python $(PosTest)
    displayName: 'Executing Positive Test Scenarios'
    condition: and(succeeded(), ne('${{ variables.PosTest }}', ''))
  - script: |
      call .\venv\Scripts\activate.bat
      cd $(Directory)
      python $(NegTest)
    displayName: 'Executing Negative Test Scenarios'
    condition: and(succeeded(), ne('${{ variables.NegTest }}', ''))
TEST_FILE.yaml
...
jobs:
# Get this file: templates\TEMPLATE.yml from the `build` repository (imported above)
- template: templates\api-test-build.yml#build
- ${{ each env in parameters.environments }}:
  parameters:
    TestDirectory: '.\tests\smoke\job_class'
    PositiveTest: 'python_test.py http://apient${{ env }}/arbitrary/api/path/name'
    NegativeTest: ''
This of course doesn't work (the each directive returns an error like "the first property must be template"; if I move it up a line it then says "the first property must be a job", and this cycle of errors just continues).
The idea is that a loop iterates through the environment strings (top of the TEMPLATE.yaml example). The YAML file that references the template passes the command python_test.py http://apient<whatever env string the current iteration is on>/arbitrary/api/path/name for each iterated string (bottom of TEST_FILE.yaml), and the template executes each of those API tests. At the end of a run, four environments should have been tested.
This is just an idea, and I am still learning all the ins and outs of Azure DevOps YAML. If anyone knows how I can get this to work, any improvements I can make to the idea itself, or any other workarounds/solutions, that would be highly appreciated. Thank you!
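For reference, a sketch of how the each directive is typically nested (assuming the template path and parameters from above): the template reference goes under the loop item rather than being a sibling of it, so each iteration emits one complete list entry.

```yaml
jobs:
- ${{ each env in parameters.environments }}:
  # each iteration emits one complete `template` list item
  - template: templates\api-test-build.yml#build
    parameters:
      JobName: 'ApiTest_${{ env }}'
      TestDirectory: '.\tests\smoke\job_class'
      PositiveTest: 'python_test.py http://apient${{ env }}/arbitrary/api/path/name'
      NegativeTest: ''
```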
You can try using a multi-job configuration (matrix) in your pipeline.
When you want to run the same job with multiple configurations, the matrix strategy is a good choice.
For example, if you want to run jobs that have the same steps and input parameters but different values for those parameters, you can set up a single job with the matrix strategy.
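A minimal sketch of that idea, with the environment name as the matrix variable (job name, directory, and command assumed from the question):

```yaml
jobs:
- job: ApiSmokeTests
  strategy:
    matrix:
      # one entry per environment; each spawns a copy of the job
      dev01:
        envName: dev01
      dev02:
        envName: dev02
      test01:
        envName: test01
      test02:
        envName: test02
  steps:
  - script: |
      call .\venv\Scripts\activate.bat
      cd .\tests\smoke\job_class
      python python_test.py http://apient$(envName)/arbitrary/api/path/name
    displayName: 'Smoke test against $(envName)'
```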

I need to replace the content of the variable in jinja2 template using ansible replace module when it satisfies when condition

I need to replace the content of a variable in a jinja2 template using the Ansible replace filter,
and in the same jinja2 template I need to replace some values when a condition is satisfied. The condition should live in the jinja2 template only.
I have tried a couple of ways, as shown below, but none of them worked for me.
Is there any way to use a when-style condition inside the jinja2 template itself?
- set_fact: result="{{ temp | replace('nodeAgent', ''+value+'') | replace('nodeServrer', ''+result+'') when: (''+adu+'' == 'adt') }}"
- set_fact: result="{{ temp | replace('nodeAgent', ''+value+'') | replace('nodeServrer', ''+result+'') | when: (''+adu+'' == 'adt') }}"
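For contrast, when is a task-level keyword in Ansible, not a Jinja filter, so the task-side equivalent of the attempts above would look like this sketch (same variable names as in the question):

```yaml
- name: Replace placeholders only when adu equals 'adt'
  set_fact:
    # the right-hand side is evaluated before `result` is reassigned
    result: "{{ temp | replace('nodeAgent', value) | replace('nodeServrer', result) }}"
  when: adu == 'adt'
```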
Inside the template itself, do something like:
{% if adu == 'adt' %}
nodeAgent
{% else %}
nodeServer
{% endif %}

Ansible - check for task failure during looping task's execution

I have a task that should run a list of SQL scripts. If there is an error during the execution of ANY of the scripts in the sequence, then the task should stop executing.
Given the following task, is there a way it can be modified to check the register's stdout for the current iteration of the loop and see whether 'ERROR' is in it?
- name: Build and run SQLPlus commands
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  with_items:
  - ["a.sql", "b.sql"]
  # failed_when: "'ERROR' in sh1.stdout_lines"
I was thinking something along the lines of the commented-out line, but since sh1 is registered by a looping task, the output of each SQL script resides in the list sh1.results, so I'm not sure how to access the stdout of the command that was just executed.
I was thinking something along the lines of the last commented line,
Until now you were thinking correctly, so just uncomment the line:
- name: Build and run SQLPlus commands
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  with_items:
  - ["a.sql", "b.sql"]
  failed_when: "'ERROR' in sh1.stdout_lines"
but since sh1 is a register variable from a looping task, the output from each SQL script resides within the list results of sh1;
No. Within the task, the values of the sh1 dictionary are accessible directly in each iteration (not as a list). The list sh1.results only becomes visible to subsequent tasks.
But the above won't stop the remaining iterations of the loop, which is how Ansible was designed. So to achieve the following...
If there is an error during the execution of ANY of the scripts in the sequence, then the task should stop executing.
You can use a workaround: move the task to a separate file and loop over an include task (see this answer).
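A sketch of that workaround (file name hypothetical): the loop moves onto the include, and a failure inside the included file fails the host and skips its remaining iterations.

```yaml
# in the main task list: loop over an included file instead of the shell task
- name: Run each SQL script, stopping at the first failure
  include_tasks: run_sql_script.yml
  loop:
  - a.sql
  - b.sql

# run_sql_script.yml: one iteration; a failure here aborts the loop
- name: Build and run SQLPlus command
  shell: 'echo @{{ item }} | {{ sqlplus }} {{ db_user }}/{{ db_pass }}@{{ environment }}'
  register: sh1
  failed_when: "'ERROR' in sh1.stdout"
```

Note the failed_when here checks sh1.stdout (a substring match) rather than sh1.stdout_lines, which would only match if an entire output line were exactly 'ERROR'.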