dbt - The selection criterion [model name] does not match any node

When running the command dbt run -s [model_name] I get the error: The selection criterion [model name] does not match any node.
Any suggestions as to why this problem occurs?
The model name was copied one-to-one and is exactly the same as in the dbt directory.
The command works for other models.
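As a first debugging step (a minimal sketch, assuming a standard project layout and that model_name stands in for the real model name), it can help to ask dbt which nodes it can actually see before selecting one:
# list every model dbt has parsed; if the model is missing here,
# dbt run -s cannot match it either
dbt ls --resource-type model
# check whether the selector itself resolves
dbt ls -s model_name
If the model does not appear, common causes are the model file sitting outside the directories dbt is configured to read models from, or dbt being invoked from a different project directory than expected.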

Related

"scancel: error: Invalid job id Submitted batch job" with --cluster-cancel from snakemake

I am running snakemake using this command:
snakemake --profile slurm -j 1 --cores 1 --cluster-cancel "scancel"
which writes this to standard out:
Submitted job 224 with external jobid 'Submitted batch job 54174212'.
but after I cancel the run with ctrl + c, I get the following error:
scancel: error: Invalid job id Submitted batch job 54174212
What I would guess is that the jobid is 'Submitted batch job 54174212'
and snakemake tries to run scancel 'Submitted batch job 54174212' instead of the expected scancel 54174212. If this is the case, how do I change the jobid to something that works with scancel?
Your suspicion is probably correct: snakemake probably tries to cancel the wrong job (with the id 'Submitted batch job 54174212').
Check your slurm profile for snakemake, the one you invoke (standard location ~/.config/snakemake/slurm/config.yaml):
Does it contain the --parsable flag for sbatch?
Forgetting to include that flag is a mistake I have made before; adding the flag solved it for me.
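To illustrate why the flag matters (a minimal console sketch; job.sh is a hypothetical batch script): without --parsable, sbatch prints a human-readable line, and snakemake stores that whole line as the external jobid, whereas with the flag sbatch prints only the numeric id that scancel expects:
$ sbatch job.sh
Submitted batch job 54174212
$ sbatch --parsable job.sh
54174212
In the profile this means the sbatch submission command (for example the cluster: entry in config.yaml) should include --parsable.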

Why does dbt run in the CLI but throw an error in the Cloud UI for the exact same model?

I am executing dbt run -s model_name on the CLI and the task completes successfully. However, when I run the exact same command on dbt Cloud, I get this error:
Syntax or semantic analysis error thrown in server while executing query.
Error message from server: org.apache.hive.service.cli.HiveSQLException:
Error running query: org.apache.spark.sql.AnalysisException: cannot
resolve '`pv.meta.uuid`' given input columns: []; line 6 pos 4;
\n'Project ['pv.meta.uuid AS page_view_uuid#249595,
'pv.user.visitorCookieId AS (80) (SQLExecDirectW)")
It looks like it fails to recognize the 'pv.meta.uuid' syntax, which extracts data from a JSON field. It is not clear to me what is going on. Any thoughts? Thank you!
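One way to narrow this down (a minimal sketch; model_name stands in for the real model): an AnalysisException of the form "cannot resolve ... given input columns: []" often means the relation aliased as pv resolved to something with no columns in the environment dbt Cloud connects to, so comparing the compiled SQL and the connection between the two runs is a reasonable first check:
# compile locally and inspect the SQL dbt actually sends to Spark
dbt compile -s model_name
# confirm which profile/target/connection the CLI run is using, then
# compare it with the connection configured in dbt Cloud
dbt debug
If the two environments point at different schemas or databases, the pv source may simply be empty or missing on the dbt Cloud side.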

Liquibase command generateChangeLog generates java.lang.StackOverflowError

I would like to generate CSV files and a loadData changeset for some tables.
I use this command line:
$LB_HOME/liquibase --logLevel=DEBUG --changeLogFile=${TABLE}.xml \
--url=jdbc:oracle:thin:@local:1521/ORCL --username=TEST --password=TEST \
--dataOutputDirectory=csv --diffTypes=data \
--includeObjects="table:$TABLE" generateChangeLog
After a very long list of lines like this:
DEBUG [liquibase.util.DependencyUtil$DependencyGraph]:
Potential StackOverflowException. Pro-actively removing with incoming nodes
I get this error:
ERROR [liquibase.integration.commandline.Main]: Unexpected error running Liquibase: Unknown reason
java.lang.StackOverflowError: null
I set includeObjects="table:$TABLE" with only one table, so why does Liquibase read all object dependencies?
Any suggestions?
As per the Liquibase documentation, includeObjects is not a valid parameter;
here is the link: https://docs.liquibase.com/commands/community/generatechangelog.html
Can you try running just the generateChangeLog command without data first and see if it works?
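A minimal sketch of that suggestion (same variables and connection details as in the question, with the data export and the object filter dropped for the first attempt):
$LB_HOME/liquibase --logLevel=DEBUG --changeLogFile=${TABLE}.xml \
--url=jdbc:oracle:thin:@local:1521/ORCL --username=TEST --password=TEST \
generateChangeLog
If that completes, adding --diffTypes=data back in helps isolate whether the data export and its dependency walk are what blow the stack.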

Pentaho Spoon Job Executes Fine, Endless Loop in Kitchen

Without getting too much into the weeds, I have a Pentaho PDI job with multiple sub-transformations and sub-jobs (ETL from MySQL to Postgres). This job runs exactly as expected from Spoon, with no errors, but when I run the job with the following command, I hit an endless-loop error at the first step where a parameter needs to be defined and passed from within the job (the named params from the command seem to integrate fine). The command I am using is as follows:
sudo /bin/sh kitchen.sh \
-rep=KettleFileRepo \
-dir=M2P \
-job=ETL-M2P \
-level=Rowlevel \
-param:MY.PAR.LOADTYPE=full \
-param:MY.PAR.TABLELIST=table1 \
-param:MY.PAR.TENANTS=tenant1 \
/
Has anyone run into this type of issue with a discrepancy between Spoon and Kitchen? Is there some sort of config or command line option that I am missing? I am running version 6.0.1.0-386 on OS X 10.11.4.
If you think more details would be beneficial please let me know and I can provide whatever is necessary.
I am not aware of any discrepancy between Spoon and Kitchen. Are you sure it's not something in the ETL that is causing the loop? I would suggest going through your ETL in detail.
Another thing you can try, to debug, is to run only part of the job in Kitchen and keep adding more as you see success.
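A minimal sketch of that approach (sub-transformation-1 is a hypothetical name; substitute one of the actual sub-transformations from the job): single transformations can be run on their own with pan.sh, so the parameter handling can be verified piece by piece before going back to the full job in kitchen.sh:
# run one sub-transformation from the same file repository,
# passing the same named parameters the job would hand down
sudo /bin/sh pan.sh \
-rep=KettleFileRepo \
-dir=M2P \
-trans=sub-transformation-1 \
-level=Rowlevel \
-param:MY.PAR.LOADTYPE=full \
-param:MY.PAR.TABLELIST=table1 \
-param:MY.PAR.TENANTS=tenant1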

BigQuery bq command with asterisk (*) doesn't work in Compute Engine

I have a directory with a file named file1.txt, and I run the command:
bq query "SELECT * FROM [publicdata:samples.shakespeare] LIMIT 5"
In my local machine it works fine but in Compute Engine I receive this error:
Waiting on bqjob_r2aaecf624e10b8c5_0000014d0537316e_1 ... (0s) Current status: DONE
BigQuery error in query operation: Error processing job 'my-project-id:bqjob_r2aaecf624e10b8c5_0000014d0537316e_1': Field 'file1.txt' not found.
If the directory is empty, it works fine. I'm guessing the shell is expanding the asterisk into the file name(s) inside the query, but I don't know why.
Apparently the bq command, which is located at /usr/bin/bq, is the following script:
#!/bin/sh
exec /usr/lib/google-cloud-sdk/bin/bq ${@}
Because ${@} is unquoted there, the shell expands the asterisk in the query before the real bq binary ever sees it.
As a current workaround I'm calling /usr/lib/google-cloud-sdk/bin/bq directly.
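For reference, a minimal sketch of what a quoted wrapper would look like (this is the standard shell quoting fix, not necessarily how Google ships the script; an edited wrapper may also be overwritten on SDK updates, which is why calling the underlying binary directly works as a workaround):
#!/bin/sh
# "$@" passes each argument through unchanged, so the * inside the
# SQL string is no longer globbed against the current directory
exec /usr/lib/google-cloud-sdk/bin/bq "$@"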