dbt: vars in dbt_modules not recognized

For an unknown reason, vars declared in packages under dbt_modules are no longer recognized: they now have to be declared in the dbt_project.yml of the root project rather than only in the dbt_project.yml of the package.
When I try to preview the results of a staging model that uses one of the vars declared in the installed package's dbt_project.yml, I get the following error:
Required var 'google_ads__click_performance' not found in config:
Vars supplied to request = {
"ad_reporting__facebook_ads_enabled": true,
"ad_reporting__google_ads_enabled": true,
"ad_reporting__linkedin_ads_enabled": true,
"ad_reporting__microsoft_ads_enabled": false,
"ad_reporting__pinterest_enabled": false,
"ad_reporting__snapchat_ads_enabled": false,
"ad_reporting__twitter_ads_enabled": false
}
> in rpc request (from remote system)
> called by rpc request (from remote system)
It used to recognize the vars declared within the package without having to declare them in the main dbt_project.yml file.
Could you please advise? This happened after I moved from dbt Cloud 0.20.1 to 0.20.2.
Thanks for your help.
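As a workaround sketch, the var named in the error can be declared at the root-project level. The value below is a placeholder (the actual source and table names depend on your project); only the var name `google_ads__click_performance` comes from the error above:

```yaml
# dbt_project.yml (root project) — placeholder value, adjust to your own source
vars:
  google_ads__click_performance: "{{ source('google_ads', 'click_performance') }}"
```

Root-project vars take precedence over package-level vars, so this declaration satisfies the lookup even if the package's own default stops being picked up.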

Related

How to work with multiple annotation folders in the latest version of darkaonline/l5-swagger?

It was working perfectly with multiple annotation folders in Laravel 8.0 (I forget the minor version of l5-swagger). Later, when I ran composer update, darkaonline/l5-swagger got updated to ^8.3. Now it tries to generate documentation (#SchemaRef) out of each file stored inside the folder. I have the following configuration:
/*
 * Absolute paths to directories where the swagger annotations are stored.
 */
'annotations' => [
    base_path('app'),
    base_path('Modules'),
],
In my case I get the following error:
ErrorException
Skipping unknown \CreateRolesTable
Here CreateRolesTable is a migration file inside the Modules folder; no swagger-related annotation exists in the CreateRolesTable file, and this name is not used as a #ref either.
You should use an anonymous class in the CreateRolesTable migration file:
return new class extends Migration
For more information, check the official documentation.

Deploy warning calling external JS function in `serverless.yml`

So, in my serverless.yml file I have this:
custom:
  my_attr: ${file(./serverless/get-custom-value.js):my_attr}
And in that file (located in ./serverless/get-custom-value.js) is this JavaScript code:
module.exports.my_attr = async function(slsArg) {
  const stage = slsArg.providers.aws.getStage()
  console.debug(`### stage: "${stage}".`)
  return stage
}
When running sls package -s {stage} or sls deploy -s {stage} (both of which succeed), I see this warning:
Serverless: Deprecation warning: Variables resolver reports following resolution errors:
- Cannot resolve variable at "custom.my_attr": Cannot resolve "my_attr" out of "get-custom-value.js": Resolved a JS function not confirmed to work with a new parser, falling back to old resolver
Yet, despite the warning it works exactly as expected…
It's a deprecation warning, which serves to inform you that in the next version of the Serverless Framework, this specific resolver syntax is deprecated (and will error) as internally the process for resolving variables has changed.
You can adopt the new resolver by changing the declaration in serverless.yml and modifying the function arguments in get-custom-value.js, then setting
variablesResolutionMode: 20210326
in your serverless.yml to indicate that you have migrated. That instructs the Serverless Framework to use the new resolver, as indicated by the warning message.
The full overview is in the documentation

A valid option to satisfy the declaration could not be found in serverless framework

I'm using serverless framework and using bitbucket-pipeline to configure CI/CD.
I have the following configuration in the serverless.yml file
provider:
  name: aws
  runtime: nodejs10.x
  region: ${opt:region, ${env:AWS_REGION, 'ap-south-1'}}
  memorySize: ${opt:memory, ${env:MEMORY_SIZE, 512}}
  tracing:
    apiGateway: ${env:TRACING_API_GATEWAY, true}
I want to be able to pass the variables from the CLI as well as environment variables.
I have set up an environment variable for AWS_REGION in the Bitbucket pipeline variables, but not for MEMORY_SIZE, as I want to use the default value.
But this gives an error while running the pipeline.
Serverless Warning --------------------------------------
A valid option to satisfy the declaration 'opt:region,ap-south-1' could not be found.
Serverless Warning --------------------------------------
A valid environment variable to satisfy the declaration 'env:MEMORY_SIZE,512' could not be found.
Serverless Warning --------------------------------------
A valid environment variable to satisfy the declaration 'env:TRACING_API_GATEWAY,true' could not be found.
First, those are warnings not errors. A Serverless error would be something like:
Serverless Error ---------------------------------------
Trying to populate non string value into a string for variable ${opt:stage}. Please make sure the value of the property is a string.
This specific error happens because in the custom tag I've declared: var: ${opt:stage}-something which should be changed like:
provider:
  stage: ${opt:stage, 'local'}
custom:
  var: ${self:provider.stage}-something
I think in your case, you need to update region like this:
region: ${opt:region, env:AWS_REGION, 'ap-south-1'}
I couldn't reproduce the last warning, but I reckon the environment variables should be defined in bitbucket-pipelines.yml (or a similar CI pipeline YAML) under args or variables; they can then be accessed using ${env:VAR}.
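Putting the suggestion above together, the provider block might become the following. This is a sketch: the flat comma-separated fallback chain (`opt:` → `env:` → literal) assumes a Serverless Framework version that supports it, as the region example in the answer does:

```yaml
provider:
  name: aws
  runtime: nodejs10.x
  # CLI option first, then environment variable, then a literal default
  region: ${opt:region, env:AWS_REGION, 'ap-south-1'}
  memorySize: ${opt:memory, env:MEMORY_SIZE, 512}
  tracing:
    apiGateway: ${env:TRACING_API_GATEWAY, true}
```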

Serverless provider.environment variables not available in custom

I am trying to reference variables in self:provider.environment in my custom variables block; however, I get the following warning:
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration
'self:provider.environment.myVar' could not be found.
We are using serverless 1.28.0, here's a sample config:
service: testing-vars
provider:
  region: 'us-west-2'
  environment:
    myVar: ${env:myVar, self:custom.dotenv.myVar}
custom:
  refToAbove: ${self:provider.environment.myVar}
...
I would like to reference the provider.environment vars in my custom block.
This was due to a plugin not handling variables properly; it has since been fixed.

Janus Graph Remote Graph NoSuchFieldError: V3_0 error

I followed this example:
https://github.com/JanusGraph/janusgraph/tree/master/janusgraph-examples/example-remotegraph
and I would like to debug this project. I configured it (HBase+Solr) and ran the JanusGraph server with the command:
$JANUSGRAPH_HOME/bin/gremlin-server.sh $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml
I passed this argument to IDEA via Run Configuration > Program Arguments
[Project Home]/conf/jgex-remote.properties
my jgex-remote.properties file is:
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
# cluster file has the remote server configuration
gremlin.remote.driver.clusterFile=[Project Home]/conf/remote-objects.yaml
# source name is the global graph traversal source defined on the server
gremlin.remote.driver.sourceName=g
and my remote-objects.yaml file includes:
hosts: [127.0.0.1]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0,
  config: {
    ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
  }
}
It tries to run this command:
cluster = Cluster.open(conf.getString("gremlin.remote.driver.clusterFile"));
And throws this exception:
Exception in thread "main" java.lang.NoSuchFieldError: V3_0
    at org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0.&lt;init&gt;(GryoMessageSerializerV3d0.java:41)
    at org.apache.tinkerpop.gremlin.driver.ser.Serializers.simpleInstance(Serializers.java:77)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.&lt;init&gt;(Cluster.java:472)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.&lt;init&gt;(Cluster.java:469)
    at org.apache.tinkerpop.gremlin.driver.Cluster.getBuilderFromSettings(Cluster.java:167)
    at org.apache.tinkerpop.gremlin.driver.Cluster.build(Cluster.java:159)
    at org.apache.tinkerpop.gremlin.driver.Cluster.open(Cluster.java:233)
    at com.ets.dataplatform.init.RemoteGraphApp.openGraph(RemoteGraphApp.java:72)
    at com.ets.dataplatform.init.GraphApp.runApp(GraphApp.java:290)
    at com.ets.dataplatform.init.RemoteGraphApp.main(RemoteGraphApp.java:195)
This error is not meaningful to me.
Thanks in advance.
I would try to align your versions. I assume that you are using JanusGraph 0.2.0. If you look at the pom.xml for that version you'll see that it is bound to TinkerPop 3.2.6:
https://github.com/JanusGraph/janusgraph/blob/v0.2.0/pom.xml#L68
Change to that version in your application and see if the connection works. Taking that approach should not only fix your problem but also ensure that you don't run into other incompatibilities. That is not to say that you can't configure later versions of TinkerPop to work with JanusGraph 0.2.0, but it requires a bit more configuration, and you have to be aware of minor changes that might affect how certain operations behave.
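Assuming a Maven build, aligning the client with the TinkerPop version JanusGraph 0.2.0 is built against might look like this (the artifact is the Gremlin driver used by the remote-graph example):

```xml
<!-- Pin the TinkerPop driver to the version referenced by JanusGraph 0.2.0's pom.xml -->
<dependency>
  <groupId>org.apache.tinkerpop</groupId>
  <artifactId>gremlin-driver</artifactId>
  <version>3.2.6</version>
</dependency>
```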