Can we have two profiles in one profiles.yml file in the root? For example, a profile for Azure Blob and another profile for Azure Synapse SQL.
for info:
dbt version: 1.3.1
python version: 3.9.6
adapter = dbt-synapse
# profiles.yml
default: dbt_project

dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      driver: 'ODBC Driver 17 for SQL Server' # (The ODBC Driver installed on your system)
      server: XXXXXXX
      database: ###
      port: 1433
      schema: #######
      user: ######
      password: #####

azure_blob:
  target: dev
    outputs:
      dev:
        type: azure_blob
        account_name: ##
        account_key: ##
        container: ##
        prefix: delta_lake
When I run dbt debug I get this error:
02:25:37 Encountered an error:
Runtime Error: dbt encountered an error while trying to read your profiles.yml file.
The error starts at the azure_blob: line.
The issue is that the outputs: line under azure_blob is indented when it shouldn't be. If you unindent this line you should be good. Make it look like:
azure_blob:
  target: dev
  outputs:
    dev:
      type: azure_blob
      account_name: ##
      account_key: ##
      container: ##
      prefix: delta_lake
(P.S.: there is a typo at the start of your file; defaul should be default.)
--- After applying this change, here is the error message I get (01/30/2023, 2:32 pm Central Time) ---
I get this error when trying to read the file from Azure Blob Storage.
-- This is the profiles.yml --
default: dbt_project

dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse #synapse #type: Azuresynapse
      driver: 'ODBC Driver 17 for SQL Server' # (The ODBC Driver installed on your system)
      server: XXXXXXX
      database: XXXXXXX
      port: 1433
      schema: XXXXXXX
      #authentication: sqlpassword
      user: XXXXXXX
      password: XXXXXXX

azure_blob:
  type: azure_blob
  account_name: XXXXXXX
  account_key: XXXXXXX
  container: data-platform-archive #research-container/Bronze/Freedom/ABS_VESSEL/
  prefix: abc/FGr1/fox/
--------------- dbt_project.yml-------------------------
# name or the intended use of these models
name: 'dbt_project'
version: '1.0.0'
config-version: 2

# This setting configures which "profile" dbt uses for this project.
profile: 'dbt_project'

model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

target-path: "target"
clean-targets:
  - "target"
  - "dbt_packages"

models:
  dbt_project:
    staging:
      +materialized: table
    utilities:
      +materialized: view
  azure_Blob:
    staging:
      +materialized: view
--------------------------------
Model name: dbt_stg_DL_abs_acm_users.sql, and here is the code:
{{ config(
    materialized='view',
    connection='azure_blob'
) }}

select *
from {{ source('data-platform-archive/abc/FGr1/fox/', 'abc.parquet') }}
Compilation Error in model dbt_stg_DL_abs_acm_users
Model 'model.dbt_project.dbt_stg_DL_abs_acm_users' depends on a source named 'abc.parquet' which was not found
Yes, what you've shown here is multiple profiles in a single profiles.yml file. However, there is no default key in the profiles.yml file, since the profile must be specified by the project.
In your dbt_project.yml file, there is a key that names the profile the project should use. This project config will use the dbt_project profile that you have defined:
# dbt_project.yml
name: 'jaffle_shop'
profile: 'dbt_project'
And this one will use the azure_blob profile that you have defined:
# dbt_project.yml
name: 'jaffle_shop'
profile: 'azure_blob'
This is a common pattern if you are developing on multiple projects on a single machine. For a SINGLE project, it is more common to define multiple targets:
# profiles.yml
dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      ...
    blob:
      type: azure_blob
      ...
You can select which target to use at runtime by passing the -t or --target parameter to the CLI: dbt run -t blob would run the project against the azure_blob connection. The target: dev line in the file above specifies that the dev target (so the Synapse connection) should be the default, if the target is not specified at runtime.
It's somewhat unusual to have multiple targets use different types of databases, and some care must be taken to write compatible syntax. But it is possible -- many packages do this for integration tests, for example.
Hi, I just created a dbt project using
dbt init puddle
I have a MySQL database running locally and have defined my profiles.yml as follows:
puddle:
  target: dev
  outputs:
    dev:
      type: mysql5
      server: localhost
      port: 3306 # optional
      database: puddle
      dbname: puddle
      schema: puddle
      username: test
      password: test
      driver: MySQL ODBC 8.0 ANSI Driver
    prod:
      type: mysql5
      server: [server/host]
      port: [port] # optional
      database: [schema] # optional, should be same as schema
      schema: [schema]
      username: [username]
      password: [password]
      driver: MySQL ODBC 8.0 ANSI Driver
But when I run dbt run I get the following error
02:48:52 1 of 2 START table model puddle.my_first_dbt_model.............................. [RUN]
02:48:52 1 of 2 ERROR creating table model puddle.my_first_dbt_model..................... [ERROR in 0.11s]
02:48:52 2 of 2 SKIP relation puddle.my_second_dbt_model................................. [SKIP]
02:48:52
02:48:52 Finished running 1 table model, 1 view model in 0.31s.
02:48:52
02:48:52 Completed with 1 error and 0 warnings:
02:48:52
02:48:52 Database Error in model my_first_dbt_model (models/example/my_first_dbt_model.sql)
02:48:52 1046 (3D000): No database selected
02:48:52 compiled SQL at target/run/puddle/models/example/my_first_dbt_model.sql
How am I supposed to select the database?
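Not from the thread, but as a hedged sketch: in MySQL a schema and a database are the same thing, so with a dbt MySQL adapter the schema value is typically what determines which database statements run against. One thing that stands out above is the nonstandard dbname key alongside database and schema; a minimal dev target without it might look like this (an assumption about the adapter's expected keys, not a verified fix):

```yaml
# profiles.yml -- minimal sketch, assuming the dbt MySQL adapter
# treats schema as the MySQL database to USE
puddle:
  target: dev
  outputs:
    dev:
      type: mysql5
      server: localhost
      port: 3306
      schema: puddle   # in MySQL, schema == database
      username: test
      password: test
      driver: MySQL ODBC 8.0 ANSI Driver
```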
I am using the Serverless Framework for deployment. It throws the following error while deploying to AWS, but my zip file size is 45 MB and the unzipped size is 130 MB locally.
Serverless Error ----------------------------------------
An error occurred: SharedLambdaLayer - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 27f9378e-b9ea-42c5-ad73-a3b7cf9d584c).
This is my environment
Operating System: win32
Node Version: 12.19.0
Framework Version: 2.35.0
Plugin Version: 4.5.3
SDK Version: 4.2.2
Components Version: 3.8.2
Following is my .yml file content
service: rxd-layers
frameworkVersion: '2'
useDotenv: true
unresolvedVariablesNotificationMode: error
configValidationMode: error
plugins:
  - serverless-plugin-git-variables
  - serverless-dotenv-plugin
custom:
  stageVariables:
    gitBranch: ${opt:stage, git:branch}
package:
  include:
    - /nodejs/node_modules/shared # no need to add this yourself, this plugin does it for you
  exclude:
    - /nodejs/node_modules/**
    - /nodejs/shared/**
provider:
  stage: ${opt:stage, git:branch}
  name: aws
  runtime: nodejs12.x
  region: ${env:AWS_REGION_CRED, 'us-east-1'}
  versionFunctions: true
  lambdaHashingVersion: 20201221
layers:
  shared:
    path: shared
    description: This layer is for node packages of all services
resources:
  Outputs:
    SharedLayerExport:
      Value:
        Ref: SharedLambdaLayer
      Export:
        Name: SharedLambdaLayer
This was due to the geo-tz library. It pushed the unzipped size to almost more than 255 MB for geo-tz alone in my Linux environment on AWS, which was the main problem. So I just uninstalled the package, and after that my layer deployed correctly.
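If uninstalling is not an option, another approach (a sketch, assuming the dependency sits under the layer's node_modules as usual) is to exclude the heavy package from the deployment artifact via the serverless.yml package section:

```yaml
# serverless.yml -- hypothetical exclusion sketch; the glob pattern
# assumes geo-tz is bundled under node_modules in the layer path
package:
  exclude:
    - node_modules/geo-tz/**
```

This only helps, of course, if nothing in the deployed code actually requires the excluded package at runtime.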
I am quite new to the Elastic stack and am trying to experiment with visualization of Apache log files in Kibana. I am using Filebeat to ingest the Apache logs. However, when I run .\filebeat.exe setup -e, I get the following error:
2019-02-05T20:53:10.515+0530 INFO elasticsearch/client.go:165 Elasticsearch url: http://localhost:9200
2019-02-05T20:53:10.520+0530 INFO elasticsearch/client.go:721 Connected to Elasticsearch version 6.6.0
2019-02-05T20:53:10.520+0530 INFO kibana/client.go:118 Kibana url: http://localhost:5601
2019-02-05T20:53:10.567+0530 WARN fileset/modules.go:388 X-Pack Machine Learning is not enabled
2019-02-05T20:53:10.572+0530 ERROR instance/beat.go:911 Exiting: 1 error: error loading config file: invalid config: yaml: line 4: did not find expected hexdecimal number
My filebeat.yml file looks like this:
filebeat.inputs:
- type: log
  enabled: true
  paths: C:\Users\bigdataadmin\Downloads\ApacheLogs\*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: C:\Program Files\Filebeat\modules.d\*.yml
  reload.enabled: true
  reload.period: 60s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
I also checked the yml on http://www.yamllint.com/ but didn't find any problems. I can't seem to figure out what's wrong with line 4 of this file.
I am using filebeat 6.6
The paths key (on line 4) is an array, so you need to represent an array there.
Example :
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\Users\bigdataadmin\Downloads\ApacheLogs\*
Please be very cautious about the data types you represent in such config files. I made the same mistake while working on Filebeat and had to spend a lot of time on a small error...
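The two forms side by side: the first parses as a single scalar string, the second as a one-element sequence, which is what Filebeat expects for paths:

```yaml
# Parses as one scalar string (wrong here):
paths: C:\Users\bigdataadmin\Downloads\ApacheLogs\*

# Parses as a sequence (list) with one entry (correct):
paths:
  - C:\Users\bigdataadmin\Downloads\ApacheLogs\*
```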
The branch was previously functional, then merged to master, and the builds on master failed. Master was reverted, then master was merged into this branch and some fixes were made. When attempting to merge back to master, the build failed again with the following error. The push passed; the PR failed.
* What went wrong:
Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not find com.squareup.leakcanary:leakcanary-android:1.5.4.
The .travis.yml file:
sudo: false
language: android
android:
  components:
    - build-tools-27.0.2
    - android-27
    - sys-img-armeabi-v7a-android-27
jdk:
  - oraclejdk8
before_install:
  - yes | sdkmanager "platforms;android-27"
  - chmod +x gradlew

# First the app is built, then unit tests are run
jobs:
  include:
    - stage: build
      async: true
      script: ./gradlew assemble
    - stage: test
      async: true
      script: ./gradlew -w runUnitTests
notifications:
  email:
    recipients:
      - email#me.com
    on_success: always # default: change
    on_failure: always # default: always
I suspected a Maven repo outage today and faced the same issue. Hours later, I found that the failed Travis job was working fine again. Do check it on your side.
Also, for any scenario where classpath dependencies are missing, one should check the build.gradle file rather than the .travis.yml file.
The failure message says that the app:debugCompileClasspath configuration fails when looking for com.squareup.leakcanary:leakcanary-android:1.5.4 (jar or AAR). Gradle allows you to define the repositories at the root level:
allprojects {
    repositories {
        jcenter() // Gradle shorthand that points to https://jcenter.bintray.com/
    }
}
So it will look in the following places for the class files or jar file:
- Name: $ANDROID_HOME/extras/m2repository; url: file:/$ANDROID_HOME/extras/m2repository/
- Name: $ANDROID_HOME/extras/google/m2repository; url: $ANDROID_HOME/extras/google/m2repository/
- Name: $ANDROID_HOME/extras/android/m2repository; url: file:$ANDROID_HOME/extras/android/m2repository/
- Name: BintrayJCenter; url: https://jcenter.bintray.com/
If the dependency is not found in any of them, resolution fails with the error mentioned above.