Hi, I just created a dbt project using
dbt init puddle
I have a MySQL database running locally and have defined my profiles.yml as follows:
puddle:
  target: dev
  outputs:
    dev:
      type: mysql5
      server: localhost
      port: 3306 # optional
      database: puddle
      dbname: puddle
      schema: puddle
      username: test
      password: test
      driver: MySQL ODBC 8.0 ANSI Driver
    prod:
      type: mysql5
      server: [server/host]
      port: [port] # optional
      database: [schema] # optional, should be same as schema
      schema: [schema]
      username: [username]
      password: [password]
      driver: MySQL ODBC 8.0 ANSI Driver
But when I run dbt run I get the following error:
02:48:52 1 of 2 START table model puddle.my_first_dbt_model.............................. [RUN]
02:48:52 1 of 2 ERROR creating table model puddle.my_first_dbt_model..................... [ERROR in 0.11s]
02:48:52 2 of 2 SKIP relation puddle.my_second_dbt_model................................. [SKIP]
02:48:52
02:48:52 Finished running 1 table model, 1 view model in 0.31s.
02:48:52
02:48:52 Completed with 1 error and 0 warnings:
02:48:52
02:48:52 Database Error in model my_first_dbt_model (models/example/my_first_dbt_model.sql)
02:48:52 1046 (3D000): No database selected
02:48:52 compiled SQL at target/run/puddle/models/example/my_first_dbt_model.sql
How am I supposed to select the database?
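For comparison, the prod block in the same file follows the documented template: database is optional and should be the same as schema (in MySQL, schema and database are synonyms), and there is no dbname key. A hedged sketch of the dev output written the same way (no guarantee this alone resolves the error):

puddle:
  target: dev
  outputs:
    dev:
      type: mysql5
      server: localhost
      port: 3306 # optional
      database: puddle # optional, should be same as schema
      schema: puddle # in MySQL this is the database dbt builds into
      username: test
      password: test
      driver: MySQL ODBC 8.0 ANSI Driver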
Related
dbt version: 1.3.1
python version: 3.9.6
adapter = dbt-synapse
# profiles.yml
default: dbt_project

dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      driver: 'ODBC Driver 17 for SQL Server'
      server: XXXXXXX
      database: ###
      port: 1433
      schema: #######
      user: ######
      password: #####

azure_blob:
  target: dev
    outputs:
      dev:
        type: azure_blob
        account_name: ##
        account_key: ##
        container: ##
        prefix: delta_lake
--- After applying this change, here is the error message I get (01/30/2023, 2:32 pm Central) ---
I get this error when trying to read the file from Azure Blob Storage.
-- this is the profiles.yml --
default: dbt_project

dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse # type: Azuresynapse
      driver: 'ODBC Driver 17 for SQL Server' # (The ODBC Driver installed on your system)
      server: XXXXXXX
      database: XXXXXXX
      port: 1433
      schema: XXXXXXX
      # authentication: sqlpassword
      user: XXXXXXX
      password: XXXXXXX

azure_blob:
  type: azure_blob
  account_name: XXXXXXX
  account_key: XXXXXXX
  container: data-platform-archive # research-container/Bronze/Freedom/ABS_VESSEL/
  prefix: abc/FGr1/fox/
--- dbt_project.yml ---
# Name your project! Project names should contain only lowercase characters and underscores.
# A good package name should reflect your organization's name or the intended use of these models.
name: 'dbt_project'
version: '1.0.0'
config-version: 2

# This setting configures which "profile" dbt uses for this project.
profile: 'dbt_project'

model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

target-path: "target"
clean-targets:
  - "target"
  - "dbt_packages"

models:
  dbt_project:
    staging:
      +materialized: table
    utilities:
      +materialized: view
  azure_Blob:
    staging:
      +materialized: view
--------------------------------
Model name: dbt_stg_DL_abs_acm_users.sql
and here is the code:
{{ config(
    materialized='view',
    connection='azure_blob'
) }}
select *
from {{ source('data-platform-archive/abc/FGr1/fox/', 'abc.parquet') }}
Compilation Error in model dbt_stg_DL_abs_acm_users
Model 'model.dbt_project.dbt_stg_DL_abs_acm_users' depends on a source named 'abc.parquet' which was not found
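(For context: dbt's source() function takes a source name and a table name that must be declared under a sources: key in a YAML file inside the models directory; it does not take a storage path. A hedged sketch of such a declaration, using hypothetical names:)

# models/staging/sources.yml -- hypothetical names for illustration
version: 2

sources:
  - name: data_platform_archive # first argument to source()
    schema: some_external_schema # wherever the landed/external table lives
    tables:
      - name: abs_acm_users # second argument to source()

The model would then select from {{ source('data_platform_archive', 'abs_acm_users') }}.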
Yes, what you've shown here is multiple profiles in a single profiles.yml file. However, there is no default key in the profiles.yml file, since the profile must be specified by the project.
In your dbt_project.yml file, there is a key that names the profile that the project should use. This project config will use the dbt_project profile that you have defined:
# dbt_project.yml
name: 'jaffle_shop'
profile: 'dbt_project'
And this one will use the azure_blob profile that you have defined:
# dbt_project.yml
name: 'jaffle_shop'
profile: 'azure_blob'
This is a common pattern if you are developing on multiple projects on a single machine. For a SINGLE project, it is more common to define multiple targets:
# profiles.yml
dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      ...
    blob:
      type: azure_blob
      ...
You can select which target to use at runtime by passing the -t or --target parameter to the CLI: dbt run -t blob would run the project against the azure_blob connection. The target: dev line in the file above specifies that the dev target (so the Synapse connection) should be the default, if the target is not specified at runtime.
It's somewhat unusual to have multiple targets use different types of databases, and some care must be taken to write compatible syntax. But it is possible -- many packages do this for integration tests, for example.
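As a hedged illustration of that last point, dbt exposes the active target in Jinja, so a model can branch on target.type when two warehouses need different syntax (the model name some_model below is hypothetical, and the branches are only illustrative):

-- models/example/row_limit_demo.sql (illustrative sketch)
{% if target.type == 'synapse' %}
select top 10 * from {{ ref('some_model') }}
{% else %}
select * from {{ ref('some_model') }} limit 10
{% endif %}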
Can we have two profiles in one profiles.yml file in the project root?
For example, one profile for Azure Blob and another profile for Azure Synapse SQL.
For info:
dbt version: 1.3.1
python version: 3.9.6
adapter = dbt-synapse
# profiles.yml
default: dbt_project

dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      driver: 'ODBC Driver 17 for SQL Server' # (The ODBC Driver installed on your system)
      server: XXXXXXX
      database: ###
      port: 1433
      schema: #######
      user: ######
      password: #####

azure_blob:
  target: dev
    outputs:
      dev:
        type: azure_blob
        account_name: ##
        account_key: ##
        container: ##
        prefix: delta_lake
When I run dbt debug I get this error:
02:25:37 Encountered an error:
Runtime Error: dbt encountered an error while trying to read your profiles.yml file.
The error starts at the azure_blob: line.
The issue is that the outputs: line under azure_blob is indented when it shouldn't be. If you unindent this line you should be good. Make it look like:
azure_blob:
  target: dev
  outputs:
    dev:
      type: azure_blob
      account_name: ##
      account_key: ##
      container: ##
      prefix: delta_lake
(P.S.: there is a typo at the start of your file; defaul should be default.)
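If you instead use the single-profile, multiple-targets layout described in the other answer, the same two outputs would sit under one profile. A hedged sketch with your masked values:

# profiles.yml
dbt_project:
  target: dev
  outputs:
    dev:
      type: synapse
      driver: 'ODBC Driver 17 for SQL Server'
      server: ##
      database: ##
      port: 1433
      schema: ##
      user: ##
      password: ##
    blob:
      type: azure_blob
      account_name: ##
      account_key: ##
      container: ##
      prefix: delta_lake

You would then pick the blob connection at runtime with dbt run -t blob, as noted in the other answer.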
I want to set the owner for dbt models from a yml file. I have tried everything.
---
version: 2
models:
  - name: my_model
    description: "Foo bar"
    owner: 'XXXX' # does not work
    config:
      owner: 'XXXX' # does not work
    meta:
      owner: 'XXXX' # does not work
dbt-core 1.2.2
Finally I got it working.
A. Add a schema file
Create this file:
src/models/some_version/some_squad/some_domain/schema.yml
B. Content
version: 2
models:
  - name: some_model_name
    meta:
      owner: 'some_owner'
      domain:
        - some_domain_
        - some_domain_2
    description: |
      some description
C. Final result
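As a possible complement (hedged; assuming a dbt version where meta is available as a config, which the docs list for recent releases), the same owner can also be applied to a whole folder of models from dbt_project.yml via +meta. The project and folder names below are hypothetical:

# dbt_project.yml
models:
  my_project:
    some_domain:
      +meta:
        owner: 'some_owner'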
I'm trying to set up Oracle SQL and use Active Record to migrate the database with Ruby on Rails.
I installed SQL Developer and created a new connection. Here is the first error:
I/O error: The Network Adapter could not establish the connection
Then I installed Oracle Instant Client (basic, SDK, SQL*Plus) and then these two gems:
gem 'ruby-oci8'
gem 'activerecord-oracle_enhanced-adapter'
but I got this error:
LoadError: Could not load 'active_record/connection_adapters/oracle_adapter'. Make sure that the adapter in config/database.yml is valid. If you use an adapter other than 'mysql', 'mysql2', 'postgresql' or 'sqlite3' add the necessary adapter gem to the Gemfile.
This is my database.yml:
# SQLite version 3.x
#   gem install sqlite3
#
#   Ensure the SQLite 3 gem is defined in your Gemfile
#   gem 'sqlite3'
#
development:
  adapter: oracle
  database: development
  username: nick
  password:

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  adapter: oracle
  database: test
  username: nick
  password:

production:
  adapter: oracle
  database: production
  username: nick
  password:
I wasn't able to find a solution. What can I do?
Update adapter entries as follows:
adapter: oracle
to
adapter: oracle_enhanced
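The development entry would then look like this (a sketch reusing the values from the question; test and production change the same way):

development:
  adapter: oracle_enhanced
  database: development
  username: nick
  password: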
I am new to Aerospike and need your help to troubleshoot a restore issue. I have Aerospike running on my Mac and it seems to work fine, except that it does not allow me to restore from a .asb file. I took a backup from an Aerospike instance running on an Ubuntu machine using the asbackup utility, but when I try to restore the .asb file using the asrestore command on my Mac instance, it throws the following error:
asrestore -d ~
restoring: host 127.0.0.1 port 3000 bin_list (null) from directory /home/vagrant
2015-08-25 15:13:43 INFO Add node BB9A9EAAB270008 127.0.0.1:3000
Aug 25 2015 15:13:43 GMT: starting restore: filename: /home/vagrant/BB9A3F5AA1ED512_00000.asb FILE 0x7f63680008c0
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
put failed in restore: unusual error 20 trying again
restore: too many consecutive put failure
Aug 25 2015 15:13:44 GMT: expired 0 : skipped 0 : attempted 0 : [updated 0 not-updated (existed 0 gen-old 0)]
I tried using the -t option to restrict the thread count, but it did not help.
Has anyone faced a similar issue?
Looking forward to your help.
Error 20 indicates a bad namespace parameter. Check your server error log for more details. It seems the namespace in the backup file is not defined in the configuration of the cluster into which you are trying to load using asrestore.
You have two options:
1. Create a namespace with the same name as the one in the backup file.
2. Write a script to change the namespace name in the backup files to a name that is valid in the cluster where you are trying to load (see the sketch below).
The backup file format is documented at http://www.aerospike.com/docs/tools/backup/file_format.html
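A minimal sketch of option 2, assuming the namespace appears in record lines of the form + n <namespace> as described in the file-format documentation above (verify against your own .asb files before running anything; old_ns, new_ns, and the path are placeholders):

# rewrite namespace "old_ns" to "new_ns" in every backup file, keeping .bak copies of the originals
sed -i.bak 's/^+ n old_ns$/+ n new_ns/' /path/to/backups/*.asb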