dbt env_var macro produces wrong output

I've encountered a strange problem with an env var.
Here's my profiles.yml file:
project:
  outputs:
    prod:
      type: postgres
      threads: 4
      host: my_host
      port: 5432
      user: "{{ env_var('DBT_USER') }}"
      pass: "{{ env_var('DBT_PASS') }}"
      dbname: my_db
      schema: my_schema
  target: prod
Here are my env variables:
echo $DBT_USER;
my_user
echo $DBT_PASS;
my_password
dbt debug output:
connection to server at "my_host", port 5432 failed: FATAL: password authentication failed for user "my_user"
But if I change
pass: "{{ env_var('DBT_PASS') }}"
to
my_password
in profiles.yml, dbt debug shows no error.
dbt version is 1.0.1.
What am I doing wrong?

The way you used the env_var function looks correct per the env_var docs, but there are two things I can think of.
Are you restarting your shell after setting your env variables? Maybe there's something about the CLI session that needs to be renewed after a profiles.yml or env_var change.
I think you need to lay out your profiles.yml with target as a higher-level key. Looking at the example here on profiles.yml, it looks like:
profiles.yml
<profile-name>:
  target: <target-name>
  outputs:
    <target-name>:
      type: <bigquery | postgres | redshift | snowflake | other>
      schema: <schema_identifier>
      threads: <natural_number>
But you have your target: at the bottom. Is that supposed to be an additional target definition, or should it sit higher, above your prod: key?
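For comparison, here is a minimal, untested sketch of the profile from the question rearranged to match that documented layout, with target directly under the profile name; all values are copied from the question:
project:
  target: prod
  outputs:
    prod:
      type: postgres
      threads: 4
      host: my_host
      port: 5432
      user: "{{ env_var('DBT_USER') }}"
      pass: "{{ env_var('DBT_PASS') }}"
      dbname: my_db
      schema: my_schema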

Related

Ansible task includes undefined var, despite being defined in defaults/main.yml

I am trying to create a Galaxy role for our org's internal galaxy, which I am testing first locally. In our org we use a common list of defaults across all roles.
Ansible is throwing me a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite having defined the variable in defaults/main.yml:
# Download
redis_version: "6.2.3"
redis_download_url: "https://download.redis.io/releases/redis-{{ redis_version }}.tar.gz"
When running my simple role/playbook.yml
---
- hosts: all
  become: true
  tasks:
    - include: tasks/main.yml
which links to tasks/main.yml:
---
- name: Check ansible version
  assert:
    that: "ansible_version.full is version_compare('2.4', '>=')"
    msg: "Please use Ansible 2.4 or later"
- include: download.yml
  tags:
    - download
- include: install.yml
  tags:
    - install
It should pull the tar file in tasks/download.yml, as shown:
---
- name: Download Redis
  get_url:
    url: "{{ redis_download_url }}"
    dest: /usr/local/src/redis-{{ redis_version }}.tar.gz
- name: Extract Redis tarball
  unarchive:
    src: /usr/local/src/redis-{{ redis_version }}.tar.gz
    dest: /usr/local/src
    creates: /usr/local/src/redis-{{ redis_version }}/Makefile
    copy: no
The redis_download_url var is defined in defaults/main.yml, which, as I understand it, Ansible should be able to locate. I also have similar vars defined in defaults/task.yml, e.g.:
redis_user: redis
redis_group: "{{ redis_user }}"
redis_port: "6379"
redis_root_dir: "/opt/redis"
redis_config_dir: "/etc/redis"
redis_conf_file: "{{ redis_config_dir }}/{{ redis_port }}.conf"
redis_password: "change-me"
redis_protected_mode: "yes"
and I assume they also cannot be found/seen by Ansible (but it does not get that far). I have also checked all file permissions, and they seem to be fine.
Apologies in advance if the question is badly formatted.
As per the documentation:
If you include a task file from a role, it will NOT trigger role behavior; this only happens when running as a role. include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or roles: [].
- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole
OR
- hosts: all
  become: true
  roles:
    - myrole

Getting this error: ERROR! 'openssl_certificate' is not a valid attribute for a Play

So I'm trying to run a little playbook to test out the openssl_certificate module documented here: https://docs.ansible.com/ansible/2.7/modules/openssl_certificate_module.html
My playbook:
---
- name: play to run opensll verification
hosts: localhost
tasks:
- name: Running OpenSSL Module.
openssl_certificate:
path: "bleh.crt"
provider: assertonly
valid_in: "{{ (20*3600*24) | int }}"
register: VALIDATION_OUTPUT
ignore_errors: true
Basically I want to see if the cert is valid in the given time frame. However, when I run
ansible-playbook openssl_test.yml
I get:
ERROR! 'openssl_certificate' is not a valid attribute for a Play
The error appears to be in '/path/to/my/yaml/openssl_test.yml': line 6, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- name: Running OpenSSL Module.
^ here
What am I doing wrong here? I'm sure it's something small.
Your indentation is wrong. When copying the YAML data, you can use :set paste in the vim editor to preserve the indentation.
---
- name: play to run opensll verification
  hosts: localhost
  tasks:
    - name: Running OpenSSL Module.
      openssl_certificate:
        path: "bleh.crt"
        provider: assertonly
        valid_in: "{{ (20*3600*24) | int }}"
      register: VALIDATION_OUTPUT
      ignore_errors: true

store output of command in variable in ansible

Concerning: Ansible-Playbooks
Is it possible to run a command on the remote machine and store the resulting output into a variable?
I am trying to get the kernel version and install the matching headers like this:
---
- hosts: all
  tasks:
    - name: install headers
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-`uname -r`
          #- a lot of other packages here
Unfortunately uname -r is not executed here.
I am aware of this question: Ansible: Store command's stdout in new variable?
But it looks like this is another topic.
By definition:
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more.
At this link you can see all the Ansible facts you can use:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html
One of those variables is ansible_kernel, which is the kernel version of your remote system. By default Ansible gathers these variables, but if you want to be sure it does, add gather_facts: yes.
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ ansible_kernel }}
I found a solution, but I am not sure it is really elegant:
---
- hosts: all
  tasks:
    - name: Getting Kernel Version
      command: "uname -r"
      register: kernel_version
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ kernel_version.stdout }}

How to use a Variable in Ansible aws_ec2 plugin

I want to filter EC2 instances according to the Environment tag, which I define when I run the scripts, i.e. ansible-playbook start.yml -e env=dev.
However, it seems that the plugin is not parsing variables. Any idea how to achieve this?
My aws_ec2.yml:
---
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
  tag:Environment: "{{ env }}"
hostnames:
  - private-ip-address
strict: False
groups:
keyed_groups:
  - key: tags.Function
    separator: ''
Edit
There is no error message when running the playbook. The only problem is that Ansible handles the variable literally as the string tag:Environment: "{{ env }}" instead of the value tag:Environment: dev.

Set different ORACLE_HOME and PATH environment variables using Ansible

I'm currently querying multiple databases and capturing the results of the query.
The way I'm doing it is by writing a task which copies a shell script, something like below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task as below:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{item}} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook so that the task picks up the right ORACLE_HOME and PATH and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
Do the same for the PATH variable; a sketch with both is shown below.
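For example, a sketch of the same inventory with a per-host PATH added as well (the path values here are placeholders):
[is_hosts]
greenhat ORACLE_HOME=/tmp PATH=/usr/local/bin:/usr/bin
localhost ORACLE_HOME=/sbin PATH=/opt/oracle/bin:/usr/bin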
Then reference those host variables in your task. A sample playbook that demonstrates the result:
- hosts: is_hosts
  gather_facts: false
  vars:
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output; you can see the ORACLE_HOME variable was indeed changed, as defined per host:
TASK [print results] ***********************************************************
ok: [greenhat] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/tmp",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
ok: [localhost] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/sbin",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}