How to parse variables in Ansible group_vars dictionary?

I have previously been placing all of my variables within the inventory file, such as
dse_dir=/app/dse
dse_bin_dir={{ dse_dir }}/bin
dse_conf_dir={{ dse_dir }}/resources/dse/conf
dse_yaml_loc={{ dse_conf_dir }}/dse.yaml
cass_conf_dir={{ dse_dir }}/resources/cassandra/conf
cass_yaml_loc={{ cass_conf_dir }}/cassandra.yaml
cass_bin_dir={{ dse_dir }}/resources/cassandra/bin
I did not need to use any quotes for these variables in the inventory file and it worked quite well.
Now I am trying to make use of the group_vars functionality, to separate variables per group of hosts. This has a different format, being a dictionary. So now I have:
dse_dir: "/app/dse"
dse_bin_dir: "{{ dse_dir }}/bin"
dse_conf_dir: "{{ dse_dir }}/resources/dse/conf"
dse_yaml_loc: "{{ dse_conf_dir }}/dse.yaml"
cass_conf_dir: "{{ dse_dir }}/resources/cassandra/conf"
cass_yaml_loc: "{{ cass_conf_dir }}/cassandra.yaml"
cass_bin_dir: "{{ dse_dir }}/resources/cassandra/bin"
In order to avoid parsing complaints, I need to place quotes around these parameters. But now when I have a playbook such as the following:
---
# Copy CQL files across
- include: subtasks/copy_scripts.yml
- name: Create users
  command: '{{ cass_bin_dir })/cqlsh'
I get the following error. Omitting the single quotes or replacing them with double quotes does not work either.
ERROR: There was an error while parsing the task 'command {{ cass_bin_dir })/cqlsh'.
Make sure quotes are matched or escaped properly
All of the documentation that I could find only shows hardcoded values in the dictionary, i.e. without variables that reference other variables, but I would assume that Ansible supports this.
Any advice on how to parse these properly?

See the “Gotchas” section here to understand why you needed to add the quotes in your group_vars. (It's the problematic YAML/Ansible ": {{" combo.)
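To illustrate the gotcha with one of your own vars: without quotes, YAML sees a value beginning with { and tries to parse it as an inline dictionary, which fails; quoting makes it a plain string that Ansible later renders with Jinja2:
dse_bin_dir: {{ dse_dir }}/bin      # YAML parse error: looks like an inline dict
dse_bin_dir: "{{ dse_dir }}/bin"    # a string, rendered by Jinja2 later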
To address the error in your command, fix the typo: you have a }) instead of }}.
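For reference, the corrected task from the playbook above would be:
- name: Create users
  command: '{{ cass_bin_dir }}/cqlsh'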


Env var required but not provided - dbt CLI

We have an environment variable set in dbt Cloud called DBT_SNOWFLAKE_ENV that selects the right database depending on which environment is used.
At the moment, I'm trying to set up dbt CLI with VSCode. I created a profiles.yml file that looks like this:
default:
  target: development
  outputs:
    development:
      type: snowflake
      account: skxxxx.eu-central-1
      user: <name>
      password: <pass>
      role: sysadmin
      warehouse: transformations_dw
      database: " {{ env_var('DBT_SNOWFLAKE_ENV', 'analytics_dev') }} "
      schema: transformations
      threads: 4
I added the env_var line after some suggestions but I realise that the environment variable still doesn't exist yet. The problem I see is that if I hardcode analytics_dev in that place (which makes sense), the error still persists.
I wouldn't want anybody who's going to use dbt to have to change the environment variable if they want to run something on production.
What are my options here?
You can set up a source file for the variables on the dbt CLI side - for example, create a bash script called set_env_var.sh and then run source set_env_var.sh in your terminal.
An example of the bash script would be:
export SNOWFLAKE_ACCOUNT=xxxxx
export SNOWFLAKE_USER=xxxxx
export SNOWFLAKE_ROLE=xxxx
export SNOWFLAKE_SCHEMA=xxxx
export SNOWFLAKE_WAREHOUSE=xxxxx
and in your profiles.yml you can add all the variables you want, for example:
warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
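Then, in the terminal where you run dbt, source the script first so the exports are visible to dbt (a sketch; dbt run stands in for whichever command you use):
source set_env_var.sh
dbt run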
Hope this helps.
First of all, you have to hard-code the database name; the other syntax is wrong. Secondly, try to make a dynamic variable for the environment and then pass it when you invoke dbt, i.e.:
dbt snapshot --profile --vars $DBT_SNOWFLAKE_ENV
That way, when it runs, dbt can easily pick the value up from the environment.
Currently I am working on dbt with everything dynamic; even the full profile is dynamic according to schema and database.
In my case, in my dbt model, the variable was declared as part of vars within my dbt_project.yml file, so instead of accessing the variable like
"{{ env_var('MY_VARIABLE') }}"
I should have used:
"{{ var('MY_VARIABLE') }}"

YAML Variables, can you reference variables within variables?

I am using a variables.yml file as a template to store different variables, and I was curious if I am able to reference variables within the yml file itself to essentially nest them.
For example:
#variables.yml file
variables:
  food1: 'pineapple'
  food2: 'pizza'
  favoriteFood: '$(food1) $(food2)'
So that when I eventually call upon this variable "favoriteFood", I can just use ${{ variables.favoriteFood }} and the value should be "pineapple pizza"
Example:
#mainPipeline.yml file
variables:
  - template: 'variables.yml'
steps:
  - script: echo My favorite food is ${{ variables.favoriteFood }}.
Am I on the right track here? I can't seem to google to any examples of if this is possible.
Yes! It is in fact possible; just follow the syntax outlined above, and the script step will echo "My favorite food is pineapple pizza." Don't forget spacing is critical in YAML files.

Pass variable that includes double quotes (") in its value to a container from K8s deployment

I am trying to deploy the statsd exporter (https://github.com/prometheus/statsd_exporter) software as a docker container in a K8s cluster. However, I want some parameters to be configurable. In order to do that, I am passing some arguments to the container via K8s deployment in a yaml format. When these arguments do not contain the double-quote character ("), everything works fine. However, if the desired value of the introduced variables contains double quotes, K8s interprets them the wrong way (something similar is described in Pass json string to environment variable in a k8s deployment for Envoy). What I want to set is the --statsd.listen-tcp=":<port>" argument, and I am using command and args in the K8s deployment:
- name: statsd-exporter
  image: prom/statsd-exporter:v0.12.2
  ...
  command: ["/bin/statsd_exporter"]
  args: ['--log.level="debug"', '--statsd.listen-tcp=":9999"']
When I deploy it in K8s and check the content of the "running" deployment, everything seems to be right:
command:
- /bin/statsd_exporter
args:
- --log.level="debug"
- --statsd.listen-tcp=":9999"
However, the container never starts, giving the following error:
time="..." level=fatal msg="Unable to resolve \": lookup \": no such host" source="main.go:64"
I think that K8s is trying to escape the double quotes and is passing them to the container with added backslashes, so the latter cannot understand them. I have also tried to write the args as
args: ["--log.level=\"debug\"", "--statsd.listen-tcp=\":9999\""]
and the same happens. I have also tried to pass them as env variables, and all the times the same problem is happening: the double quotes are not parsed in the right way.
Any idea regarding some possible solution?
Thanks!
According to the source code, statsd-exporter uses kingpin for command line and flag parser. If I am not mistaken, kingpin doesn't require values to be surrounded by double quotes.
I would suggest to try:
- name: statsd-exporter
  image: prom/statsd-exporter:v0.12.2
  ...
  command: ["/bin/statsd_exporter"]
  args:
    - --log.level=debug
    - --statsd.listen-tcp=:9999
The reason is that, according to the source code here, the input value for statsd.listen-tcp is split into host and port, and per the error message it seems the host gets the value of a double-quote character (").

lineinfile module not acting appropriately with Ansible variable dictionary

My code is removing all instances of socket variables in my.cnf file and then repopulating the file with the correct variable and file location.
The deletion portion is working correctly, however the insert quits after the first line. So it will insert the correct 'socket' line directly under [mysqladmin] but it won't add to the remaining sections of the file.
- name: modify my.cnf setting
  hosts: dbservers
  remote_user: dbuser
  become: yes
  tasks:
    - name: delete old socket definitions
      lineinfile:
        dest: /etc/mysql/my.cnf
        regexp: "^socket "
        state: absent
    - name: add new socket variable
      lineinfile:
        dest: /etc/mysql/my.cnf
        line: "{{ item.line }}"
        insertafter: "{{ item.insertafter }}"
      with_items:
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqladmin\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysql\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqldump\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqld\]' }
On an additional note, I'd like there to be a line of space between the header and the new socket declaration if that is at all possible.
I've tried with version 2.0.2 and 2.2.0 and neither are behaving as intended.
The lineinfile module works as it should (see techraf's answer).
Your task is:
My code is removing all instances of socket variables in my.cnf file and then repopulating the file with the correct variable and file location.
Why not use replace then?
- replace:
    dest: /etc/mysql/my.cnf
    regexp: '^socket.*$'
    replace: 'socket = /mysql/run/mysqld.sock'
Keep in mind:
It is up to the user to maintain idempotence by ensuring that the same pattern would never match any replacements made.
So you may want to change regexp ^socket.*$ to something that matches only "wrong" values that should be replaced to prevent unnecessary changed state of the task.
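For example, a narrower pattern along these lines (a sketch using a negative lookahead, which works because the replace module uses Python regular expressions in multiline mode) would only match socket lines that currently have some other value:
- replace:
    dest: /etc/mysql/my.cnf
    regexp: '^socket\s*=(?!\s*/mysql/run/mysqld\.sock$).*$'
    replace: 'socket = /mysql/run/mysqld.sock'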
Ansible works exactly as intended. The lineinfile module is used to ensure a particular line is in the specified file. The name of the module describes its function literally: "line in a file". It's not "lines in a file" or "line in a part of a file".
You specify only one pattern for the line:
socket = /mysql/run/mysqld.sock
so after Ansible has ensured it exists (likely by inserting it once), all further calls to "ensure it exists" will not insert it again, because it already exists (that's how declarative programming works).
It doesn't matter that you specify different insertafter values, because the line is the same and insertafter is not part of the condition.
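Purely to illustrate that behavior (the trailing per-section marker comment is hypothetical and would change the file's contents): if each item's line were unique, lineinfile would insert all four, because each would then be a distinct line to ensure:
- lineinfile:
    dest: /etc/mysql/my.cnf
    line: 'socket = /mysql/run/mysqld.sock  # {{ item }}'
    insertafter: '^\[{{ item }}\]'
  with_items: [mysqladmin, mysql, mysqldump, mysqld]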
Although you don't show the exact syntax of your configuration file, it looks like INI-style formatting (it also looks like a regular MySQL option file, which is an INI file), so you may try using the ini_file module instead, as sketched below.
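A minimal sketch with ini_file (section and file names taken from the question; not tested against your config):
- name: ensure socket is set in each section
  ini_file:
    dest: /etc/mysql/my.cnf
    section: "{{ item }}"
    option: socket
    value: /mysql/run/mysqld.sock
  with_items:
    - mysqladmin
    - mysql
    - mysqldump
    - mysqld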

How to use the value of an Ansible variable to target and get the value of a hostvar?

I'm in a situation where I have multiple Ansible roles, using multiple group_vars. Spread around each host's vars (depending on the host) is a number of directory paths, each in different places within the hostvar tree.
I need to ensure that a certain number of these directories exist when provisioning. So I created a role that uses the file module to ensure that these directories exist. Well, it would do, if I could figure out how to get it to work.
I have a group_var something similar to:
ensure_dirs:
- "daemons.builder.dirs.pending"
- "processor.prep.logdir"
- "shed.logdir"
Each of these 3 values maps directly to a group var that contains a string value that represents the corresponding filesystem path for that var, for example:
daemons:
  builder:
    dirs:
      pending: /home/builder/pending
I would like to somehow iterate over ensure_dirs and evaluate each item's value in order to resolve it to the FS path.
I've tried several approaches, but I can't seem to get the value I need. The following is the most success I've had, which simply returns the literal of the constructed string.
- file:
    dest: "hostvars['{{ ansible_hostname }}']['{{ item.split('.') | join(\"']['\") }}']"
    state: directory
  with_items: "{{ ensure_dirs }}"
This results in directories named, for example, hostvars['builder']['daemons']['builder']['dirs']['pending'] in the working directory. Of course, what I want is for the file module to work with the value stored at that path in hostvars, so that it instead ensures that /home/builder/pending exists.
Anybody have any ideas?
There is a simple way – template your group variable.
group_var
ensure_dirs:
- "{{ daemons.builder.dirs.pending }}"
- "{{ processor.prep.logdir }}"
- "{{ shed.logdir }}"
task
- file:
    path: "{{ item }}"
    state: directory
  with_items: "{{ ensure_dirs }}"
I suggest you create and use a lookup plugin.
Ansible defines lots of lookup plugins; the most popular is 'items', which you use via 'with_items'. The convention is 'with_<plugin name>'.
To create your lookup plugin:
Edit the file ansible.cfg and uncomment the key 'lookup_plugins' with the value './plugins/lookup' (see the ansible.cfg excerpt after the plugin code below)
Create a plugin file named 'dirs.py' in './plugins/lookup'
Use it in your playbook:
- file:
    dest: "{{ item }}"
    state: directory
  with_dirs: "{{ ensure_dirs }}"
Implement your plugin in dirs.py with something like this (see lookup plugins for more examples):
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        # turn 'daemons.builder.dirs.pending' into 'daemons/builder/dirs/pending'
        return [term.replace('.', '/') for term in terms]
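For step 1, the corresponding ansible.cfg excerpt would look something like this (assuming the plugin path from the steps above):
[defaults]
lookup_plugins = ./plugins/lookup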
Advantages:
* Your playbook is easier to read
* You can create Python unit tests for your plugin and improve it