lineinfile module not acting appropriately with Ansible variable dictionary

My code removes all instances of the socket variable in the my.cnf file and then repopulates the file with the correct variable and file location.
The deletion portion works correctly; however, the insert quits after the first line. It inserts the correct 'socket' line directly under [mysqladmin], but it won't add it to the remaining sections of the file.
- name: modify my.cnf setting
  hosts: dbservers
  remote_user: dbuser
  become: yes
  tasks:
    - name: delete old socket definitions
      lineinfile:
        dest: /etc/mysql/my.cnf
        regexp: "^socket "
        state: absent
    - name: add new socket variable
      lineinfile:
        dest: /etc/mysql/my.cnf
        line: "{{ item.line }}"
        insertafter: "{{ item.insertafter }}"
      with_items:
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqladmin\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysql\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqldump\]' }
        - { line: 'socket = /mysql/run/mysqld.sock', insertafter: '^\[mysqld\]' }
On an additional note, I'd like there to be a blank line between the section header and the new socket declaration, if that is at all possible.
I've tried with versions 2.0.2 and 2.2.0, and neither behaves as intended.

The lineinfile module works as it should (see @techraf's answer).
Your task is:
My code is removing all instances of socket variables in my.cnf file and then repopulating the file with the correct variable and file location.
Why not use replace then?
- replace:
    dest: /etc/mysql/my.cnf
    regexp: '^socket.*$'
    replace: 'socket = /mysql/run/mysqld.sock'
Keep in mind:
It is up to the user to maintain idempotence by ensuring that the same pattern would never match any replacements made.
So you may want to change the regexp ^socket.*$ to something that matches only "wrong" values that should be replaced, to prevent an unnecessary changed state of the task.
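For example, a negative lookahead (the replace module uses Python regular expressions, which support it) can restrict the match to socket lines that don't already hold the desired value; a sketch, using the path from the question:
- replace:
    dest: /etc/mysql/my.cnf
    # match socket lines whose value is anything other than the desired path
    regexp: '^socket\s*=\s*(?!/mysql/run/mysqld\.sock$).*$'
    replace: 'socket = /mysql/run/mysqld.sock'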

Ansible works exactly as intended. The lineinfile module is used to ensure a particular line is in the specified file. The name of the module describes its function literally: "line in a file". It's not "lines in a file" or "line in a part of a file".
You specify only one pattern for the line:
socket = /mysql/run/mysqld.sock
so after Ansible ensured it exists (likely inserting it), all further calls to "ensure it exists" will not insert it again, because it already exists (that's how declarative programming works).
It doesn't matter that you specify different insertafter values, because the line is the same and insertafter is not a part of the condition.
Although you don't show the exact syntax of your configuration file, it looks like INI-style formatting (it also looks like a regular MySQL option file, which is an INI file), so you may try using the ini_file module instead.
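A sketch of what that could look like, looping over the sections from the question (ini_file sets the option within each section and is idempotent):
- name: ensure socket is set in every relevant section
  ini_file:
    dest: /etc/mysql/my.cnf
    section: "{{ item }}"
    option: socket
    value: /mysql/run/mysqld.sock
  with_items:
    - mysqladmin
    - mysql
    - mysqldump
    - mysqld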


Env var required but not provided - dbt CLI

We have an environment variable set in dbt Cloud called DBT_SNOWFLAKE_ENV that selects the right database depending on which environment is used.
At the moment, I'm trying to set up the dbt CLI with VS Code. I created a profiles.yml file that looks like this:
default:
  target: development
  outputs:
    development:
      type: snowflake
      account: skxxxx.eu-central-1
      user: <name>
      password: <pass>
      role: sysadmin
      warehouse: transformations_dw
      database: " {{ env_var('DBT_SNOWFLAKE_ENV', 'analytics_dev') }} "
      schema: transformations
      threads: 4
I added the env_var line after some suggestions, but I realise that the environment variable still doesn't exist yet. The problem I see is that even if I hardcode analytics_dev in that place (which makes sense), the error still persists.
I wouldn't want anybody who's going to use dbt to have to change the environment variable if they want to run something on production.
What are my options here?
You can set up a source file for the variables on the dbt CLI. For example, you could create a bash script called set_env_var.sh and then run source set_env_var.sh in your terminal.
An example of the bash script would be:
export SNOWFLAKE_ACCOUNT=xxxxx
export SNOWFLAKE_USER=xxxxx
export SNOWFLAKE_ROLE=xxxx
export SNOWFLAKE_SCHEMA=xxxx
export SNOWFLAKE_WAREHOUSE=xxxxx
and in your profiles.yml you can add all the variables you want, for example:
warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
Hope this helps.
First of all, you have to hard-code the database name; the other syntax is wrong. Secondly, try to make a dynamic variable for the environment and then pass it when you want to use dbt, i.e.:
dbt snapshot --profile --vars $DBT_SNOWFLAKE_ENV
That way, when you run it, dbt can easily pick the value up from the environment.
Currently I am working on dbt with everything dynamic; even the full profile is dynamic according to schema and db.
In my case, in my dbt model, my variable was declared as part of vars within my dbt_project.yml file, so instead of accessing the variable like
"{{ env_var('MY_VARIABLE') }}"
I should have used:
"{{ var('MY_VARIABLE') }}"

How do I read in a file from ansible and get it to return the last few lines?

I have a log file that updates over time. After it updates, there is a specific line near the end of the file that specifies that the update is complete. I want to create an Ansible task that reads in this file and returns the last line, or at least the last few lines. If the line isn't found, re-read the file (as it means it is still updating). Does anyone know how I can go about this?
I've tried looking at documentation and have made a task that reads the log file into a variable, but I'm not sure where to go from here, or even if this is the right way to do it. Below is what I've done so far. I'm working on a Windows machine.
- name: Check Log file
  win_shell: cmd /k TYPE C:\Files\logfile.log
  register: logFile
Thanks!!!
Here we can use the lineinfile module. Although the module serves a different purpose, it can be used in conjunction with:
check_mode: yes, which ensures it will never write anything to the file
until: not presence.changed, which does the required looping for you
retries: 5, which is the number of retries; for an infinite loop set it to a large number, but absolute infinity is not encouraged here!
- name: find
  lineinfile:
    path: /PATH/TO/LOG_FILE
    line: 'registering Mbean....'
  check_mode: yes
  register: presence
  until: not presence.changed
  retries: 5
  delay: 10
UPDATE: @zeitounator's suggestion in the comments of the question looks cleaner and is implemented below:
- name: WAIT
  wait_for:
    path: /PATH/TO/LOG_FILE
    search_regex: 'registering Mbean....'
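As an aside, wait_for also accepts a timeout in seconds (300 by default), which you may want to raise if the update can take a while; a sketch:
- name: WAIT up to 10 minutes for the completion line
  wait_for:
    path: /PATH/TO/LOG_FILE
    search_regex: 'registering Mbean....'
    timeout: 600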

Hiera hierarchy doesn't respect facter

Background
My hiera.yaml looks like
version: 5
defaults:
  datadir: /etc/puppet/hieradata
hierarchy:
  - name: "YAML data: environments, stages, roles, common"
    data_hash: yaml_data
    paths:
      - "roles/%{role}/common.yaml"
      - "roles/common.yaml"
      - "common.yaml"
The hieradata folder has the following files:
/etc/puppet/hieradata/roles/development/common.yaml
/etc/puppet/hieradata/roles/common.yaml
/etc/puppet/hieradata/common.yaml
All the above files have the following content:
---
foo : "bar"
My facter output is given below:
[root@allinone puppet]# facter role
development
Problem statement
When I executed the puppet lookup foo command, it output bar as expected.
I deleted the file /etc/puppet/hieradata/common.yaml and it still output bar. This is fine.
But when I also deleted /etc/puppet/hieradata/roles/common.yaml, the output doesn't show anything. It doesn't respect the file /etc/puppet/hieradata/roles/development/common.yaml. Any reason?
I can see the role fact using the facter command, but my Hiera doesn't respect it.
I have also tried the following in hiera.yaml
- "roles/%{::role}/common.yaml"
- "roles/%{facts.role}/common.yaml"
but nothing helps
After a tedious debugging process, the issue turned out to be with the facts files.
Those facts files had Windows-style line endings. After changing them to Unix-style, everything worked.

How to use the value of an Ansible variable to target and get the value of a hostvar?

I'm in a situation where I have multiple Ansible roles, using multiple group_vars. Spread around each host's vars (depending on the host) are a number of directory paths, each in a different place within the hostvar tree.
I need to ensure that a certain number of these directories exist when provisioning. So I created a role that uses the file module to ensure that these directories exist. Well, it would do, if I could figure out how to get it to work.
I have a group_var something similar to:
ensure_dirs:
  - "daemons.builder.dirs.pending"
  - "processor.prep.logdir"
  - "shed.logdir"
Each of these 3 values maps directly to a group var that contains a string value that represents the corresponding filesystem path for that var, for example:
daemons:
  builder:
    dirs:
      pending: /home/builder/pending
I would like to somehow iterate over ensure_dirs and evaluate each item's value in order to resolve it to the FS path.
I've tried several approaches, but I can't seem to get the value I need. The following is the closest I've come, and it simply returns the constructed string as a literal.
- file:
    dest: "hostvars['{{ ansible_hostname }}']['{{ item.split('.') | join(\"']['\") }}']"
    state: directory
  with_items: "{{ ensure_dirs }}"
This results in directories named, for example, hostvars['builder']['daemons']['builder']['dirs']['pending'] in the working directory. Of course, what I want is for the file module to work with the value stored at that path in hostvars, so that it will instead ensure that /home/builder/pending exists.
Anybody have any ideas?
There is a simple way – template your group variable.
group_var
ensure_dirs:
  - "{{ daemons.builder.dirs.pending }}"
  - "{{ processor.prep.logdir }}"
  - "{{ shed.logdir }}"
task
- file:
    path: "{{ item }}"
    state: directory
  with_items: "{{ ensure_dirs }}"
I suggest you create and use a lookup plugin.
Ansible defines lots of lookup plugins; the most popular is 'items', which you use via 'with_items'. The convention is 'with_<plugin name>'.
To create your lookup plugin:
Edit the ansible.cfg file and uncomment the key 'lookup_plugins' with the value './plugins/lookup'
Create a plugin file named 'dirs.py' in './plugins/lookup'
Use it in your playbook:
- file:
    dest: "{{ item }}"
    state: directory
  with_dirs: "{{ ensure_dirs }}"
Implement your plugin in dirs.py with something like this (see lookup plugins for more examples):
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        # turn each dotted variable name into a filesystem-style path
        return [term.replace('.', '/') for term in terms]
Advantages:
* Your playbook is easier to read
* You can create Python unit tests for your plugin and improve it

How to parse variables in Ansible group_vars dictionary?

I have previously been placing all of my variables within the inventory file, such as
dse_dir=/app/dse
dse_bin_dir={{ dse_dir }}/bin
dse_conf_dir={{ dse_dir }}/resources/dse/conf
dse_yaml_loc={{ dse_conf_dir }}/dse.yaml
cass_conf_dir={{ dse_dir }}/resources/cassandra/conf
cass_yaml_loc={{ cass_conf_dir }}/cassandra.yaml
cass_bin_dir={{ dse_dir }}/resources/cassandra/bin
I did not need to use any quotes for these variables in the inventory file and it worked quite well.
Now I am trying to make use of the group_vars functionality, to separate variables per group of hosts. This has a different format, being a dictionary. So now I have:
dse_dir: "/app/dse"
dse_bin_dir: "{{ dse_dir }}/bin"
dse_conf_dir: "{{ dse_dir }}/resources/dse/conf"
dse_yaml_loc: "{{ dse_conf_dir }}/dse.yaml"
cass_conf_dir: "{{ dse_dir }}/resources/cassandra/conf"
cass_yaml_loc: "{{ cass_conf_dir }}/cassandra.yaml"
cass_bin_dir: "{{ dse_dir }}/resources/cassandra/bin"
In order to avoid parsing complaints, I need to place quotes around these values. But now when I have a playbook such as the following:
---
# Copy CQL files across
- include: subtasks/copy_scripts.yml

- name: Create users
  command: '{{ cass_bin_dir })/cqlsh'
I get the following error. Omitting the single quotes or replacing them with double quotes does not work either.
ERROR: There was an error while parsing the task 'command {{ cass_bin_dir })/cqlsh'.
Make sure quotes are matched or escaped properly
All of the documentation that I could find only shows hardcoded values in the dictionary, i.e. without variables referencing other variables, but I would assume that Ansible supports this.
Any advice on how to parse these properly?
See the “Gotchas” section here to understand why you needed to add the quotes in your group_vars. (It's the problematic YAML/Ansible ": {{" combination.)
To address the error in your command, fix the typo: you have a }) instead of }}.
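With the typo corrected, the task from the question would read:
- name: Create users
  command: '{{ cass_bin_dir }}/cqlsh'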