I am trying to split my Ansible role variables into multiple files - as per this answer, it should be possible to create a vars/main directory, and all the .yml files in that directory should be automatically loaded.
However, this does not seem to happen in my case.
My directory structure:
vars
└── main
    ├── gce_settings.yml
    ├── vsphere_settings.yml
    └── vsphere_zone.yml
However, when I try to use a variable defined inside vsphere_settings.yml, Ansible complains that the variable is undefined:
{"msg": "The task includes an option with an undefined variable. The error was: 'vsphere_user' is undefined
If I move the variable declaration into vars/main.yml, everything works as expected. But, of course, I would prefer to separate my variables into multiple files.
I was unable to find any reference to this "feature" in the official Ansible documentation, and I do not know how I could troubleshoot it. Can anyone point me in the right direction?
My ansible version:
ansible 2.8.5 on Ubuntu 16.04
And before you ask: yes, I did make sure that main.yml was not present when trying to load vars/main/*.yml...
The example below
$ cat play.yml
- hosts: localhost
  roles:
    - role1

$ cat roles/role1/tasks/main.yml
- debug:
    var: var1

$ cat roles/role1/vars/main/var1.yml
var1: test_var1
gives
"var1": "test_var1"
TL;DR version: it is a bug in Ansible, caused by the presence of empty .yml files in vars/main. There is a PR out for it already; see here.
The actual result (as mentioned in the comments) depends on the order in which the files are processed (by default, my Ansible seems to process them in alphabetical order, but this might depend on the version or the underlying OS):
If the empty file is processed first, you get an error message: ERROR! failed to combine variables, expected dicts but got a 'NoneType' and a 'AnsibleMapping'
If the empty file is processed after other files, there is no error message, but all the variables that have been set up to this point are destroyed
More details: since I have no idea how to troubleshoot this in Ansible itself, I went to the code and started to add print statements to see what is happening.
In my case, I had some empty .yml files in the vars/main directory (files that I was planning to populate later on). Well... it looks like when the code encounters such an empty file, it destroys the entire dictionary it has built so far.
Either I'm missing something very important, or this is a bug... Steps to reproduce:
create a role
create the vars/main directory, and populate it with some .yml files
add an empty .yml file (named so that it is processed after the other files)
try to print the variables from the first files
ansible# tree roles
roles
└── test
    ├── tasks
    │   └── main.yml
    └── vars
        └── main
            └── correct_vars.yml

4 directories, 2 files
ansible# cat roles/test/vars/main/correct_vars.yml
myname: bogd
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "bogd"
}
...
ansible# echo > roles/test/vars/main/emptyfile.yml
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "VARIABLE IS NOT DEFINED!"
}
...
Later edit: yep, it's a bug... See here.
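In the meantime, a workaround that follows from the error above (my own inference, not something from the linked PR): make sure no file under vars/main is completely empty. An empty file parses to a NoneType, while a placeholder containing just an empty mapping parses to a dict and combines cleanly with the other vars files:
ansible# cat roles/test/vars/main/emptyfile.yml
{}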
Related
I have been creating a role in Ansible, and when I run my gitci pipeline, I get a warning message:
"no-changed-when: Commands should not change things if nothing needs doing"
I have tried using changed_when: false on the task, but when I try to deploy the image build, I get a permission error or the status does not show properly.
In the example below, I am just copying the files from one directory to another.
Let me know how to use the shell module here.
E.g.
- name: Copy the configuration files to the Helm Directory
  shell: "cp {{ files_dir_path }}/*.xml {{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"
I am a little late to the party, but for those that stumble upon this:
Ansible wants to be idempotent, so it needs to be able to verify if a change is necessary. In your example Ansible has to blindly run a command, without being able to check if that is even necessary. This is what the warning wants to tell you.
You can solve this by giving Ansible a file that will be present only after the command has run. That way Ansible will skip the task the next time around.
- name: Copy the configuration files to the Helm Directory
  shell: "cp {{ files_dir_path }}/*.xml {{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"
  args:
    creates: "{{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/*.xml"
See e.g., the command module documentation for further reference.
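If the XML files actually live on the control node (an assumption; the question does not say where they are), another option is to drop the shell module entirely and use copy with a fileglob loop. copy is idempotent on its own and reports changed correctly, so the lint warning disappears without needing a creates: hint:
- name: Copy the configuration files to the Helm Directory
  copy:
    src: "{{ item }}"
    dest: "{{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"
  with_fileglob:
    - "{{ files_dir_path }}/*.xml"
If the files are already on the managed host, stick with the shell/creates approach above (or combine the find module with copy and remote_src: true).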
I am working on a deployment with Bitbucket and am having some trouble using the variables. In Bitbucket I set the following in
Repository Settings / Deployment / Staging:
SOME_VAR = foobar
The bitbucket-pipelines.yml looks like this:
...
definitions:
  steps:
    - step: &build-some-project
        name: 'Build Project'
        script:
          - echo "\$test='$SOME_VAR';" >> file.php
...
I am not able to pass the value of the variable into the file within the script part via echo, but why? I also put it in double quotes (like echo "\$test='"$SOME_VAR"';"), but the result is always just $test='';, i.e. an empty string. The expected result is $test='foobar';. Does anybody know how to get this working? The variable is not a secure variable.
// edit
The repository variables do work. But: I need the same variable with different values for each environment. Also, repository variables are accessible to users with permission to push to the repo.
For the script to know that the variable should be fetched from the deployment variables, you have to mention the deployment environment in the step:
definitions:
  steps:
    - step: &build-some-project
        name: 'Build Project'
        deployment: Staging
        script:
          - echo "\$test='$SOME_VAR';" >> file.php
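For completeness, a minimal sketch of how the anchored step might then be referenced in the pipelines section (the branch name here is just an assumption); the deployment: Staging line is what makes the Staging deployment variables such as SOME_VAR available to the script:
pipelines:
  branches:
    master:
      - step: *build-some-project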
I am trying to reverse engineer the drone.io docker plugin and understand how to run the docker daemon in a pipeline step (DinD).
drone.io uses the library github.com/cncd/pipeline to compile and execute .drone.yml files.
The first thing plugins/docker does is start the Docker daemon:
+ /usr/local/bin/dockerd -g /var/lib/docker
This works fine in the official plugin, but I cannot get it to work with my own image, where I do the same:
pipeline.yml
workspace:
  base: /go
  path: src/github.com/fnbk/hello

pipeline:
  test:
    image: fnbk/drone-daemon
fnbk/drone-daemon/run.sh
#!/bin/sh
/usr/local/bin/dockerd # <= ERROR: containerd: write /proc/17/oom_score_adj: permission denied
# ...
It will give me the following error:
containerd: write /proc/14/oom_score_adj: permission denied
The full example can be found on github: https://github.com/cncd/pipeline/pull/45
Any suggestions are highly appreciated.
You need to add your plugin to a whitelist via the DRONE_ESCALATE environment variable, which is passed to the server. This is the default value:
DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr
So you would pass something like this:
-DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr
+DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr,fnbk/my-custom-plugin
Note that this should be the image name only. It must not include the tag.
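For example, if the drone server is started via docker-compose (a sketch; the service layout is an assumption, adapt it to how you actually run the server), the extended whitelist would be passed like this:
services:
  drone-server:
    image: drone/drone
    environment:
      - DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr,fnbk/drone-daemon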
I am using Puppet Enterprise 3.7.2 and on one of my nodes I create the file:
[root@vii-osc4-mgmt-001 ~]# cat /etc/profile.d/POD_prefix.sh
export FACTER_pod_prefix=vii-osc4
Then I rebooted that node and logged back in and verified that
the FACTER_pod_prefix gets set and facter pod_prefix outputs the
expected value.
[root@vii-osc4-mgmt-001 ~]# env | grep FACTER_pod_prefix
FACTER_pod_prefix=vii-osc4
[root@vii-osc4-mgmt-001 ~]# facter pod_prefix
vii-osc4
On my PE 3.7 Puppet master I created the file /var/lib/hiera/vii-osc4.yaml.
I created the /var/lib/hiera/vii-osc4.yaml from the /var/lib/hiera/defaults.yaml
file that I had been using like so:
# cp /var/lib/hiera/defaults.yaml /var/lib/hiera/vii-osc4.yaml
This file has a bunch of class parameter values. For example there is this
line in the file:
controller_vip_name: vii-osc4.example.com
Then I changed my hiera.yaml file to look like this:
[root@osc4-ppt-001 ~]# cat /etc/puppetlabs/puppet/hiera.yaml
---
:backends:
  - yaml
:hierarchy:
  - "%{pod_prefix}"
  - defaults
  - "%{clientcert}"
  - "%{environment}"
  - global
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /var/lib/hiera on *nix
  # - %CommonAppData%\PuppetLabs\hiera\var on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
Then I restarted my pe-httpd service like so (RHEL7):
# systemctl restart pe-httpd
Then I make a small change to /var/lib/hiera/vii-osc4.yaml; for example,
I change the line ...
controller_vip_name: vii-osc4.example.com
... to ...
controller_vip_name: VII-osc4.example.com
But when I run puppet agent -t --noop on my node, vii-osc4-mgmt-001, I do not see the change
that I expected to see. If I make the change in the /var/lib/hiera/defaults.yaml and then
run puppet agent -t --noop on my node I do see the expected changes. What am I doing wrong here?
UPDATE: using /etc/facter/facts.d method of setting custom facts.
I looked into using /etc/facter/facts.d for what I am trying to do. What I am trying to do is set a custom fact "pod_prefix". I want to use this fact in my hiera.yaml like so ...
---
:backends:
  - yaml
:hierarchy:
  - "%{::pod_prefix}"
  - defaults
  - "%{clientcert}"
  - "%{environment}"
  - global
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /var/lib/hiera on *nix
  # - %CommonAppData%\PuppetLabs\hiera\var on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
... so that nodes that have pod_prefix set to vii-osc4 will obtain their class parameters from the file /var/lib/hiera/vii-osc4.yaml, and hosts that have pod_prefix set to ix-xyz will get their class params from /var/lib/hiera/ix-xyz.yaml. I do not see how creating the file /etc/facter/facts.d/pod_prefix.txt on my puppet master containing something like this ...
# cat pod_prefix.txt
pod_prefix=vii-osc4
... could possibly be a solution to my problem. I guess I must be misunderstanding something here. Can someone help?
UPDATE 2.
The /etc/facter/facts.d/pod_prefix.txt file goes on my nodes.
I think my biggest problem is that just executing systemctl restart pe-httpd was not sufficient, and things didn't start working until I did a full reboot of my puppet master. I need to go look at the docs and figure out the correct way to restart the "puppet master".
The very approach of managing custom facts through environment variables is quite brittle. In this case, I suspect it does not work because you changed the environment of login shells via /etc/profile.d. System services don't run in such shells, though.
A clean approach would be to define your fact value in /etc/facter/facts.d instead.
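A minimal sketch of that approach, using the structured (YAML) flavour of an external fact; the key=value .txt form shown in the question works just as well. The file goes on the agent node being classified, not on the master:
# /etc/facter/facts.d/pod_prefix.yaml
pod_prefix: vii-osc4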
I have a devop directory containing Ansible's variables directory, playbooks, and an inventory directory.
The directory looks like this:
devop
├── group_vars
│   ├── all.yml
│   ├── development.yml
│   └── staging.yml
├── inventory
│   ├── staging
│   └── development
├── configure.yml
└── deploy.yml
configure.yml and deploy.yml contain tasks that are applied to either staging or development machines, using variables from group_vars.
Now, if I call the ansible-playbook command with the staging inventory, how will it know which variable file to use? No vars_files entry is added to configure.yml or deploy.yml.
By the way, I am using an example from the company I work at, and the example is working. I just want to understand the magic that is happening: it uses the right variable file even though the var file is not included in configure.yml or deploy.yml.
Ansible uses a few conventions to load vars files:
group_vars/[group]
host_vars/[host]
So if you have an inventory file that looks like this:
[staging]
some-host.name.com
Then these files will be included (optionally with a .yml or .yaml extension):
/group_vars/all
/group_vars/staging
/host_vars/some-host.name.com
I think this is the "magic" you are referring to.
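For example, with the layout from the question (and assuming the staging inventory file defines a [staging] group), a play run against that inventory automatically picks up whatever is defined in group_vars/staging.yml for those hosts; the variable name below is just an illustration:
# group_vars/staging.yml - loaded automatically for hosts in the [staging] group
app_environment: staging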
You can find more on the subject here: http://docs.ansible.com/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
And here: http://docs.ansible.com/playbooks_best_practices.html