no-changed-when lint warning arises in the Ansible playbook - automation

I have been creating a role in Ansible, and when I run my CI pipeline I get a warning message:
"no-changed-when: Commands should not change things if nothing needs doing"
I have tried using changed_when: false on the task file. When I then try to deploy the image build, I get a permission error, or the status does not show properly.
In the example below, I am just copying files from one directory to another.
Let me know how to use the shell module here.
E.g.
- name: Copy the configuration files to the Helm Directory
  shell: "cp {{ files_dir_path }}/*.xml {{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"

I am a little late to the party, but for those who stumble upon this:
Ansible wants to be idempotent, so it needs to be able to verify whether a change is necessary. In your example, Ansible has to blindly run a command without being able to check whether that is even needed. This is what the warning is telling you.
You can solve this by giving Ansible a file that will be present only after the command has run. That way, Ansible will skip the task the next time around.
- name: Copy the configuration files to the Helm Directory
  shell: "cp {{ files_dir_path }}/*.xml {{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"
  args:
    creates: "{{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/*.xml"
See e.g., the command module documentation for further reference.
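If avoiding the shell module entirely is an option, a find-plus-copy pattern reports changed status natively and satisfies the linter. This is only a sketch reusing the question's variables; remote_src assumes both directories live on the managed host:

```yaml
# Idempotent alternative sketch: locate the XML files, then copy each one.
# All paths come from the question; remote_src assumes both dirs are remote.
- name: Find the XML configuration files
  find:
    paths: "{{ files_dir_path }}"
    patterns: "*.xml"
  register: xml_files

- name: Copy them to the Helm chart directory
  copy:
    src: "{{ item.path }}"
    dest: "{{ roles_dir_path }}/{{ image.docker_tag }}/files/helm-chart/"
    remote_src: true
  loop: "{{ xml_files.files }}"
```

The copy module only reports changed when a file's content actually differs, so repeated runs stay green.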

Related

How to run the same playbooks and inventories with or without a jumphost?

I have a few playbooks and inventories, and I need to run them from two different locations:
one that requires a jumphost and one that doesn't.
I have defined the jumphost in my inventory, but running the playbook from the local location (no jumphost required) fails.
Is there a way to load the SSH-related vars based on the Ansible hostname?
If you just need to run things on the local host, you can do that with specific tasks:
try local_action, e.g.:
- name: copy file
  local_action:
    module: copy
    src: testfile
    dest: /path/to/testfile
or you can use delegate_to:
- name: copy a file
  delegate_to: localhost
  copy:
    src: testfile
    dest: /path/to/testfile
Edit: OK it seems I misread your question. Your question seems to be that you are running playbooks off two different hosts, one works and one fails, and so you need some logic that will select different variables for you based on the host?
You can include logic in the tasks using the when conditional, like so:
- name: get ansible hostname
  local_action:
    module: shell
    cmd: hostname
  register: hostname_output

- shell: echo '{{ variablefornonjumpbox }}'
  when: hostname_output.stdout.find('jumpbox') == -1
If you need to change the SSH target and details based on the hostname, you can use special variables like the ones at the bottom of this page:
https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
Hopefully that helps. If not, I will need some clarification on the variables you are using and the errors you see.
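Another pattern worth mentioning (an assumption on my part, not something from the question): keep the jump-host SSH options in a group_vars file, so only the hosts in that inventory group go through the jumphost and the same playbook works from both locations:

```yaml
# group_vars/behind_jumphost.yml -- illustrative names throughout.
# Hosts in the "behind_jumphost" inventory group are reached via the jumphost;
# all other hosts connect directly, so no per-playbook logic is needed.
ansible_ssh_common_args: '-o ProxyJump=user@jumphost.example.com'
```

ansible_ssh_common_args is one of the standard connection variables, so it can live at group level without touching the playbooks themselves.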

Ansible 2.8 Roles - using the vars/main directory

I am trying to split my Ansible role variables into multiple files - as per this answer, it should be possible to create a vars/main directory, and all the .yml files in that directory should be automatically loaded.
However, this does not seem to happen in my case.
My directory structure:
vars
└── main
├── gce_settings.yml
├── vsphere_settings.yml
└── vsphere_zone.yml
However, when I try to use a variable defined inside vsphere_settings.yml, Ansible complains that the variable is undefined:
{"msg": "The task includes an option with an undefined variable. The error was: 'vsphere_user' is undefined"}
If I move the variable declaration into vars/main.yml, everything works as expected. But, of course, I would prefer to separate my variables into multiple files.
I was unable to find any reference to this "feature" in the official Ansible documentation, and I do not know how I could troubleshoot it. Can anyone point me in the right direction?
My ansible version:
ansible 2.8.5 on Ubuntu 16.04
And before you ask: yes, I did make sure that main.yml was not present when trying to load vars/main/*.yml...
The example below
$ cat play.yml
- hosts: localhost
  roles:
    - role1
$ cat roles/role1/tasks/main.yml
- debug:
    var: var1
$ cat roles/role1/vars/main/var1.yml
var1: test_var1
gives
"var1": "test_var1"
TL;DR version: it is a bug in Ansible, caused by the presence of empty .yml files in vars/main. There is a PR out for it already; see here.
The actual result (as mentioned in the comments) depends on the order in which the files are processed (by default, my Ansible seems to process them in alphabetical order - but this might depend on the version, or the underlying OS):
If the empty file is processed first, you get an error message: ERROR! failed to combine variables, expected dicts but got a 'NoneType' and a 'AnsibleMapping'
If the empty file is processed after other files, there is no error message, but all the variables that have been set up to this point are destroyed
More details: since I had no idea how to troubleshoot this in Ansible itself, I went to the code and started adding print statements to see what was happening.
In my case, I had some empty .yml files in the vars/main directory (files that I was planning to populate later on). Well... it looks like when the code encounters such an empty file, it destroys the entire dictionary that it has built so far.
Either I'm missing something very important, or this is a bug... Steps to reproduce:
create a role
create the vars/main directory, and populate it with some .yml files
add an empty .yml file (named so that it is processed after the other files)
try to print the variables from the first files
ansible# tree roles
roles
└── test
    ├── tasks
    │   └── main.yml
    └── vars
        └── main
            └── correct_vars.yml

4 directories, 2 files
ansible# cat roles/test/vars/main/correct_vars.yml
myname: bogd
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "bogd"
}
...
ansible# echo > roles/test/vars/main/emptyfile.yml
ansible# ansible-playbook -i inventory.yml test.yml
...
ok: [localhost] => {
    "myname": "VARIABLE IS NOT DEFINED!"
}
...
Later edit: yep, it's a bug... See here.
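As a stop-gap until the fix lands, you can check for the offending empty files before a run. A small sketch (the roles/test layout just mirrors the reproduction above):

```shell
# Recreate the layout from the reproduction above, then list any empty
# .yml files under vars/main -- these are the files that trigger the bug.
mkdir -p roles/test/vars/main
printf 'myname: bogd\n' > roles/test/vars/main/correct_vars.yml
: > roles/test/vars/main/emptyfile.yml

find roles -path '*/vars/main/*' -name '*.yml' -size 0
```

Anything the find prints needs content (or deletion) before Ansible will load the role's vars reliably.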

in Ansible is it possible for a vars_prompt to allow tab autocomplete for a path?

In an Ansible playbook, I'm prompting the user for a path to a file. I'd like to know if it's possible to somehow integrate tab autocompletion for a path as they type. My current snippet for the prompt is below:
vars_prompt:
  - name: "deadline_linux_installers_tar"
    prompt: "What is the path to the deadline linux installers .tar?"
    default: "/vagrant/downloads/Deadline-10.0.23.4-linux-installers.tar"
    private: no
Thanks for any help!
No
Or, the slightly smelly duct-tape-y way would be to wrap ansible-playbook in a script that does accept tab completion, and then call ansible-playbook -e "deadline_linux_installers_tar=$the_value" "$@"
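A sketch of that wrapper idea, assuming bash: read -e enables readline, which gives path tab completion at the prompt. The variable name is taken from the question; the helper name and playbook name are made up:

```shell
# prompt_path: hypothetical helper; read -e turns on readline, so the user
# gets tab completion for paths when stdin is a terminal.
prompt_path() {
    read -e -r -p "$1 " reply
    printf '%s\n' "$reply"
}

# Usage sketch (playbook name is illustrative):
#   tar_path=$(prompt_path "What is the path to the deadline linux installers .tar?")
#   ansible-playbook site.yml -e "deadline_linux_installers_tar=${tar_path}" "$@"
```

The playbook then drops the vars_prompt entirely and relies on the extra-var being passed in.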

drone.io: containerd: write /proc/14/oom_score_adj: permission denied

I am trying to reverse engineer the drone.io docker plugin and understand how to run the docker daemon in a pipeline step (DinD).
drone.io uses the library github.com/cncd/pipeline to compile and execute .drone.yml files.
The first thing the plugins/docker does is to start the docker daemon:
+ /usr/local/bin/dockerd -g /var/lib/docker
This works fine in the official plugin, but I cannot get it to work with my own image, where I do the same:
pipeline.yml
workspace:
  base: /go
  path: src/github.com/fnbk/hello
pipeline:
  test:
    image: fnbk/drone-daemon

fnbk/drone-daemon/run.sh
#!/bin/sh
/usr/local/bin/dockerd # <= ERROR: containerd: write /proc/17/oom_score_adj: permission denied
# ...
It will give me the following error:
containerd: write /proc/14/oom_score_adj: permission denied
The full example can be found on github: https://github.com/cncd/pipeline/pull/45
Any suggestions are highly appreciated.
You need to add your plugin to a whitelist via the DRONE_ESCALATE environment variable, which is passed to the server. This is the default value:
DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr
So you would pass something like this:
-DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr
+DRONE_ESCALATE=plugins/docker,plugins/gcr,plugins/ecr,fnbk/my-custom-plugin
Note that this should be the image name only. It must not include the tag.

Iterate over a list of files and execute them as part of Ansible deployment

I have just started looking at Ansible and I love the simplicity of it.
I would like to implement an automated migration script with a framework that does not support migrations by default but has a REST API.
My idea is the following.
Keep all REST Calls in shell scripts with version number
E.g.
/migrate/001.sh
/migrate/002.sh
/migrate/003.sh
I can run
find . -name "*.sh" -type f -exec bash {} \;
Is there a better way of doing this?
Does anyone have an idea how I could implement Ruby-style migration scripts? E.g. knowing which script was executed last, and only executing the rest?
Thanks for any ideas in advance!
You would need at least a few things (you've named them also):
A way to "remember" if a particular migration already ran
A list of migrations in some sort of repeatable order
An idempotent way of running the migrations
Taking from Doctrine Migrations project:
They have solved problem 1 by having a database table with a single column containing the file names of migrations already run. Comparing that to the list of files leaves all migrations that need to be executed.
They have solved nr. 2 by generating migration filenames with a datetime, e.g. 20100416130422. These are always chronological and won't conflict when other developers add migrations in the same project (a possible problem with the incrementing numbers in your example).
They have solved nr. 3 with a command line tool that runs the migrations (and only executes what is needed). We run that with a regular command or shell module in Ansible.
Now, I realise that naming a tool that solves the problem is pretty much cheating as an answer to your question, so here are a few ideas in Ansible:
synchronize all your migration files to remote; do not overwrite existing files
run your bash one-liner, with an added twist: empty the file if the migration worked: && > filename (you might have to resort to xargs instead of find -exec to pull it off).
Now all successful migrations are done, and the files are empty, so running again won't be a problem. The folder containing the migrations on the remote system serves as a "storage" of the migrations already run. This solution would scale to multiple machines in different stages of migration.
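Those two steps could look roughly like this (a sketch only; /opt/migrations and the task names are mine):

```yaml
# Sketch of the rsync-based idea above; paths are illustrative.
- name: Sync migration scripts, never overwriting ones already on the remote
  synchronize:
    src: migrate/
    dest: /opt/migrations/
    rsync_opts:
      - "--ignore-existing"

- name: Find migrations that still need to run (non-empty files)
  find:
    paths: /opt/migrations
    patterns: "*.sh"
    size: 1          # at least one byte, i.e. not yet truncated
  register: pending

- name: Run each pending migration, emptying the file on success
  shell: "sh {{ item.path }} && : > {{ item.path }}"
  loop: "{{ pending.files | sort(attribute='path') }}"
```

Sorting by path keeps the numbered scripts running in order; a failed script keeps its content, so the next run retries it.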
If you could live with editing Ansible variables as migrations, you could also keep the entire thing in Ansible:
vars:
  migrations:
    - { name: "migration1", migration_command: "some migration command" }
    - { name: "migration2", migration_command: "some migration command" }
tasks:
  - name: perform migrations
    shell: "{{ item.migration_command }} && > {{ item.name }} creates={{ item.name }} chdir=/path/to/migrations/folder"
    with_items: "{{ migrations }}"
This has exactly the same effect as the rsync version: it creates empty migration files (although they would not have to be empty in this case - you could save the command output as well) in the path of your choosing on the remote. Any file that exists is skipped in the migrations array - and if a migration fails (i.e. one of the shell commands fails), the play halts and no file is created, so next time the same migration will be run again.
Note that the "vars" could come from any variables source (like group_vars files or included files) so you could have a "migrations.yml" file included from a playbook.