I'm struggling to find a way to override Ansible module option defaults without hand-rolling it with variables. Even better would be a way to override module option defaults for only a subset of hosts.
Say on a couple of hosts Git is available at /bin/git, as expected. On a couple of other hosts Git is at /usr/local/bin/git. How can I override the git module's executable option default for the latter group of hosts?
At the moment I'm setting a host group variable like:
git_executable=/usr/local/bin/git
and using it with the default(omit) filter everywhere git is used, like so:
- git: "executable={{git_executable|default(omit)}} ..."
So it gets properly overridden on hosts where it's defined, and ignored on others.
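Spelled out in long form, the workaround looks roughly like this (the group name, repository, and destination are just placeholders, not my real setup):

# group_vars/usrlocal_git.yml -- hosts whose Git lives in /usr/local/bin
git_executable: /usr/local/bin/git

# wherever the git module is used
- git:
    repo: https://example.com/some/repo.git             # placeholder repository
    dest: /opt/some/repo                                 # placeholder destination
    executable: "{{ git_executable | default(omit) }}"   # falls back to the module default when the variable is undefined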
executable may not be the best example here, since that can be controlled with the PATH environment variable. So what about any kind of module option in general that I'd like to override for just some hosts, but otherwise fall back to the module default?
Since there are a couple more such basic differences in this environment, it's quite tedious to sprinkle this kind of variable-based default lookup all over the place just in case a task runs on a host with a non-default setup. Is there a better way to do this?
I don't think there is a better option. Modules only know what you pass to them. They do not have access to global vars, server facts or anything else unless you explicitly pass it as a module parameter.
If this really, really is important and you want to invest some time, you could create your own action plugin(s). Action plugins are local actions and therefore have access to the Ansible Runner class and all its properties, including facts. So you could handle the default parameters or executable detection in there based on server facts and then call the git (or whatever) module programmatically. Huge overhead in my opinion, but that depends on your point of view and might be feasible on your end.
Take care, though: action plugins are 100% undocumented. Ansible 2.0 is going to be released in the coming days. They claim 100% backward compatibility, but I wouldn't be surprised if that only counts for documented features.
In this specific case, as long as the git executable is in PATH or in /sbin, /usr/sbin, or /usr/local/sbin, the git module will find it, because it uses basic.get_bin_path().
On the larger topic, personally I would go with what you already did. But if you are bent on it, one other possible hack would be to [mis]use the include statement to create a wrapper for each module that supplies the default value you want from some variable.
Obviously you would have to specify the path somewhere yourself, either in group_vars or host/role/... vars, or in a variable defined in the play's vars section.
$ cat my_echo.yml
- shell: "{{echo_exec}} '{{text}}'"
$ cat playbook.yml
- hosts: localhost
  tags: so
  gather_facts: False
  vars:
    echo_exec: echo
  tasks:
    - include: my_echo.yml text='some text'
      changed_when: False

- hosts: localhost
  tags: so
  gather_facts: False
  vars:
    echo_exec: printf
  tasks:
    - include: my_echo.yml text='some text'
      changed_when: False
$ ansible-playbook playbook.yml -t so -v
PLAY [localhost] **************************************************************
TASK: [shell {{echo_exec}} '{{text}}'] ****************************************
changed: [localhost] => {"changed": true, "cmd": "echo 'some text'", "delta": "0:00:00.003782", "end": "2015-03-20 17:45:58.352069", "rc": 0, "start": "2015-03-20 17:45:58.348287", "stderr": "", "stdout": "some text", "warnings": []}
PLAY [localhost] **************************************************************
TASK: [shell {{echo_exec}} '{{text}}'] ****************************************
changed: [localhost] => {"changed": true, "cmd": "printf 'some text'", "delta": "0:00:00.003705", "end": "2015-03-20 17:45:58.690657", "rc": 0, "start": "2015-03-20 17:45:58.686952", "stderr": "", "stdout": "some text", "warnings": []}
PLAY RECAP ********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
$
I am having a strange issue that has driven me nuts for days; I'm hoping someone may be able to point me in the right direction. I am attempting to run a simple playbook that downloads a git repository and then builds from source. In one of my playbooks this works fine, but in my second playbook I get an error every time I attempt to run the make command.
Keeping it simple for brevity...
tasks:
  - name: Set Python PATH
    become: yes
    shell: export PYTHONPATH=/usr/local/lib/python3/dist-packages

  - name: Update bashrc with PYTHONPATH
    lineinfile:
      path: /home/vagrant/.bashrc
      line: export PYTHONPATH=/usr/local/lib/python3/dist-packages

  - name: cmake
    become_user: vagrant
    shell: cmake ..
    args:
      chdir: /home/vagrant/application/build
This works fine, though I had to use become_user: vagrant, even though I did not in my other playbook. (I've split the cmake, make, and make install commands up for troubleshooting.) Then I run:
- name: make
  become_user: vagrant
  shell: make
  args:
    chdir: /home/vagrant/application/build
This fails every time with a VERY large amount of red text. I have logged in to the target and can run make successfully there, but I cannot via Ansible.
I have tried the community make plugin and many variations of this, including become: yes, but I get the errors every time. This is the beginning of the error:
fatal: [server.local]: FAILED! => {"changed": true, "cmd": ["make", "all"], "delta": "0:00:01.218515", "end": "2021-12-30 16:39:36.586928", "msg": "non-zero return code", "rc": 2, "start": "2021-12-30 16:39:35.368413", "stderr": "ERROR:gnuradio.grc.core.FlowGraph:Failed to evaluate variable block variable_ax25_decoder_0_0\nTraceback (most recent call last):\n File \"/usr/lib/python3/dist-packages/gnuradio/grc/core/FlowGraph.py\", line 227, in renew_namespace\n
Do you have any suggestions as to why make would fail in this instance when it runs fine in another? I've tried multiple VMs, fresh installs, etc., but am having no joy.
My corporate firewall policy allows only 20 connections per 60 seconds between the same source and destination.
Owing to this, the Ansible play hangs after a while.
I would like multiple tasks to use the same SSH session rather than creating new sessions. For this purpose I set pipelining = True in the local folder's ansible.cfg (below) as well as on the command line.
cat /opt/automation/startservices/ansible.cfg
[defaults]
host_key_checking = False
gathering = smart
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
control_path = %(directory)s/%%h-%%r
pipelining = True
ANSIBLE_SSH_PIPELINING=0 ansible-playbook -i /opt/automation/startservices/finalallmw.hosts /opt/automation/startservices/va_action.yml -e '{ dest_host: myremotehost7 }' -e dest_user=oracle
The playbook is too big to share here, but it is this task which loops, and this is where it hangs due to more than 20 SSH connections in 60 seconds.
- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  synchronize:
    src: "{{ item.path }}"
    dest: "{{ playbook_dir }}/homedirbackup/{{ inventory_hostname }}/{{ dtime }}/"
    mode: pull
    copy_links: yes
  with_items:
    - "{{ to_copy.files }}"
With the pipelining settings set, my play still hangs after 20 connections.
Below are the playbook settings:
hosts: "{{ groups['dest_nodes'] | default(groups['all']) }}"
user: "{{ USER | default(dest_user) }}"
any_errors_fatal: True
gather_facts: false
tags: always
vars:
  ansible_host_key_checking: false
  ansible_ssh_extra_args: -o StrictHostKeyChecking=no -o ConnectionAttempts=5
After the suggestions so far in this thread, the issue persists. Below is my local directory's ansible.cfg:
$ cat /opt/automation/startservices/ansible.cfg
# config file for ansible -- http://ansible.com/
# ==============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
host_key_checking = False
roles_path = roles/
gathering = smart
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=1200s -o ControlPath=~/.ansible/cp/%r#%h:%p
[persistent_connection]
control_path_dir = ~/.ansible/cp
$
Can you please suggest a solution on the Ansible side so that all tasks use the same SSH session? Is pipelining not working here?
First: pipelining = True does not do what you are looking for. It reduces the number of network operations, but not the number of ssh connections. Check the docs for more information.
Imho, it is still a good thing to use, as it will speed up your playbooks.
What you want to use is the "persistent control mode" which is a feature of OpenSSH to keep a connection open.
You could for example do this in your ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=1200
This will keep the connection open for 1200 seconds.
The problem is not with the Ansible controller running the module(s) (i.e. copying the necessary temporary AnsiballZ files to your target and executing them; you already have the correct options in ansible.cfg to use master sessions for that), but with the synchronize module itself, which needs to spawn its own SSH connections to transfer files between the relevant servers while it is running on the target.
The latest version of the synchronize module is now part of the ansible.posix collection and has recently gained two options that will help you work around your problem by applying the use of master sessions to the module itself while using rsync (a usage sketch follows the option names below):
ssh_multiplexing: yes
use_ssh_args: yes
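A sketch of your pull task with those options applied (not tested here; it assumes the ansible.posix collection is installed, for example with ansible-galaxy collection install ansible.posix, and uses the option names exactly as quoted above, so verify them against the module documentation for your collection version):

- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  ansible.posix.synchronize:
    src: "{{ item.path }}"
    dest: "{{ playbook_dir }}/homedirbackup/{{ inventory_hostname }}/{{ dtime }}/"
    mode: pull
    copy_links: yes
    ssh_multiplexing: yes   # reuse the controller's persistent ControlMaster sessions
    use_ssh_args: yes       # apply the ssh_args from ansible.cfg to rsync's ssh transport
  with_items:
    - "{{ to_copy.files }}"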
Although it is possible to install this collection on Ansible 2.9 to override the older stock module version (which does not have those options), I strongly suggest you use Ansible 2.10 or 2.11. My personally preferred installation method for Ansible is through pip, as it lets you install any Ansible version on any OS, for any user, in any number of (virtual) environments.
Regarding pip, the versioning has changed (and is quite a mess IMO...)
ansible is now a meta package with its own independent versioning.
the usual ansible binaries (ansible, ansible-playbook, ...) are packaged in ansible-core, whose version corresponds to what you get when running ansible --version
the meta ansible package installs a set of collections by default (including the ansible.posix one if I'm not wrong)
=> To get ansible --version => 2.10, you want to install the ansible pip package 3.x
=> To get ansible --version => 2.11, you want to install the ansible pip package 4.x
You will have to remove any previous version installed via pip in your current environment before proceeding.
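As a rough illustration of that mapping (indicative commands only; adapt them to your OS, user, and virtualenv):

$ pip uninstall ansible          # remove the previously installed pip version first
$ pip install 'ansible==4.*'     # meta package 4.x pulls in ansible-core 2.11
$ ansible --version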
I am trying to re-run an Ansible script for an old 3rd-party integration; the task looks like this:
- name: "mount s3fs Fuse FS on boot from [REDACTED] on [REDACTED]"
mount:
name: "{{ [REDACTED] }}/s3/file_access"
src: "{{ s3_file_access_bucket }}:{{ s3_file_access_key }}"
fstype: fuse.s3fs
opts: "_netdev,uid={{ uid }},gid={{ group }},mp_umask=022,allow_other,nonempty,endpoint={{ s3_file_access_region }}"
state: mounted
tags:
- [REDACTED]
I'm receiving this error:
fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /home/[REDACTED]: s3fs: there are multiple entries for the same bucket(default) in the passwd file.\n"}
I'm trying to find a passwd file to clean out, but I don't know where to find one.
Does anyone recognize this error?
s3fs checks /etc/passwd-s3fs and $HOME/.passwd-s3fs for credentials. It appears that one of these files has duplicate entries that you need to remove.
Your Ansible src stanza also attempts to supply credentials, but I do not believe this will work. Instead, you can supply these via the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
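For illustration, here is a minimal sketch that manages the credentials file with Ansible instead, which also keeps duplicate entries from accumulating (the access key and secret variable names are placeholders; the bucket variable comes from your task):

- name: write s3fs credentials file (format is bucket:access_key_id:secret_access_key)
  copy:
    dest: /etc/passwd-s3fs
    content: "{{ s3_file_access_bucket }}:{{ s3_access_key_id }}:{{ s3_secret_access_key }}\n"
    owner: root
    group: root
    mode: "0640"   # s3fs expects the credentials file not to be world-readable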
I'm having issues with Ansible picking up a module that I've added.
The module is called 'passwordstore' https://github.com/morphje/ansible_pass_lookup/.
I'm using Ansible 2.2.
In my playbook's directory, I've added a 'library' folder and added the contents of that GitHub repository to it. I've also tried uncommenting library = /usr/share/ansible/modules and adding the module files there, but it still doesn't get picked up.
I have also tried setting the environment variable ANSIBLE_LIBRARY=/usr/share/ansible/modules.
My Ansible playbook looks like this:
---
- name: example play
  hosts: all
  gather_facts: false
  tasks:
    - name: set password
      debug: msg="{{ lookup('passwordstore', 'files/test create=true')}}"
And when I run this I get this error:
ansible-playbook main.yml
PLAY [example play] ******************************************************
TASK [set password] ************************************************************
fatal: [backend.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
fatal: [mastery.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
to retry, use: --limit #/etc/ansible/roles/test-role/main.retry
Any guidance on what I'm missing? It may just be the way I'm trying to add the custom module, but any pointers would be appreciated.
It's a lookup plugin (not a module), so it should go into a directory named lookup_plugins (not library).
Alternatively, add the path to the cloned repository in ansible.cfg using the lookup_plugins setting.
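For example, with an ansible.cfg next to your playbook (the path to the clone is illustrative), keeping in mind that the plugin file itself must be named passwordstore.py for lookup('passwordstore', ...) to resolve:

[defaults]
lookup_plugins = ./lookup_plugins:/path/to/ansible_pass_lookup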
I'm trying to run my first playbook to install Java on four servers and subsequently define a JAVA_HOME environment variable.
ansible-playbook site.yml --check
PLAY [crave_servers] **********************************************************
GATHERING FACTS ***************************************************************
ok: [54.174.151.196]
ok: [54.174.197.35]
ok: [54.174.207.83]
ok: [54.174.208.240]
TASK: [java | install Java JDK] ***********************************************
changed: [54.174.197.35]
changed: [54.174.151.196]
changed: [54.174.208.240]
changed: [54.174.207.83]
ERROR: change handler (setvars) is not defined
I've placed my site.yml under /etc/ansible
---
- hosts: crave_servers
  remote_user: ubuntu
  sudo: yes
  roles:
    - java
I've placed main.yml under /etc/ansible/java/tasks
---
- name: install Java JDK
  apt: name=default-jdk state=present
  notify:
    - setvars
I've placed main.yml under /etc/ansible/handlers
---
- name: setvars
  shell: echo "JAVA_HOME=\"/usr/lib/jvm/java-7-openjdk-amd64\"" >> /etc/environment
Now, I'm not sure whether the syntax or structure of my handlers is correct, but it's obvious from the output that Ansible is able to find the correct role and execute the correct task; the task just can't find the handler.
Nobody else seems to have the same problem, and I don't really know how to debug it because my Ansible installation seems to be missing the config file.
You should put your handler in /etc/ansible/java/handlers/main.yml, as handlers are part of a role.
Remarks:
You should not use your handler as it is, since it would append the line to /etc/environment each time you run this playbook. I would recommend the lineinfile module instead (see the sketch after these remarks).
You should reconsider your decision to put Ansible playbooks into /etc.
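A minimal sketch of that handler using lineinfile (same JAVA_HOME path as in the question; untested here):

# /etc/ansible/java/handlers/main.yml
---
- name: setvars
  lineinfile:
    dest: /etc/environment
    line: JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64"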