Ansible - Define Inventory at run time - variables

I am a little new to Ansible, so bear with me if my questions are a bit basic.
Scenario:
I have a few groups of remote hosts, such as [EPCs], [Clients] and [Testers].
I am able to configure them just the way I want them to be.
Problem:
I need to write a playbook which, when run, asks the user for the inventory at run time.
As an example, when the playbook is run, the user should be prompted in the following way:
"Enter the number of EPCs you want to configure"
"Enter the number of clients you want to configure"
"Enter the number of testers you want to configure"
What should happen:
Now, for instance, the user enters 2, 5 and 8 respectively.
The playbook should then only address the first 2 nodes in the group [EPCs], the first 5 nodes in the group [Clients] and the first 8 nodes in the group [Testers].
I don't want to create a large number of sub-groups; for instance, if I have 20 EPCs, I don't want to define 20 groups for them. I want something like a dynamic inventory that automatically configures the machines according to the user's input at run time, using the vars_prompt option or something similar.
Let me post part of my playbook for a better understanding of what should happen:
---
- hosts: epcs   # Now this is the part where I need a lot of flexibility
  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"
  gather_facts: no
  tasks:
    - name: Check if path exists
      stat: path=/home/khan/Desktop/tobefetched/file1.txt
      register: st
    - name: It exists
      debug: msg='Path existence verified!'
      when: st.stat.exists
    - name: It doesn't exist
      debug: msg="Path does not exist"
      when: st.stat.exists == false
    - name: Copy file2 if it exists
      fetch: src=/home/khan/Desktop/tobefetched/file2.txt dest=/home/khan/Desktop/fetched/ flat=yes
      when: st.stat.exists
    - name: Run remotescript.sh and save the output of script to output.txt on the Desktop
      shell: cd /home/imran/Desktop; ./remotescript.sh > output.txt
    - name: Find and replace a word in a file placed on the remote node using variables
      shell: cd /home/imran/Desktop/tobefetched; sed -i 's/{{name}}/{{quest}}/g' file1.txt
      tags:
        - replace
@gli I tried your solution. I have a group in my inventory named test with two nodes in it. When I enter 0..1 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix1)
Similarly when I enter 1..2 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix1)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix2)
changed: [vm1] => (item=some_prefix2)
Likewise, when I enter 4..5 (nodes not even present in the inventory), I get:
TASK: [echo sequence] *********************************************************
changed: [vm1] => (item=some_prefix4)
changed: [vm2] => (item=some_prefix4)
changed: [vm1] => (item=some_prefix5)
changed: [vm2] => (item=some_prefix5)
Any help would be really appreciated. Thanks!

You should use vars_prompt to get information from the user, add_host to update hosts dynamically, and with_sequence for loops:
$ cat aaa.yml
---
- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: range
      prompt: Enter range of EPCs (e.g. 1..5)
      private: False
      default: "1"
  pre_tasks:
    - name: Set node id variables
      set_fact:
        start: "{{ range.split('..')[0] }}"
        stop: "{{ range.split('..')[-1] }}"
    - name: "Add hosts:"
      add_host: name="host_{{item}}" groups=just_created
      with_sequence: "start={{start}} end={{stop}}"

- hosts: just_created
  gather_facts: False
  tasks:
    - name: echo sequence
      shell: echo "cmd"
The output will be:
$ ansible-playbook aaa.yml -i 'localhost,'
Enter range of EPCs (e.g. 1..5) [1]: 0..1
PLAY [localhost] **************************************************************
TASK: [Set node id variables] *************************************************
ok: [localhost]
TASK: [Add hosts:] ************************************************************
ok: [localhost] => (item=0)
ok: [localhost] => (item=1)
PLAY [just_created] ***********************************************************
TASK: [echo sequence] *********************************************************
fatal: [host_0] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
fatal: [host_1] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/gli/aaa.retry
host_0 : ok=0 changed=0 unreachable=1 failed=0
host_1 : ok=0 changed=0 unreachable=1 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
Here it failed because host_0 and host_1 are unreachable; for you it will work fine.
By the way, I used the more powerful concept of a "range of nodes". If you don't need it, it is quite simple to hard-code the start and ask only for the "stop" value in the prompt.
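A minimal sketch of that simplified variant (untested; it just mirrors the first play above with a fixed start, and matches the prompt wording from the question):
- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: stop
      prompt: Enter the number of EPCs you want to configure
      private: False
      default: "1"
  pre_tasks:
    - name: Add the first N hosts to a dynamic group
      add_host: name="host_{{item}}" groups=just_created
      with_sequence: "start=1 end={{stop}}"
The second play against the just_created group stays exactly as above.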

I don't think you can define an inventory at run time. One thing you can do is write a wrapper script around Ansible which first prompts the user for the hosts and then dynamically builds an ansible-playbook command.
I would prefer doing this in Python, but you can use any language of your choice.
$ cat ansible_wrapper.py
import ConfigParser
import os

nodes = ''
inv = {}

# Ask for a comma-separated count of hosts per inventory group, e.g. "2,5,8"
hosts = raw_input("Enter hosts: ")
hosts = hosts.split(",")

# Parse the static INI inventory file named 'hosts'
config = ConfigParser.ConfigParser(allow_no_value=True)
config.readfp(open('hosts'))
sections = config.sections()

# Map each inventory group to the requested number of hosts
for i in range(len(sections)):
    inv[sections[i]] = hosts[i]

# Collect the first N host names of each group into a semicolon-separated string
for key, value in inv.items():
    for i in range(int(value)):
        nodes = nodes + config.items(key)[i][0] + ";"

# Pass the selected hosts to the playbook as an extra variable
command = 'ansible-playbook -i hosts myplaybook.yml -e "nodes=%s"' % (nodes)
print "Running command: ", command
os.system(command)
Note: I have only tried running this script with Python 2.7.
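For completeness, here is a rough sketch of how myplaybook.yml might consume that nodes variable. It assumes your Ansible version accepts a semicolon-delimited host pattern (that is the separator the wrapper builds); if it does not, change the wrapper to join the hosts with ',' instead:
- hosts: "{{ nodes }}"   # e.g. "epc1;epc2;client1;" as built by the wrapper
  gather_facts: no
  tasks:
    - name: Show which hosts were selected at run time
      debug:
        msg: "Configuring {{ inventory_hostname }}"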


ansible playbook / bash script

bonjour / hello
First of all, sorry for my very poor English; I'm French and I used Google Translate.
I'm using an Ansible playbook to run a bash script. In my test environment it works, but not in my production environment, and I don't understand why.
It's a script that scans for violations, checks the status of the services, and generates a report at the end.
In Ansible the task completes, but the report is not generated, whereas when I run the script directly from the terminal of my VM (hosted on AWS) the script works and generates the report for me.
Can you help me, please?
- name: run the scan to generate deviation report
  become: yes
  command: sh /<path>
I tried several commands but I always get the same result. I also tried the Ansible script module and launched the script in debug mode with bash -x.
I tested it like below.
Script:
root@swarm01:/myworkspace/ansible# cat hello.sh
#!/usr/bin/sh
touch abc
Playbook:
root@swarm01:/myworkspace/ansible# cat test.yml
- name: simple script
  hosts: localhost
  tasks:
    - name: simple script
      command: sh hello.sh
root@swarm01:/myworkspace/ansible# ansible-playbook test.yml
PLAY [simple script] ***********************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
ok: [localhost]
TASK [simple script] ***********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It worked for me; I see an abc file got created.
I suggest using the script module.
This will allow the playbook to run the script on remote systems as well as on localhost. It is probably best to make sure you have the shebang on the first line of the script.
- name: simple script
  hosts: localhost
  tasks:
    - name: simple script
      script:
        cmd: hello.sh
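The same pattern can target remote hosts and run with elevated privileges, which matches the original scenario. A rough sketch (the group name and script path are placeholders):
- name: run the scan script on the production hosts
  hosts: prod                      # placeholder group name
  become: yes
  tasks:
    - name: run the scan to generate deviation report
      script:
        cmd: /path/to/scan.sh      # placeholder path; the file is copied to the remote host and executed there
      register: scan_result
    - name: show the script output
      debug:
        var: scan_result.stdout_lines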

Using Ansible authorized_key module to copy SSH key fails with sshpass needed error

I am trying to copy my .pub key file located in ~/.ssh/mykey.pub to one of the remote hosts using Ansible.
I have a very simple playbook containing this task:
- name: SSH-copy-key to target
  hosts: all
  tasks:
    - name: Copying local SSH key to target
      ansible.posix.authorized_key:
        user: '{{ ansible_user_id }}'
        state: present
        key: "{{ lookup('file', '~/.ssh/mykey.pub') }}"
Since the host is 'new', I am adding the --ask-pass parameter so that I am asked for the SSH password on the first connection attempt.
However, I receive an error saying that I need to install the sshpass program.
The following is returned:
➜ ansible ansible-playbook -i inventory.yaml ssh.yaml --ask-pass
SSH password:
PLAY [SSH-copy-key to target] ********************************************************************
TASK [Gathering Facts] ***************************************************************************
fatal: [debian]: FAILED! => {"msg": "to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program"}
PLAY RECAP ***************************************************************************************
debian : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
➜ ansible
I am executing Ansible from a MacBook. I tried replacing the 'key' value with the URL of my GitHub account's keys; the same error appears.
key: https://github.com/myuseraccount.keys
Any ideas?

Ansible ssh fails with error: Data could not be sent to remote host

I have an Ansible playbook that executes a shell script on the remote host "10.8.8.88" as many times as the number of files provided as a parameter:
ansible-playbook test.yml -e files="file1,file2,file3,file4"
The playbook task looks like below:
- name: Call ssh
  shell: ~./execute.sh {{ item }}
  with_items: "{{ files.split(',') }}"
This works fine for fewer files, say 10 to 15. But I happen to have 145 files in the argument.
This is when the execution broke and the play failed mid-way with the error message below:
TASK [shell] *******************************************************************
[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: entrycurrdb.stdout.find("{{ BASEPATH
}}/{{ vars[(item | splitext)[1].split('.')[1]] }}/{{ item | basename }}") == -1
and actualfile.stat.exists == True
[WARNING]: sftp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.8.8.88]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"10.8.8.88\". Make sure this host can be reached over ssh: ", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.8.8.88 : ok=941 changed=220 unreachable=1 failed=0 skipped=145 rescued=0 ignored=0
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have the latest Ansible, and the "pipelining" and "ssh" settings in ansible.cfg are at their defaults.
I have the following questions.
How can I resolve the above issue?
I guess this could be due to a network issue. Is it possible to run an infinite SSH ping to the remote server for testing purposes, to see when the Ansible command line breaks? It would help me prove my case. A sample command that keeps pinging the remote over SSH is what I'm looking for.
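Something like this is what I have in mind (a rough sketch; the user, host and interval are placeholders):
# Log a timestamped OK/FAILED line for every SSH attempt
while true; do
    if ssh -o ConnectTimeout=5 user@10.8.8.88 'true' 2>/dev/null; then
        echo "$(date '+%F %T') ssh OK"
    else
        echo "$(date '+%F %T') ssh FAILED"
    fi
    sleep 5
done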
Is it possible to force Ansible to retry the SSH connection a few times in case of such failures, so that it may connect during one of the retries? If so, I would appreciate knowing where and how that can be set in the playbook as a vars variable rather than in the ansible.cfg file.
https://docs.ansible.com/ansible/2.4/intro_configuration.html#retries
Something similar to:
vars:
  ansible_ssh_private_key_file: "{{ key1 }}"
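For instance, if the ssh connection plugin's retry count can be set per play, I imagine something like the following (an untested sketch; I am assuming the variable is called ansible_ssh_retries):
- hosts: all
  vars:
    ansible_ssh_retries: 5   # assumed variable name; number of connection attempts before the host is marked unreachable
  tasks:
    - name: Call ssh
      shell: ~./execute.sh {{ item }}
      with_items: "{{ files.split(',') }}"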
Many Thanks !!

Problems with ansible 2.7.9 using host_vars (undefined variable)

Ansible 2.7.9 is not using host_vars
I've set up a very simple environment with 3 hosts, mainly for testing purposes. I have the hosts:
- ansible1 (this is where I store the code)
- ansible2
- ansible3
My inventory:
[ansible@ansible1 ~]$ cat /etc/ansible/hosts
[common]
ansible1
ansible2
ansible3
My ansible.cfg looks like this:
[ansible@ansible1 ~]$ cat /etc/ansible/ansible.cfg
[defaults]
roles_path = /etc/ansible/roles
inventory = /etc/ansible/hosts
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining = False
[accelerate]
[selinux]
[colors]
I have defined a master playbook called common.yml which calls the common role:
[ansible@ansible1 ~]$ ls /etc/ansible/roles/
common  common.retry  common.yml
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common.yml
--- # Playbook for webservers
- hosts: common
  roles:
    - common
[ansible@ansible1 ~]$
The tasks/main.yml:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/tasks/main.yml
- name: test ansible1
  lineinfile:
    dest: /tmp/ansible.txt
    create: yes
    line: "{{ myvar }}"
- name: set ansible2
  lineinfile:
    dest: /tmp/ansible2.txt
    create: yes
    line: "hi"
[ansible@ansible1 ~]$
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
Then I placed some info in /etc/ansible/host_vars:
[ansible@ansible1 ~]$ ls /etc/ansible/hosts_vars/
ansible2.yml
[ansible@ansible1 ~]$ cat /etc/ansible/hosts_vars/ansible2.yml
myvar: "myvar from host_vars"
[ansible@ansible1 ~]$
This works great with the playbook:
[ansible@ansible1 ~]$ ansible-playbook /etc/ansible/roles/common.yml --limit ansible2
PLAY [common] ******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [ansible2]
TASK [common : test ansible1] **************************************************
changed: [ansible2]
TASK [common : set ansible2] ***************************************************
changed: [ansible2]
PLAY RECAP *********************************************************************
ansible2 : ok=3 changed=2 unreachable=0 failed=0
I see the file with the content of myvar:
[root@ansible2 ~]# cat /tmp/ansible.txt
value of myvar from common/vars
[root@ansible2 ~]#
But then I don't understand why it is not taking the value from /etc/ansible/hosts_vars/ansible2.yml; in fact, if I comment out the myvar line in /etc/ansible/roles/common/vars/main.yml it says undefined variable:
[ansible@ansible1 ansible]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
This is as expected: vars/main.yml gets sourced automatically while executing the playbook; consider this file as global variables.
The reason ansible2.yml is not getting sourced is that Ansible expects you to source it explicitly while executing.
You can use the code below for that (generic):
---
- name: play
  hosts: "{{ hosts }}"
  tasks:
    - include_vars: "{{ hosts }}.yml"
Trigger:
ansible-playbook -i inventory --extra-vars "hosts=ansible2"
Ansible uses the following precedence for variable values, from least to most important:
role defaults
inventory file or script group vars
inventory group_vars/all
playbook group_vars/all
inventory group_vars/*
playbook group_vars/*
inventory file or script host vars
inventory host_vars/*
playbook host_vars/*
host facts
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
role (and include_role) params
include params
include_vars
set_facts / registered vars
extra vars (always win precedence)
So it is better not to use roles/vars here, because it takes precedence over host_vars; I should instead use roles/defaults, which has a lower priority.
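As a sketch of that fix (assuming the role layout from the question), the fallback value moves into the role's defaults so the per-host file can override it:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/defaults/main.yml
# lowest-priority fallback; any host_vars value for myvar wins over this
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/defaults"
[ansible@ansible1 ~]$ cat /etc/ansible/host_vars/ansible2.yml
# per-host override; note the directory must be named host_vars for Ansible to pick it up automatically
myvar: "myvar from host_vars"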

Set different ORACLE_HOME and PATH environment variable using Ansible

I'm currently querying multiple databases and capturing the results of the query.
The way I'm doing it is: I write a task which copies a shell script, something like below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task as below:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{item}} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook, so that the task picks up the right ORACLE_HOME and PATH and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
Similarly for the PATH variable.
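For example, both values could be carried in the same inventory (a sketch; the paths are placeholders, and PATH here is just an ordinary host variable that the play reads back through hostvars):
[is_hosts]
greenhat ORACLE_HOME=/tmp PATH=/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin
localhost ORACLE_HOME=/sbin PATH=/usr/local/bin:/usr/bin:/bin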
Then reference those host variables from the task's environment. A sample playbook that demonstrates the results:
- hosts: is_hosts
  gather_facts: false
  vars:
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output; you can see the ORACLE_HOME variable was indeed changed, as defined per host:
TASK [print results] ************************************************************************************************************************************************************************************************
ok: [greenhat] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/tmp",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
ok: [localhost] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/sbin",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
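As a variation on the same idea, the per-host values could also live in host_vars files instead of inline in the inventory (a sketch; the paths are placeholders):
# host_vars/greenhat.yml
ORACLE_HOME: /tmp
# host_vars/localhost.yml
ORACLE_HOME: /sbin
The playbook above picks these up unchanged, since hostvars[inventory_hostname] resolves the value either way.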