I'm currently querying multiple databases and capturing the results of each query.
The way I'm doing it is by writing a task that copies a shell script, something like the one below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task like this:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{item}} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook so that the task picks up the right ORACLE_HOME and PATH for each host and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
and similarly for the PATH variable.
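If you prefer to keep the inventory file itself small, an equivalent option is one YAML file per host under host_vars/, named after the inventory hostname (a sketch; the paths are placeholders, not your real ORACLE_HOME/PATH):
# host_vars/greenhat.yml
ORACLE_HOME: /tmp
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

# host_vars/localhost.yml
ORACLE_HOME: /sbin
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin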
Then reference them in your task through hostvars. Here is a sample playbook that demonstrates the result:
- hosts: is_hosts
  gather_facts: false
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output: you can see the ORACLE_HOME variable was indeed changed, and set per host as defined in the inventory:
TASK [print results] ************************************************************************************************************************************************************************************************
ok: [greenhat] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/tmp",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
ok: [localhost] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/sbin",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
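Applied back to the original sqlplus task, the combination would look roughly like this (a sketch assuming ORACLE_HOME and PATH are defined per host as above, and that factor_dbs is already populated):
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    # per-host values resolved from inventory vars or host_vars
    PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
    ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
  with_items: "{{ factor_dbs.split('\n') }}"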
Is it possible to run a command on the remote machine and store the resulting output into a variable?
I am trying to get the kernel version and install the matching headers like this:
---
- hosts: all
  tasks:
    - name: install headers
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-`uname -r`
          #- a lot of other packages here
Unfortunately uname -r is not executed here.
I am aware of this question: Ansible: Store command's stdout in new variable?
But it looks like this is another topic.
By definition:
Ansible facts are data related to your remote systems, including
operating systems, IP addresses, attached filesystems, and more.
In this link you can see all the Ansible facts that you can use:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html
One of those variables is ansible_kernel, which is the kernel version of your remote system. Ansible gathers these variables by default, but if you want to be sure that it does, use gather_facts: yes.
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ ansible_kernel }}
I found a solution, but I am not sure if it is really elegant:
---
- hosts: all
  tasks:
    - name: Getting Kernel Version
      command: "uname -r"
      register: kernel_version
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ kernel_version.stdout }}
Ansible 2.7.9 is not using host_vars
I've set up a very simple environment with 3 hosts, mainly for testing purposes. I have the hosts:
- ansible1 (this is where I store the code)
- ansible2
- ansible3
My inventory:
[ansible@ansible1 ~]$ cat /etc/ansible/hosts
[common]
ansible1
ansible2
ansible3
My ansible.cfg looks like this:
[ansible@ansible1 ~]$ cat /etc/ansible/ansible.cfg
[defaults]
roles_path = /etc/ansible/roles
inventory = /etc/ansible/hosts
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining = False
[accelerate]
[selinux]
[colors]
I have defined a master playbook called common.yml which calls the common role:
[ansible@ansible1 ~]$ ls /etc/ansible/roles/
common common.retry common.yml
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common.yml
--- # Playbook for webservers
- hosts: common
roles:
- common
[ansible@ansible1 ~]$
The tasks/main.yml:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/tasks/main.yml
- name: test ansible1
  lineinfile:
    dest: /tmp/ansible.txt
    create: yes
    line: "{{ myvar }}"
- name: set ansible2
  lineinfile:
    dest: /tmp/ansible2.txt
    create: yes
    line: "hi"
[ansible@ansible1 ~]$
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyright 2019"
myvar: "value of myvar from common/vars"
Then I placed some info at /etc/ansible/host_vars:
[ansible@ansible1 ~]$ ls /etc/ansible/hosts_vars/
ansible2.yml
[ansible@ansible1 ~]$ cat /etc/ansible/hosts_vars/ansible2.yml
myvar: "myvar from host_vars"
[ansible@ansible1 ~]$
This works great with the playbook:
[ansible@ansible1 ~]$ ansible-playbook /etc/ansible/roles/common.yml --limit ansible2
PLAY [common] ******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [ansible2]
TASK [common : test ansible1] **************************************************
changed: [ansible2]
TASK [common : set ansible2] ***************************************************
changed: [ansible2]
PLAY RECAP *********************************************************************
ansible2 : ok=3 changed=2 unreachable=0 failed=0
I see the file with the content of myvar:
[root@ansible2 ~]# cat /tmp/ansible.txt
value of myvar from common/vars
[root@ansible2 ~]#
But then I don't understand why it is not taking the value from /etc/ansible/hosts_vars/ansible2.yml; in fact, if I comment the line out of /etc/ansible/roles/common/vars/main.yml, it says the variable is undefined:
[ansible@ansible1 ansible]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyright 2019"
myvar: "value of myvar from common/vars"
This is as expected: the role's vars/main.yml gets sourced automatically while executing the playbook; consider this file as global variables for the role.
The reason ansible2.yml is not getting sourced is that Ansible expects you to source it explicitly while executing.
You can use the (generic) code below for that:
---
- name: play
  hosts: "{{ hosts }}"
  tasks:
    - include_vars: "{{ hosts }}.yml"
Trigger it with:
ansible-playbook -i inventory <playbook-name> --extra-vars "hosts=ansible2"
Ansible uses the following precedence for variable values, from least to most important:
role defaults
inventory file or script group vars
inventory group_vars/all
playbook group_vars/all
inventory group_vars/*
playbook group_vars/*
inventory file or script host vars
inventory host_vars/*
playbook host_vars/*
host facts
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
role (and include_role) params
include params
include_vars
set_facts / registered vars
extra vars (always win precedence)
So it is better to forget about using roles/vars here, because it takes precedence over host_vars; I should instead use roles/defaults, which has a lower priority.
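A minimal sketch of that change, reusing the layout from the question (and assuming the per-host file lives in a host_vars/ directory next to the inventory): move the variable out of roles/common/vars/main.yml into roles/common/defaults/main.yml so the host_vars value can override it.
# roles/common/defaults/main.yml  (role defaults: low precedence, acts as a fallback)
myvar: "value of myvar from common/defaults"

# host_vars/ansible2.yml  (inventory host_vars: higher precedence, wins for ansible2)
myvar: "myvar from host_vars"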
How can I set block vars in Ansible (visible only to tasks in the block)?
I tried:
---
- hosts: test
  tasks:
    - block:
        - name: task 1
          shell: "echo {{item}}"
          with_items:
            - one
            - two
but it seems that this is the wrong way.
If you want to define a variable for a block:
- block:
    - debug:
        var: var_for_block
  vars:
    var_for_block: "value for var_for_block"
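For clarity, a slightly longer sketch (same pattern, with hypothetical extra tasks) showing the scope: every task inside the block sees var_for_block, while a task outside the block finds it undefined:
- hosts: test
  gather_facts: false
  tasks:
    - block:
        - debug:
            var: var_for_block
        - debug:
            msg: "still visible inside the block: {{ var_for_block }}"
      vars:
        var_for_block: "value for var_for_block"
    - debug:
        var: var_for_block   # outside the block: reported as undefined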
If you want to "loop over blocks" as your code suggests - you can't. It's not implemented in Ansible.
Follow this thread.
For now consider saving the tasks to a separate file and use include instead.
@Shasha99
You might be able to include a file with a block in it, so you can still benefit from the try/catch behaviour.
includeFile.yml:
- block:
    - name: Finding the package.
      shell: rpm -qa | grep "{{ pkgName }}"
      register: package
    - name: Uninstalling the package.
      shell: rpm -e "{{ package.stdout }}"   # use the registered command output, not the whole result object
  always:
    - debug: msg="this always executes"
main.yml:
---
- hosts: all
  vars:
    packageList: ["pkg1", "pkg2", "pkg3", "pkg4"]
  tasks:
    - include: includeFile.yml pkgName="{{ item }}"
      with_items: "{{ packageList }}"
How can one pass variable to ansible playbook in the command line?
The following command didn't work:
$ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' django_fixtures="tile_colors"
Where django_fixtures is my variable.
Reading the docs, I found the section Passing Variables On The Command Line, which gives this example:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
Other examples demonstrate how to load variables from a JSON string (≥1.2) or from a file (≥1.3).
Other answers state how to pass variables on the command line but not how to access them. So if you do:
--extra-vars "version=1.23.45 other_variable=foo"
In your yml file you assign these to scoped Ansible variables by doing something like:
vars:
  my_version: "{{ version }}"
  my_other_variable: "{{ other_variable }}"
An alternative to using command-line args is to use environment variables that are already defined within your session; you can reference these within your Ansible yml files like this:
vars:
  my_version: "{{ lookup('env', 'version') }}"
  my_other_variable: "{{ lookup('env', 'other_variable') }}"
ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"
For some reason none of the above answers worked for me. As I needed to pass several extra vars to my playbook in Ansible 2.2.0, this is how I got it working (note the -e option before each var):
ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2
You can use the --extra-vars option. See the docs
ansible-playbook test.yml --extra-vars "arg1=${var1} arg2=${var2}"
In the yml file you can use them like this:
---
arg1: "{{ var1 }}"
arg2: "{{ var2 }}"
Also, --extra-vars and -e are the same; you can use either of them.
s3_sync:
  bucket: ansible-harshika
  file_root: "{{ pathoftsfiles }}"
  validate_certs: false
  mode: push
  key_prefix: "{{ folder }}"
Here the variables named 'pathoftsfiles' and 'folder' are being used. The values for these variables can be given by the command below:
sudo ansible-playbook multiadd.yml --extra-vars "pathoftsfiles=/opt/lampp/htdocs/video/uploads/tsfiles/$2 folder=nitesh"
Note: don't use quotation marks when passing the values to the variables in the shell command.
This also worked for me if you want to use shell environment variables:
ansible-playbook -i "localhost," ldap.yaml --extra-vars="LDAP_HOST={{ lookup('env', 'LDAP_HOST') }} clustername=mycluster env=dev LDAP_USERNAME={{ lookup('env', 'LDAP_USERNAME') }} LDAP_PASSWORD={{ lookup('env', 'LDAP_PASSWORD') }}"
In Ansible, we can define variables when running our playbook by passing them at the command line using the --extra-vars (or -e) argument.
Below are some ways to pass variables to an Ansible playbook on the command line:
Method 1: key=value format
ansible-playbook site.yml --extra-vars "arg1=demo1 arg2=demo2"
Method 2: JSON string format
ansible-playbook site.yml --extra-vars '{"arg1":"demo1","arg2":"demo2"}'
The site.yml playbook will be:
---
- name: ansible playbook to print external variables
  hosts: localhost
  connection: local
  tasks:
    - name: print values
      ansible.builtin.debug:
        msg: "variable1 = {{ arg1 }}, variable2 = {{ arg2 }}"
      when: arg1 is defined and arg2 is defined
Method 3: Read from an external JSON file
If you have a lot of special characters, use a JSON or YAML file containing the variable definitions.
ansible-playbook site.yml --extra-vars "@vars.json"
The vars.json file:
{
    "arg1": "demo1",
    "arg2": "demo2"
}
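The same works with a YAML file, since --extra-vars accepts YAML as well as JSON; a sketch assuming the same two variables:
# vars.yml
arg1: "demo1"
arg2: "demo2"
It is passed the same way: ansible-playbook site.yml --extra-vars "@vars.yml".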
ansible-playbook release.yml --extra-vars "username=hello password=bye"
# You can now use the variables passed by the above command anywhere in the playbook, for example:
tasks:
  - name: Create a new user in Linux
    shell: useradd -m -p "{{ password }}" "{{ username }}"
ansible-playbook -i <inventory> <playbook-name> -e "proc_name=sshd"
You can use the above command with the playbook below.
---
- name: Service Status
  hosts: all   # the original snippet omitted a hosts line; "all" is assumed here
  gather_facts: False
  tasks:
    - name: Check Service Status (Linux)
      shell: pgrep "{{ proc_name }}"
      register: service_status
      ignore_errors: yes
    - debug:
        var: service_status.rc
I am a little new to Ansible, so bear with me if my questions are a bit basic.
Scenario:
I have a few groups of remote hosts, such as [EPCs], [Clients] and [Testers].
I am able to configure them just the way I want them to be.
Problem:
I need to write a playbook which, when run, asks the user for the inventory at run time.
As an example when a playbook is run the user should be prompted in the following way:
"Enter the number of EPCs you want to configure"
"Enter the number of clients you want to configure"
"Enter the number of testers you want to configure"
What should happen:
Now, for instance, the user enters 2, 5 and 8 respectively.
The playbook should then only address the first 2 nodes in the group [EPCs], the first 5 nodes in the group [Clients], and the first 8 nodes in the group [Testers].
I don't want to create a large number of sub-groups. For instance, if I have 20 EPCs, I don't want to define 20 groups for them; I want a somewhat dynamic inventory that automatically configures the machines according to the user input at run time, using the vars_prompt option or something similar.
Let me post part of my playbook for a better understanding of what should happen:
---
- hosts: epcs # Now this is the part where I need a lot of flexibility
  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"
  gather_facts: no
  tasks:
    - name: Check if path exists
      stat: path=/home/khan/Desktop/tobefetched/file1.txt
      register: st
    - name: It exists
      debug: msg='Path existence verified!'
      when: st.stat.exists
    - name: It doesn't exist
      debug: msg="Path does not exist"
      when: st.stat.exists == false
    - name: Copy file2 if it exists
      fetch: src=/home/khan/Desktop/tobefetched/file2.txt dest=/home/khan/Desktop/fetched/ flat=yes
      when: st.stat.exists
    - name: Run remotescript.sh and save the output of script to output.txt on the Desktop
      shell: cd /home/imran/Desktop; ./remotescript.sh > output.txt
    - name: Find and replace a word in a file placed on the remote node using variables
      shell: cd /home/imran/Desktop/tobefetched; sed -i 's/{{name}}/{{quest}}/g' file1.txt
      tags:
        - replace
@gli I tried your solution. I have a group in my inventory named test with two nodes in it. When I enter 0..1 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix1)
Similarly when I enter 1..2 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix1)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix2)
changed: [vm1] => (item=some_prefix2)
Likewise, when I enter 4..5 (nodes not even present in the inventory), I get:
TASK: [echo sequence] *********************************************************
changed: [vm1] => (item=some_prefix4)
changed: [vm2] => (item=some_prefix4)
changed: [vm1] => (item=some_prefix5)
changed: [vm2] => (item=some_prefix5)
Any help would be really appreciated. Thanks!
You should use vars_prompt for getting information from the user, add_host for updating hosts dynamically, and with_sequence for loops:
$ cat aaa.yml
---
- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: range
      prompt: Enter range of EPCs (e.g. 1..5)
      private: False
      default: "1"
  pre_tasks:
    - name: Set node id variables
      set_fact:
        start: "{{ range.split('..')[0] }}"
        stop: "{{ range.split('..')[-1] }}"
    - name: "Add hosts:"
      add_host: name="host_{{item}}" groups=just_created
      with_sequence: "start={{start}} end={{stop}}"

- hosts: just_created
  gather_facts: False
  tasks:
    - name: echo sequence
      shell: echo "cmd"
The output will be:
$ ansible-playbook aaa.yml -i 'localhost,'
Enter range of EPCs (e.g. 1..5) [1]: 0..1
PLAY [localhost] **************************************************************
TASK: [Set node id variables] *************************************************
ok: [localhost]
TASK: [Add hosts:] ************************************************************
ok: [localhost] => (item=0)
ok: [localhost] => (item=1)
PLAY [just_created] ***********************************************************
TASK: [echo sequence] *********************************************************
fatal: [host_0] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
fatal: [host_1] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/gli/aaa.retry
host_0 : ok=0 changed=0 unreachable=1 failed=0
host_1 : ok=0 changed=0 unreachable=1 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
Here it failed because host_0 and host_1 are unreachable; for you it will work fine.
By the way, I used the more powerful concept of a "range of nodes". If you don't need it, it is quite simple to fix the start value and ask only for the "stop" value in the prompt.
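For that simpler variant, the first play could be reduced to a single prompt for the count (a sketch; start is fixed at 1 so that entering N adds exactly host_1 through host_N):
- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: stop
      prompt: Enter the number of EPCs you want to configure
      private: False
      default: "1"
  pre_tasks:
    - name: "Add hosts:"
      add_host: name="host_{{ item }}" groups=just_created
      with_sequence: "start=1 end={{ stop }}"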
I don't think you can define an inventory at run time. One thing you can do is write a wrapper script around Ansible which first prompts the user for the hosts and then dynamically builds an ansible-playbook command.
I would prefer doing this using python, but you can use any language of your choice.
$ cat ansible_wrapper.py
import ConfigParser
import os

nodes = ''
inv = {}

hosts = raw_input("Enter hosts: ")
hosts = hosts.split(",")

config = ConfigParser.ConfigParser(allow_no_value=True)
config.readfp(open('hosts'))
sections = config.sections()

for i in range(len(sections)):
    inv[sections[i]] = hosts[i]

for key, value in inv.items():
    for i in range(int(value)):
        nodes = nodes + config.items(key)[i][0] + ";"

command = 'ansible-playbook -i hosts myplaybook.yml -e "nodes=%s"' % (nodes)
print "Running command: ", command
os.system(command)
Note: I've only tried running this script with Python 2.7.