Ansible: how to use with_dict with extra_vars?

How can I use with_dict with a key passed in via extra_vars?
I have tried everything I know, but every attempt fails with "with_dict expects a dict" :(
Here are all the files:
# vars.yml
rd1:
  Terry:
    user_name: terry_liu
    user_birth: 1994/05/11
  Cary:
    user_name: cary_lin
    user_birth: 1992/02/19
rd6:
  Jessie:
    user_name: jessie_chen
    user_birth: 1996/11/20
  Sherry:
    user_name: sherry_hsu
    user_birth: 1989/07/23
# test.yml
- name: demo
  hosts: test
  vars_files:
    - vars.yml
  tasks:
    - name: show data
      debug:
        msg: "{{ item }}"
      with_dict: "{{ dep }}"
#command
ansible-playbook -i inventory test.yml --extra-vars 'dep=rd1'
The inventory host is my test VM; it just has an IP and is reachable over SSH.
When I run the command, it outputs: fatal: [172.16.1.227]: FAILED! => {"msg": "with_dict expects a dict"}
I think I need a variable inside a variable, but I have tried many different ways and they all failed.
What I want is to pass a flat dep variable and get the corresponding data from vars.yml.
Thanks all, have a good day!

The problem is that "{{ dep }}" evaluates to the string "rd1" here:
with_dict: "{{ dep }}"
This is the reason for the error "with_dict expects a dict".
Instead, you need the vars lookup plugin, which resolves a variable by name. For example:
with_dict: "{{ lookup('vars', dep) }}"

Related

Ansible task includes undefined var, despite being defined in defaults/main.yml

I am trying to create a Galaxy role for our org's internal galaxy, which I am testing first locally. In our org we use a common list of defaults across all roles.
Ansible is throwing me a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite my having defined the variable in defaults/main.yml:
# Download
redis_version: "6.2.3"
redis_download_url: "https://download.redis.io/releases/redis-{{ redis_version }}.tar.gz"
When running my simple role/playbook.yml:
---
- hosts: all
  become: true
  tasks:
    - include: tasks/main.yml
This links to tasks/main.yml:
---
- name: Check ansible version
  assert:
    that: "ansible_version.full is version_compare('2.4', '>=')"
    msg: "Please use Ansible 2.4 or later"
- include: download.yml
  tags:
    - download
- include: install.yml
  tags:
    - install
It should pull the tar file from tasks/download.yml as stated:
---
- name: Download Redis
  get_url:
    url: "{{ redis_download_url }}"
    dest: /usr/local/src/redis-{{ redis_version }}.tar.gz
- name: Extract Redis tarball
  unarchive:
    src: /usr/local/src/redis-{{ redis_version }}.tar.gz
    dest: /usr/local/src
    creates: /usr/local/src/redis-{{ redis_version }}/Makefile
    copy: no
The redis_download_url var is defined in defaults/main.yml, where, as I understand it, Ansible should be able to locate it. I also have similar vars defined in defaults/task.yml, e.g.:
redis_user: redis
redis_group: "{{ redis_user }}"
redis_port: "6379"
redis_root_dir: "/opt/redis"
redis_config_dir: "/etc/redis"
redis_conf_file: "{{ redis_config_dir }}/{{ redis_port }}.conf"
redis_password: "change-me"
redis_protected_mode: "yes"
and I assume Ansible cannot find those either (though it does not get that far). I have also checked all file permissions and they seem fine.
Apologies in advance if the question is badly formatted.
As per documentation:
If you include a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or roles: [].
- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole
OR
- hosts: all
  become: true
  roles:
    - myrole
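If you only want to run one task file from the role while still loading its defaults, include_role also accepts a tasks_from parameter; a short sketch using the role and file names from the question:

- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole
        tasks_from: download  # runs the role's tasks/download.yml with defaults/main.yml loaded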

store output of command in variable in ansible

Concerning: Ansible-Playbooks
Is it possible to run a command on the remote machine and store the resulting output into a variable?
I am trying to get the kernel version and install the matching headers like this:
---
- hosts: all
  tasks:
    - name: install headers
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-`uname -r`
          #- a lot of other packages here
Unfortunately uname -r is not executed here.
I am aware of this question: Ansible: Store command's stdout in new variable?
But it looks like this is another topic.
By definition:
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more.
At this link you can see all the Ansible facts that you can use:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html
One of those variables is ansible_kernel, which is the kernel version of your remote system. By default Ansible gathers these variables, but if you want to be sure it does, set gather_facts: yes.
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ ansible_kernel }}
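To check what ansible_kernel resolves to before running the playbook, you can query the fact ad hoc with the setup module (the inventory path is whatever you normally use):

ansible all -i inventory -m setup -a "filter=ansible_kernel"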
I found a solution, but I am not sure whether it is really elegant:
---
- hosts: all
  tasks:
    - name: Getting Kernel Version
      command: "uname -r"
      register: kernel_version
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ kernel_version.stdout }}
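One small refinement to this register-based approach: uname -r is read-only, so marking the task with changed_when: false keeps the play recap from reporting a change on every run. A sketch of just that task:

- name: Getting Kernel Version
  command: "uname -r"
  register: kernel_version
  changed_when: false  # reading the kernel version never modifies the host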

Set different ORACLE_HOME and PATH environment variables using Ansible

I'm currently querying multiple databases and capturing the results of the query.
The way I'm doing it is by writing a task which copies over a shell script, something like the one below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task as below:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook so that the task picks up the right ORACLE_HOME and PATH and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
Do the same for the PATH variable.
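For instance, a sketch of the same inventory carrying both variables (the PATH values here are illustrative placeholders, not values from the question):

[is_hosts]
greenhat ORACLE_HOME=/tmp PATH=/tmp/bin:/usr/bin
localhost ORACLE_HOME=/sbin PATH=/sbin:/usr/bin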
Your task then reads these values from hostvars. Here is a sample playbook that demonstrates the results:
- hosts: is_hosts
  gather_facts: false
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output; you can see the ORACLE_HOME variable was indeed changed, as defined per host:
TASK [print results] *********************************************************
ok: [greenhat] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/tmp",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
ok: [localhost] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/sbin",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}

Ansible: How to specify an ssh key for a single task?

I have a playbook that creates an ec2 instance, copies a few files over to the instance, and then runs some shell commands on it.
The issue is that I want to be able to specify which ssh key ansible uses for the copy and shell tasks I am running, and to make sure it does not attempt to use this key for the other tasks, which run on the localhost. Here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    # CentOS 7 x86_64 Devel AtomicHost EBS HVM 20150306_01 (ami-07e6c437)
    # for us-west-2
    - ami: 'ami-07e6c437'
    - key_pair: 'my-key'
  tasks:
    - name: Create a centos server
      ec2:
        region: 'us-west-2'
        key_name: '{{ key_pair }}'
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        wait: true
        exact_count: 1
        count_tag:
          Name: my-instance
        instance_tags:
          Name: my-instance
      register: ec2
    # shows the json data for the instances created
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: Wait for SSH to come up
      wait_for: host={{ ec2['tagged_instances'][0]['public_ip'] }} port=22 delay=1 timeout=480 state=started
    - name: Accept new ssh fingerprints
      shell: ssh-keyscan -H "{{ ec2['tagged_instances'][0]['public_ip'] }}" >> ~/.ssh/known_hosts
    # THE TASKS I NEED HELP ON
    - name: Copy files over to ec2 instance
      remote_user: centos
      copy: src={{ item }} dest=/home/centos/ mode=600
      with_fileglob:
        - my-files/*
      delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
    # THE TASKS I NEED HELP ON
    - name: run commands
      remote_user: centos
      shell: "{{ item }}"
      delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
      with_items:
        - "sudo yum update -y"
        - "sudo yum install nmap ruby"
      ignore_errors: true
Yeah, I agree with #techraf. But the answer to the question you posted is that you have to dynamically change your inventory for the new instance that you provisioned and then run remote ansible plays on that new host. So you would add this to the end of your first play:
- local_action:
    module: add_host
    hostname: newhost
    ansible_host: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
    ansible_user: centos
    ansible_ssh_private_key_file: /path/to/keyfile

###### New play
- name: Configure my new instance!
  hosts: newhost
  tasks:
    # THE TASKS I NEED HELP ON
    - name: Copy files over to ec2 instance
      copy: src={{ item }} dest=/home/centos/ mode=600
      with_fileglob:
        - my-files/*
    # Use the yum module here instead, much easier
    - name: run commands
      shell: "{{ item }}"
      with_items:
        - "sudo yum update -y"
        - "sudo yum install nmap ruby"
      ignore_errors: true
Edit: adding that you can always just set the ssh private key file by using:
- set_fact: ansible_ssh_private_key_file=/path/to/keyfile
with the caveat that the above set_fact will only change the ssh private key file for the currently running host (e.g., for localhost on your example play above).
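Another option, if only a couple of tasks need the key: connection variables such as ansible_ssh_private_key_file and ansible_user can be set at the task level under vars, which scopes the override to just that task. A sketch built from the copy task above (the key path remains a placeholder):

- name: Copy files over to ec2 instance
  copy: src={{ item }} dest=/home/centos/ mode=600
  with_fileglob:
    - my-files/*
  delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
  vars:
    ansible_user: centos
    ansible_ssh_private_key_file: /path/to/keyfile  # placeholder path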

How can I pass a variable to an ansible playbook on the command line?

How can one pass a variable to an ansible playbook on the command line?
The following command didn't work:
$ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' django_fixtures="tile_colors"
Where django_fixtures is my variable.
Reading the docs I find the section Passing Variables On The Command Line, that gives this example:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
Other examples demonstrate how to load variables from a JSON string (≥1.2) or a file (≥1.3).
Other answers state how to pass in the command-line variables but not how to access them. If you do:
--extra-vars "version=1.23.45 other_variable=foo"
In your yml file you assign these to scoped ansible variables by doing something like:
vars:
  my_version: "{{ version }}"
  my_other_variable: "{{ other_variable }}"
An alternative to using command-line args is to utilise environment variables that are already defined within your session; you can reference these within your ansible yml files like this:
vars:
  my_version: "{{ lookup('env', 'version') }}"
  my_other_variable: "{{ lookup('env', 'other_variable') }}"
ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"
For some reason none of the above answers worked for me. As I needed to pass several extra vars to my playbook in Ansible 2.2.0, this is how I got it working (note the -e option before each var):
ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2
You can use the --extra-vars option. See the docs
ansible-playbook test.yml --extra-vars "arg1=${var1} arg2=${var2}"
In the yml file you can use them like this:
---
arg1: "{{ var1 }}"
arg2: "{{ var2 }}"
Also, --extra-vars and -e are the same; you can use either one.
s3_sync:
  bucket: ansible-harshika
  file_root: "{{ pathoftsfiles }}"
  validate_certs: false
  mode: push
  key_prefix: "{{ folder }}"
Here two variables named 'pathoftsfiles' and 'folder' are being used. The values for these variables can be supplied by the command below:
sudo ansible-playbook multiadd.yml --extra-vars "pathoftsfiles=/opt/lampp/htdocs/video/uploads/tsfiles/$2 folder=nitesh"
Note: don't use quotation marks around the values when passing them to the variables in the shell command.
This also worked for me if you want to use shell environment variables:
ansible-playbook -i "localhost," ldap.yaml --extra-vars="LDAP_HOST={{ lookup('env', 'LDAP_HOST') }} clustername=mycluster env=dev LDAP_USERNAME={{ lookup('env', 'LDAP_USERNAME') }} LDAP_PASSWORD={{ lookup('env', 'LDAP_PASSWORD') }}"
In Ansible, we can define variables when running our playbook by passing variables at the command line using the --extra-vars (or -e) argument.
Below are some ways to pass variables to an Ansible playbook on the command line:
Method 1: key=value format
ansible-playbook site.yml --extra-vars "arg1=demo1 arg2=demo2"
Method 2: JSON string format
ansible-playbook site.yml --extra-vars '{"arg1":"demo1","arg2":"demo2"}'
The site.yml playbook will be:
---
- name: ansible playbook to print external variables
  hosts: localhost
  connection: local
  tasks:
    - name: print values
      ansible.builtin.debug:
        msg: "variable1 = {{ arg1 }}, variable2 = {{ arg2 }}"
      when: arg1 is defined and arg2 is defined
Method 3: Read from an external JSON file
If you have a lot of special characters, use a JSON or YAML file containing the variable definitions.
ansible-playbook site.yml --extra-vars "@vars.json"
The vars.json file:
{
  "arg1": "demo1",
  "arg2": "demo2"
}
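Since a YAML file works the same way, here is a sketch of an equivalent vars.yml (same variable names as above) and its invocation:

# vars.yml
arg1: demo1
arg2: demo2

ansible-playbook site.yml --extra-vars "@vars.yml"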
ansible-playbook release.yml --extra-vars "username=hello password=bye"
You can then use these variables anywhere in the playbook, for example:
tasks:
  - name: Create a new user in Linux
    shell: "useradd -m -p {{ password }} {{ username }}"
ansible-playbook -i <inventory> <playbook-name> -e "proc_name=sshd"
You can use the above command with the playbook below.
---
- name: Service Status
  hosts: all  # target hosts; adjust to your inventory
  gather_facts: False
  tasks:
    - name: Check Service Status (Linux)
      shell: pgrep "{{ proc_name }}"
      register: service_status
      ignore_errors: yes
    - debug: var=service_status.rc